Strange Future Technologies That Will One Day Improve the Medical Simulation Industry
When we close our eyes and imagine the future of medical simulation technologies, we see 3D animated experiences so realistic we can't tell virtual patients aren't real, haptic devices that make us believe we are touching medical equipment, and communication so authentic we would never guess it was a computer running AI. Think that future is far off? Think again! Check out some of these strange new technologies that we believe will one day transform the healthcare simulation industry.
Simulation Engine Handles a 10,000-Player Gaming Experience
British startup Hadean claims to have created a new simulation engine that would allow thousands of gamers to participate in a battle royale-style fight simultaneously. Hadean will put its Aether Engine to the test on 9 March, and has partnered with Eve Online maker CCP Games to host a massive space fight involving 10,000 players.
Underlying Aether Engine is Hadean's core product, a cloud operating system called HadeanOS. Aether Engine isn't a game engine to rival Unity or Unreal; instead, it plugs into those third-party engines, which handle graphics among other functions, and takes over the simulation side of games. The startup has just raised £7 million ($9.1 million) in a fresh round of funding led by Draper Esprit.
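Hadean hasn't published Aether Engine's internals, but one common way any simulation layer scales to thousands of participants is spatial partitioning: split the world into grid cells and hand each cell's entities to a worker process. The sketch below is our own illustration of that general idea, not Hadean's actual API; the function and parameter names are hypothetical.

```python
# Hypothetical sketch of spatial partitioning for a distributed simulation.
# Not Hadean's API -- just one common way to spread entities across workers.
from collections import defaultdict

def assign_to_workers(entities, cell_size, n_workers):
    """Group entities by grid cell, then map cells to workers round-robin."""
    cells = defaultdict(list)
    for name, (x, y) in entities.items():
        # Bucket each entity into the grid cell containing its position.
        cells[(int(x // cell_size), int(y // cell_size))].append(name)
    workers = defaultdict(list)
    for i, cell in enumerate(sorted(cells)):
        # Nearby entities share a cell, so they land on the same worker,
        # keeping most interactions local to one process.
        workers[i % n_workers].extend(cells[cell])
    return dict(workers)

players = {"p1": (0.5, 0.5), "p2": (0.9, 0.2), "p3": (10.0, 10.0)}
print(assign_to_workers(players, cell_size=5.0, n_workers=2))
# p1 and p2 share a cell, so they end up on the same worker; p3 lands elsewhere.
```

The same idea would carry over to a simulated patient population: patients in one hospital or one mass-casualty scene stay on one worker, and only cross-region events need network traffic.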
We believe that in the future, simulating an entire global population of patients will be possible within the simulators we use for healthcare training. Such technologies will give clinicians the opportunity to deliver care on both a micro and a macro scale through endless interprofessional education (IPE) scenarios, from minor accidents to mass casualty incidents (MCIs).
The engine is already being used in healthcare: researchers at the Francis Crick Institute, a biomedical research centre, are using it to simulate cancer cells undergoing metastasis, with the goal of understanding how cancer cells migrate to different parts of the body. Learn more here.
This Person Does Not Exist: Generating Realistic Artificial Faces
Ever seen a person who does not exist? How about a person who does not exist but looks extremely lifelike every time you refresh your browser? Generative Adversarial Networks (GANs) are a relatively new concept in machine learning, first introduced in 2014. Their goal is to synthesize artificial samples, such as images, that are indistinguishable from authentic ones. A common GAN application is generating artificial face images by learning from a dataset of celebrity faces. While GAN images have become more realistic over time, one of the main challenges remains controlling the output, i.e. changing specific features such as pose, face shape, and hair style in an image of a face.
In clinical simulation we need patients who do not really exist, not only to protect patient privacy under HIPAA, but also to support standardized testing through access to randomly generated cases. Having an unlimited supply of patient faces and physical characteristics will reduce the workload on healthcare educators, freeing them to focus on learner performance rather than on creating patients. See it in action here and learn how it works.
GauGAN Turns Doodles into Stunning, Photorealistic Landscapes
A novice painter might set brush to canvas aiming to create a stunning sunset landscape — craggy, snow-covered peaks reflected in a glassy lake — only to end up with something that looks more like a multi-colored inkblot. But a deep learning model developed by NVIDIA Research can do just the opposite: it turns rough doodles into photorealistic masterpieces with breathtaking ease. The tool leverages generative adversarial networks, or GANs, to convert segmentation maps into lifelike images.
Despite lacking an understanding of the physical world, GANs can produce convincing results because of their structure as an adversarial pair of networks: a generator and a discriminator. The generator creates images that it presents to the discriminator. Trained on real images, the discriminator coaches the generator with pixel-by-pixel feedback on how to improve the realism of its synthetic images. As with the patient face generation above, we must consider the need to quickly create clinical environments; perhaps this technology could help us do that? Learn more from NVIDIA.
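The generator-versus-discriminator loop described above can be sketched at toy scale without any deep learning framework. The example below is our own minimal illustration, assuming 1-D numbers instead of images, a linear generator, and a logistic-regression discriminator; real GANs like GauGAN use large convolutional networks, but the training loop has the same shape.

```python
# Toy GAN training loop: a generator learns to mimic samples from N(4, 0.5).
# Our own minimal sketch of the generator/discriminator idea, not any
# production architecture.
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

a, b = 1.0, 0.0   # generator G(z) = a*z + b, with noise z ~ N(0, 1)
w, c = 0.0, 0.0   # discriminator D(x) = sigmoid(w*x + c), output = P("real")
lr, batch = 0.05, 64

for _ in range(3000):
    real = rng.normal(4.0, 0.5, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    grad_w = np.mean(-(1 - d_real) * real + d_fake * fake) + 0.05 * w  # weight decay damps oscillation
    grad_c = np.mean(-(1 - d_real) + d_fake) + 0.05 * c
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator step: push D(fake) toward 1 (non-saturating loss -log D(G(z))).
    d_fake = sigmoid(w * fake + c)
    grad_out = -(1 - d_fake) * w      # gradient of the loss w.r.t. G(z)
    a -= lr * np.mean(grad_out * z)
    b -= lr * np.mean(grad_out)

samples = a * rng.normal(0.0, 1.0, 10_000) + b
print(round(samples.mean(), 2))  # the generator's mean has drifted from 0 toward the real mean of 4
```

The discriminator's gradient is the "coaching" the paragraph describes: each update tells the generator which direction makes its fakes look more like the real data.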
Deep Fakes Videos Will Change More Than Just Healthcare Simulation
Imagine faculty members sitting in a room in front of a camera acting as the patient, while to the learner they appear as anyone else in the world. Enter the world of Deep Fakes, where high-powered computers can replace anyone's face with the facial movements of another.
The implications for medical simulation are again obvious: giving educators the option to take on the persona of another human being, photorealistically and in real time, will better provide learners with opportunities to engage with the most dynamic range of patients.
Voice Manipulation: Say Anything As Anyone
Similar to Deep Fakes videos, Adobe last year unveiled an audio project that takes a library of voice recordings from a particular individual and allows creators to make that voice say whatever they want. Sound scary? It is! But the applications for healthcare simulation are, again, astounding, as our space needs patient voices that sound much more realistic than the slightly-off output of hardware voice changers (see our previous article on Voice Changers for Medical Simulation).
“According to Adobe, after about 20 minutes of listening to a voice, users can make the voice say whatever they want just by typing it out. Comedian and director Jordan Peele hosted the event and Adobe tech Zeyu Jin demoed the process by editing an interview with Peele’s comedic partner Keegan-Michael Key. Jin took existing audio of Key, then used the software to make him talk about making out with Peele instead of his wife.” Read more here.