Mary Lou Jepsen discussed her work on wearable MRI at ApplySci’s Wearable Tech + Digital Health + Neurotech conference at the MIT Media Lab on September 19, 2017.
LCD pixels are getting down to the wavelength of light.
The human body is translucent to near infrared light.
They use holography to record all of the scattered light, then shine a reconstruction beam back through that recording to invert the scattering.
This is antiholography.
The resolution can then be tuned down to the micron scale.
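The record-and-invert idea can be illustrated with a toy transmission-matrix model of optical phase conjugation. This is only a sketch of the general principle, not Openwater’s actual (unpublished) implementation: the unitary-scattering assumption and all names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
# Model the tissue as a random unitary scattering operator (toy assumption)
m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
q, _ = np.linalg.qr(m)                  # q is unitary: q† q = I

focus = np.zeros(n, complex)
focus[n // 2] = 1.0                     # desired focal spot inside the "tissue"
scattered = q @ focus                   # the jumbled field after scattering

# "Antiholography": record the scattered field, then play back its phase
# conjugate; by reciprocity the return trip is q.T, which undoes the scramble.
playback = q.T @ scattered.conj()
print(np.allclose(playback, focus))     # True: the focus is recovered
```

The key point of the toy model: the scattered field looks like noise, but no information is lost, so replaying its conjugate refocuses the light.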
LCD and camera chips plus software can replace a multimillion-dollar MRI machine, and work faster.
Nextbigfuture notes that wearable MRI would let first responders determine what kind of stroke someone is having (an ischemic clot or a hemorrhagic bleed). The right drug could then be given in time to save their life in the diamond half hour.
It could bring MRI-class monitoring everywhere.
They are building the devices into form factors such as ski hats and shirts.
Brian Wang is a Futurist Thought Leader and a popular science blogger with 1 million readers per month. His blog Nextbigfuture.com is ranked the #1 Science News Blog. It covers many disruptive technologies and trends including Space, Robotics, Artificial Intelligence, Medicine, Anti-aging Biotechnology, and Nanotechnology.
Known for identifying cutting edge technologies, he is currently a Co-Founder of a startup and fundraiser for high potential early-stage companies. He is the Head of Research for Allocations for deep technology investments and an Angel Investor at Space Angels.
A frequent speaker at corporations, he has been a TEDx speaker, a Singularity University speaker and guest at numerous interviews for radio and podcasts. He is open to public speaking and advising engagements.
Unfortunately the conference video doesn’t show any of her slides. You can see a few in this presentation from June:
https://www.youtube.com/watch?v=hXUNO_8Oo0s
The rest of the video features some pretty impressive work, well worth a view.
So is this the end of $2,000-per-scan MRI costs and an end to helium addiction?
In a word, “No”. Optical deconvolution is very difficult, and will probably only work to a limited depth anyway.
I recall that Dr. Jepsen mentioned they should be able to image up to 10 inches deep with the technique, which would cover most cases. I couldn’t find the reference in the video to confirm. They did succeed in imaging 4 inches through a skull this summer.
One of the many impressive attributes of this technology is that it could be incorporated in many more places, such as an ambulance, beds, chairs, or clothing.
My take on it was along similar lines – even if they get the process working right, it is going to be a hell of a CPU/data hog.
She said that the process has a billion times better resolution than an MRI.
Well, an average MRI series is approximately 5–6 MB for a cross-section of skull, so this process is going to produce at minimum a petabyte-sized file.
At the fastest optical transfer rates available, that would take around 2 hours just to move over the link, not to mention the ~600 hours to actually write it to disk.
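The back-of-envelope above can be sketched as follows. The 5 MB slice size and the billion-fold scale factor come from the comment; the 1 Tbit/s link rate is an assumption picked for illustration, so the resulting transfer time differs from the 2-hour figure.

```python
# Back-of-envelope data-volume estimate from the comment:
slice_bytes = 5 * 10**6          # ~5 MB per MRI cross-section (from the comment)
scale_factor = 10**9             # "a billion times better resolution" (claimed)
raw_bytes = slice_bytes * scale_factor   # 5e15 bytes = 5 petabytes

# Transfer time at an assumed 1 Tbit/s optical link (hypothetical figure)
link_bytes_per_s = 1e12 / 8
transfer_hours = raw_bytes / link_bytes_per_s / 3600

print(f"{raw_bytes / 1e15:.0f} PB, ~{transfer_hours:.1f} h at 1 Tbit/s")
# → 5 PB, ~11.1 h at 1 Tbit/s
```

Whatever link rate you assume, the point stands: a naive billion-fold scale-up of per-scan data is far beyond what current pipelines move or store routinely.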
In addition, there’s a big question of how long it takes to actually *transform* this data into user-readable form. This is not a trivial CPU cost – I have no idea exactly how much data optical deconvolution requires, but if it’s in line with x-ray crystallography, it is a huge amount of data relative to the output.
All of this suggests to me that they are going to need to do a hell of a lot of computation to get something usable – something in line with the LHC, where they record petabytes worth of events but throw away all but .000001% of them to extract useful data.
Still, fascinating stuff.
If there’s no magnetic resonance involved (and there isn’t), it’s not “MRI”, Magnetic Resonance Imaging.
This is a very different animal; they should call it ODI: Optical Deconvolution Imaging.
So… ultimate Halloween costume?