From gallantlabucb
Given a large enough database of images and movies, can a brain scan reconstruct what the brain has seen? It sounds freakish, but volunteers watched movie clips while a scanner recorded their brain activity, and from that activity a computer produced rough reconstructions of what they viewed, drawing on a database of YouTube movies. It’s all experimental for now: it requires hours inside an MRI machine, and the reconstructed images look far from what the subjects were actually watching. But who knows how soon this technology will develop into the ability to “read dreams”?
The left clip is a segment of the movie that the subject viewed while in the magnet. The right clip shows the reconstruction of this movie from brain activity measured using fMRI. The reconstruction was obtained using only each subject’s brain activity and a library of 18 million seconds of random YouTube video. (In brief, the algorithm processes each of the 18 million clips through the brain model and identifies the clips that would have produced brain activity as similar as possible to the measured brain activity. The clips used to fit the model, those used to test the model, and those used to reconstruct the stimulus were entirely separate.) Brain activity was sampled once per second, and each one-second segment of the viewed movie was reconstructed separately.
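The matching step described above can be sketched in a few lines. This is only a toy illustration of the general idea, not the lab's actual pipeline: the function name `reconstruct_second`, the `predicted_library` of model-predicted activity patterns, and the top-k averaging are all my assumptions, standing in for the real encoding model and Bayesian reconstruction.

```python
import numpy as np

def reconstruct_second(measured, predicted_library, clips, k=100):
    """Toy sketch of identification-by-library (hypothetical, not
    the Gallant Lab's actual method): pick the library clips whose
    model-predicted brain activity best matches the measured
    activity, then average them into a rough reconstruction.

    measured          -- activity vector for one second, shape (V,)
    predicted_library -- predicted activity per clip, shape (N, V)
    clips             -- the clips themselves, shape (N, H, W)
    """
    # z-score the measured pattern and each predicted pattern
    m = (measured - measured.mean()) / measured.std()
    p = (predicted_library - predicted_library.mean(axis=1, keepdims=True)) \
        / predicted_library.std(axis=1, keepdims=True)
    scores = p @ m / len(m)            # Pearson correlation per clip
    best = np.argsort(scores)[::-1][:k]  # indices of the k best matches
    return clips[best].mean(axis=0)      # blur the top clips together
```

Averaging the best matches is why the published reconstructions look so ghostly: the output is a blend of many vaguely similar YouTube clips, not a pixel-level readout.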
For a related video see: http://www.youtube.com/watch?v=KMA23JJ1M1o
For more information about this work, please check our lab web site: http://gallantlab.org
via thestar
I’m not sure what the experiment was trying to achieve, but wouldn’t it have been more useful to show the subjects the same YouTube video and then compare the MRI scans of the different subjects to identify the common features that relate to the video?
It’s then like trying to decipher a long-lost language, or matching a picture against similar ones with Google’s image-match feature. Say you take a relatively simple video of an elephant. Show it to hundreds of subjects and use their brain scans to look for common patterns. Then repeat the experiment with a video of something else, but with an elephant appearing somewhere in the scene, and see whether you can match the brain-scan patterns at that moment to an elephant. If you can, then you know what an elephant looks like when reading a brain scan.
Repeat with other subjects, then get more detailed until you can differentiate Inspector Clouseau’s moustache. Something like that. Unscientific? Impossible? Well, the brain does it: it interprets electrical firings into concrete images.
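The idea above, build a cross-subject template for "elephant" and then look for that template later, could be sketched like this. Everything here is hypothetical: the function names, the idea of averaging raw scans across subjects (real studies would need alignment across brains first), and the plain correlation threshold are my simplifying assumptions.

```python
import numpy as np

def build_template(scans):
    """Average many subjects' scans for the same stimulus
    (e.g. the elephant video) into one template pattern.
    Assumes scans are already aligned to a common space."""
    return np.mean(scans, axis=0)

def matches(scan, template, threshold=0.5):
    """Crude detector: does this moment's scan resemble the
    template? Uses Pearson correlation as the similarity,
    with an arbitrary threshold."""
    r = np.corrcoef(scan.ravel(), template.ravel())[0, 1]
    return r >= threshold
```

In spirit this is close to what the field calls multi-voxel pattern analysis, and the identification approach in the study above is a more sophisticated cousin: instead of one template per concept, it fits a model that predicts activity for any clip.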