Computers Will Be Able to Read Images From Your Brain Within a Decade

Thanks to OneZero and Thomas Smith for this article.

I have a photographic memory, and I’m a time-space synesthete. That means I can visualize, in photorealistic detail, basically any place I’ve ever been. I can also imagine nonexistent places and fly around them in my brain like I’m in a video game. It’s a cool thing to be able to do for myself, and the ability to imagine a particular shot in advance is helpful in my career as a photographer. I’d love to share these mental images with others, but there’s a catch: I suck at drawing. I can imagine a place like the Cathedral of Notre Dame or the interior of my first apartment in realistic detail, but if I pick up a pen and try to draw what I’m seeing in my mind’s eye, it comes out looking like the cheerful, aimless scribbles of a two-year-old.

I was excited, then, to learn about an artificial intelligence system from researchers at Kyoto University that is able to do something remarkable: Leveraging breakthroughs in deep learning and generative networks, it can read the images a person sees in their mind’s eye and transform them into digital photographs that human judges can match to the originals with better than 99% accuracy.

The system works both for images the person is actually looking at and for ones they’re only imagining. Currently, the images are low resolution, and the subject needs to be inside an MRI machine for the system to work. But it points to an amazing possibility, one I never expected to see within my own lifetime: as the tech improves and brain-reading hardware gets better, computers will be able to scan our brains and transform our mental images into actual photos we can save and share. And this could arrive within a decade.

The Kyoto University researchers performed their experiments in 2018 and published the results in the journal PLOS Computational Biology in 2019. A 2018 report in Science Magazine details how the researchers’ system works. The researchers placed subjects into an fMRI scanner and recorded activity in their brains. Unlike a traditional MRI, an fMRI measures blood flow in the brain, allowing scientists to determine which brain regions are most active as a subject performs a task. While recording brain signals from the subjects’ visual systems, the researchers showed them thousands of images, displaying each image several times. That gave them a huge database of brain signals, with each set of signals corresponding to a specific image.
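
To make that step concrete, here’s a minimal sketch, in Python, of how those recordings might be paired with their images. This is my own illustration rather than the researchers’ code, and the data layout is an assumption: each image ID maps to the voxel-activity vectors recorded across its several presentations, which get averaged into one clean signal per image.

```python
# A hypothetical layout for the fMRI database: image_id -> one voxel
# vector per presentation of that image. Averaging repeats reduces noise.
import numpy as np

def build_dataset(recordings: dict[str, list[np.ndarray]]) -> dict[str, np.ndarray]:
    """Average the fMRI responses recorded for each image into one vector."""
    return {image_id: np.mean(np.stack(trials), axis=0)
            for image_id, trials in recordings.items()}

# Toy usage: three noisy presentations of the same (made-up) image.
rng = np.random.default_rng(0)
true_pattern = rng.normal(size=5000)                # underlying voxel pattern
trials = [true_pattern + rng.normal(scale=0.3, size=5000) for _ in range(3)]
dataset = build_dataset({"img_001": trials})
print(dataset["img_001"].shape)                     # (5000,)
```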

The researchers then fed all this data into a deep neural network (DNN), which they trained to produce images. Neural networks are fantastic pattern detectors, and for each photo shown to the test subjects, the researchers had the neural network attempt to produce an image matching the observed patterns of brain activity, refining its output more than 200 times. The end result was a system that could take in fMRI data showing a subject’s brain activity and paint a picture based on what it thought each subject was seeing.
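
Here’s a hedged sketch of what that refinement loop could look like, under one plausible reading of the setup: features are first decoded from the fMRI signals, then an image is iteratively adjusted until a vision network sees matching features in it. The tiny `feature_net` below is a stand-in for a real pretrained vision DNN, and every name is my own placeholder rather than the paper’s code.

```python
# A sketch (not the authors' code) of iteratively refining an image so that
# a vision network's features match features decoded from brain activity.
import torch
import torch.nn as nn

feature_net = nn.Sequential(                # stand-in for a pretrained vision DNN
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(8), nn.Flatten(),  # 64x64 image -> 1024-dim features
)

def reconstruct(decoded_features: torch.Tensor, steps: int = 200) -> torch.Tensor:
    """Optimize pixels so the image's features match the brain-decoded ones."""
    image = torch.rand(1, 3, 64, 64, requires_grad=True)    # start from noise
    optimizer = torch.optim.Adam([image], lr=0.05)
    for _ in range(steps):                                  # ~200 refinements
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(feature_net(image), decoded_features)
        loss.backward()
        optimizer.step()
        image.data.clamp_(0, 1)                 # keep pixels in a valid range
    return image.detach()

# Hypothetical target features, as if decoded from a subject's fMRI scan.
picture = reconstruct(torch.randn(1, 16 * 8 * 8))
print(picture.shape)                            # torch.Size([1, 3, 64, 64])
```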

The researchers then threw in a twist: They handed the DNN’s output to an already trained generative network. This type of network is relatively new and is one of the most exciting advances in A.I. to occur over the past decade. These specialized neural networks take in basic inputs and generate wholly new photos and videos, which can be remarkably realistic. Generative networks are the tech behind deepfakes, artificial people, and many Snapchat filters. In this case, the researchers used their generative network to normalize images read from their subjects’ brains and make them more photo-like.
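
A common way to wire in a generator, and a plausible reading of this step, is to optimize the generator’s latent input instead of raw pixels, so that every candidate image the loop produces is already natural-looking. The toy generator below is a placeholder for the trained network the researchers used; the names and sizes are my assumptions.

```python
# Sketch of reconstruction through a generative prior: optimize a latent
# code z so that G(z)'s features match the brain-decoded features.
import torch
import torch.nn as nn

generator = nn.Sequential(                      # placeholder for a trained G(z)
    nn.Linear(100, 16 * 16 * 16), nn.ReLU(),
    nn.Unflatten(1, (16, 16, 16)),
    nn.ConvTranspose2d(16, 3, 4, stride=4), nn.Sigmoid(),   # -> 3 x 64 x 64
)
feature_net = nn.Sequential(                    # stand-in pretrained vision DNN
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(8), nn.Flatten(),
)

def reconstruct_with_prior(decoded_features: torch.Tensor, steps: int = 200):
    z = torch.zeros(1, 100, requires_grad=True)             # latent code
    optimizer = torch.optim.Adam([z], lr=0.05)
    for _ in range(steps):
        optimizer.zero_grad()
        image = generator(z)                    # always a "natural" image
        loss = nn.functional.mse_loss(feature_net(image), decoded_features)
        loss.backward()
        optimizer.step()
    return generator(z).detach()

print(reconstruct_with_prior(torch.randn(1, 16 * 8 * 8)).shape)  # [1, 3, 64, 64]
```

Constraining the search to the generator’s output space is what nudges the crude DNN reconstructions toward something that reads as a photo.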

The researchers’ final system took in brain activity data from subjects, turned it into crude photos with the DNN, and then used the generative network to polish those photos into something much more realistic. To test the system’s output, the researchers showed the images it generated to a set of human judges. The judges were also shown a collection of possible input images and asked to match the images read from subjects’ brains to the most similar input image.
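
To get a feel for that matching task, here’s a toy version that replaces the human judges with simple pixel correlation: given a reconstruction, pick the candidate input it most resembles. The data here is synthetic, purely for illustration.

```python
# A toy stand-in for the judges' matching task, using pixel correlation.
import numpy as np

def match(reconstruction: np.ndarray, candidates: list[np.ndarray]) -> int:
    """Return the index of the candidate most correlated with the reconstruction."""
    scores = [np.corrcoef(reconstruction.ravel(), c.ravel())[0, 1]
              for c in candidates]
    return int(np.argmax(scores))

rng = np.random.default_rng(1)
originals = [rng.random((64, 64)) for _ in range(5)]
noisy_recon = originals[3] + rng.normal(scale=0.2, size=(64, 64))
print(match(noisy_recon, originals))    # 3 -- matches its source image
```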

More than 99% of the time, the judges matched the output image produced by the system to the actual input image the subject had been viewing. That’s a stunning result: Using only brain signals and A.I., the researchers reconstructed the images in subjects’ brains so well that neutral human judges could match them up to real-world images almost 100% of the time.

You can judge the results for yourself. The first row of images below shows the original photos shown to subjects in the experiment. The second row shows the reconstructed images built from their brain signals. They’re not perfect by any means, but they’re definitely recognizable.

Acknowledgement and thanks to: OneZero | Thomas Smith
May 30, 2021