Machine-learning model creates creepiest Doctor Who images yet – by scanning the brain of a super fan

AI researchers have attempted to reconstruct scenes from Doctor Who by using machine-learning algorithms to convert brain scans into images.

The wacky experiment is described in a paper released via bioRxiv. A bloke lay inside a functional magnetic resonance imaging (fMRI) machine, his head clamped in place, and was asked to watch 30 episodes of the BBC’s smash-hit family sci-fi show while the equipment scanned his brain. These scans were then passed to a neural network.

This machine-learning model, dubbed Brain2pix, predicted the Doctor Who scene being watched by the human guinea pig from those scans of his brain activity alone. The AI system can’t freely read your thoughts, nor pluck any old picture out of your head: it was designed and trained specifically to recreate the moment of Doctor Who being watched from the observed brain activity.

Each fMRI brain scan was turned into an array of numbers, or a tensor, using receptive field mapping. This technique “is a way to map very specific brain locations onto the visual space as it tells us which point in the brain is responsible for what pixel that you see in your visual space,” Lynn Le, first author of the study and a PhD student at Radboud University Nijmegen in the Netherlands, told The Register.
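To give a rough sense of what that conversion involves, here is a minimal sketch in Python of scattering voxel activations onto a visual-field grid using each voxel's estimated receptive-field centre. The function name, array shapes, and grid size are illustrative assumptions, not details from the paper:

    import numpy as np

    def voxels_to_tensor(voxel_values, rf_centers, grid_size=96):
        """Scatter fMRI voxel activations onto a 2D visual-field grid.

        voxel_values: (n_voxels,) BOLD signal per voxel
        rf_centers:   (n_voxels, 2) receptive-field centre of each voxel
                      in pixel coordinates, from retinotopic mapping
        Returns a (grid_size, grid_size) array aligned with visual space.
        """
        canvas = np.zeros((grid_size, grid_size))
        counts = np.zeros((grid_size, grid_size))
        for value, (x, y) in zip(voxel_values, rf_centers):
            xi, yi = int(round(x)), int(round(y))
            if 0 <= xi < grid_size and 0 <= yi < grid_size:
                canvas[yi, xi] += value   # accumulate signal at that pixel
                counts[yi, xi] += 1
        # Average where several voxels map to the same pixel
        return canvas / np.maximum(counts, 1)

The point is simply that each voxel's activity lands at the spot in visual space it is responsible for, turning a jumble of brain measurements into an image-shaped tensor a vision model can work with.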

The Brain2pix model took those tensors as input and produced a visual image as output, effectively translating activity in the brain into the pixels of what was probably being watched. Here’s an example of what that looks like in practice:


An assistant to remember. Source: Brain2pix

As you can see, the reconstructions are a bit rum. Karen Gillan – who played Amy Pond, the Doctor’s sidekick in series five to seven – looks more like one of the show’s terrifying monsters or aliens. Here are examples of an Ood alien.


Even more terrifying. Source: Brain2pix

Brain2pix was trained on data that pairs each Doctor Who clip with its corresponding fMRI scan. The model is built around a generative adversarial network: a generator network recreates the scene, and its attempts are passed to a discriminator network that has to guess whether the machine-made image looks like a real frame from the training data.

If the reconstructed image isn’t quite good enough, the discriminator rejects it and the generator has to try again. Over time, the generator improves and manages to trick the discriminator into believing its images are real.
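In outline, one round of that tug-of-war looks something like the following PyTorch sketch of a conditional GAN update in the pix2pix style. The module signatures, the sigmoid-probability discriminator, and the L1 weighting are illustrative assumptions, not taken from the paper:

    import torch
    import torch.nn.functional as F

    def gan_step(generator, discriminator, g_opt, d_opt, brain_tensor, real_frame):
        """One adversarial update. Assumes discriminator(frame, brain_tensor)
        returns a probability (sigmoid output) that the frame is real."""
        # --- Discriminator: score real frames as 1, generated frames as 0 ---
        fake_frame = generator(brain_tensor).detach()  # no gradient into G here
        d_real = discriminator(real_frame, brain_tensor)
        d_fake = discriminator(fake_frame, brain_tensor)
        d_loss = (F.binary_cross_entropy(d_real, torch.ones_like(d_real))
                  + F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake)))
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()

        # --- Generator: fool the discriminator, stay close to the real frame ---
        fake_frame = generator(brain_tensor)
        d_out = discriminator(fake_frame, brain_tensor)
        g_loss = (F.binary_cross_entropy(d_out, torch.ones_like(d_out))
                  + 100 * F.l1_loss(fake_frame, real_frame))  # pix2pix-style L1 term
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()
        return d_loss.item(), g_loss.item()

The L1 term keeps generated frames anchored to the actual clip, while the adversarial term pushes them to look like plausible television footage rather than a blurry average.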

Experiments that involve converting brain signals into speech or images are often limited in scope: there is significant overlap between the training and testing data. Something like Brain2pix, for example, cannot recreate scenes from footage it wasn’t trained on.


That means if a participant was asked to watch a brand-new episode of Doctor Who and the fMRI images were given to the model, it would not be able to accurately recreate what the participant had seen. The machine-generated images are also heavily dependent on an individual’s brain scans: the software’s training involved learning a specific person’s neural activity in response to watching the TV show.

It’s difficult to get Brain2pix to transfer what it learned about one participant to another. Even if two people watched the same Doctor Who clip, the neural network would probably be unable to reconstruct the images from someone else’s brain scans if it wasn’t explicitly trained on them.

Still, the researchers believe their work may prove useful in the future. “First, it allows us to investigate how brains represent the environment, which is a key question in the field of sensory neuroscience,” Le told us.

“Second, it demonstrates a promising approach for several clinical applications. An obvious example is a brain-computer interface which would allow us to communicate with locked-in patients by accessing their brain states.”

The dream is that, some day, neuroprosthetics will get good enough to help restore vision to the blind. “Here the goal is to create percepts of vision in blind patients by bypassing their eyes and directly stimulating their brains. Approaches like Brain2pix can in principle be used to determine what kind of percepts might be evoked as a result of such direct neuronal stimulation,” she added. ®

Source: https://go.theregister.com/feed/www.theregister.com/2021/02/12/ai_dr_who/
