Facebook is stopping funding for a brain-reading computer interface


The answer is in, and it’s not close at all. Four years after announcing a “crazy amazing” project to build a “silent speech” interface using optical mind-reading technology, Facebook is shelving the effort, saying that consumer brain reading is still very far off.

In a blog post, Facebook said it is discontinuing the project and will instead focus on an experimental wrist controller for virtual reality that reads muscle signals in the arm. “While we still believe in the long-term potential of head-mounted optical [brain-computer interface] technologies, we’ve decided to focus our immediate efforts on a different neural interface approach that has a nearer-term path to market,” the company said.

Facebook’s brain-typing project took it into uncharted territory, including funding brain surgery at a California hospital and building prototype helmets that could beam light through the skull, and it prompted tough discussions about whether technology companies should have access to private information about the brain. In the end, the company seems to have concluded that the research simply will not lead to a product fast enough.

“We have a lot of hands-on experience with these technologies,” says Mark Chevillet, a physicist and neuroscientist who led the silent-speech project until last year but recently changed roles to study how Facebook handles elections. “That is why we can say with confidence that, as a consumer interface, a head-mounted optical silent-speech device is still a very long way off. Possibly longer than we anticipated.”

Thought reading

The reason for the frenzy around brain-computer interfaces is that companies see mind-controlled software as a potentially huge breakthrough, as important as the computer mouse, the graphical user interface, or the swipe screen. What’s more, researchers have already shown that if they place electrodes directly in the brain to tap individual neurons, the results are remarkable. Paralyzed patients with such “implants” can deftly move robotic arms, play video games, or type via mind control.

Facebook’s goal was to turn such findings into consumer technology anyone could use, which meant a helmet or headset you could put on and take off. “We never had any intention of making a brain-surgery product,” Chevillet says. Given the social giant’s many regulatory problems, CEO Mark Zuckerberg had once said that the last thing the company should do is crack open skulls. “I don’t want to see congressional hearings on that one,” he joked.

In fact, as brain-computer interfaces advance, serious new concerns are emerging. What would happen if big technology companies knew what people were thinking? In Chile, lawmakers are even considering a human-rights bill to protect brain data, free will, and mental privacy from technology companies. Given Facebook’s poor record on privacy, the decision to halt this research may have the secondary benefit of putting some distance between the company and growing concerns about “neuro-rights.”

Facebook’s project focused specifically on a brain controller that could mesh with its ambitions in virtual reality; it bought Oculus VR in 2014 for $2 billion. To get there, the company took a two-pronged approach, Chevillet says. First, it had to determine whether an “imagined speech” interface was even possible. To that end, it sponsored research at the University of California, San Francisco, where a researcher named Edward Chang has placed electrodes on the surface of people’s brains.

While implanted electrodes read data from individual neurons, this technique, called electrocorticography, or ECoG, measures fairly large groups of neurons at once. Chevillet says Facebook hoped it might also be possible to detect equivalent signals from outside the head.

The UCSF team has made surprising progress, and today it reports in the New England Journal of Medicine that it used those electrodes to decode speech in real time. The subject was a 36-year-old man the researchers call “Bravo-1,” who lost the ability to form intelligible words after a severe stroke and can only grunt or moan. In their report, Chang’s group says that with the electrodes on the surface of his brain, Bravo-1 was able to form sentences on a computer at a rate of about 15 words per minute. The technology works by measuring neural signals in the part of the motor cortex associated with Bravo-1’s efforts to move his tongue and vocal tract as he imagines speaking.

To achieve that result, Chang’s team asked Bravo-1 to imagine saying one of 50 common words nearly 10,000 times, feeding the patient’s neural signals into a deep-learning model. After training the model to match words with neural signals, the team was able to correctly identify the word Bravo-1 was thinking of saying 40% of the time (chance would be about 2%). Even so, his sentences were full of errors. “Hello, how are you?” might come out as “Hungry how are you.”
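As a rough sketch of this kind of decoding step (not Chang’s actual pipeline, whose details are in the NEJM paper), a 50-word decoder can be framed as a classifier mapping a window of neural features to a word label. All names, dimensions, and the synthetic data below are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical dimensions: neural features from 128 electrodes over a
# short window around each word attempt, flattened into one vector.
N_ELECTRODES, N_TIMESTEPS, N_WORDS = 128, 20, 50

rng = np.random.default_rng(0)

# Stand-in for the recorded word attempts (random noise here; the study
# used nearly 10,000 real attempts). y holds the attempted word's index.
n_trials = 2_000
X = rng.normal(size=(n_trials, N_ELECTRODES * N_TIMESTEPS))
y = rng.integers(0, N_WORDS, size=n_trials)

# A linear classifier stands in for the study's deep-learning model.
clf = LogisticRegression(max_iter=500).fit(X[:1_800], y[:1_800])

# With random features, held-out accuracy hovers near the 2% chance level
# (1 in 50) that the article cites as the baseline.
print("held-out accuracy:", clf.score(X[1_800:], y[1_800:]))
```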

But the scientists improved the performance by adding a language model, a program that judges which word sequences are most likely in English. That boosted the accuracy to 75%. With this autocorrect-like approach, the system could work out that the Bravo-1 sentence “I’m correcting my sister” actually meant “I like his sister.”
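Here is a hedged sketch of how such a language-model step can work: per-word probabilities from the neural classifier are combined with bigram probabilities from an English language model, and a Viterbi search picks the most plausible sentence. The tiny vocabulary, the probabilities, and the function below are all invented for illustration, reusing the article’s “Hungry how are you” example:

```python
import numpy as np

# Toy vocabulary and a classifier's per-position word probabilities
# (rows: sentence positions, columns: words). Invented numbers: the
# neural decoder is confident about "how"/"are"/"you" but confuses
# "hello" with "hungry", as in the article's example.
VOCAB = ["hello", "hungry", "how", "are", "you"]
emission = np.array([
    [0.40, 0.45, 0.05, 0.05, 0.05],  # position 1: "hungry" edges out "hello"
    [0.05, 0.05, 0.80, 0.05, 0.05],  # position 2: clearly "how"
    [0.05, 0.05, 0.05, 0.80, 0.05],  # position 3: clearly "are"
    [0.05, 0.05, 0.05, 0.05, 0.80],  # position 4: clearly "you"
])

# Bigram language model: P(word | previous word). "hello how" is common
# English; "hungry how" is not.
bigram = np.full((5, 5), 0.05)
bigram[0, 2] = 0.60  # P("how" | "hello") high
bigram[1, 2] = 0.01  # P("how" | "hungry") low
bigram[2, 3] = 0.60  # P("are" | "how")
bigram[3, 4] = 0.60  # P("you" | "are")

def viterbi(emission, bigram):
    """Most likely word sequence under classifier + language model."""
    n_pos, n_words = emission.shape
    score = np.log(emission[0])          # first position: classifier only
    back = np.zeros((n_pos, n_words), dtype=int)
    for t in range(1, n_pos):
        # cand[i, j]: best score ending in word j, coming from word i.
        cand = score[:, None] + np.log(bigram) + np.log(emission[t])[None, :]
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    path = [int(score.argmax())]
    for t in range(n_pos - 1, 0, -1):    # walk the backpointers
        path.append(back[t][path[-1]])
    return [VOCAB[i] for i in reversed(path)]

# The language model overrides the classifier's "hungry" guess.
print(" ".join(viterbi(emission, bigram)))  # -> "hello how are you"
```

The study’s deep network and language model were far more sophisticated, but the principle is the same: a prior over English word sequences can overrule an uncertain neural guess.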

As remarkable as the result is, there are more than 170,000 words in English, so performance would fall off sharply outside Bravo-1’s restricted vocabulary. That means the technique, while potentially useful as a medical aid, is nowhere near what Facebook had in mind. “We see applications in the foreseeable future in clinical assistive technology, but that is not where our business is,” says Chevillet. “We are focused on consumer applications, and there is a very long way to go for that.”

[Image: Equipment developed by Facebook for diffuse optical tomography, which uses light to measure changes in blood oxygenation in the brain. Credit: Facebook]

Optical failure

Facebook’s decision to give up on brain reading is no shock to researchers who study these techniques. “I can’t say I’m surprised, because they had hinted that they were looking at a short time frame and were going to reevaluate things,” says Marc Slutzky, a professor at Northwestern whose former student Emily Mugler was a key hire Facebook made for the project. “Just speaking from experience, the goal of decoding speech is a big challenge. We’re still a long way from a practical, all-encompassing solution.”

Still, Slutzky says the UCSF project is an “impressive next step” that demonstrates both the remarkable possibilities and some of the limits of brain-reading science. “It remains to be seen whether you can decode free-form speech,” he says. “A patient saying ‘I want a drink of water’ as opposed to ‘I want my medicine’, those are different.” He says that if artificial-intelligence models could be trained for longer, and on more than one person’s brain, they could improve quickly.

While the UCSF research was under way, Facebook was also paying other centers, such as the Applied Physics Laboratory at Johns Hopkins, to figure out how to send light through the skull and read neurons noninvasively. Much as functional MRI does, these techniques measure blood flow to brain regions, in this case by sensing reflected light.

It is these optical techniques that remain the biggest stumbling block. Even with recent improvements, including some made by Facebook, they cannot pick up neural signals with enough resolution. Another problem, Chevillet says, is that the blood flow these methods detect peaks about five seconds after a group of neurons fires, which makes it too slow to control a computer.
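To see why that lag is a problem, here is a minimal sketch (all numbers invented, with a simplified gamma-shaped hemodynamic response) of how a burst of neural activity turns into the delayed blood-flow signal an optical sensor would measure:

```python
import numpy as np

# Time axis: 20 seconds sampled at 10 Hz.
dt = 0.1
t = np.arange(0, 20, dt)

# A burst of neural activity at t = 2 s, e.g. the user issuing a command.
neural = np.zeros_like(t)
neural[int(round(2.0 / dt))] = 1.0

# Simplified hemodynamic response: a gamma-shaped curve t^5 * exp(-t),
# which peaks 5 seconds after the neurons fire (illustrative, not calibrated).
hrf_t = np.arange(0, 15, dt)
hrf = hrf_t ** 5 * np.exp(-hrf_t)
hrf /= hrf.sum()

# Blood flow is neural activity blurred and delayed by that response;
# this is the signal diffuse optical tomography can actually see.
blood_flow = np.convolve(neural, hrf)[: len(t)]

print("neural burst at t = 2.0 s")
print(f"blood-flow peak at t = {t[blood_flow.argmax()]:.1f} s")  # ~7.0 s
```

A control loop driven by this signal would always be reacting to what the user intended roughly five seconds earlier, which is the latency problem Chevillet describes.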
