The day is not far when you will hesitate to express anger or disgust about a person even silently. The research and development are under way at the MIT Media Lab. Arnav Kapur, a Master’s student working on the device, says that it won’t capture the random words that go on in your mind, nor can it read your mind. But the device will be able to listen when you are completely silent yet talking to yourself. We often engage in such self-talk, and the device under development captures those words. “You’re completely silent, but talking to yourself,” he says. “It’s neither thinking nor speaking. It’s a sweet spot in between, which is voluntary but also private. We capture that.”
In the above image, Arnav Kapur is wearing the gadget. It looks like a Bluetooth headset or the kind telecallers use, but it is poles apart in functionality. So how does this work? To be honest, a full explanation of the technology is beyond the scope of this article, but in simple terms it is like Cortana or Siri listening to the commands you are silently saying to yourself, and then acting accordingly. What is the use of such a device? It aims to bridge the gap between the mind and computers. If you think about it, you can find plenty of opportunities. For example, mute people might explore a new aspect of their personality using this device.
According to the MIT student, the device is now in a nascent state: the application has the capacity to learn only about 20 different words. The system cannot understand every word a person says, only the ones it has been taught. A training session consists of the user giving voice commands aloud first. Once the device gets accustomed to them, the user is instructed not to utter the words but to say them silently. Another opportunity you may think of is privacy. For example, we do not always like to give voice commands to Cortana or Siri loudly amid many people. In that case, such a device will be helpful and comforting to users.
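To get an intuition for what a closed vocabulary of about 20 words means in practice, here is a minimal sketch. All of it is invented for illustration: the word list, the fake "signal" features, and the nearest-centroid matching are assumptions, not how the MIT device actually works. The point is simply that the system can only match an incoming signal against words it was trained on, and must reject anything else.

```python
import numpy as np

# Hypothetical sketch: the device recognizes only words it was trained on.
# Each trained word is modeled by the mean ("centroid") of its signal
# feature vectors; anything too far from every centroid is rejected.
VOCAB = ["up", "down", "call", "reply"]  # toy stand-in for the ~20-word vocabulary
rng = np.random.default_rng(0)

# Fake training samples: one tight cluster of feature vectors per word.
centers = rng.normal(size=(len(VOCAB), 8))
train = {w: centers[i] + 0.1 * rng.normal(size=(30, 8))
         for i, w in enumerate(VOCAB)}
centroids = {w: x.mean(axis=0) for w, x in train.items()}

def recognize(features, threshold=1.5):
    """Return the closest trained word, or None if the signal is unfamiliar."""
    word, dist = min(((w, np.linalg.norm(features - c))
                      for w, c in centroids.items()),
                     key=lambda p: p[1])
    return word if dist < threshold else None

# A signal near the "call" cluster is recognized; far-off noise is rejected.
print(recognize(centers[2] + 0.05))        # "call"
print(recognize(rng.normal(size=8) * 10))  # None (untrained word)
```

The rejection threshold is why training matters: until the user has "taught" the device a word, its signal pattern simply has no centroid to match.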
Kapur has deployed a common artificial intelligence tool called a neural network. The scientists at the MIT Media Lab trained the neural network to recognize how different electrical signals correspond to the different words a person could say to themselves. There is more to this. “This is more about how we could bridge the gap between computers and humans,” he says.
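The idea of training a neural network to map electrical signals to words can be sketched in a few lines. Everything below is an assumption for illustration: the vocabulary, the synthetic "electrode" features, and the tiny one-hidden-layer network are stand-ins, not the architecture or data the MIT team used.

```python
import numpy as np

# Hypothetical sketch: a small neural network learns which signal
# patterns correspond to which silent words. Sizes and data are invented.
rng = np.random.default_rng(1)
WORDS = ["yes", "no", "open", "close"]        # toy stand-in vocabulary
n_feat, n_hidden, n_words = 16, 32, len(WORDS)

# Fake training data: each word produces a characteristic signal pattern.
patterns = rng.normal(size=(n_words, n_feat))
X = np.vstack([patterns[i] + 0.2 * rng.normal(size=(50, n_feat))
               for i in range(n_words)])
y = np.repeat(np.arange(n_words), 50)

# One hidden layer, softmax output, trained by plain gradient descent.
W1 = 0.1 * rng.normal(size=(n_feat, n_hidden)); b1 = np.zeros(n_hidden)
W2 = 0.1 * rng.normal(size=(n_hidden, n_words)); b2 = np.zeros(n_words)

for _ in range(300):
    h = np.tanh(X @ W1 + b1)                       # hidden activations
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)              # softmax probabilities
    grad = p.copy(); grad[np.arange(len(y)), y] -= 1; grad /= len(y)
    gW2 = h.T @ grad; gb2 = grad.sum(axis=0)       # backpropagation
    gh = grad @ W2.T * (1 - h ** 2)
    gW1 = X.T @ gh; gb1 = gh.sum(axis=0)
    for param, g in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
        param -= 0.5 * g                           # gradient descent step

def predict(signal):
    """Map a signal feature vector to the most likely trained word."""
    h = np.tanh(signal @ W1 + b1)
    return WORDS[int(np.argmax(h @ W2 + b2))]
```

After training, `predict` maps a new signal vector to whichever trained word it most resembles, which is the "bridge" Kapur describes: the network translates a private, silent signal into a command a computer can act on.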