Arnav Kapur, a Delhi-born student at the Massachusetts Institute of Technology (MIT), has created a new device called “AlterEgo.” With it, users can quickly order a burger or a pizza, or even request a subway trip, without saying a word out loud. The AI-powered headset, often described as “mind-reading,” lets users communicate silently with machines, AI assistants, and other people.
According to reports, TIME magazine included AlterEgo in its list of the 100 Best Inventions of 2020, recognizing Arnav’s achievement. First demonstrated as a prototype in 2018, the device uses bone conduction to keep communication internal and private. With the AlterEgo headset, users can easily order a meal or make a silent transportation request.
MIT describes AlterEgo as a non-invasive, wearable peripheral neural interface that enables natural-language conversations with computers, AI assistants, and other people through internal articulation, doing away with the need for spoken words or visible external movements.
By using bone conduction for audio feedback, the system forms a closed-loop interface while leaving the user’s normal hearing unobstructed by outside interference.
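To make the closed-loop idea concrete, here is a minimal Python sketch of such a pipeline, assuming a tiny command vocabulary, synthetic electrode signals, and a nearest-pattern classifier standing in for the trained neural network the real device would use. Every name, signal shape, and parameter below is a hypothetical illustration, not MIT’s actual implementation.

```python
# Hypothetical sketch of a closed-loop silent-speech pipeline in the spirit
# of AlterEgo. Vocabulary, signal shapes, and classifier are illustrative
# assumptions only; the real system decodes neuromuscular signals with a
# trained neural network.
import numpy as np

VOCABULARY = ["order_pizza", "order_burger", "call_subway"]  # assumed commands
WINDOW = 256   # assumed samples per signal window
CHANNELS = 7   # assumed number of facial/jaw electrodes

rng = np.random.default_rng(0)

# Stand-in for electrode capture: each command is modeled as a distinct
# synthetic pattern plus noise (a real device would stream sensor data).
_patterns = {w: rng.normal(size=(CHANNELS, WINDOW)) for w in VOCABULARY}

def capture_window(word: str) -> np.ndarray:
    """Simulate one window of neuromuscular data for a silently spoken word."""
    return _patterns[word] + 0.3 * rng.normal(size=(CHANNELS, WINDOW))

def classify(signal: np.ndarray) -> str:
    """Nearest-pattern lookup; a real system would use a trained classifier."""
    return min(VOCABULARY, key=lambda w: np.linalg.norm(signal - _patterns[w]))

def respond_via_bone_conduction(text: str) -> None:
    """Placeholder for private audio feedback delivered via bone conduction."""
    print(f"[bone-conduction audio] {text}")

# Closed loop: silent input -> classification -> action -> private feedback.
for intended in ["order_pizza", "call_subway"]:
    window = capture_window(intended)           # user articulates internally
    command = classify(window)                  # system decodes the intent
    respond_via_bone_conduction(f"Executing: {command}")
```

The point of the loop at the bottom is that input and feedback never pass through audible speech: the user’s intent enters as silent articulation signals and the response returns as private audio, which is what keeps the interaction unobtrusive.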
A video of Arnav Kapur using the AlterEgo device has gone viral on social media. In the clip, Kapur answers the interviewer’s questions quickly and silently, demonstrating the headset’s remarkable capabilities.
The interviewer was amazed at how users seem to have the “entire internet in their head.”
The AlterEgo headset could transform how humans and computers interact. According to reports, the project also aims to aid people with speech impairments, particularly those with conditions such as multiple sclerosis (MS) and amyotrophic lateral sclerosis (ALS).
Conclusion:
Delhi-born MIT student Arnav Kapur created “AlterEgo,” an AI-powered silent-speech headset. The device allows users to order meals and make transportation requests without speaking a word. The non-invasive, wearable peripheral neural interface enables natural-language conversations with computers, AI assistants, and other people through internal articulation, while bone conduction delivers audio feedback in a closed loop that leaves the user’s normal hearing unobstructed. The project aims to revolutionize human-computer interaction and to aid people with speech impairments, particularly those with conditions such as MS and ALS.