NTT Demonstrates ‘World’s First’ Breakthroughs in Human Information Presentation and Processing
New research advances groundbreaking, sustainable developments in physical and auditory processing
NTT Corporation has accomplished significant breakthroughs in the areas of information and human sciences, with the demonstration of multiple findings categorized as the first of their kind in the world. Each research effort supports NTT’s mission of applying technology for good, especially in the fields of advanced information transmission and processing—as exhibited by NTT’s Innovative Optical and Wireless Network (IOWN) initiative.
These developments advance technologies such as communication through dynamic shape presentation using a pin-based shape-changing display, speech recognition, and the extraction of meaning from speech.
The research is being displayed at the NTT Communication Science Laboratories Open House 2023 at the NTT West Quintbridge in Osaka. The theme of this year’s event is, “A future where everyone can shine everywhere, colored by diverse knowledge and technology.”
Demonstrations include:
MagneShape: World’s first information presentation interface enabling expression without electricity
NTT demonstrated a new type of tactile presentation technology called MagneShape, a dynamic, pin-based display based on magnetic field patterns. This technology is the first of its type in the world that does not operate using an electric actuator to drive its pins. Additionally, MagneShape advances NTT’s commitment to the reduction of environmental impact by proposing the replacement of electrically powered presentation and interaction technologies with non-electrical mechanisms. The technology also holds promise for improved accessibility for visually impaired people; for example, presenting a pathway for communication via Braille in real time.
The basic configuration of MagneShape includes magnetic pins, a housing component, and a magnet sheet. When the magnet sheet is moved, the attractive and repulsive forces generated between the magnetic pins and the sheet drive the pins up and down within the housing. Because no individual actuator is attached to each pin and the pins are driven purely by magnetic force, the device is easy to design, build, and use without wiring, soldering, or programming.
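The mechanism can be pictured with a toy model: each pin rises or falls depending on the polarity of the magnet-sheet cell currently beneath it, so sliding the sheet changes the displayed pattern. The polarities, grid size, and sliding step below are illustrative assumptions, not NTT's actual design parameters.

```python
# Toy sketch of the MagneShape principle: pins rise or fall depending on
# the polarity of the magnet-sheet cell currently beneath them.

def pin_states(sheet_row, offset, num_pins):
    """Return 'up'/'down' for each pin given the sheet's horizontal offset.

    sheet_row: string of 'N'/'S' cells on the magnet sheet.
    Each pin is assumed to present an 'N' pole downward, so an 'N' cell
    beneath it repels it upward and an 'S' cell attracts it downward.
    """
    states = []
    for i in range(num_pins):
        cell = sheet_row[(i + offset) % len(sheet_row)]
        states.append("up" if cell == "N" else "down")
    return states

sheet = "NNSSNNSS"                   # hypothetical magnetisation pattern
print(pin_states(sheet, 0, 8))       # pattern shown at rest position
print(pin_states(sheet, 2, 8))       # sliding the sheet shifts the pattern
```

Moving one passive sheet thus re-renders the whole pin array at once, which is why no per-pin electric actuator is needed.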
ConceptBeam: A new signal processing technology that separates and extracts speech by meaning
Extracting and selecting specific, useful information from the cacophonous noise of our modern, connected society is becoming more difficult by the day. In response, NTT has developed the world's first technology capable of separating and extracting, from speech signals containing multiple speakers and topics, the speech that conforms to a meaning specified by voice, images, text, etc. This technology, ConceptBeam, extracts a target signal based on the content of the speech, a new approach that departs from conventional methods relying on the physical nature of the signal itself, such as the direction of arrival or the statistical independence of the sources.
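The core idea of selecting speech by meaning can be sketched as follows: embed each frame of the mixture and a target "concept" into a shared vector space, then keep the frames whose embeddings align with the concept. The embeddings, threshold, and masking scheme here are toy assumptions for illustration, not NTT's actual ConceptBeam model.

```python
# Illustrative sketch: select the parts of a mixed signal whose embeddings
# align with a target "concept" embedding, via cosine similarity.
import numpy as np

def concept_mask(frame_embs, concept_emb, threshold=0.5):
    """Binary mask over frames: 1 keeps a frame, 0 suppresses it."""
    frame_embs = frame_embs / np.linalg.norm(frame_embs, axis=1, keepdims=True)
    concept_emb = concept_emb / np.linalg.norm(concept_emb)
    sims = frame_embs @ concept_emb           # cosine similarity per frame
    return (sims > threshold).astype(float)

# Two hypothetical topics: frames 0-1 lean toward "cooking", 2-3 toward "sports".
frames = np.array([[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.9]])
cooking = np.array([1.0, 0.0])                # concept embedding for "cooking"
print(concept_mask(frames, cooking))          # → [1. 1. 0. 0.]
```

In a full system the concept embedding could come from speech, an image, or text, which is what lets a single mechanism handle the multimodal specification the article describes.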
In the near future, NTT plans to introduce semantic processing into signal processing and pattern processing, with the aim of enabling information of interest to be quickly and accurately identified, retrieved, and utilized across a wide variety of sources.
‘Reading minds’ by monitoring fine eye movements
In a new study, NTT researchers investigated the relationship between auditory attention situations and fine eye movements. As a result, the researchers confirmed that even in the absence of large eye movements, the state of auditory attention (e.g., the voice that a person is interested in listening to) is reflected in the unconscious, fine movements of a person’s eyes.
Researchers believe the results indicate the possibility of 'reading' a person's cognitive state, specifically their interests and attention, from measurements of eye movements in response to audio-visual stimuli. In the future, the researchers plan to pursue the applied work needed to put these findings into practice, aiming to realize rich communication technologies driven by the activity of a person's inner mind, which is often difficult to express verbally because of its unconscious nature.