Google Assistant: Are we becoming reliant on AI?
Yariv Adan, Product Leader at Google Assistant, spoke at RE-WORK's AI Assistant Summit in London about how AI is powering the next paradigm shift in human-machine interaction: the natural user interface.
Whilst Siri was the first mainstream voice assistant many of us became familiar with, many personal AIs are now coming onto the market to make our lives that little bit easier. Google Assistant, initially released in 2016, is 'your own personal Google': you can ask it questions, tell it to do things, and it can even identify what song you're listening to on the radio, much like Shazam. The assistant, found on Google's Android phones and Google Home, is also one of the only voice assistants that hasn't been given a female name or an exclusively female voice, breaking the stereotype of digital assistants as female helpers. After all, 'if we're ordering our machines around so casually, why do so many of them have to have women's names?'
Voice assistants have historically struggled to understand users for a variety of reasons, including accent, context, background noise and sentence structure. Yariv Adan currently leads a team responsible for key pieces of the product's intelligence, including the ability to understand context, act proactively, understand and respond to complex statements in natural language, and 'see' using a camera. There is, of course, an ideal intelligent assistant that everyone has in mind, and the Google Assistant team is trying to imagine the characteristics of this perfect AI and 'come up with the baby steps we can build to get there.'
At the AI Assistant Summit in London, Yariv spoke about how AI is changing the world as we know it and powering a paradigm shift in human-machine interaction. He explained how Google Assistant, a cutting-edge product in the space, will adapt and grow into a powerful, helpful assistant with the potential to change our lives. The product team is progressing rapidly, so in March, at the AI Assistant Summit in London, Yariv will be back to share their latest research and demonstrate how it has been implemented in the product.
Google is well known for employee satisfaction, and part of the package offered to team members is the '20% project', under which employees are encouraged to spend 20% of their time on any initiative or project of their choice, even one that has nothing to do with their main role. Yariv explained how he began his work in AI through this model:
“Back in 2015, I was working on Video Ads in YouTube, but I was passionate about doing something exciting in the consumer space. Together with two colleagues: Behshad Behzadi – a senior Engineering lead in Google Search, and Ryan Germick – the creative leader of the Google Doodle team, we came up with the idea to build a conversational ‘Assistant’ that can understand and communicate in natural language, has a personality, and can do stuff for you. This was later on merged with other efforts in this space across Google, and became the Google Assistant.”
Google’s mission is ‘to organise the world’s information and make it universally accessible and useful’, and Google Assistant’s aim is to provide access to information (for example education, health, culture, news, financial and municipal services, and so on), with the goal of improving people’s lives as well as closing the gaps across nations and demographics. Yariv spoke to us about the importance of combining AI with conversational UI: it has the power to scale expert capabilities that are currently accessible to only a few and bring them to everyone, breaking barriers of geography, age, literacy, language, culture, and affordability.
As AI progresses at an increasing pace, we asked Yariv which developments he’s most excited about and which industries will be most impacted. ‘AI is powering a paradigm shift in human-machine interaction.
Conversational UIs have the potential to break free from some of the key limitations of mobile apps:
- Natural language is the most universal UI. Humans are wired to use it, regardless of education, age, and technical literacy.
- Natural language is the most robust UI. It can be used to express any intent, allowing anyone to use any online/offline service, without installing and learning any app.
- Natural language is the most ubiquitous UI. It works not just on the phone, but also in the car, wearables, earbuds, TVs, appliances, toys, …

This new technology humanises machines: they have a voice, personality and humour, memory, a camera as eyes, a speaker as mouth, a microphone as ears, … This has the potential to take the human-machine partnership to new and exciting places.’
As AI assistants become more prevalent in homes and on mobile devices, we as a society are bound to become reliant on them. Join us in London on March 15 & 16 to learn about the most recent cutting-edge developments and hear how they will impact both business and society.