
AiThority Interview with Vanya Cohen, Machine Learning Engineer at Luminoso

I think everyone on the Science team at Luminoso is fundamentally motivated by questions of how language works and how language conveys and produces meaning. 

Know My Company - Luminoso

Hi Vanya, from growing up in Seattle to joining an NLP and AI technology company – what inspired you to make this journey?

Growing up in Seattle, I was exposed to tech at a pretty young age. Most of my friends’ parents worked for Microsoft. I spent a lot of my free time working on little coding projects, and even started my own business in high school developing video game mods.

Movies like 2001: A Space Odyssey captured my imagination and gave me a sense that AI was going to be an important part of the future, even if it seemed distant at the time. But I really wanted to get involved. In my senior year of high school, I took an AI summer course at Stanford. It was my first hands-on exposure to AI, and I was hooked immediately. I even implemented some of the algorithms we learned in the course in the video game mods I had been making.

At Brown University, I focused on taking math and cognitive science courses relevant to AI, and once I had the prerequisites I started taking as many AI courses as I could. I also joined the Humans to Robots lab, advised by Professors Stefanie Tellex and George Konidaris.

Broadly speaking, the lab’s research involves making robots that are better able to collaborate with humans. Specifically, my Master’s research culminated in a paper about teaching robots to understand objects and their shapes in terms of natural language. You can give a robot equipped with this model a few pieces of furniture or other objects—for example, a few couches—and describe them, e.g. ‘the sectional couch’ or ‘the modern sofa with no arms’, and the robot will know which one you’re talking about.

My research struck a nice balance between the practical considerations of Robotics (at the end of the day the robot has to do something new and useful) and the more theoretical work of applying Machine Learning. I also did some work replicating OpenAI’s language model GPT-2, which led to some press and accolades.

When I finished my Master’s, I wanted to work for an NLP company. Fundamentally, I wanted the chance to see the kinds of algorithms I had learned about in the lab working in the real world. I have a real interest in products. I want to see what I’m working on being used by people—I think that’s incredibly validating personally, but also from a research point of view, it’s important to help guide what problems are being worked on. Ultimately, I want machines to understand language, be they robots in the household or software applications, and working at Luminoso is a great place to work on that. I get to deploy all these exciting new tools from NLP academia in the real world.


Tell us more about the team you work with. What kind of skills and abilities does one need to be part of your technical team?

I think everyone on the Science team at Luminoso is fundamentally motivated by questions of how language works and how language conveys and produces meaning. That’s definitely true for me, at least. Our Chief Science Officer and team lead is Robyn Speer. She has done pioneering work in knowledge representation with ConceptNet, a common-sense knowledge base that powers our products at Luminoso. She has also led the way on debiasing in NLP—essentially finding ways to mitigate the harmful stereotypes NLP models pick up when they’re trained on language data from the web.
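
For readers curious what a common-sense knowledge base looks like in practice, here is a minimal, illustrative sketch of pulling a few edges for an English term from ConceptNet’s public web API at api.conceptnet.io. The response field names used below are assumptions about the public JSON format, and this is not Luminoso’s production code.

```python
# Illustrative only: query ConceptNet's public API for a few common-sense edges.
# The field names ("edges", "start", "rel", "end", "weight") are assumptions
# about the public JSON format, not Luminoso code.
import requests

def related_facts(term: str, limit: int = 5) -> None:
    """Print a few ConceptNet edges about an English term."""
    url = f"http://api.conceptnet.io/c/en/{term}"
    data = requests.get(url, params={"limit": limit}).json()
    for edge in data.get("edges", []):
        start = edge["start"]["label"]
        rel = edge["rel"]["label"]
        end = edge["end"]["label"]
        print(f"{start} --{rel}--> {end} (weight {edge['weight']:.2f})")

related_facts("couch")  # prints a handful of relations (e.g. IsA, UsedFor) about couches
```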

Everyone on the team has strong academic skills: the ability to read and evaluate findings from AI academia and interpret their relevance in the proper context. This also requires strong math and communication skills. Our team often needs to brief other parts of the company on the latest developments in NLP, and we work closely with other teams that do more traditional software development.

Along with math, strong software engineering skills are also important—our team implements the latest developments in AI as services for use by the company and the product.

Right now our team is interested in questions of translation (we offer our products natively in many languages), sentiment analysis (ultimately, it would be great if computers could understand what kind of emotional valence our text has), and my passion, knowledge representation. Human beings learn a surprising amount of common-sense knowledge early in life, from relatively little data, and are able to apply that knowledge almost effortlessly in diverse situations. If computers are ever going to really understand language as people do, they’ll need the same abilities.

Tell us the basic difference in the technology behind AI, Computer Vision, Robotics, and Voice. Where does NLP fit into these sub-domains of AI, and how do you work with these at Luminoso?

Every sub-domain in AI (Computer Vision, NLP, etc.) has its own particular approaches and algorithms for solving problems, but the story of the last several years is that Machine Learning (learning to solve problems from example data without explicit programming) and Deep Learning have become indispensable tools in almost every field of AI. Deep Learning is an approach to Machine Learning, loosely inspired by the brain, that involves training networks of parameterized modules on example data.
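
As a concrete, deliberately toy illustration of what “training networks of parameterized modules on example data” means, here is a minimal PyTorch sketch of my own (not anything specific to Luminoso) that fits a small network to noisy points.

```python
# Toy illustration of deep learning: a two-layer network whose parameters are
# adjusted by gradient descent to reduce error on example data.
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.linspace(-1, 1, 64).unsqueeze(1)         # example inputs
y = torch.sin(3 * x) + 0.1 * torch.randn_like(x)   # noisy targets

model = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

for step in range(500):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)   # error on the training data
    loss.backward()               # gradients with respect to the parameters
    optimizer.step()              # nudge the parameters to reduce the error

print(f"final training loss: {loss.item():.4f}")
```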

Robotics is interesting because it’s really an interdisciplinary sport. It involves everything from mechanical engineering to perception (Computer Vision) to NLP (we want the robots to follow our instructions!). There are even components of user experience design, cognitive science, and industrial design, especially when you have robots working alongside people.


Last year, you made some startling developments building on OpenAI’s code. What was that all about?

I was wrapping up my robotics research at Brown when OpenAI (an AI startup co-founded by Elon Musk) announced that they had created the world’s most powerful text generation model, GPT-2. As I always explain, this kind of model is a bit like your phone’s keyboard: it can predict what word comes next given some context words.

But it’s insanely good at that. It’s so good you can give it a headline or the start of a hypothetical article, for example, “In a shocking finding, scientists discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English,” and it will just write the article, complete with quotes and supporting facts. The text it generates is actually pretty convincing if you don’t examine it too closely.
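
For readers who want to see this “phone keyboard on steroids” behavior for themselves, here is a rough sketch using a publicly available GPT-2 checkpoint through the Hugging Face transformers library; the tooling choice is my own assumption and this is not the replication pipeline discussed in the interview.

```python
# Sketch: sample a continuation from a public GPT-2 checkpoint. The model
# repeatedly predicts the next token given the prompt plus what it has
# generated so far.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "In a shocking finding, scientists discovered a herd of unicorns"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

output = model.generate(
    input_ids,
    max_length=80,            # total length (prompt plus continuation) in tokens
    do_sample=True,           # sample rather than always taking the most likely word
    top_k=50,                 # restrict sampling to the 50 most likely next tokens
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```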

OpenAI decided not to release the model to the public, and withheld some details of their research, ostensibly because they feared it could be used to generate fake news. Aaron Gokaslan and I decided we wanted to replicate the model. We’re both interested in language, and Aaron had done work in generating images with generative adversarial networks (GANs).

We did some quick calculations and realized it would be expensive to retrain the model, but that many organizations and malicious actors could pretty easily replicate it. If you assume the model is dangerous, it was clear that bad actors would be able to replicate it. Meanwhile, there was the potential that under-resourced researchers, who wanted to design ways to detect fake text or otherwise understand the dangers posed by powerful text generators, wouldn’t have easy access to the model and couldn’t do that research. The normal process of academic research had been disrupted in a way that favored malicious parties.

We also believed the model wasn’t likely to be a dangerous generator of fake news, but some future improved versions could be, and we felt it was important to get started on research to mitigate future abuse.

We retrained and released the model, and got some press recognition as a result. I even got a few high-fives at conferences. OpenAI has done some great work investigating whether these kinds of models are currently being used online for malicious purposes, and at present, it seems that they’re not. Following the release of our research to the public, a number of groups at large companies felt comfortable doing the same; text generation is a really vibrant subfield of NLP right now. I highly recommend people check out the kinds of applications being built with GPT-2, like the text adventure game AI Dungeon 2.

Earlier this week, Google’s chief, Sundar Pichai, said the future is all about pairing Quantum Computing and AI. As an AI scientist, what are the major technical challenges in merging these two core data science specializations?

A lot of what’s driven recent progress in deep learning has been the increasing power and accessibility of graphics processing units (GPUs). These are specialized pieces of hardware that have historically been used in media applications and video games—but it turns out they’re very good hardware for performing the kinds of calculations needed to train deep neural networks.

Neural networks had been around for decades, and for much of that time they were considered not particularly practical for Machine Learning applications. But the use of modern GPUs has driven the Deep Learning revolution, and people can now train neural networks of sufficient complexity on large amounts of data to observe their true potential.

At a fundamental level, training a neural network is an optimization problem, where you try to find parameters for the network such that it has minimal error on your training data. Many problems in AI are optimizations of one kind or another. I’m no expert in Quantum Computing, but there’s reason to believe that quantum computers—based on quantum processing units (QPUs)—are particularly suited to solving certain kinds of optimization problems.
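
To make the optimization view concrete, here is a bare-bones example of my own (nothing to do with quantum hardware or Luminoso): gradient descent on a single parameter that minimizes squared error on toy data.

```python
# Training as optimization in miniature: find the slope w that minimizes the
# mean squared error between w * x and the observed targets y.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 2.5 * x + 0.1 * rng.standard_normal(100)   # data generated with true slope 2.5

w = 0.0                               # the single parameter we are fitting
learning_rate = 0.1
for _ in range(200):
    error = w * x - y
    grad = 2 * np.mean(error * x)     # derivative of the mean squared error w.r.t. w
    w -= learning_rate * grad         # step downhill on the loss surface

print(f"learned w = {w:.3f}")         # approaches the true slope of 2.5
```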

I think the hope is that this translates either into faster training of existing Machine Learning architectures, or into the ability to explore new kinds of models beyond current Machine Learning.


We hear a lot about AIOps and its role in transforming IT and Cloud Services. What opportunities and challenges do you work with on a daily basis at Luminoso?

Luminoso is focused on helping our customers make sense of large volumes of unstructured text: things like product reviews, customer and employee feedback, and forum posts.

I think automating the classification and interpretation of unstructured text feedback – like organizations may see in real-time support tickets or social posts – is one of the best use cases for operationalizing NLP right now.

Present-day NLP offers great tools for addressing these kinds of triage- and sorting-oriented problems. Make no mistake, there are a lot of challenges. It’s one thing for NLP to work in the lab, on certain kinds of known datasets and under best-case conditions. But oftentimes, when you apply the same algorithms to real-world data, customer data, they don’t work as well, and sometimes don’t work at all. Proving that these NLP tools are useful for real problems is a big part of the challenge, and the fun, of working at Luminoso.
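
To give a flavor of these triage- and sorting-style problems, here is a deliberately simple baseline of my own devising: TF-IDF features plus logistic regression in scikit-learn for routing short feedback texts. The tickets and categories are made up, and this is not Luminoso’s actual technology.

```python
# A toy feedback-triage baseline: TF-IDF features + logistic regression.
# The training tickets and categories below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tickets = [
    ("My order arrived broken and I want a refund", "returns"),
    ("The app crashes every time I open settings", "bug"),
    ("How do I change my shipping address?", "account"),
    ("Screen goes blank after the latest update", "bug"),
    ("I was charged twice for the same order", "billing"),
    ("Please cancel my subscription and refund me", "billing"),
]
texts, labels = zip(*tickets)

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),   # word and bigram features
    LogisticRegression(max_iter=1000),     # simple linear classifier
)
clf.fit(texts, labels)

print(clf.predict(["The update made the app freeze on launch"]))  # likely "bug"
```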

Apart from AI and RPAs, we are also seeing the rise of Blockchain in the industry. What are your comments on the role of Blockchain and Crypto platforms? Is AI and Cybersecurity a safe and controllable confluence to deal with? How can smaller businesses jump into this whole gig economy of AI + Cybersecurity?

Regarding AI and Cybersecurity, I think one of the biggest challenges for the industry is going to be striking the right balance between doing good research and adhering to scientific norms of open discussion and reproducibility, and making certain we’re not creating tools for people to do harm.

Deep fakes are going to be a real problem. It’s not that hard, even today, to create fake audio and video of people doing inflammatory things. I think the fear is that these can be weaponized as misinformation. With the right timing, they could spark conflict, swing elections, cause financial instability.

Certain parts of the tech industry are looking to trust and verification mechanisms like Blockchain to combat deep fakes and fake news. The basic idea is that, in the same way that Blockchain establishes a decentralized network of trust for Cryptocurrency, we could do the same for facts and media, and create a system where anyone can quickly and confidently verify the authenticity of a news story or media imagery. But there are many hurdles, and it’s not clear such a system would be viable, or even a good idea.

What kind of AI governance policies are we looking at to tackle issues with data theft, fraud, and identity stealing? How can AI regulations, governance, and forensics prevent these?

I can really only talk about deep fakes here. There’s certainly a fear that one’s likeness, either in video or audio, could be faked. I suspect this would mostly be done for the purposes of fraud and financial gain. That’s why it’s important to foster a community of researchers who are incentivized to devise means of detecting deep fakes, and for research to be conducted in a responsible way.


Thank you, Vanya! That was fun and hope to see you back on AiThority soon.

Vanya is a Machine Learning Engineer at Luminoso Technologies, Inc. He is a recent graduate of Brown University with a Master’s in Computer Science (’18).

He is passionate about developing products and technologies that enable AI to empower ordinary people. He is proficient in deep learning, NLP, and computer vision. Additionally, he has industry experience as a full-stack developer, using AWS and iOS/Windows.


Luminoso Technologies is a leading artificial intelligence (AI) and natural language understanding (NLU) company that enables companies to rapidly discover value in their unstructured data. Luminoso’s award-winning software applies AI to accurately analyze text-based data for any industry without lengthy setup time or training. Luminoso can analyze unstructured data natively in 14 languages, including Chinese, Korean, Japanese, and Arabic. Companies use the insights that Luminoso’s solutions uncover to streamline their contact center processes, monitor brand perception, and optimize the customer experience. The company is privately held and headquartered in Boston, MA.
