AiThority Interview With Kevin Gosschalk, Founder and CEO at Arkose Labs
Know My Company
How have you interacted with smart technologies like cybersecurity and fraud analysis?
Our perspective is a bit different from other companies', and we look at both cybersecurity and fraud analysis from a few angles. Since we built our own technology for the cybersecurity space, we first look at what is actually possible on a customer-by-customer basis. We then look at the problem from the perspective of each customer and actively work to solve the specific challenges and struggles they face. Finally, we ask ourselves whether it all makes sense for that specific customer before executing on it.
With Arkose Labs entering the US market as a foreign company, we get to see things from a different perspective than a typical American company. When you walk around huge vendor conferences like Black Hat and RSA and see all of the promises companies are making, you realize there is a lot of falsehood being spread. There is a lot of naivety in how companies position their products against problems they can't really solve, and I think it's hurting the reputation of cybersecurity in general. The fraud prevention space has a similar problem.
We come from the perspective of "how do you prove it – how do you know for sure that what you are saying is real and accurate?" Our technology is designed with a service level agreement around its ability to eliminate automated attacks, measured on a daily basis, and that's unique. No other company in fraud or in cybersecurity is willing to guarantee that its product will continue to work perpetually; this should be a baseline standard across the entire industry.
We are very honest about knowing what we don't know, and that is why we work closely with black hat and white hat hackers and have implemented a bug bounty program. Look across the industry and you will not see many cybersecurity or fraud prevention companies running a bug bounty program. I think it's really important to work with people who see and test things we as companies don't think about.
How did you start in this space? What galvanized you to start Arkose Labs?
My career started in health and biomedical innovation. My background is in engineering, and my research began with looking for early signals of diabetes using image-stitching and image-graphing technology. Shortly after, I moved into building interactive gaming technology for people with intellectual disabilities, a step toward becoming a computer scientist with computer vision expertise. I hold a patent in identifying objects and shapes using depth camera technology, similar to what you would find in the iPhone X or the Microsoft Kinect. From there I made the leap into fraud prevention around the concept of building better CAPTCHA-based technology. With my background in computer vision, I understood well which areas machines were good at solving and humans were not.
My co-founder comes from a background in user experience and game design. He started at Atari and was later a lead producer at Microsoft. These combined experiences let us build a piece of technology that we use as our central source of truth. The challenge technology we present to suspicious traffic lets us know factually whether a user is authentic. And if they are authentic, are they going to go on and do good things? Or are they inauthentic, a bad actor trying to defraud a customer? We've been able to build from this strong foundation and create a powerful fraud-fighting technique and tool.
What is Arkose Labs and how does it prevent commercial fraud?
Arkose Labs is a provider of online fraud prevention technology combining user risk assessment with sophisticated enforcement challenges. The company is designed around the question, "how do we make it more expensive for the fraudsters and bad guys to break into an account, or to attack a website, than the value they're trying to extract?" How do we break the economics to the point where fraudsters say, "look, this is costing us more money than we are making"? If we succeed in doing this, fraudsters eventually give up on their attacks and go somewhere else. This is what we do for all of our customers, and we do it with a combined approach built on telemetry, which lets us sort traffic into different buckets.
For example, there are authentic users we want to let through unimpeded, and then there are the inauthentic users, whom we challenge with proprietary technology we built. This enforcement challenge acts as a Machine Learning feedback mechanism telling us whether traffic is doing good things or bad things. And if it's doing bad things, our challenge becomes the battleground where we fight the fraudsters. We have full control of their economics and their costs when they try to attack and overcome these challenges, in a way no one else in the industry can match. Typically, fraudsters have the ability to manipulate their data – change their IP address, change their fingerprints, change the data they're inputting into a form – and bypass rate limits to appear unique with each attack. There are technologies being used to detect and mitigate this, but we've built a technology attackers must actively overcome, which is quite unique in our space.
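The triage idea described above – sort sessions into buckets based on telemetry, and escalate only suspicious traffic to an enforcement challenge – can be sketched as follows. This is a hypothetical illustration, not Arkose Labs' actual product; the signal names and thresholds are invented for the example.

```python
# Hypothetical triage sketch: telemetry signals score a session, and only
# suspicious sessions are escalated to an enforcement challenge.
def triage(session: dict) -> str:
    """Sort a session into an 'allow' or 'challenge' bucket (illustrative only)."""
    score = 0
    if session.get("requests_per_minute", 0) > 60:
        score += 2  # bursty traffic looks automated
    if session.get("fingerprint_reuse", 0) > 5:
        score += 2  # same fingerprint across many accounts is suspicious
    if not session.get("has_mouse_movement", True):
        score += 1  # headless clients rarely move a cursor
    if score >= 3:
        return "challenge"  # suspicious: present an enforcement challenge
    return "allow"          # authentic-looking: let through unimpeded

print(triage({"requests_per_minute": 120, "fingerprint_reuse": 8}))  # challenge
print(triage({"requests_per_minute": 5}))                            # allow
```

The key design point from the interview is that the "challenge" bucket is not a final verdict – the challenge outcome itself becomes the ground truth that feeds back into classification.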
What is the use of Gamification technology for Fraud Prevention in 2019? How much has it evolved since the time you first started here?
As far as I can tell, we are the only company in fraud that uses gamification concepts and technologies both to combat fraud and to ensure a great user experience. Before we present a challenge, it is put through a game design process where we ask ourselves, "what is the end result we are expecting the end user to achieve?" A challenge may be a six-second activity users have to complete, so how do we make sure they can get through it in the easiest way? We run all of these questions through a typical game design process and look at things such as how humans perform, what their baseline completion metrics are, and how a new game performs in A/B split testing against existing technology.
This is something you would do as a game design company. These challenges present difficult problems for computers but are things humans can understand very well. We develop our technology in a way that doesn't impact end users, so we're not losing users, thanks to our care and attention to user experience.
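The evaluation loop described above – measure human baseline completion metrics for a new challenge design and compare it against the existing one in an A/B split test – might look like this minimal sketch. The data and the ship/no-ship rule are invented for illustration.

```python
# Illustrative A/B comparison of challenge designs (hypothetical data):
# each session records whether a human completed the challenge and how long it took.
def summarize(sessions):
    """Return (completion_rate, mean_seconds_to_complete) for (completed, seconds) pairs."""
    completed = [s for s in sessions if s[0]]
    rate = len(completed) / len(sessions)
    mean_time = sum(s[1] for s in completed) / len(completed)
    return rate, mean_time

baseline = [(True, 6.1), (True, 5.8), (False, 0.0), (True, 6.4)]  # existing challenge
variant  = [(True, 4.9), (True, 5.2), (True, 5.0), (False, 0.0)]  # new game design

b_rate, b_time = summarize(baseline)
v_rate, v_time = summarize(variant)
# Ship the variant only if humans complete it at least as often, and faster.
ship_variant = v_rate >= b_rate and v_time < b_time
print(ship_variant)  # True
```

In practice a real pipeline would use far larger samples and significance testing, but the structure – human baseline metrics gating each new challenge – follows the process the interview describes.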
What are the deadliest cyber-crimes that you identified in the past 12-18 months?
Deadliest is interesting. In security in general, we haven't really seen an incident where a cybersecurity attack in and of itself caused a loss of life. However, I think the general trend of misinformation and control over people's understanding of knowledge is very dangerous. It's been well documented that the 2016 elections were manipulated by hackers, alleged to be from Russia. The recent report from Mueller further corroborated this claim, and I don't think it comes as a surprise to anyone in cybersecurity or fraud. It's certainly achievable, and we can see how fraudsters carry out these kinds of attacks. That's a pretty powerful thing: being able to influence who becomes president, who becomes a senator, and how the public votes.
Even though we don’t do online voting, people’s opinions change based on what they read online so if you can sway people’s opinions by spreading misinformation with scripts, bots, and fraud at scale, then you can influence who they vote for and why. This is a dangerous crime being undertaken right now and it’s having an impact on the global economy.
What is your approach to building fraud prevention models? What is the role of AI and Machine Learning in such models?
We take the mentality of "we don't trust client-side data," and we always challenge our assumptions about what we think is the reality. Our system is designed to present a challenge only to what we think is inauthentic behavior by default; we don't want to be challenging good users. However, we know bad guys can manipulate data and what they look like in ways that make them very hard to catch. If you always assume that what you think is human actually is human, you end up in a situation where an attacker can figure out your assumptions and get around your system. Our challenge is quite unique in that, unlike everyone else, we have a real-time feedback loop we run Machine Learning on to classify traffic.
Most companies don’t have the ability to label traffic and be confident in the decision in real-time. Even companies like Facebook hire tens of thousands of people to manually label and manually classify traffic. Consultants work at Google and their job is to go through images of benign objects to classify them, which is a different problem but the example is still true – they don’t have a way to say definitively that they’re accurate using just machines alone.
It's similar in fraud. Because fraudsters are so good at looking human, they can fool, or get really close to the edge of, a system looking for malicious behavior. With our challenges, we're able to know if a fraudster is attempting to overcome our technology. We present a challenge when a user shows the behaviors a fraudster would show, then reclassify the user based on how they proceed through the challenge. We use Machine Learning to take action in real time based on the challenge feedback and, on the flip side, if a user gets through in normal fashion, the system learns to the user's benefit so they don't see another challenge. No other company has such an accurate piece of Machine Learning technology training in real time.
Do AI and Machine Learning systems justify their reputation for accurately detecting cyber-crimes and anomalies in real-time?
As I was describing, in most cases fraud prevention technology does okay, but it needs humans to reinforce it, because machines can't know for a fact unless it's obvious – for example, a fraudster repeatedly using a credit card clearly identified as stolen elsewhere, or continuing to use a script or bot from the same IP address. If these mistakes are being made, then systems can catch them in real time. But if the fraudsters realize you're catching them, start learning, and get better, then it becomes extremely difficult for systems to catch them automatically.
A good fraudster can look completely legitimate, and if you're training on them, you actually start treating them as a legitimate user. The concern is how to make sure we use AI and Machine Learning to train while reinforcing it with what we know is factually correct. This is where control groups come in: humans take a subset of that traffic and verify its validity. I think this is an extremely important aspect of AI and Machine Learning, because companies can end up in a situation where the Machine Learning decides something is good even when it's bad. It starts to reinforce bad behavior as good behavior, and the fraudster gets an even bigger opportunity to do more damage before being stopped.
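The control-group safeguard described above – periodically holding out a sample of model-labeled traffic, having humans verify it, and measuring how far the model has drifted – can be sketched minimally. The function name, labels, and threshold here are all hypothetical.

```python
# Minimal sketch of a human control-group audit (hypothetical names and data):
# compare model labels against human verdicts on the same held-out sessions.
def audit(model_labels, human_labels):
    """Fraction of a human-verified control group where the model agrees."""
    agree = sum(m == h for m, h in zip(model_labels, human_labels))
    return agree / len(model_labels)

model  = ["good", "good", "bad", "good", "bad", "good"]  # model's real-time labels
humans = ["good", "bad",  "bad", "good", "bad", "good"]  # human verdicts, same sessions

accuracy = audit(model, humans)  # 5 of 6 agree
if accuracy < 0.95:
    print("retrain: model may be reinforcing bad behavior as good")
```

The point of the audit is exactly the failure mode the interview warns about: without a human-verified baseline, a model that has started labeling a skilled fraudster as "good" will keep reinforcing that error.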
How do you prepare anti-fraud teams to plan and stay one step ahead of cyber-criminals?
It’s again about understanding why fraudsters are doing it. Why are they trying to get in? It typically comes down to the fraudsters trying to make money and do so by defrauding companies. How do fraudsters do that?
An example would be the cloud computing space. Typically, cloud computing services give away free credits: if you sign up, companies let you test the service for a month and maybe give you $100 in free credit. For a malicious actor, there's a lot they can do with that, so they create an account, get the $100 credit to spend on server infrastructure, spin up a server, and ultimately sell services through it. A fraudster can sell a DDoS service, for example, where a customer buys the ability to launch a DDoS attack against another company. The fraudster uses the free credits, scripts, and bots to automate the creation of these servers through the account creation flow, then automates the denial-of-service attack.
Our philosophy is to make it so fraudsters can’t make a positive ROI and it always comes down to the reason why they are trying to attack you. And we try to teach and train around the economics of a cyber attack and break it for a fraudster.
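The "break the attacker's economics" reasoning above reduces to a simple inequality: an attack is only worth running while expected revenue exceeds cost. The sketch below makes that concrete; all figures are made up for illustration and are not from the interview.

```python
# Hedged sketch of attacker ROI (all numbers invented for illustration):
# the attack pays only while revenue from successful abuse exceeds attempt costs.
def attack_roi(accounts_created, value_per_account, cost_per_attempt, attempts):
    revenue = accounts_created * value_per_account
    cost = attempts * cost_per_attempt
    return revenue - cost

# Without friction: bots create accounts nearly for free, so the attack pays.
print(attack_roi(accounts_created=1000, value_per_account=100,
                 cost_per_attempt=0.01, attempts=1000))   # 99990.0

# With challenges raising per-attempt cost and cutting the success rate,
# the same campaign loses money, and the rational fraudster moves on.
print(attack_roi(accounts_created=50, value_per_account=100,
                 cost_per_attempt=8.0, attempts=1000))    # -3000.0
```

This is the lever the interview keeps returning to: the defense doesn't need to stop every attempt, only to push per-attempt cost high enough that ROI goes negative.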
What is your opinion on “Weaponization of AI/Machine Learning”? How do you promote your ideas?
Weaponization can go both ways. You can weaponize AI as a defensive tool, which is what we do. We use AI to detect anomalies in traffic trends. This way we know that, out of a group of user sessions, a subset is fraudulent, and we identify the common denominator between those users. Are they all the same fraudster? Are they different fraudsters? Is there some common thread in how a request is made or in what the fraudsters are doing? We use Machine Learning to compare these sessions and figure out the common thread.
In terms of using AI offensively, as a fraudster you could use AI and Machine Learning to figure out how to bypass security mechanisms. Fraudsters use scripts that test against a login point with the intent of doing an account takeover attack, where they try to brute-force login credentials. There may be defenses in the way, such as rate limiting on IP addresses or on client fingerprints, and AI can be used to test randomized aspects of a browser or tool in order to overcome those defenses.
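The rate-limiting defense mentioned above, and why fingerprint randomization defeats it, can be shown in a few lines: a per-key limiter gives every new random fingerprint a fresh quota. This is an illustrative sketch, not any vendor's implementation, and it abstracts away the time window.

```python
from collections import defaultdict

class RateLimiter:
    """Toy per-key rate limiter (illustrative; real limiters use time windows)."""
    def __init__(self, max_per_window):
        self.max = max_per_window
        self.counts = defaultdict(int)

    def allow(self, key: str) -> bool:
        """Permit a request unless this key has exhausted its quota."""
        self.counts[key] += 1
        return self.counts[key] <= self.max

limiter = RateLimiter(max_per_window=3)
# One bot hammering with a fixed fingerprint gets cut off...
print([limiter.allow("fp-1234") for _ in range(5)])  # [True, True, True, False, False]
# ...but a bot that randomizes its fingerprint per request never trips the limit.
print(all(limiter.allow(f"fp-{i}") for i in range(100)))  # True
```

This is why the interview argues that data-based defenses alone are insufficient: any key the defender counts on is a key the attacker can randomize, which is what motivates an enforcement challenge the attacker must actually solve.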
AI is available to everyone, and it's interesting to think of a world with quantum computing, where machines could break hashed and encrypted passwords in a matter of seconds or even milliseconds. The time may be coming when it will no longer be an effort for machines to bypass passwords, and we need to get ready for that. The possibility of fraudsters weaponizing things such as quantum computing together with AI and Machine Learning is a very interesting future to think about.
The Crystal Gaze
What Fraud Prevention start-ups and labs are you keenly following?
We're very focused inward on what we are building, the problems we are solving, and how we can get ahead of what the fraudsters are doing. There's not a specific startup or lab I can point to and say, "this is the company I'm following keenly." We are focused on how to combat fraud and abuse at scale and how to break economics at scale, and I think there is technology out there approaching those concepts in the right way. That's the way to build technology that sustainably solves the problem, not just for the next couple of years.
What technologies within Machine Learning, big data, and fraud analysis are you interested in?
There are a couple of teams at Arkose Labs looking at different ways of measuring things, such as how to confuse the bad guys by weaponizing our own products internally. We're also very interested in what AI tools can be used against us: when we present a visual challenge to stop machines, we need to make sure machines aren't getting good at solving our challenges. Otherwise, we're building in the wrong direction. Fundamentally, we've always built challenges against the grain of commercial research, unlike reCAPTCHA, which presents a grid of images and asks you to associate metadata with those images. Users label those images by, for example, selecting all the images with street signs, and those are commercially valuable problems to solve.
As an example, self-driving cars need to automate those questions, so they have to be able to look at images of their surroundings, then associate and label the data. This is really interesting to us, and we are as keenly interested in what kind of AI and computer vision research could be used against us as we are in what we can use to help further the cause of preventing abuse.
As a tech leader, which industries do you think will be the fastest to adopt Analytics and AI/ML with smooth efficiency? What are the new emerging markets for these technologies?
Low-margin, high-scale industries can benefit greatly from Machine Learning, as it will enable them to learn things previously too difficult to comprehend. This ranges across many industries, but I think what Amazon is doing with its Amazon Go stores is very interesting. They are using vast amounts of AI to understand how people interact and move in physical retail stores, how they make decisions (e.g., picking an object up and reading it), and the time it takes to do those kinds of activities. I think this could be applied to various industries.
What’s your smartest work related shortcut or productivity hack?
Optimizing your task flow. I spend a lot of time sending emails, and I also spend a lot of time unblocking other people so my team can do their best work. These two things demand two processes. One is the ability to retain knowledge at scale, so I use a couple of tools for that. I've been using Evernote for several years, and I keep a cloud-synced to-do list so that anytime I have an idea, or anytime I need to jot down something to discuss with someone else, I can make a note of it.
Email takes up so much of my time, and it's important to find a product that does email well. Recently, I started using a product called Superhuman, which I think is a really good take on optimization. It streamlines sending, reading, and processing email as fast as possible, and it's designed around using the keyboard as efficiently and effectively as possible. It's a really interesting new tool, and I advise everyone to check it out.
Tag the one person in the industry whose answers to these questions you would love to read:
Vijay Bolina, CISO of Blackhawk Network, the company representing the backend of most gift card transactions.
Thank you, Kevin! That was fun and hope to see you back on AiThority soon.
Kevin Gosschalk is the CEO and Founder of Arkose Labs, where he leads a team of people focused on telling computers and humans apart on the Internet. He gained early recognition for his work with the Institute of Health and Biomedical Innovation (QUT) as part of the LANDMark (Longitudinal Assessment of Novel Ophthalmic Diabetic Markers) study, where he developed an innovative mapping technique to detect early signs of diabetes using non-invasive methods. Before Arkose Labs, Kevin worked on gaming hardware for the intellectually disabled at the Endeavour Foundation and built a unique device incorporating Microsoft's Kinect camera technology. Noted for his involvement in interactive development and machine vision, Kevin then turned his expertise to automated abuse and human verification — often regarded as the Internet's impossible problem. Today, Arkose Labs has transformed the irritating chore of comprehension into an SLA-guaranteed technology that prevents automated abuse for brands like Electronic Arts, Singapore Airlines, and Roblox.
Arkose Labs solves multi-million-dollar online fraud problems for major global businesses in sectors including online marketplaces, travel, banking, social media, ticketing, and online gaming. Our bilateral approach combines global telemetry with a patent-pending enforcement challenge that stops fraud without false positives and without impacting the user experience. Arkose Labs is based in San Francisco, Calif., with offices in Brisbane, Australia.