Enough Talk. Let’s Act in Support of the AI Bill of Rights
Window is closing for an AI Bill of Rights
We have been supporting the AI Bill of Rights for over a year now, and a lot has happened in that time. The White House Office of Science and Technology Policy put its weight behind development of a bill of rights and created a national advisory committee, but probably the most marked development, if the least surprising, is how rapidly AI-powered systems are being deployed across industries around the world.
I would be hard pressed to find an area of the economy today, from healthcare to shipping, oil and gas extraction to my own field of identity verification, that is not operating more accurately and efficiently because of AI-powered technology.
There are still no widely accepted guidelines or standards, and at the rate AI is being incorporated into every aspect of the economy – and our daily lives – the window to set them is rapidly closing. If we don’t act soon, it will be harder to agree on a set of standards we all can accept, and it will be more daunting to shut down bad actors who are already beginning to take advantage of the lack of governance in today’s AI environment.
A year is no time at all in the world of governance and standards development. Some standards frameworks take decades to develop. But in the field of technology, a year is an eon. So let’s employ some tried and true principles – don’t let the great be the enemy of the good, for one – and get started.
But Where to Begin With the AI Bill of Rights?
Because my company uses AI technology to help us protect our customers from online fraud, we have given AI standards a lot of careful thought. However, I still see value in a widely accepted set of standards that can guide development and usage of this powerful technology as we go forward. These standards should be reviewed and acknowledged by experts and academics in the field so they provide firm, clear guidance as we all continue to innovate. We need a living document that can evolve and be updated as technology advances.
But the time to get started is now. AI technology is developing so fast that we need to get out in front of this fast-moving train.
A good place to start might be to establish an accepted glossary of terms. Today there are as many definitions of AI as there are systems themselves. If major players in the AI technology space can reach agreement on basic definitions, we will be able to use those to help ensure the standards themselves – and the courts that will be called on to rule on future governing disputes – are based on clear, accurate information.
In parallel, we can begin now to determine our own ideas of what a beta set of AI standards should look like. As a company that develops and deploys AI technology, we have adopted a framework that focuses on two aspects of AI usage: what is ethical and what ensures appropriate levels of privacy. “Ethical” includes a commitment to avoid any type of bias in our systems. “Privacy” outlines our unyielding commitment to protecting personal data: transporting, securing, storing and using it with respect and with permission.
In thinking about what my own suggestions for a beta framework would be, I pulled out the provisions I originally suggested as a basic starting point, and they still seem relevant today. I based them on the data privacy protection standards that are closest to worldwide adoption:
- Notification. Companies must be clear and up front with customers about AI and how they are being asked to participate in its use.
- Consent. There must be a clear, easily understood, easy-to-find form of consent.
- Opt in instead of opt out. A default opt-in system is not acceptable. Customers must actively opt in to the use of an AI-powered system.
- Right to be forgotten. Any framework must include the right to be forgotten. A customer must have the chance to change their mind. There must be a mechanism that allows anyone to require that all data about them be permanently removed from the system.
- Bias measurement and reporting. Bias is tricky and can creep into even the best systems. AI cannot work for the good of all – business benefits and personal rights alike – if systems are not carefully developed and then consistently monitored for bias.
This is, obviously, only a start. As a company whose mission is to combat online fraud, we understand the importance of eliminating as many gray areas as possible. At the same time, we don’t want to preclude genuine ethical innovation.
This is an exciting time to be a technology innovator. Things are moving fast, as they should, and I admit to being impatient. But there’s no time to waste. Before we know it, we will be building a whole new generation of AI technology. Let’s grab the opportunity to ensure it is being deployed in ways that make our world better.
The time to talk about an AI Bill of Rights was last year. Now it is time to act.