AI Needs Regulation – But Let’s Not Repeat the Mistakes Made With Privacy Laws
AI is becoming more prevalent in our day-to-day lives, from self-parking cars to personalized advertising. With AI tools evolving rapidly and seemingly accelerating by the day, experts recently called in an open letter for a slow-down or temporary hold on research and development. The letter was signed by Elon Musk, Steve Wozniak, and Yuval Noah Harari, among others, who want to press pause until the potential societal impacts of the technology can be properly assessed and regulatory guidelines and safeguards can be put in place.
If you have seen the Pixar film Wall-E, you'll know that it depicts a future in which humans have become totally complacent and reliant on technology. It's scary to think of how AI, if not regulated, could affect consumerism, well-being, and health. But while many of those fears are unfounded, there should be no doubt that the unintended consequences and side effects of applying AI in a decision-making role (such as bias and disinformation) are real and can be very serious in practice.
The race is on for AI regulation
It’s truly remarkable how far generative AI has come over the past couple of years, no doubt spurred on by the intense race between Google and Microsoft. As a society, it’s right that we should be concerned that these technology giants and others develop AI responsibly. However, it’s unrealistic to expect them to suddenly hit the brakes, especially when they have so much to gain. Market players have no incentive to slow down and are all competing for their share. New tools from the big players seem to be announced weekly, if not daily, most recently with Meta testing generative AI ad tools. But this is exactly why regulators should be thinking about the impact of AI and acting right now.
Those working with AI should welcome external regulation. As the legitimate concerns around AI’s potential impacts on society increase, governments around the world are looking at how they can best regulate AI without stifling its incredible potential.
The problem with privacy laws
While opinions vary around the world on how AI should be regulated, the most effective approach will be for governments to create new AI-specific laws that not only govern the technology but help us apply it to the benefit of all parts of society. However, as we have seen with the piecemeal development of data privacy law, if individual countries each take their own approach to regulating AI, it could cause more harm than good (e.g. friction, added costs from a heavy compliance burden, protection gaps, misinformation, and fear). A patchwork of local approaches will lead to AI being governed, and learning, in different ways; instead, we need a globally unified approach.
A United Nations of the Internet
For AI development to benefit all of society, the key pieces that regulation must get right are global standards around AI system transparency, data privacy, and ethics. We need to create a ‘United Nations of the Internet’ for unified regulation across the world. That would give us a consistent understanding of AI’s potential and limitations, ensure no one country has an advantage over another, and allow businesses operating in multiple markets to apply AI seamlessly. Uniting on regulation will also build trust and demonstrate AI’s potential, helping society embrace it rather than fear it.
ChatGPT is a prime example, and a key reason AI regulation is in the spotlight. It has sparked fears that it will take jobs, cause unemployment, and damage society. In fact, if regulated and applied correctly, AI will enhance society by freeing people from repetitive tasks and opening up more meaningful work and personal time.
Unlocking AI’s Potential
We are standing on the brink of a new wave of human potential, fuelled by the power of AI. To truly unlock its potential for good, we need to learn the lessons of our experience with data privacy laws and find a way to create and agree upon the global standards around the ethical and positive use of AI tools.