U.S. Chamber Takes Strong Stance for Technology-Neutral AI Laws and Alliances
The world’s largest business organization has unveiled a policy roadmap to fuel AI development and shape AI regulation in 2023. The U.S. Chamber of Commerce’s Artificial Intelligence Commission on Competitiveness, Inclusion, and Innovation highlighted the need to support AI-driven growth with technology-neutral laws that advance national and global objectives, particularly those of like-minded allies. It called for AI technology to be regulated according to ethical guidelines.
AI has become a cornerstone of modern business. By 2030, it is projected to add roughly $13 trillion to the global economy, pushing the boundaries of machine learning across different technologies. From smart-city infrastructure to global disaster monitoring and mapping, AI’s role is becoming ubiquitous.
Chamber President and CEO Suzanne P. Clark said: “AI is a transformational technology that we are just starting to realize its potential. While there are some risks that need to be managed, AI promises to boost economic opportunity and incomes, accelerate advancement in health outcomes and quality of life, and usher in yet another era of technology innovation that will spawn companies, industries, and jobs not yet imagined. For over a year, the Chamber’s Commission has been working on developing policy recommendations and industry best practices that provide policymakers and business leaders a roadmap to optimize its many benefits and protect against harms.”
In an email statement shared with our news desk, Brad Fisher, CEO of Lumenova AI, highlighted the urgency with which the U.S. Chamber of Commerce released its report on AI. Fisher said, “The risks of AI are clear to those in-the-know and growing exponentially as new investment pours into the area. Despite the widespread gnashing of teeth over the risks of AI, most of the regulatory guidance so far is of a voluntary nature and many organizations are taking a wait-and-see approach. The main challenge with waiting is that much of the AI that is now being deployed will need to be reworked to comply with regulations – so it’s not just a matter of providing guidance for the future, but rather fixing a big costly problem that already exists. Put another way, we are digging ourselves into a big hole and the longer we wait the harder it will be to climb out.
“The EU is already out in front with its pending AI risk management legislation, which can be beneficial for all. The challenge for those in the US, including governments and companies, is that this EU guidance may become the de facto standard. The challenge is that the US risk tolerance differs from the EU and we may end up over-regulating and squelching innovation or missing out on the most pervasive risks. The US wants to step to the forefront and be a leader in the development and use of AI – we should also step to the forefront in terms of a common-sense approach to manage the risks of AI.”
In another comment, Jeff Hudson, CEO of Venafi, said, “It is important and profound that the US Chamber of Commerce is calling for regulation around the responsible use of AI because it introduces high levels of risk into organizations. There are many implications, and one that has not been widely discussed but is perhaps the most impactful is the ability of artificial intelligence to write software (code). Through generative large language model AIs like ChatGPT, almost anyone can easily generate code that is almost good enough to run. It is not perfect or complete, but it is 80% of the way there.
This democratizes coding to a whole new level, allowing everyone to be a pretty good developer now. However, very few people understand if that code is malicious or not – if it is safe or if it will harm you. Since software (code) is eating the world, the unleashing of democratized coding fundamentally alters how we protect privacy and ensure that the systems our lives depend on are secure. The attack surface is growing very fast, and we are not adapting as quickly.
Securing an organization’s network just got much harder. We are moving to a world where everyone is creating without knowing if it is malicious or not and then letting it loose on customers and in the business.
It is time to admit that organizations cannot let unauthorized code run on their networks if they want to stop the vast majority of attacks. Just as we do not give access to our data to any person who lacks an ID and the authorization to access it, we should not allow machines access to data if they do not have an ID with the correct authorization. Ensuring that AI-written code can be trusted through authentication will be vital if humans and machines are to coexist for the betterment of the human condition.”
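Hudson’s point about democratized code generation is easy to make concrete. The sketch below is a minimal illustration, assuming the pre-1.0 openai Python SDK and an API key in the OPENAI_API_KEY environment variable; the prompt, model choice, and helper name are illustrative, not a prescribed workflow. The output is exactly the kind of “80% of the way there” code he describes: plausible text that still needs human review before it runs anywhere.

```python
# Minimal sketch: generating code with a large language model.
# Assumes the pre-1.0 openai Python SDK and an API key in the
# OPENAI_API_KEY environment variable; details are illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def generate_code(task: str) -> str:
    """Ask the model to write a Python function for the given task."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You write short Python functions."},
            {"role": "user", "content": f"Write a Python function that {task}."},
        ],
    )
    # The reply is plain text: often close to working, but unvetted.
    # Nothing here checks whether it is safe, correct, or malicious.
    return response.choices[0].message["content"]

print(generate_code("parses an ISO 8601 date string"))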
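His closing argument, that unauthorized code should not run at all, can likewise be sketched. One simple, coarse mechanism is an allowlist of cryptographic digests: a script executes only if its SHA-256 hash matches one the organization has already reviewed. The snippet below is a hypothetical illustration under that assumption, not Venafi’s product or a complete machine-identity system; real deployments rely on signed manifests and code-signing certificates rather than a hard-coded set, and the digest shown is a placeholder.

```python
# Minimal sketch of hash-based code authorization: a script runs only
# if its SHA-256 digest appears on a reviewed allowlist. Production
# systems would use signed manifests or code-signing certificates,
# not a hard-coded set; the digest below is a placeholder.
import hashlib
import subprocess
import sys

AUTHORIZED_DIGESTS = {
    # SHA-256 digests of scripts that passed human review (placeholder)
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_of(path: str) -> str:
    """Return the hex SHA-256 digest of a file's contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def run_if_authorized(path: str) -> None:
    """Execute the script only if its digest is on the allowlist."""
    digest = sha256_of(path)
    if digest not in AUTHORIZED_DIGESTS:
        raise PermissionError(f"{path} is not authorized (sha256={digest})")
    subprocess.run([sys.executable, path], check=True)

if __name__ == "__main__":
    run_if_authorized(sys.argv[1])
```

The design choice mirrors the quote’s analogy to ID-based access: the hash acts as the code’s identity, and anything without a recognized identity is refused by default.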
The report provides a set of guidelines for US-based technology companies working with AI and related capabilities. It could open new avenues for collaboration with government and international allies, with a particular focus on strengthening the knowledge base in the private and public sectors. By 2030, we could see a global platform for collaboration among AI developers and regulators that pushes to upskill the future workforce through ethical AI initiatives.