AiThority Exclusive: Responsible Ethics in AI Research
AI has been learning and growing at an impressive rate. Ever since ChatGPT exploded into the public’s consciousness in late 2022, industries and organizations of all kinds have scrambled to understand the implications.
Meanwhile, AI researchers continue to press forward, seeking to dominate this new industry with the smartest, fastest, most capable machines possible. In the process, many ethical concerns have emerged, from AI bias to the abuse of copyrighted material.
Tech experts must not operate with impunity, releasing new technologies onto the world without sufficient consideration of the ethical ramifications at play. The good news is that policymakers, regulators, ethicists, and AI researchers have started coming together and setting up frameworks for the responsible development of AI.
AI’s Dispassionate Discrimination
Incorporating AI throughout our society opens up the possibility of a glorious new beginning. While human beings may be prone to subconscious bias and error, our machines’ dispassionate approach could ensure a more level playing field for all.
Unfortunately, today’s machines risk forfeiting that potential, having absorbed all too many of humanity’s unconscious biases and pursued them with chilling indifference. Just consider the case of iTutorGroup. After using an AI tool that rejected applicants at different rates based on their gender and age, the company was forced to settle with the US Equal Employment Opportunity Commission (EEOC) for $365,000.
It doesn’t matter whether the employer intended to discriminate; its reliance on a biased tool exposed it to legal jeopardy and an expensive payout. Nor does it matter that AI-based systems are new or still under development: organizations that deploy noncompliant systems can still be charged with, and found liable for, breaking the law.
Another way some AI-based systems are running afoul of the law is through their abuse of other people’s intellectual property.
AI Research Ignores Other People’s Intellectual Property
To train their machines, some AI researchers feed them copyrighted data. The sources of that information usually don’t even know this is being done, much less grant permission, and they certainly aren’t credited for their contributions or compensated in any other way.
For instance, one of the largest AI-training databases is Common Crawl, which uses CCBot software to scrape the Internet. Historically, CCBot has captured paywalled articles from The New York Times and other copyrighted sources that depend partially or entirely on subscription revenue from readers to survive. Common Crawl has since agreed to cease these infringements and has removed previously collected New York Times content from its archives.
Other media outlets have joined the movement to block CCBot, but their content still finds its way into other AI-training data warehouses. For instance, New York Times articles continue to make up 1.2 percent of the WebText dataset. Meanwhile, consumers have little idea that the AI services they use were built on data collected this way.
One could argue that if an AI service is employed for personal, educational, or noncommercial purposes, accessing copyrighted material could fall under fair use. When the service is used commercially, however, the practice is clearly unethical. Ironically, unscrupulous AI researchers have been so busy refining their own intellectual property that they have neglected to respect the intellectual property of others.
Forging Ethical Guidelines for AI Research
The time has come to develop clear guidelines for the ethical development of AI. There are many ways AI research could move forward more responsibly, but all of them start with researchers adhering to guardrails and a set of principles that limit the potential harm of their work.
Regarding bias, AI companies must audit their products to ensure they do not make decisions based on inappropriate criteria and thereby reinforce discrimination. If AI is to fulfill its promise of a fresh start and a better future, it cannot be allowed to cement unfair practices into place.
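One way such an audit can begin is with a selection-rate check like the EEOC’s “four-fifths rule,” under which a group’s selection rate should be at least 80 percent of the most-favored group’s rate. The sketch below is illustrative only; the function names and sample data are hypothetical, not taken from any particular auditing tool.

```python
# Minimal sketch of a selection-rate audit based on the four-fifths rule:
# flag any group whose selection rate falls below 80% of the highest group's rate.
# Group labels and decision data here are hypothetical.

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs; returns rate per group."""
    totals, chosen = {}, {}
    for group, was_selected in decisions:
        totals[group] = totals.get(group, 0) + 1
        if was_selected:
            chosen[group] = chosen.get(group, 0) + 1
    return {g: chosen.get(g, 0) / totals[g] for g in totals}

def four_fifths_check(decisions):
    """Returns {group: passes_check} under the 80% threshold."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best >= 0.8 for g, rate in rates.items()}

# Hypothetical hiring decisions: 3 of 4 younger applicants selected,
# but only 1 of 4 older applicants.
decisions = [
    ("under_40", True), ("under_40", True), ("under_40", True), ("under_40", False),
    ("over_40", True), ("over_40", False), ("over_40", False), ("over_40", False),
]
print(four_fifths_check(decisions))
# → {'under_40': True, 'over_40': False}  (0.25 / 0.75 ≈ 0.33, below 0.8)
```

A real audit would go further, testing intersections of protected attributes and controlling for legitimate qualifications, but even a simple check like this can surface the kind of disparity that led to the iTutorGroup settlement.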
Regarding copyright issues, World Digital Governance has launched an initiative to leverage the BigParser dataset, which comprises human-curated, copyright-free information. Projects like these show that, contrary to what some would have us believe, we can train AI without infringing on others’ rights.
Another possibility is for AI research enterprises to adopt the equivalent of Human Subjects Committees: the oversight boards that research universities and professional organizations maintain to protect the rights and well-being of study participants. These boards are made up of experts who serve as a check on the exuberance of researchers whose attention may be elsewhere.
Surfing the Wave of AI
AI is unquestionably the wave of the future. Ensuring that humanity surfs it, rather than getting doused by it, means establishing ethical frameworks for AI development. World Digital Governance’s initiatives and other projects seek to return AI research to solid footing. All members of the AI research community should participate in these collaborations to enable AI to actualize its full potential.