Infosys Launches Open-Source Responsible AI Toolkit to Enhance Trust and Transparency in AI
The initiative furthers Infosys’ commitment to creating an inclusive AI ecosystem, ensuring safety, security, privacy, and fairness
Infosys, a global leader in next-generation digital services and consulting, today announced the launch of its open-source Responsible AI Toolkit, a key component of the Infosys Topaz Responsible AI Suite, designed to help enterprises innovate responsibly while addressing the challenges and risks associated with ethical AI adoption.
The Infosys Responsible AI Toolkit builds on the Infosys AI3S framework (Scan, Shield, and Steer), equipping enterprises with advanced defensive technical guardrails, including specialized AI models and shielding algorithms, to detect and mitigate issues such as privacy breaches, security attacks, sensitive information leakage, biased output, harmful content, copyright infringement, hallucinations, malicious use, deepfakes, and more. It also enhances model transparency by providing insights into the rationale behind AI-generated output, without compromising performance or user experience. The open-source toolkit offers flexibility and ease of implementation: it is fully customizable, compatible with diverse models and agentic AI systems, and integrates seamlessly across cloud and on-premise environments. Organizations can access the toolkit here.
Balakrishna D. R. (Bali), Executive Vice President, Global Services Head, AI and Industry Verticals, Infosys, said, “As AI becomes central to driving enterprise growth, its ethical adoption is no longer optional. The Infosys Responsible AI Toolkit ensures that businesses remain resilient and trustworthy while navigating the AI revolution. By making the toolkit open source, we are fostering a collaborative ecosystem that addresses the complex challenges of AI bias, opacity, and security. It’s a testament to our commitment to making AI safe, reliable, and ethical for all.”
Joshua Bamford, Head of Science, Technology and Innovation, British High Commission, said, “Infosys’ commitment to becoming an AI-first business and establishing the Responsible AI Office reflects bold innovation and ethical leadership. By going open source, Infosys is empowering enterprises, startups and SMEs to leverage AI for groundbreaking advancements. Their Responsible AI Toolkit is a benchmark for technological excellence and, when paired with a commitment to responsible practices and global sustainability, can be an inspiring model for companies worldwide.”
Sunil Abraham, Public Policy Director – Data Economy and Emerging Tech, Meta, said, “We congratulate Infosys on launching an openly available Responsible AI Toolkit, which will contribute to advancing safe and responsible AI through open innovation. Open-source code and open datasets are essential to empower a broad spectrum of AI innovators, builders, and adopters with the information and tools needed to harness these advancements in ways that prioritize safety, diversity, economic opportunity, and benefits to all.”
Abhishek Singh, Additional Secretary, Ministry of Electronics and Information Technology (MeitY), Government of India, said, “I am very happy to learn that Infosys has decided to open source their Responsible AI Toolkit. This will go a long way in making tools available for enhancing security, privacy, safety, explainability, and fairness in AI-based solutions, and will also help in mitigating bias in AI algorithms and models. This is critical for developing safe, trusted, and responsible AI solutions. I am sure startups and AI developers will greatly benefit from this Responsible AI Toolkit.”
Infosys reaffirmed its commitment to ethical AI last year with the launch of its Responsible AI Office and dedicated offerings. It is one of the first companies to receive ISO 42001:2023 certification for AI management systems and has joined the global dialogue on responsible AI through membership in industry bodies and government initiatives such as the NIST AI Safety Institute Consortium, WEF AIGA, C2PA, the AI Alliance, UK FCDO, and Stanford HAI, to name a few.