BSI Supports Responsible AI Management With New Package of Measures
BSI is launching a new package of measures, including certification to a new management system standard designed to enable the safe, secure and responsible use of Artificial Intelligence (AI) across society, following research showing that 61% of people want global guidelines for the technology.
The scheme, aligned to the recently published international AI management system standard (BS ISO/IEC 42001), is intended to help organizations use AI responsibly, addressing considerations such as non-transparent automated decision-making, the use of machine learning rather than human-coded logic in system design, and continuous learning.
Susan Taylor Martin, CEO, BSI said: “AI is a transformational technology. For it to be a powerful force for good, trust is critical. This is an important step in empowering organizations to responsibly manage the technology, which in turn offers the opportunity to harness AI to accelerate progress towards a better future and a sustainable world. BSI is proud to be at the forefront of ensuring AI’s safe and trusted integration across society.”
The new package builds on BSI’s portfolio of AI services intended to help shape trust in AI, including AI training courses to equip individuals and organizations with the knowledge and skills necessary to navigate the complex landscape of AI standards and regulations. In this rapidly evolving field, understanding the ethical, legal, and compliance aspects of AI is essential for responsible and sustainable deployment.
Algorithm testing is of paramount importance, as it directly impacts the reliability, accuracy and performance of AI systems. AI algorithms, such as machine learning models, deep neural networks and natural language processing models, underpin the decision-making processes of AI applications. BSI's rigorous testing is intended to validate these algorithms' correctness and efficiency, ensuring they produce trustworthy results and perform well in real-world scenarios.
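As an illustrative example only (not BSI's actual test methodology), one basic form of algorithm validation is measuring a model's accuracy on data it was never trained on. The Python sketch below assumes a scikit-learn classifier, a sample dataset and an arbitrary 0.9 accuracy threshold, all chosen purely for demonstration.

```python
# Minimal sketch of a held-out validation check: the dataset, model choice
# and 0.9 threshold are illustrative assumptions, not BSI test criteria.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Split the data so evaluation uses examples the model has not seen.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y
)

model = RandomForestClassifier(random_state=42)
model.fit(X_train, y_train)

# Validate on the held-out set rather than the training data.
accuracy = accuracy_score(y_test, model.predict(X_test))
print(f"Held-out accuracy: {accuracy:.3f}")
assert accuracy >= 0.9, "Model fails the illustrative accuracy threshold"
```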
BSI also offers AI assurance services to support organizations seeking to ensure their AI technologies are used responsibly and ethically. Assessments are designed to foster responsible AI practices, positioning companies for success in the AI-driven future.
BSI is progressing towards its objective of becoming a notified body for AI products that require such oversight under the EU AI Act, while also providing services to manufacturers and software providers proactively seeking AI Excellence Benchmark assessments of their AI-enabled products and AI management systems.
Manuela Gazzard, Group Director, Regulatory Services, BSI said: “As we seek to expand our AI horizons, whether in medical devices and healthcare, transport, the built environment or any other sector, it’s critical that we complement innovation and progress with safe and ethical deployment. I am delighted that BSI is developing a comprehensive package of training and oversight aligned to the ground-breaking new AI management standard to support organizations to make the most of innovation and ensure it is a force for good for society.”
BSI's recent Trust in AI Poll of 10,000 adults across nine countries found that three fifths of respondents globally want international guidelines to enable the safe use of AI. Nearly two fifths globally (38%) already use AI every day at work, while nearly two thirds (62%) expect their industries to do so by 2030. The research found that closing the 'AI confidence gap' and building trust in the technology is key to delivering its benefits for society and the planet.