Seekr Launches AI Evaluation Product to Enable Compliance with the President’s AI Action Plan

New AI evaluation ecosystem SeekrGuard helps organizations identify adverse model behaviors and ensures the U.S. stays at the forefront of evaluating national security risks in frontier models

Seekr, a leader in explainable and trustworthy artificial intelligence designed to power mission‑critical decisions in enterprises and government, announced SeekrGuard, a product for evaluating and certifying AI models. SeekrGuard moves beyond generic benchmarks by delivering model evaluation and interrogation capabilities that measure bias, accuracy, and reliability, with transparent risk scoring, flexible testing, custom evaluators, and audit‑ready governance grounded in each organization’s own data, policies, and operational requirements. SeekrGuard’s model penetration testing is a critical advancement in detecting adverse model behavior.

“When the President released America’s AI Action Plan, it was made very clear that an evaluation ecosystem was needed to prevent national security risks and ensure America remains at the very forefront of AI. Seekr answers this call with SeekrGuard,” said Rob Clark, President of Seekr.

The Risk of Adverse AI Models
The rapid spread of unvetted AI models is exposing U.S. systems and global enterprises to adversarial manipulation, embedded bias, and strategic vulnerabilities at scale. According to McKinsey’s 2024 State of AI survey, roughly two‑thirds of organizations now report regular use of generative AI in at least one business function, a sharp increase from the prior year and a clear sign that adoption is outpacing governance. When AI models and large language models are deployed without rigorous evaluation, they can jeopardize critical decisions and core systems, introducing bias, weak oversight, and openings for manipulation that erode trust in both public institutions and private companies.

How Seekr Is Redefining AI Risk Assessment
Unlike static public leaderboards that rely on fixed, generic datasets, SeekrGuard gives control back to the organizations that deploy AI, so they can continuously re‑evaluate models as threats, policies, and business conditions change.

SeekrGuard is designed to fix the key gaps in traditional AI risk assessment by using the organization’s own context as the benchmark:

  • Clear scoring. Transparent benchmarking produces side‑by‑side scorecards across real‑world scenarios for every model under evaluation.
  • Quantified model risk. Custom risk profiles let teams define their own risk frameworks and convert them into mission‑ or business‑specific risk scores (see the sketch after this list).
  • Flexible testing. Users can mix and match datasets, evaluators and both open‑weight and proprietary models to run targeted, domain‑specific tests at scale.
  • Custom evaluators and data. Teams can quickly build custom evaluators for edge cases and use Seekr’s AI‑Ready Data Engine in SeekrFlow to turn their own documents into model test datasets on any topic.
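
The custom risk profiles described above boil down to weighting evaluator results into a single comparable number per model. The sketch below is a minimal illustration of that idea in Python; the evaluator names, weights, and scores are assumptions of our own, not SeekrGuard’s actual interface, which the announcement does not describe.

```python
# Illustrative sketch only: evaluator names, weights, and scores below are
# hypothetical assumptions, not Seekr's SeekrGuard API.
from dataclasses import dataclass


@dataclass
class EvaluatorResult:
    name: str      # e.g. "bias", "accuracy", "jailbreak_resistance"
    score: float   # 0.0 (worst) to 1.0 (best) on the evaluator's own scale


def composite_risk(results: list[EvaluatorResult], weights: dict[str, float]) -> float:
    """Fold per-evaluator scores into one 0-100 risk score (higher = riskier).

    The weights stand in for an organization's own risk framework, so the
    same models can rank differently under different risk profiles.
    """
    total = sum(weights.get(r.name, 0.0) for r in results)
    if total == 0:
        raise ValueError("no evaluator matches the risk profile")
    # Risk is the weighted average shortfall from a perfect evaluator score.
    shortfall = sum(weights.get(r.name, 0.0) * (1.0 - r.score) for r in results)
    return 100.0 * shortfall / total


# Side-by-side scorecard for two models under one assumed risk profile.
weights = {"bias": 0.40, "accuracy": 0.35, "jailbreak_resistance": 0.25}
model_a = [EvaluatorResult("bias", 0.92), EvaluatorResult("accuracy", 0.88),
           EvaluatorResult("jailbreak_resistance", 0.75)]
model_b = [EvaluatorResult("bias", 0.70), EvaluatorResult("accuracy", 0.95),
           EvaluatorResult("jailbreak_resistance", 0.60)]
for label, results in [("model-a", model_a), ("model-b", model_b)]:
    print(f"{label}: risk {composite_risk(results, weights):.1f}/100")
```

Because risk here is the weighted shortfall from a perfect score, changing the weights re‑ranks the same models, which is the practical point of letting each organization define its own risk framework rather than relying on a fixed public leaderboard.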
