
Diffblue Launches Test Review: New Feature Gives Developers Versatility in Unit Testing Workflows

Diffblue, creators of the world’s first fully-autonomous AI agent for unit testing, today released Test Review, a new feature that allows developers to edit, analyze, and verify Diffblue’s AI-generated unit tests. The feature, which has been in early access for the past six months, is now available to all users of Diffblue’s flagship product, Diffblue Cover.



Despite high rates of adoption of AI-for-code tools, software developers’ trust in AI-generated code remains low. A recent survey of Google developers found that nearly 40% lack confidence in AI-generated code, even as 75% use it daily. Similarly, a Stack Overflow study found that only about 2% of professional developers have confidence in the accuracy of AI-generated code, and nearly half (45%) believe that AI tools are bad or very bad at handling complex tasks. These discrepancies underscore a tension between productivity gains and reliability concerns.

Test Review is specifically designed to tackle this point of friction, empowering developers to make informed decisions about whether to accept AI-generated tests into their codebase. Through human-AI collaboration, developers can build greater trust in AI via a more iterative test review process and a stronger sense of ownership over code quality.

Furthermore, developers gain the flexibility to either benefit from fully agentic AI unit test generation or scrutinize test creation as part of a more collaborative, assistive human-AI approach to unit testing. For example, new test generation may benefit from closer oversight, while test maintenance may be fully delegated to an AI agent, saving hours of time. And for a legacy codebase, developers can use Diffblue Cover’s fully autonomous mode to comprehensively bulk-generate unit tests for an entire codebase with a single command.


“With a best-in-class product that fully automates unit testing, you might be wondering: why launch a feature that requires some level of manual work?” said Peter Schrammel, Co-Founder & CTO of Diffblue. “Well, the answer is that not everyone is ready to put their full trust into an AI agent, no matter how sophisticated. We’ve recognized that, and wanted to give our users options in terms of how they integrate Diffblue Cover into their workflows – either as an autonomous AI agent, or with a more iterative workflow.”

Designed as an interactive mode alongside Diffblue’s fully-autonomous agent, Test Review enables a more iterative workflow with higher levels of developer input and control. In Test Review mode, developers can inspect each test and then accept them all in one action – the tests are integrated into the codebase, compile, and work without any further developer intervention. When users don’t wish to keep a test suggestion, for whatever reason, they can reject it in one click or amend it. Both interaction modes are available to all users of the IntelliJ IDE, so they can select whichever best suits their project requirements.


Test Review will also help advance Diffblue Cover’s high level of accuracy and reliability: the human validation and feedback gathered through the feature will be anonymized and used to continuously improve Diffblue’s agentic AI algorithms.

“We hope to win over developers who are apprehensive about integrating a fully-autonomous agent into their development workflow,” continued Schrammel. “By lowering the barrier to adoption, developers can ease into an AI-powered iterative unit testing workflow, and ultimately, evolve into full autonomy and the remarkable scalability that results from it.”

Automating unit test generation 250x faster than a human developer, Diffblue Cover is widely adopted by medium-to-large enterprises – including four of the 10 largest U.S. banks and several other members of Forbes’ Global 2000 – to increase team productivity, expand test coverage, and deliver exceptional code quality.

Compared to the LLM-based GitHub Copilot, Diffblue Cover uses reinforcement learning for higher accuracy and productivity. Diffblue’s recent benchmarking study found that its unit test generation agent is 26x more productive than Copilot, creating 10x more tests than a Copilot-assisted developer in the same amount of time.

