Artificial Intelligence | News | Insights | AiThority

Mattermost Introduces “OpenOps” to Speed Responsible Evaluation of Generative AI Applied to Workflows

Mattermost, the secure collaboration platform for technical teams, announced the launch of “OpenOps”, an open-source approach to accelerating the responsible evaluation of AI-enhanced workflows and usage policies while maintaining data control and avoiding vendor lock-in.
OpenOps emerges at the intersection of the race to leverage AI for competitive advantage and the urgent need to run trustworthy operations, including developing usage and oversight policies and ensuring regulatory and contractually obligated data controls.
It aims to help clear key bottlenecks between these critical concerns by enabling developers and organizations to self-host a “sandbox” environment with full data control to responsibly evaluate the benefits and risks of different AI models and usage policies on real-world, multi-user chat collaboration workflows.


The system can be used to evaluate self-hosted LLMs listed on Hugging Face, including Falcon LLM and GPT4All, when usage is optimized for data control, as well as hyperscaled, vendor-hosted models from the Azure AI platform, OpenAI ChatGPT and Anthropic Claude when usage is optimized for performance.
The first release of the OpenOps platform enables evaluation of a range of AI-augmented use cases including:
Automated Question and Answer: During collaborative and individual work, users can ask questions of generative AI models, either self-hosted or vendor-hosted, to learn about the different subject matters the model supports.
Discussion Summarization: AI-generated summaries can be created from self-hosted, chat-based discussions to accelerate information flows and decision-making while reducing the time and cost required for organizations to stay up-to-date.
Contextual Interrogation: Users can ask follow-up questions to thread summaries generated by AI bots to learn more about the underlying information without going into the raw data. For example, a discussion summary from an AI bot about a certain individual making a series of requests about troubleshooting issues could be interrogated via the AI bot for more context on why the individual made the requests and how they intended to use the information.
Sentiment Analysis: AI bots can analyze the sentiment of messages, which can be used to recommend and deliver emoji reactions on those messages on a user’s behalf. For example, after detecting a celebratory sentiment an AI bot may add a “fire” emoji reaction indicating excitement.
Reinforcement Learning from Human Feedback (RLHF) Collection: To help evaluate and train AI models, the system can collect feedback from users on responses to different prompts and models by recording the “thumbs up/thumbs down” signals end users select. The data can later be used both to fine-tune existing models and to provide input for evaluating alternate models on past user prompts.
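The article does not publish OpenOps’ internal schema for this feedback data; as a minimal illustrative sketch (all names and fields here are hypothetical, not OpenOps code), a thumbs up/thumbs down signal could be captured alongside the prompt, model, and response it refers to:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class FeedbackRecord:
    """One thumbs up/thumbs down signal on a model response (hypothetical schema)."""
    prompt: str
    model: str       # e.g. "gpt4all" or "claude" -- illustrative identifiers only
    response: str
    signal: int      # +1 = thumbs up, -1 = thumbs down
    timestamp: str   # ISO 8601, UTC

def record_feedback(log: list, prompt: str, model: str,
                    response: str, thumbs_up: bool) -> FeedbackRecord:
    """Append one feedback record so it can later drive fine-tuning
    or side-by-side evaluation of alternate models on past prompts."""
    rec = FeedbackRecord(
        prompt=prompt,
        model=model,
        response=response,
        signal=1 if thumbs_up else -1,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    log.append(rec)
    return rec

# Usage: accumulate signals in a shared log during a pilot
log = []
record_feedback(log, "Summarize this thread", "gpt4all",
                "The team agreed to defer the release.", thumbs_up=True)
```

Keeping the prompt and model identifier with each signal is what makes the second use case possible: the same prompts can be replayed against a candidate model and the historical signals used as a baseline for comparison.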
This open source, self-hosted framework offers a “Customer-Controlled Operations and AI Architecture,” providing an operational hub for coordination and automation with AI bots connected to interchangeable, self-hosted Generative AI and LLM backends from services like Hugging Face that can scale up to private cloud and data center architectures, as well as scale down to run on a developer’s laptop for research and exploration. At the same time, it can also connect to hyperscaled, vendor-hosted models from the Azure AI platform as well as OpenAI.
“Every organization is in a race to define how AI accelerates their competitive advantage,” says Mattermost CEO Ian Tien. “We created OpenOps to help organizations responsibly unlock their potential with the ability to evaluate a broad range of usage policies and AI models in their ability to accelerate in-house workflows in concert.”


The OpenOps framework recommends a four-phase approach to developing AI augmentations:
1 – Self-Hosted Sandbox – Have technical teams set up a self-hosted “sandbox” environment as a safe space with data control and auditability to explore and demonstrate Generative AI technologies. The OpenOps sandbox can include just web-based multi-user chat collaboration, or be extended to include desktop and mobile applications, integrations from different in-house tools to simulate a production environment, as well as integration with other collaboration environments, such as specific Microsoft Teams channels.
2 – Data Control Framework – Technical teams conduct an initial evaluation of different AI models on in-house use cases and set a starting point for usage policies covering data control, based on whether models are self-hosted or vendor-hosted and, for vendor-hosted models, on the data handling assurances they offer. For example, data control policies could range from completely blocking vendor-hosted AIs, to blocking the suspected use of sensitive data such as credit card numbers or private keys, to custom policies encoded into the environment.
3 – Trust, Safety and Compliance Framework – Trust, safety and compliance teams are invited into the sandbox environment to observe and interact with the initial AI-enhanced use cases and work with technical teams to develop usage and oversight policies in addition to data control. For example, they might set guidelines on whether AI can be used to help managers write performance evaluations for their teams, or whether techniques for developing malicious software can be researched using AI.
4 – Pilot and Production – Once a baseline for usage policies and initial AI enhancements are available, a group of pilot users can be added to the sandbox environment to assess the benefits of the augmentations. Technical teams can iterate on adding workflow augmentations using different AI models while trust, safety and compliance teams can monitor usage with full auditability and iterate on usage policies and their implementations. As the pilot system matures, the full set of enhancements can be deployed to production environments that run on a productionized version of the OpenOps framework.
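The data control policies described in phase 2 (for example, blocking the suspected use of credit card numbers or private keys before a message reaches a vendor-hosted model) could be prototyped with simple pattern checks. This is a minimal sketch under that assumption, not the OpenOps implementation, and the patterns shown are illustrative rather than production-grade detection:

```python
import re

# Illustrative patterns a policy might block from vendor-hosted models;
# a real data control framework would need far more robust detection.
BLOCKED_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def check_outbound_message(text: str) -> list:
    """Return the names of policy rules the message violates.

    An empty list means the message may be sent to a vendor-hosted AI;
    a non-empty list means it should be held back or routed to a
    self-hosted model instead.
    """
    return [name for name, pattern in BLOCKED_PATTERNS.items()
            if pattern.search(text)]

# Usage: screen a chat message before it leaves the data control boundary
violations = check_outbound_message("card: 4111 1111 1111 1111")
```

Encoding policies as data (a named table of rules) rather than inline logic is what lets trust, safety and compliance teams iterate on them during the pilot phase without code changes.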
The OpenOps framework includes the following capabilities:
Self-Hosted Operational Hub: OpenOps allows for self-hosted operational workflows on a real-time messaging platform across web, mobile and desktop from the Mattermost open-source project. Integrations with in-house systems and popular developer tools help enrich AI backends with critical, contextual data. Workflow automation accelerates response times while reducing error rates and risk.
AI Bots with Interchangeable AI Backends: OpenOps enables AI bots to be integrated into operations while connected to an interchangeable array of AI platforms. For maximum data control, work with self-hosted, open-source LLM models including GPT4All and Falcon LLM from services like Hugging Face. For maximum performance, tap into third-party AI frameworks including OpenAI ChatGPT, the Azure AI Platform and Anthropic Claude.
Full Data Control: OpenOps enables organizations to self-host, control, and monitor all data, IP, and network traffic using their existing security and compliance infrastructure. This allows organizations to develop a rich corpus of real-world training data for future AI backend evaluation and fine-tuning.
Free and Open Source: Available under the MIT and Apache 2 licenses, OpenOps is a free, open-source system, enabling enterprises to easily deploy and run the complete architecture.
Scalability: OpenOps offers the flexibility to deploy on private clouds, data centers, or even a standard laptop. The system also removes the need for specialized hardware such as GPUs, broadening the number of developers who can explore self-hosted AI models.
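The “interchangeable AI backends” capability above can be pictured as bots coding against a thin interface rather than a specific model. The sketch below is a hypothetical illustration of that design, assuming nothing about OpenOps’ actual code; the class and method names are invented for the example:

```python
from abc import ABC, abstractmethod

class AIBackend(ABC):
    """Minimal interface a chat bot codes against, so self-hosted and
    vendor-hosted models can be swapped without changing bot logic."""
    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...

class SelfHostedBackend(AIBackend):
    """Stand-in for a locally run LLM, chosen for maximum data control."""
    def complete(self, prompt: str) -> str:
        return f"[local model reply to: {prompt}]"

class VendorBackend(AIBackend):
    """Stand-in for a hyperscaled vendor API, chosen for maximum performance."""
    def complete(self, prompt: str) -> str:
        return f"[vendor model reply to: {prompt}]"

def summarize_thread(backend: AIBackend, messages: list) -> str:
    """Bot logic depends only on the interface, not on which backend is wired in."""
    prompt = "Summarize:\n" + "\n".join(messages)
    return backend.complete(prompt)

# Usage: the same bot function runs against either backend
summary = summarize_thread(SelfHostedBackend(), ["msg one", "msg two"])
```

Because the bot only sees `AIBackend`, an evaluation sandbox can route the same workflow through several models in turn and compare results, which is the comparison workflow the four-phase approach describes.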


