
Avgidea Releases Greenative, a Tool for Building and Operating LLMs

Avgidea, a developer and consultant for optimized cloud-native services, has released Greenative (patent pending in Japan), a new tool for building and operating LLMs in your own infrastructure environment and providing interactive chat services to users.

After installing a base LLM such as Llama 2 from the console, you can build your own LLMs by importing arbitrary data and fine-tuning on it. You can then associate the model with a chat thread and start using it immediately.
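To make that workflow concrete, the sketch below shows a comparable pattern with the open-source Hugging Face transformers library: load a chat-tuned Llama 2 checkpoint and serve a single chat turn. The checkpoint name and prompt are illustrative assumptions, and Greenative performs the equivalent steps through its console rather than exposing code like this.

from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed example checkpoint; any chat-tuned base model plays the same role.
model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# One chat turn in Llama 2's instruction format.
prompt = "[INST] Summarize the benefits of running an LLM on our own infrastructure. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))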

Greenative Features

Infrastructure environment: Greenative can run on a variety of infrastructure environments, including on-premises, virtualized environments, and public cloud. When deployed on a completely isolated infrastructure environment dedicated to your organization, it also reduces the risk of data leakage.


Base model installation: Llama 2, ELYZA, OpenChat, and other models can be installed as base models in the Greenative environment with a single click. Additional base models will be made available on Greenative over time.

Natural language processing: Base and custom models can be associated with chat threads for immediate use as soon as they are created. They support a range of natural language processing tasks such as summarization, text generation, and question answering.
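As a rough illustration of that task variety (not Greenative's own API), the snippet below sends summarization, text-generation, and question-answering prompts to the same chat-tuned model through the transformers pipeline; the model name and prompt texts are assumptions.

from transformers import pipeline

# Assumed example checkpoint; in Greenative, any installed base or custom model
# associated with a chat thread would fill this role.
chat = pipeline("text-generation", model="meta-llama/Llama-2-7b-chat-hf", device_map="auto")

tasks = [
    "[INST] Summarize the following meeting notes in three bullet points: ... [/INST]",
    "[INST] Draft a short internal announcement about the new expense policy. [/INST]",
    "[INST] Based on the pasted FAQ, how long are customer logs retained? ... [/INST]",
]
for prompt in tasks:
    print(chat(prompt, max_new_tokens=150)[0]["generated_text"])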

Prompt / data registration: Prompts and text files can be imported into Greenative and used for fine-tuning the base model.


Fine-tuning: Create your own LLMs by fine-tuning the base model on the registered prompts and data.
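A minimal sketch of what such fine-tuning could look like outside the console, assuming the registered prompts and text files are exported as a JSON-lines file of text records and using LoRA adapters from the peft library (Greenative's internal training pipeline is not public, so every name and path here is illustrative):

import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_id = "meta-llama/Llama-2-7b-chat-hf"   # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_id)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
# Train small LoRA adapters instead of all base weights to keep fine-tuning affordable.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

# "prompts.jsonl" stands in for the prompts and text files registered in Greenative;
# each line is assumed to be a {"text": "..."} record.
dataset = load_dataset("json", data_files="prompts.jsonl", split="train")
dataset = dataset.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                      remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="custom-model", num_train_epochs=1,
                           per_device_train_batch_size=2, logging_steps=10),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("custom-model")   # adapter weights that back the custom model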

GPU resource utilization: Inference and fine-tuning can be offloaded to GPUs for faster task completion. Greenative assigns appropriate GPU resources to each task for efficient use of infrastructure resources.
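In plain PyTorch terms (rather than Greenative's internal scheduler, which is not public), offloading a task amounts to placing the model on an available accelerator and falling back to CPU otherwise; a hedged sketch with an assumed checkpoint:

import torch
from transformers import AutoModelForCausalLM

# Use a GPU if one is visible, otherwise fall back to CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",                       # assumed checkpoint
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)
print(f"Inference device: {device}; GPUs visible: {torch.cuda.device_count()}")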


Usage scenarios of Greenative

Use of sensitive data: When sensitive or confidential data is difficult to take outside the company, Greenative can be deployed on the company's own infrastructure to keep all data, queries, and fine-tuned models in an isolated environment.

Custom model building and operation: Custom models can easily be made available to in-house users by fine-tuning the base model from the console, without requiring advanced AI model development or operations skills.
Consulting on building custom models with Greenative, as well as infrastructure construction and operation on a public cloud (IaaS), can also be outsourced to Avgidea.


