Artificial Intelligence | News | Insights | AiThority

MIT Researchers Use OpenAI Codex to Build an ML-based Mathematics Problem Generator

MIT researchers used program synthesis and few-shot learning techniques with OpenAI Codex to solve randomly selected university-level mathematics problems.

OpenAI Codex is one of the most powerful language-to-code neural network platforms, built on GPT-3, for high-speed programming. OpenAI Codex is used in a large number of AI and machine learning projects within a safe AGI environment. As demand for Codex programmers increases, a growing number of AI researchers are also turning to OpenAI’s GPT-3 offering to deepen their understanding of neural networks for complex problems. In one such development, a group of machine learning researchers and faculty members from MIT, Columbia University, Harvard University, and the University of Waterloo has built a machine learning algorithm using OpenAI Codex. The new algorithm can solve, explain, and generate complex mathematical problems. The university-level problems it handles are drawn from the largest MIT mathematics courses.

MIT research on OpenAI Codex

The researchers at MIT developed three innovations to improve the accuracy of solving university-level mathematical problems. By using OpenAI Codex for these problems, MIT researchers were able to demonstrate AI’s role in automated course evaluation and mathematical question generation for large-scale examinations. Considered a major milestone in MIT’s AI research program, this new project could be used to sample different types of problems, equations, and plots.

How Is MIT’s Codex-based AI Algorithm Better than Other GPT-3 Language Models?

In most cases, the latest GPT-3 language models solved only 18.8% of the problems using zero-shot learning. With few-shot learning, performance improved only slightly, to 30.8% of the university questions. In comparison, the pre-trained Codex model with few-shot learning automatically solved 81% of the same randomly selected questions.
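Few-shot learning here means prepending a handful of worked question-and-program pairs ahead of the new question, so the model continues the pattern by emitting a program. A minimal sketch of assembling such a prompt; the worked examples and question below are illustrative, not taken from the MIT dataset:

```python
# Hypothetical worked examples: (question, solution program) pairs.
EXAMPLES = [
    ("Find the derivative of x**2.",
     "import sympy as sp\nx = sp.symbols('x')\nprint(sp.diff(x**2, x))"),
    ("Integrate sin(x) from 0 to pi.",
     "import sympy as sp\nx = sp.symbols('x')\nprint(sp.integrate(sp.sin(x), (x, 0, sp.pi)))"),
]

def build_few_shot_prompt(question: str) -> str:
    """Concatenate worked (question, program) pairs ahead of the new question.

    The model is expected to complete the final, unanswered question
    with a program, mirroring the examples above it.
    """
    parts = []
    for q, program in EXAMPLES:
        parts.append(f"# Question: {q}\n{program}\n")
    parts.append(f"# Question: {question}\n")  # model completes from here
    return "\n".join(parts)

prompt = build_few_shot_prompt("Solve x**2 - 4 = 0 for x.")
print(prompt)
```

The exact prompt format used by the MIT team differs; this only illustrates the few-shot structure that lifts performance over zero-shot prompting.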

MIT researchers illustrate how a single machine-learning model can solve these example problems across a wide variety of mathematics courses at scale (PNAS).

Why Did the MIT Researchers Succeed?



Traditionally, transformers like GPT-3 are trained on text alone. While this makes them extremely successful at modern NLP tasks in zero-shot and few-shot settings, they have failed to deliver reliable results on mathematical problems. Even with improved few-shot training and chain-of-thought (CoT) prompting, GPT-3 transformers fell short on the MATH benchmark. Another problem with pre-trained transformers for mathematics is the very high cost of computation: researchers cannot rely on costly models to evaluate and generate university-level problems, even if doing so would save time and resources.

When MIT researchers applied OpenAI Codex, a transformer pre-trained on text and fine-tuned on code, to MIT mathematics problems, they achieved a significantly higher accuracy rate of 81%.
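The program-synthesis idea can be pictured as a two-step loop: the model emits a short program for a question, and the pipeline executes that program to obtain the answer. A minimal sketch follows, with a hand-written stand-in for the model's output; in the real pipeline the program string would come from Codex, and the question shown is hypothetical:

```python
import math

# Stand-in for a program a code model might emit for the question
# "Solve x**2 - 5*x + 6 = 0". Hand-written here for illustration only.
generated_program = (
    "import math\n"
    "a, b, c = 1, -5, 6\n"
    "d = math.sqrt(b*b - 4*a*c)\n"
    "answer = sorted([(-b - d) / (2*a), (-b + d) / (2*a)])\n"
)

# Execute the synthesized program in a fresh namespace and read its result.
namespace = {}
exec(generated_program, namespace)
print(namespace["answer"])  # the two roots of the quadratic
```

Executing generated code rather than asking the model to state the answer directly is what lets the pipeline handle exact arithmetic and symbolic steps that plain text generation tends to get wrong.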

Role of Python and Other Programming Languages in this MIT Project

MIT researchers used the Python programming language and libraries such as SymPy, SciPy, and NumPy for automatic contextualization. Where plotting was required, they used Matplotlib. Using Codex, the researchers were also able to generate new questions. Students who took the tests were surveyed to compare the difficulty of human-written questions with machine-generated ones.
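As an illustration of the kind of program such a pipeline executes, here is a short SymPy snippet that answers a sample calculus question symbolically. The question is illustrative, not taken from the MIT course dataset:

```python
import sympy as sp

x = sp.symbols('x')

# Sample question: "Evaluate the integral of exp(-x**2) from 0 to infinity."
# SymPy computes the exact symbolic value rather than a numeric estimate.
integral = sp.integrate(sp.exp(-x**2), (x, 0, sp.oo))
print(integral)  # sqrt(pi)/2
```

Symbolic evaluation of this kind is what the SymPy/SciPy/NumPy layer contributes: exact answers that can be checked automatically against course solutions.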

Overall, MIT researchers succeeded in combining neural network synthesis techniques with modern programming languages to solve a broad range of mathematical problems and automatically generate machine-written questions, with excellent results.

Source: MIT News


