Artificial Intelligence | News | Insights | AiThority

Trace Machina Secures $4.7 Million Seed Round, Launches to Develop Simulation Infrastructure for Physical-World AI

Its first product, NativeLink, is open source at its core and provides engineers with an advanced staging environment for safety-critical, futuristic technologies such as self-driving cars, aviation, robotics, and other autonomous hardware systems.

Trace Machina, the company building simulation infrastructure for safety-critical technologies in physical-world AI, today launches out of stealth with $4.7 million in funding. The seed round was led by Wellington Management with participation from Samsung Next, Sequoia Capital Scout Fund, Green Bay Ventures, and Verissimo Ventures. Angel investors include Clem Delangue, CEO of Hugging Face; Mitch Wainer, Co-Founder of DigitalOcean; Gert Lanckriet, Director of Applied Machine Learning at Amazon; and other industry leaders from OpenAI and MongoDB.


Trace Machina consists of engineers and product leaders from Apple, Google, MongoDB, and Toyota Research Institute. The company's first product, NativeLink, is open source at its core and has already surpassed a thousand stars on GitHub, with contributing engineers from Tesla, General Motors (Cruise), Samsung, and X. NativeLink provides an advanced staging environment for futuristic technologies where safety is paramount and testing is critical, such as self-driving cars, aviation, robotics, and other autonomous hardware systems. NativeLink's distinguishing capability is bringing AI workloads to the edge, effectively turning local devices into supercomputers.

NativeLink UI

“We’re enabling the next generation to more easily build futuristic technologies like what you read about in sci-fi novels or see in films,” said Marcus Eagan, CEO and Co-Founder of Trace Machina. “This has historically been unattainable or uneconomical due to limitations of existing infrastructure and tools. We’re moving beyond machine learning solely focused on language and pattern matching to a new wave of AI that’s more human-like in its ability to maneuver around obstacles and modify objects.”

Prior to founding Trace Machina, Eagan worked on MongoDB Atlas Vector Search, the company’s first AI product. He’s also contributed to some of the largest and most widely adopted open source projects in the world such as Lucene, Solr, and Superset. Most recently, Eagan finished a project with Nvidia that enables engineers to run Lucene on GPUs and index data 20X faster. Nathan Bruer, Chief Architect and Co-Founder of Trace Machina, worked on secret projects at Google’s X and built autonomous driving software as an engineer at the Toyota Research Institute.

“NativeLink is providing critical infrastructure for the industries of tomorrow such as aerospace and autonomous mobility,” said Van Jones, Deal Lead at Wellington Access Ventures. “Over 100 companies are using NativeLink’s Cloud service to significantly enhance their complex builds. They are taking advantage of simulation infrastructure that simply never existed before.”



Key benefits of NativeLink include:

AI at the edge: For developers building native mobile applications, NativeLink's technology is free and avoids the costs and latency associated with traditional cloud infrastructure. This typically unlocks cloud cost savings of 50% to 70% for next-generation companies that need to test their code on GPUs in order to build AI applications.

Increased productivity: NativeLink's tight feedback loop for building futuristic systems speeds up compilation, testing, simulation, and other workflows by up to 80%. It does this through efficient caching, which avoids rerunning unchanged code, and parallel processing with remote execution; together, these reduce expensive test flakiness on GPUs and other specialized hardware.

Built for scale: NativeLink's Rust-based architecture eliminates memory errors, race conditions, and stability issues at scale, improving the reliability of critical development pipelines where rerunning tests on GPUs inflates cloud costs.
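To illustrate the caching and remote-execution workflow described above: remote build caches of this kind are typically wired into a build tool such as Bazel through a handful of standard flags. The flags below are standard Bazel remote-execution options; the endpoint URLs are hypothetical placeholders, and whether a given NativeLink deployment exposes these exact endpoints is an assumption, not something stated in this article.

```
# .bazelrc — sketch only; endpoint URLs are hypothetical placeholders.
# Reuse previously built artifacts instead of rerunning unchanged steps:
build --remote_cache=grpcs://cache.example.com

# Fan builds and tests out to remote workers for parallel execution:
build --remote_executor=grpcs://scheduler.example.com

# Allow longer timeouts for large GPU test actions:
build --remote_timeout=600
```

With a configuration along these lines, a rerun of an unchanged target is served from the cache rather than rebuilt, which is the mechanism behind the speedups the list above describes.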

Leading companies are already leveraging the power of NativeLink in production, including Menlo Security, CIQ (the founding sponsor of Rocky Linux), and Samsung, which is also an investor in the company. Samsung uses NativeLink to compile, test, and validate software at the operating-system level for over a billion devices, with NativeLink servicing billions of requests per month.

“NativeLink has been instrumental in reducing our longest build times from days to hours and significantly improving our engineering efficiency,” said David Barr, Principal Engineer at Samsung. “Its scalable infrastructure and robust caching capabilities have been game-changers for our team – we’re excited about the future possibilities with NativeLink.”


