Unity Releases Enhancements for Robotics Industry
Unity, the world’s leading platform for creating and operating real-time 3D (RT3D) content, released its Object Pose Estimation demonstration, which combines computer vision and simulation technologies to illustrate how Unity’s AI and machine learning capabilities are having a real-world impact on the use of robotics in industrial settings. Object Pose Estimation and its corresponding demonstration come on the heels of recent releases aimed at supporting the prominent Robot Operating System (ROS), a flexible framework for writing robot software. Together, these Unity tools and others open the door for roboticists to safely, cost-effectively, and quickly explore, test, develop, and deploy solutions.
“This is a powerful example of a system that learns instead of being programmed, and as it learns from the synthetic data, it is able to capture much more nuanced patterns than any programmer ever could,” said Dr. Danny Lange, Senior Vice President of Artificial Intelligence, Unity. “Layering our technologies together shows how we are crossing a line, and we are starting to deal with something that is truly AI, and in this case, demonstrating the efficiencies possible in training robots.”
Simulation technology is highly effective and advantageous for testing applications in situations that are dangerous, expensive, or rare. Validating applications in simulation before deploying to the robot shortens iteration time by revealing potential issues early. The combination of Unity’s built-in physics engine and the Unity Editor can be used to create endless permutations of virtual environments, with objects governed by an approximation of the forces that act on them in the real world.
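The "endless permutations" idea is often called domain randomization: each training scene is generated from randomly sampled parameters. The sketch below illustrates the concept in plain Python, outside any engine; the parameter names (`cube_position`, `light_intensity`, etc.) are invented for illustration and do not correspond to Unity's actual APIs.

```python
import random

def random_scene(rng: random.Random) -> dict:
    """Sample one randomized virtual-environment configuration.

    Parameter names are hypothetical stand-ins for the kind of values
    a simulator would consume (object pose, lighting, camera noise).
    """
    return {
        "cube_position": [rng.uniform(-0.5, 0.5),   # x (meters)
                          rng.uniform(0.0, 0.3),    # y
                          rng.uniform(-0.5, 0.5)],  # z
        "cube_rotation_deg": rng.uniform(0.0, 360.0),
        "light_intensity": rng.uniform(0.5, 1.5),
        "camera_jitter": [rng.gauss(0.0, 0.01) for _ in range(3)],
    }

def generate_scenes(n: int, seed: int = 42) -> list:
    """Generate n scene configurations, reproducibly from a seed."""
    rng = random.Random(seed)
    return [random_scene(rng) for _ in range(n)]

if __name__ == "__main__":
    for scene in generate_scenes(3):
        print(scene)
```

Seeding the generator keeps each dataset reproducible, which matters when comparing models trained on different randomization ranges.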
The Object Pose Estimation demo follows the release of Unity’s URDF Importer, an open-source Unity package for importing a robot into a Unity scene from its URDF file; the importer takes advantage of enhanced support for articulations in Unity for more realistic kinematic simulations. It also follows Unity’s ROS-TCP-Connector, which greatly reduces the latency of messages passed between ROS nodes and Unity, allowing the robot to react in near real time to its simulated environment. Today’s demo builds on this work by showing how Unity Computer Vision tools and the recently released Perception Package can be used to create vast quantities of synthetic, labeled training data and to train a simple deep learning model to predict a cube’s position. The demo includes a tutorial on recreating the project, which can be extended by applying tailored randomizers to create more complex scenes.
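The core pattern here is that the simulator knows the ground truth, so every rendered image comes pre-labeled. The toy sketch below mimics that pipeline in pure Python with a linear model in place of a deep network: a hypothetical pinhole "camera" maps a cube's true x-position to a noisy pixel coordinate, and a closed-form least-squares fit learns to invert that mapping. The camera constants (320, 200) are invented for illustration and have no connection to Unity's Perception Package.

```python
import random

def make_dataset(n: int, seed: int = 0) -> list:
    """Generate n synthetic labeled samples (pixel_u, true_x).

    The simulated camera maps x -> 320 + 200*x pixels, plus Gaussian
    noise; the true x is the free, automatically available label.
    """
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        x = rng.uniform(-1.0, 1.0)                    # ground-truth position
        u = 320.0 + 200.0 * x + rng.gauss(0.0, 2.0)   # noisy observation
        data.append((u, x))
    return data

def fit_line(data):
    """Closed-form least squares for x ~ w*u + b."""
    n = len(data)
    mu = sum(u for u, _ in data) / n
    mx = sum(x for _, x in data) / n
    cov = sum((u - mu) * (x - mx) for u, x in data)
    var = sum((u - mu) ** 2 for u, _ in data)
    w = cov / var
    b = mx - w * mu
    return w, b

if __name__ == "__main__":
    w, b = fit_line(make_dataset(2000))
    # Recovered mapping should roughly invert the camera: w ~ 1/200, b ~ -1.6
    print(w, b)
```

A real pose-estimation pipeline would regress a full 3D pose from image pixels with a convolutional network, but the data flow is the same: sample a scene, render it, record the simulator's ground truth as the label, and fit a model.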
“With Unity, we have not only democratized data creation, we’ve also provided access to an interactive system for simulating advanced interactions in a virtual setting,” added Lange. “You can develop the control systems for an autonomous vehicle, for example, or here for highly expensive robotic arms, without the risk of damaging equipment or dramatically increasing cost of industrial installations. To be able to prove the intended applications in a high-fidelity virtual environment will save time and money for the many industries poised to be transformed by robotics combined with AI and Machine Learning.”