allegro.ai to Showcase Its Deep Learning Perception Platform at the Intel Partner Booth During the Embedded Vision Summit
Deep learning computer vision startup allegro.ai is set to showcase its latest product offering, hosted at the Intel partner booth (#307), during the Embedded Vision Summit, taking place in Santa Clara, California, May 20-23, 2019.
Founded in 2016, allegro.ai offers the world’s first end-to-end deep learning lifecycle management solution suite focused on perception. The company’s platform and product suite simplify the development and management of deep learning-powered perception solutions for use cases such as autonomous vehicles, medical imaging, drones, security, and logistics.
allegro.ai will showcase its solution suite, which offers capabilities for experiment management, data management, quality control, continuous learning and more. The platform gives engineering and product managers the visibility and control they need, while freeing research scientists to focus their time on research and creative output. The result is meaningfully higher quality products, faster time-to-market, increased returns to scale, and materially lower costs.
allegro.ai was founded by seasoned technology veterans leading a team with extensive expertise in computer vision, deep learning, and embedded and high-performance computing. The company’s investors include Robert Bosch Venture Capital GmbH, Samsung Catalyst Fund, Hyundai Motor Company, and other venture funds.
In addition to the showcase at the Intel partner booth, Moses Guttmann, Chief Technology Officer and co-founder of allegro.ai, will be a speaker at the summit. Guttmann, a 20-year technology veteran, is a computer vision and deep learning expert and visionary, with a track record of spearheading innovation in computer vision, perception and deep learning.
Guttmann will discuss "Optimizing SSD Object Detection for Low-Power Devices," describing in detail a data-centric optimization approach to SSD. allegro.ai’s approach drastically lowers the number of priors (“anchors”) needed for detection, and thus linearly decreases the time spent on this costly part of the computation. As a result, specialized processors and custom hardware can be better utilized, yielding higher performance and lower latency regardless of the specific hardware used.
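To see why reducing the prior count pays off linearly, consider how priors are counted in a standard SSD detector. The sketch below is a generic illustration using the canonical SSD300 configuration, not allegro.ai's proprietary method: the detection head does a fixed amount of work per prior, so total cost scales directly with the number of priors.

```python
# Illustrative sketch (not allegro.ai's actual optimization): counting SSD priors.
# In SSD, each feature map cell predicts a fixed number of default boxes
# ("priors"), so the total prior count is the sum over feature maps of
# (cells * boxes per cell). Values below follow the original SSD300 layout.

SSD300_FEATURE_MAPS = [
    # (square grid size, default boxes per cell)
    (38, 4),
    (19, 6),
    (10, 6),
    (5, 6),
    (3, 4),
    (1, 4),
]

def count_priors(feature_maps):
    """Total priors = sum of grid cells times boxes per cell."""
    return sum(size * size * boxes for size, boxes in feature_maps)

def detection_head_work(num_priors, num_classes):
    """Per-prior work (class scores + 4 box offsets) scales
    linearly with the number of priors."""
    return num_priors * (num_classes + 4)

total = count_priors(SSD300_FEATURE_MAPS)
print(total)  # 8732 priors for the standard SSD300 layout
```

Halving the prior count halves `detection_head_work` for any class count, which is the linear saving the talk abstract refers to.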