Brelyon Announces ‘Visual Engine,’ a New Architecture for Screen Augmentation Built With NVIDIA NIM
Brelyon leverages NVIDIA NIM microservices for its next-generation generative monitors
Brelyon, the pioneer of immersive display technologies, today announced a proprietary rendering architecture called Visual Engine, built with NVIDIA NIM microservices, to help users re-render or augment screen interfaces more quickly and effectively in real time.
Text-to-image models work in a familiar way: users type a prompt into a text box and an image comes out. Image-to-video models let users upload an image along with a prompt to generate new video content. But what if displays could auto-generate new content and new interfaces, adapting to what users are doing in real time?
“In pioneering generative display engines using NVIDIA NIM, we’re starting a new chapter in the computer screen experience in many ways,” said Barmak Heshmat, CEO of Brelyon. “Up until now, you were limited by the UI/UX design of each software application. Every frame is rigid; every software has its own design language. For each interface, you would have to spend months learning where each button is placed and how to use each tool in each piece of software. From now on, the computer can start to learn from you and generate that interface, tailored to your needs as you work more and more with it. This lets AI upskill and reskill users in different interfaces, as opposed to completely taking over the job and running over the nuances required to perform the task.”
“But it’s not just the enterprise implications of generative displays that excite the Brelyon team; it’s also the immersiveness,” continued Heshmat. “It’s hard to fathom the massive impact these generative rendering engines are going to have on immersive experiences and gaming. We are familiar with upscaling using gen AI, but now with our approach, your screen is able to render an entire new layer of visuals on top of your game stream, fundamentally taking you from a lower-bandwidth experience to a higher-bandwidth experience and enriching your gaming in a unique way, as opposed to just making it more photoreal or rewriting it from scratch.”
One of the platforms enabled by Visual Engine is the Ultra Reality Extend display. This monitor uses the generative engine to create new content and augment it into the user's field of vision at a variety of depths. Visual Engine will also be available as standalone software in 2025, running on a variety of screens.
Powered by NVIDIA NIM
Visual Engine is a real-time content-generation platform that uses NIM microservices to achieve faster inferencing and provide the user with real-time updates when generating interactive interfaces at shade level. NIM microservices speed up inference time and allow the software to run entirely at the visual level, without any strict dependency on the APIs of different applications. This gives significant versatility for integrating AI into a vast set of interfaces and experiences that would otherwise be impossible.
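Brelyon has not published the Visual Engine API, but the general integration pattern with a NIM microservice is a low-latency HTTP request to a deployed inference endpoint. The sketch below is a minimal, hypothetical illustration of that pattern in Python: the endpoint path, payload fields, and response format are assumptions for illustration only, since each NIM documents its own request and response schema.

```python
import base64
import requests

# Hypothetical local NIM deployment; the actual host, port, and endpoint
# path depend on which NIM microservice is running and how it was launched.
NIM_URL = "http://localhost:8000/v1/infer"


def augment_frame(prompt: str, timeout_s: float = 2.0) -> bytes | None:
    """Request a generated image layer for the current screen state.

    The payload schema below is an assumption for illustration; each NIM
    defines its own request/response format.
    """
    payload = {
        "prompt": prompt,
        "steps": 4,       # few diffusion steps to keep latency interactive
        "width": 1024,
        "height": 1024,
    }
    try:
        resp = requests.post(NIM_URL, json=payload, timeout=timeout_s)
        resp.raise_for_status()
    except requests.RequestException:
        return None  # fall back to the unaugmented frame on any failure

    # Assume the microservice returns base64-encoded image data; decode it
    # so the caller can composite the generated layer over the screen.
    artifacts = resp.json().get("artifacts", [])
    return base64.b64decode(artifacts[0]["base64"]) if artifacts else None


if __name__ == "__main__":
    layer = augment_frame("subtle depth-cued overlay for a code editor")
    print("got generated layer" if layer else "no layer generated")
```

Because the call operates purely on prompts and pixels rather than application APIs, a client like this can sit alongside any interface, which is the versatility the visual-level approach is meant to provide.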