
Truveta Research Published in Radiology Advances Introduces New AI Model to Estimate Body Composition From Chest Radiographs


Truveta is proud to announce the publication of its latest peer-reviewed research in Radiology Advances: “XComposition: Multimodal Deep Learning Model to Measure Body Composition Using Chest Radiographs and Clinical Data.” This study demonstrates the power of artificial intelligence to estimate critical body composition measures—such as visceral and subcutaneous fat volumes—from a simple chest radiograph combined with commonly available clinical data. The deep learning model is available as a Python library for others to experiment with on Truveta’s GitHub.

Key findings

The research team developed a multimodal deep learning model that integrates chest radiographs (CXR) with four basic clinical variables (age, sex at birth, height, and weight) to estimate body composition metrics typically measured by CT scans. The study analyzed data from more than 1,100 patients across a subset of Truveta member health systems in the US.

  • The multimodal model accurately estimated subcutaneous fat volume (Pearson’s R: 0.85) and visceral fat volume (Pearson’s R: 0.76).
  • A late fusion strategy, combining imaging and clinical data at the decision level, yielded the best results (p < 0.04 for subcutaneous fat volume).
  • The multimodal model outperformed imaging-only and clinical-only approaches across all key body composition metrics (p < 0.001 for subcutaneous fat volume).
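To illustrate the decision-level ("late") fusion strategy described above, here is a minimal, hypothetical Python sketch: each branch produces its own estimate, and the estimates are blended only at the end. The function names, stand-in models, and the weighted-average fusion rule are illustrative assumptions, not Truveta's actual XComposition code.

```python
# Hypothetical sketch of late (decision-level) fusion for body composition
# estimation. Both branch models are trivial stand-ins for illustration.

def imaging_branch(cxr_features):
    # Stand-in for a CNN regressor over chest radiograph features.
    return sum(cxr_features) / len(cxr_features)

def clinical_branch(age, sex, height_cm, weight_kg):
    # Stand-in for a small regressor over the four clinical variables
    # (age, sex at birth, height, weight); here it just returns BMI.
    return weight_kg / (height_cm / 100) ** 2

def late_fusion(img_pred, clin_pred, w=0.6):
    # Decision-level fusion: each branch has already produced a prediction;
    # combine them with a (here fixed, in practice learned) weight w.
    return w * img_pred + (1 - w) * clin_pred

img_pred = imaging_branch([1.2, 0.8, 1.0])
clin_pred = clinical_branch(age=50, sex=1, height_cm=170, weight_kg=80)
fused = late_fusion(img_pred, clin_pred)
print(fused)
```

In contrast, an early-fusion design would concatenate image features and clinical variables into one input vector before any prediction is made; the study found the decision-level approach performed best.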



Why it matters

Body composition is an important predictor of cardiovascular disease, diabetes, and cancer prognosis. Traditional methods to measure these metrics—such as MRI or CT—are expensive, resource-intensive, and not always accessible to patients. This study shows that a chest radiograph, one of the most common and widely available imaging tests, can serve as a low-cost, scalable tool for estimating body composition when combined with AI.

“Our work shows that we can unlock clinically meaningful insights from a chest X-ray—an exam that millions of people receive each year,” said Ehsan Alipour, MD, PhD, a machine learning post-doctoral researcher at Truveta and lead author of the study. “By combining imaging with just a few simple clinical variables, we created a powerful, accessible way to estimate body composition that could help improve screening, research, and ultimately patient care.”

This study leveraged Truveta Data, the most complete, timely, and representative dataset of de-identified electronic health records (EHR) in the US, contributed by a collective of leading health systems. Imaging data were linked with clinical variables across health systems, enabling the development and validation of this multimodal AI model.


