Google Develops AI System to Leverage Sound in Detecting Lung Disease
Health-related acoustic sounds carry useful signals that have the potential to aid in health monitoring as well as disease diagnosis. Recently, scientists at Google Research, in collaboration with researchers at Zambia’s Center for Infectious Disease Research, developed an AI system that can use coughing sounds to diagnose lung disease.
The system, called Health Acoustic Representations (HeAR), is described in a preprint on arXiv.
The investigators started working on the system after healthcare professionals reported that, over time, they had learned to tell which patients had coronavirus by how their coughs sounded. For their study, they used an approach similar to the one used to train large language models. They began by converting human sounds obtained from YouTube, such as infant and adult coughing, throat clearing, panting, laughing, speaking and breathing, into spectrograms.
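As an illustration of that preprocessing step, the sketch below converts an audio clip into a log-mel spectrogram using the librosa library. The sample rate and mel settings are assumptions chosen for demonstration; the article does not specify the parameters HeAR uses.

```python
# Minimal sketch: converting an audio clip to a log-mel spectrogram.
# The library (librosa), sample rate and mel parameters are illustrative
# assumptions, not HeAR's actual preprocessing settings.
import librosa
import numpy as np

def to_log_mel_spectrogram(path: str, sr: int = 16000, n_mels: int = 64) -> np.ndarray:
    """Load an audio file and return a log-scaled mel spectrogram."""
    audio, _ = librosa.load(path, sr=sr, mono=True)
    mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max)

# Example (hypothetical file): spec = to_log_mel_spectrogram("cough_clip.wav")
# spec.shape -> (n_mels, time_frames)
```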
Once this was done, the researchers masked a portion of each spectrogram and then prompted the system to predict the missing part, much as large language models learn to predict the word that comes next in a sentence. This resulted in a foundation model that, the investigators explained, could be adapted for a range of tasks. In their case, they adapted the system to detect coronavirus and tuberculosis infections.
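The sketch below illustrates this masked-prediction idea on spectrograms: a block of frames is hidden and a small network is trained to reconstruct it. The architecture and masking scheme are placeholders for illustration only and do not reflect HeAR's actual design.

```python
# Illustrative sketch of masked prediction on spectrograms: hide some frames
# and train a small network to reconstruct them. Placeholder architecture,
# not the model described in the preprint.
import torch
import torch.nn as nn

class TinyMaskedAutoencoder(nn.Module):
    def __init__(self, n_mels: int = 64, hidden: int = 256):
        super().__init__()
        self.encoder = nn.GRU(n_mels, hidden, batch_first=True)
        self.decoder = nn.Linear(hidden, n_mels)

    def forward(self, spec: torch.Tensor) -> torch.Tensor:
        # spec: (batch, time, n_mels) with masked frames zeroed out
        h, _ = self.encoder(spec)
        return self.decoder(h)  # reconstruction, same shape as input

def masked_reconstruction_loss(model, spec, mask):
    """mask: boolean tensor (batch, time) marking the hidden frames to predict."""
    masked_input = spec.masked_fill(mask.unsqueeze(-1), 0.0)
    pred = model(masked_input)
    # loss is computed only on the frames the model could not see
    return nn.functional.mse_loss(pred[mask], spec[mask])
```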
The investigators evaluated the accuracy of their system on a standard scale in which random guessing corresponds to a score of 0.5. On one dataset, the system scored 0.645 on coronavirus detection and 0.739 on tuberculosis detection, results that are better than chance and compare favorably with those obtained from other AI systems.
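Although the article does not name the metric, a scale on which 0.5 corresponds to random guessing typically refers to the area under the ROC curve (AUROC). The snippet below is a minimal illustration of how such a score is computed; the labels and scores are made-up placeholders, not study data.

```python
# Sketch of the evaluation scale referenced above (assumed to be AUROC):
# 0.5 corresponds to random guessing, 1.0 to perfect separation.
from sklearn.metrics import roc_auc_score

true_labels = [0, 0, 1, 1, 0, 1]               # 1 = infection present (hypothetical)
model_scores = [0.2, 0.4, 0.7, 0.9, 0.3, 0.6]  # classifier probabilities (hypothetical)

print(roc_auc_score(true_labels, model_scores))  # 1.0 here; chance level is ~0.5
```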
In total, the investigators tested the system with 33 different tasks across six datasets.
In their report, the investigators noted that the reported performance on these tasks was obtained using frozen embeddings and linear probes rather than by fine-tuning the entire neural network. They acknowledged that more research is needed, but suggested that acoustic testing could someday be used in physicians’ offices to diagnose various lung diseases.
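The following sketch shows what a linear probe on frozen embeddings looks like in practice: the pretrained encoder is left untouched and only a linear classifier is fit on its outputs. The function and variable names here are hypothetical and are not the actual HeAR interface.

```python
# Minimal sketch of a "linear probe on frozen embeddings": the pretrained
# encoder is not updated; only a linear classifier is trained on top of it.
# Names such as `pretrained_encoder` are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

def linear_probe(frozen_embeddings: np.ndarray, labels: np.ndarray) -> LogisticRegression:
    """frozen_embeddings: (n_clips, embedding_dim) produced by the frozen encoder."""
    probe = LogisticRegression(max_iter=1000)
    probe.fit(frozen_embeddings, labels)  # only the linear layer's weights are learned
    return probe

# Hypothetical usage:
#   embeddings = pretrained_encoder(spectrograms)   # frozen, no gradient updates
#   probe = linear_probe(embeddings, tb_labels)
```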
Investigators involved in the study included Pauline Musumali, Solomon Chifwamba, Seke Muzazu, Kachimba Shamaoma, and Francesca Silwamba from Zambia’s Center for Infectious Disease Research; and Yun Liu, Aren Jansen, Rory Pilgrim, Ryan Ehrlich, Timo Kohlberger, Dan Ellis, Eduardo Fonseca, and Marc Wilson from Google Research.
Luyu Wang, Basil Mustafa, Chung-Cheng Chiu, Lucas Smaira and Eric Lau from Google DeepMind provided technical support, guidance and critical feedback to the investigators, while teams from Project Coswara and CoughVID provided datasets for the research. The Google Research team also offered hardware and software infrastructure support. The investigators’ findings were reported on arXiv.
As these AI-supported detection tools are taken through the development and commercialization process, patients can rely on existing diagnostic instruments made by companies such as Astrotech Corp. (NASDAQ: ASTC) to obtain definitive diagnoses of any conditions afflicting them.
NOTE TO INVESTORS: The latest news and updates relating to Astrotech Corp. (NASDAQ: ASTC) are available in the company’s newsroom at https://ibn.fm/ASTC
Please see full terms of use and disclaimers on the BioMedWire website applicable to all content provided by BMW, wherever published or re-published: http://BMW.fm/Disclaimer