Joint representation learning from French radiological reports and ultrasound images

Conference paper

Abstract

In this study, we explore the value of using a recently proposed multimodal learning method as an initialization for anomaly detection in abdominal ultrasound images. The method efficiently learns visual concepts from radiological reports using natural language supervision and contrastive learning. The only requirement of the method is the availability of pairs of images and textual descriptions. However, in abdominal ultrasound examinations, a single radiological report is associated with several images and describes all organs observed during the examination. To address this shortcoming, we automatically construct image–text pairs using 1) deep clustering for abdominal organ classification on ultrasound images and 2) natural language processing tools to extract the corresponding organ description from the report. We show that pre-training the model with these constructed pairs yields representations that better separate normal classes from abnormal ones on ultrasound images of the kidneys, compared to ImageNet-based representations, with a 10% improvement in macro-average accuracy.
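The contrastive objective referenced in the abstract pairs each image with its matching text description and pushes mismatched pairs apart. Below is a minimal NumPy sketch of the CLIP-style symmetric contrastive loss that such pretraining typically uses; the function name, array shapes, and temperature value are illustrative, not taken from the paper.

```python
import numpy as np

def clip_style_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric contrastive (InfoNCE) loss over a batch of
    matching image/text embedding pairs (row i matches row i)."""
    # L2-normalize both modalities so the dot product is a cosine similarity
    img = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    txt = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)

    # Pairwise similarity matrix, scaled by the temperature
    logits = img @ txt.T / temperature
    n = logits.shape[0]

    def cross_entropy(l):
        # Numerically stable log-softmax; correct class is the diagonal
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(n), np.arange(n)].mean()

    # Average of image-to-text and text-to-image directions
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

With well-aligned embeddings the diagonal dominates the similarity matrix and the loss is small; shuffling the text rows (breaking the pairing) increases it.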
Main file: HIND_DADOUN_ISBI_2023.pdf (1.13 MB)
Origin: files produced by the author(s)

Dates and versions

hal-03984528, version 1 (12-02-2023)

Licence

Copyright

Identifiers

  • HAL Id: hal-03984528, version 1

Cite

Hind Dadoun, Hervé Delingette, Anne-Laure Rousseau, Eric de Kerviler, Nicholas Ayache. Joint representation learning from French radiological reports and ultrasound images. IEEE ISBI 2023 - International Symposium on Biomedical Imaging, IEEE, Apr 2023, Cartagena de Indias, Colombia. ⟨hal-03984528⟩