Making Sense of Medical AI: AI transparency and the configuration of expertise
Author
Summary, in English
Artificial Intelligence (AI) technologies are increasingly researched and applied for medical knowledge discovery and to support or automate clinical decision-making. The aim of this thesis is to increase knowledge of (1) how AI experts, radiologists, and standardizers make sense of AI in the processes of medical AI development, clinical use, and standardization, and (2) how this sensemaking contributes to configurations of AI transparencies and expertise in sociotechnical entanglements. Specifically, I study the following research questions: How do AI experts involved in developing AI for medical purposes, and medical professionals, make sense of medical AI? How is AI transparency made sense of in standardization? And how are AI transparencies made, and expertise (re)configured, in these processes and sociotechnical entanglements? In studying these questions, I focus on different actors’ practices and reasoning about: ground truthing and transparency in the development of medical AI, integrating and critically engaging with AI in clinical work, and the standardization of AI transparency.
Theoretically, this thesis is situated in the fields of Science and Technology Studies (STS), sociology, information science, communication studies, and organization studies. An epistemological underpinning of the thesis is the entanglement of social actions and technological and material artefacts. This entails an understanding of the research topic as involving knowledge-making phenomena in which the social and the technical, the human and the non-human, are co-constituted in sociotechnical assemblages. Empirically, the research is conducted in three studies using different methods. In these studies, different actors are engaged through interviews and observations with AI experts working on AI development for medicine and healthcare, a survey of breast radiologists’ views on the integration of AI in breast cancer screening, and a practice-oriented document analysis focusing on the standard-making of AI transparency. Overall, this thesis shows that medical AI is as much a sociotechnical matter as a technical or clinical endeavor. It highlights the complexity of making sense of AI across different actors’ reasonings and practices and across different processes. Both the role of opacity-mitigating practices and the challenges of making AI transparent are made visible. Moreover, the thesis shows the importance of empirical insights and of stakeholder- and context-sensitive approaches for better understanding how medical AI is made sense of and how expertise is reconfigured in the process.
Publication date
2025-05-05
Language
English
Full text
10 MB
Document type
Dissertation
Publisher
Department of Technology and Society, Lund University
Topic
- Science and Technology Studies
- Media and Communications
- Sociology
- Medical Informatics
Keywords
- Artificial intelligence
- Medicine
- Healthcare
- Transparency
- Expertise
- Science and Technology Studies
- STS
- Machine learning
- Information Studies
- Medical Sociology
- AI
- Artificiell intelligens
- maskininlärning
- medicin
- Hälso- och sjukvård
- transparens
- expertis
- Vetenskapssociologi
- Teknik
- STS
Status
Published
Project
- AI in the Name of the Common Good - Relations of data, AI and humans in health and public sector
- Mammography Screening with Artificial Intelligence
- AIR Lund - Artificially Intelligent use of Registers
Research group
- AI and Society
Supervisor
- Stefan Larsson
- Kristina Lång
- Katja de Vries
ISBN/ISSN/Other
- ISBN: 9789181044935
- ISBN: 9789181044942
Defence date
4 June 2025
Defence time
09:00
Defence place
Lecture Hall E:1406, building E, Klas Anshelms väg 10, Faculty of Engineering LTH, Lund University, Lund.
Opponent
- Maja Hojer Bruun (Assoc. Prof.)