📣 Call for Papers on participatory governance in #DigitalHealth and #MedicalAI in Frontiers in Sociology! Editors: Cláudia de Freitas, Kira Allmann, Vera Lucia RAPOSO, Simeon Yates and Carlo Botrugno ⏰ Abstract submission deadline extended to July 31st! 💡 Themes of interest include: • Digital health governance gaps, public (dis)trust in digital health, and data diversity deficits • The social, ethical, legal, and personal implications of algorithmic decision-making, including how algorithmic biases are uncovered, exposed, and resisted during algorithmic training and up to the point of care (e.g. in clinical settings) • The disruption, (re)shaping or transformation of patterns of exclusion through digital health technologies and services • The emergence of new forms of advocacy, community mobilization, participation, and citizenship linked to health digitization and the (re)production of inequalities • Opportunities, tensions, and limits to public involvement in digital health governance • Participatory approaches to digital health technologies’ design, implementation, usage, and adaptation. More info: https://lnkd.in/eU_rCPJu
Kira Allmann, Ph.D.’s Post
More Relevant Posts
-
📣 Call for Papers! “Digital health and medical AI: participatory governance, algorithmic fairness and social justice” - a special issue in Frontiers in Sociology I’m co-editing with fabulous colleagues Cláudia de Freitas, Simeon Yates, Vera Lucia RAPOSO, and Carlo Botrugno: Themes of interest include but are not restricted to the following: • Digital health governance gaps, public (dis)trust in digital health, and data diversity deficits • The social, ethical, legal, and personal implications of algorithmic decision-making, including how algorithmic biases are uncovered, exposed, and resisted during algorithmic training and up to the point of care (e.g. in clinical settings) • The disruption, (re)shaping or transformation of patterns of exclusion through digital health technologies and services • The emergence of new forms of advocacy, community mobilization, participation, and citizenship linked to health digitization and the (re)production of inequalities • Opportunities, tensions, and limits to public involvement in digital health governance • Participatory approaches to digital health technologies’ design, implementation, usage, and adaptation Summaries due June 18th, full submissions due Oct 6th. Read the full topic description: https://lnkd.in/eU_rCPJu
Digital health and medical AI: participatory governance, algorithmic fairness and social justice
frontiersin.org
-
"Social determinants of mental health" is a buzzword these days and can often be identified from clinical notes. But do we know which specific determinants’ interventions are driving the progression of opioid use disorder (OUD) for individuals? That's the crucial question we addressed in our latest paper, "Finding Causal Impacts of Social Determinants of Mental Health on Opioid Use Disorder from Clinical Notes," accepted at the IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI). In this work, we developed innovative methods to extract social determinants from clinical text and proposed a novel enhanced subgroup discovery technique to identify the causal effects of these determinants on OUD progression. Congratulations to CUBICS Lab PhD Candidate Madhavi Pagare for her dedication and to Inyene Essien-Aleksi for her valuable clinical guidance! #research #healthcare #AI #clinicalinformatics #congratulations #bhi #embs
-
☑️ *READ ABSTRACT BELOW:* Keywords: artificial intelligence; clinical reasoning; early warning scores; sepsis; triage. Objective: Obtain clinicians' perspectives on early warning score (EWS) use within the context of clinical cases. Material and methods: We developed cases mimicking sepsis situations. De-identified data, synthesized physician notes, and an EWS representing deterioration risk were displayed in a simulated EHR for analysis. Twelve clinicians participated in semi-structured interviews to ascertain perspectives across four domains: (1) Familiarity with and understanding of artificial intelligence (AI), prediction models and risk scores; (2) Clinical reasoning processes; (3) Impression of and response to the EWS; and (4) Interface design. Transcripts were coded and analyzed using content and thematic analysis. Results: Analysis revealed clinicians have experience with but limited understanding of AI and prediction/risk modeling. Case assessments were primarily based on clinical data. The EWS went unmentioned during initial case analysis, although when prompted to comment on it, clinicians discussed it in subsequent cases. Clinicians were unsure how to interpret or apply the EWS, and desired evidence on its derivation and validation. Design recommendations centered around EWS display in multi-patient lists for triage, and EWS trends within the patient record. Themes included a "Trust but Verify" approach to AI and early warning information, a dichotomy whereby the EWS is helpful for triage yet carries a disproportionately high noise-to-signal ratio, and action driven by clinical judgment, not the EWS. Conclusions: Clinicians were unsure how to apply the EWS, acted on clinical data, desired score composition and validation information, and felt the EWS was most useful when embedded in multi-patient views. Systems providing interactive visualization may facilitate EWS transparency and increase confidence in AI-generated information. Payne VL, J Am Med Inform Assoc. 2024 May 20;31(6):1331-1340.
doi: 10.1093/jamia/ocae089. PMID: 38661564; PMCID: PMC11105126. #Gesundheit #Bildung #Fuehrung #Coaching #Mindset #Motivation #Gehirn #Neuroscience #Psychologie #Persoenlichkeitsentwicklung #Kindheit #KeyNoteSpeaker #Humangenetik #Biochemie #Neuroleadership #Ernaehrung #Transformation #Stress #Demografie #Gender #Age #interkulturelleKompetenz #Epigenetik #Veraenderung #EmotionaleIntelligenz #Change #Gesellschaft #Organisationsentwicklung #Philosophie #Beratung #Quantum
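For readers unfamiliar with how an aggregate early warning score is produced, it can be sketched as mapping each vital sign to a point value and summing the points. The bands and thresholds below are illustrative only, not those of the study's EWS or of any validated score such as NEWS.

```python
# Minimal sketch of an aggregate early warning score: each vital sign
# falls into a band that awards points, and the points are summed.
# All band boundaries here are invented for illustration.
def band(value, bands):
    """Return points for the first (low, high, points) band containing value."""
    for low, high, points in bands:
        if low <= value <= high:
            return points
    return 3  # outside every listed band: maximum concern

RESP_RATE_BANDS = [(12, 20, 0), (21, 24, 2), (9, 11, 1)]
HEART_RATE_BANDS = [(51, 90, 0), (91, 110, 1), (111, 130, 2)]
TEMP_BANDS = [(36.1, 38.0, 0), (38.1, 39.0, 1), (35.1, 36.0, 1)]

def early_warning_score(resp_rate, heart_rate, temp_c):
    return (band(resp_rate, RESP_RATE_BANDS)
            + band(heart_rate, HEART_RATE_BANDS)
            + band(temp_c, TEMP_BANDS))

print(early_warning_score(resp_rate=22, heart_rate=115, temp_c=38.5))  # 2 + 2 + 1 = 5
```

The study's design questions (where to display such a score, and how to justify it to clinicians) sit on top of this simple aggregation idea.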
-
Insightful new addition to the literature re: stigmatizing or biased language in the EMR shows that Black patients are more likely to have a phrase that casts doubt on the patient's clinical history in their admission notes (odds ratio 1.21). The authors note that the presence of doubt language in EMR notes may:
- exacerbate Black patients' existing feelings of mistrust and of being disrespected as knowledgeable about their own symptoms
- negatively impact satisfaction with their care and engagement with health care in the future
- negatively influence subsequent clinician attitudes or transmit biased documentation habits to trainees
The number of times I've heard people, particularly Black family, friends, and patients, share experiences of feeling unheard, dismissed, or doubted when they seek health care is frankly too high to count. We must and can do better to build mutual (clinician-patient) trusting relationships and stop perpetuating biases. Our department of medicine has a health equity initiative to try to collectively decrease our use of specific biased language in the EMR and will be sharing this new article with our faculty and learners to further support why language/words matter. Thank you and outstanding work Dr. Lee & colleagues and JAMA, Journal of the American Medical Association #healthequity #wordsmatter
Excited to share my recent paper evaluating racial biases in clinicians' assessments of patient credibility in electronic health records. https://lnkd.in/emHyBbBS Using natural language processing techniques, we examined over 54,000 hospital admission notes (2018-2023). We found that notes written about non-Hispanic Black patients were more likely to contain language that undermined patient credibility compared to notes about non-Hispanic White patients. While many clinicians use words to capture patient-reported symptoms that aren't inherently problematic (e.g., denies, complains, insists), our study demonstrated that the differential use of such words by race reflects a systemic bias in documentation that can adversely affect historically minoritized groups. As more patients gain access to their health records, it's critical for health care organizations to examine and eliminate these biases in their documentation to build patient trust and advance health equity. A HUGE thank you to Gary Weissman Jaya Aysola Graciela Gonzalez Hernandez, MS PhD Davy Weissenbacher Ari Klein Eden Addisu Xinwei Chen for all your mentorship and partnership in this important work!
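The study's basic design can be sketched as flagging notes that contain doubt-signaling terms and comparing the flag rate between two groups with an odds ratio. The term list and counts below are invented for illustration; the paper relies on a much larger corpus, a curated lexicon, and adjusted statistical models rather than a raw 2x2 table.

```python
# Illustrative list of doubt-signaling terms (invented for this sketch;
# not the study's actual lexicon).
DOUBT_TERMS = ["claims", "insists", "adamant", "apparently", "supposedly"]

def has_doubt_language(note: str) -> bool:
    """True if the note contains any doubt-signaling term."""
    text = note.lower()
    return any(term in text for term in DOUBT_TERMS)

def odds_ratio(exposed_yes, exposed_no, control_yes, control_no):
    """Odds ratio (a/b) / (c/d) from a 2x2 contingency table."""
    return (exposed_yes / exposed_no) / (control_yes / control_no)

# Invented counts: notes with vs. without doubt language, per group.
print(round(odds_ratio(300, 2700, 250, 2750), 2))  # 1.22
```

An odds ratio above 1 indicates doubt language appears more often in the first group's notes, which is the kind of disparity the paper quantifies (with confidence intervals and covariate adjustment on real data).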
Race and Ethnicity and Clinician Linguistic Expressions of Doubt in Admission Notes
jamanetwork.com
-
Last week, Google Health introduced #EquityMedQA, a framework and collection of adversarial testing datasets! EquityMedQA is designed to foster collaboration and innovation in the pursuit of equitable AI technologies. The background of EquityMedQA:
- #LLMs have potential benefits for addressing complex health information needs but also pose risks of introducing harm and exacerbating health disparities.
- The study, published as a preprint on arXiv, introduces EquityMedQA as a dataset of manually curated and LLM-generated questions enriched for adversarial queries.
- EquityMedQA emphasizes the importance of diverse assessment methodologies and of involving raters of varying backgrounds and expertise in evaluating biases.
If you haven't read the study yet, I urge you to do so! Find the release by Google's Chief Health Equity Officer Dr Igor Horn here: https://lnkd.in/grnAvHUr For the study check the first comment!
3 ways we are building equity into our health work
blog.google
-
A new study, funded by NIHR (National Institute for Health and Care Research) and Health Data Research UK (HDR UK) and led by King's College London, has demonstrated the potential of a #GenerativeAI tool they developed to predict the health trajectory of patients by forecasting future disorders, symptoms, medications and procedures. The tool, Foresight, is trained on existing healthcare data and uses deep learning to recognise complex patterns in electronic health record data. It could be used to aid clinicians with clinical decision-making and patient monitoring. Read the story: bit.ly/4a5jvzc The study is published in The Lancet: bit.ly/4ahzvOJ Richard Dobson | Prof James Teo | Zeljko K. | CogStack | UCL | King's Health Partners | Institute of Psychiatry, Psychology & Neuroscience | Biostatistics and Health Informatics Department, IoPPN, King's College London | Guy's and St Thomas' NHS Foundation Trust | King's College Hospital NHS Foundation Trust | South London and Maudsley NHS Foundation Trust | King's Faculty of Life Sciences & Medicine
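Foresight itself is a deep learning model, but the forecasting framing (predict the next event in a coded patient timeline) can be sketched with a toy first-order frequency model. All codes and histories below are invented for illustration and bear no relation to the study's data or architecture.

```python
from collections import Counter, defaultdict

def train(histories):
    """Count code -> next-code transitions across patient timelines."""
    transitions = defaultdict(Counter)
    for timeline in histories:
        for current, nxt in zip(timeline, timeline[1:]):
            transitions[current][nxt] += 1
    return transitions

def forecast(transitions, last_code):
    """Most frequently observed successor of the last recorded code."""
    successors = transitions.get(last_code)
    return successors.most_common(1)[0][0] if successors else None

# Invented coded timelines standing in for EHR event sequences.
histories = [
    ["hypertension", "statin", "chest_pain", "angiogram"],
    ["hypertension", "statin", "chest_pain", "troponin_test"],
    ["diabetes", "metformin", "hba1c_check"],
    ["hypertension", "statin", "chest_pain", "angiogram"],
]
model = train(histories)
print(forecast(model, "chest_pain"))  # angiogram (observed twice vs. once)
```

A transformer like Foresight replaces these raw transition counts with learned representations that condition on the whole patient history, but the input/output framing is the same: a sequence of past clinical events in, a distribution over plausible next events out.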
-
Call for Papers: MMLA 2024 conference, Chicago. Postcolonial Studies permanent section. Deadline for submissions: April 22, 2024.
The Postcolonial Studies Permanent Section of the Midwest Modern Language Association (MMLA) seeks abstracts in line with this year's conference theme: "Health in/of the Humanities." We seek scholarly work within the realm of postcolonial studies that intersects with the topics of physical health, mental health, disparities in access and care, communal health, and racial disparities. The following questions are areas of interest for the section:
- How does the power imbalance between the Global North and Global South affect one's access to, and the quality of, one's healthcare? How do these factors impact one's health outcomes?
- How does race disrupt the availability and/or quality of care?
- What can be done to effect change in the face of inequality?
- What are the bigger issues that gird current paradigms of subalternity?
Other possible topics might include: • Narratives of health, sickness and/or recovery • Health subcultures • Disability studies • Food studies • Religion and health • Women's studies and health • Medical Humanities, Narrative Medicine, Health Humanities • Health Science Writing • Representations of Public, Private, and Global Health • Environmental Health • Mental/psychological health • Psychoanalytic approaches to health • Approaches to health in the Digital Humanities • Privacy and confidentiality • Medical technologies • Health professions/institutions/workplaces
Please do not feel limited by the above questions and topics; the section welcomes interdisciplinary approaches to all topics that intersect the conference's theme and postcolonial studies. For consideration, please send a 200–300 word abstract to Jose Intriago Suarez at jose.intriagosuarez@marquette.edu no later than April 22nd.
-
Sharing this excellent and accessible description of the spectrum of evidence from Shiri Sadeh-Sharvit, PhD🎗️.
Evidence-based practice isn’t all or nothing — it’s a spectrum. And I believe healthcare tech companies should start sharing their outcomes and approach, even if the evidence isn’t rock-solid yet. As Chief Clinical Officer at Eleos Health, I’ve watched the standards for scientifically validating healthcare solutions get tougher — and I love it. So, I want to give you an inside look at how we’ve gathered scientific data to support our #BehavioralHealthAI platform, presented through the lens of the 6️⃣ levels of evidence: Level 1 – Expert Opinions: We start by asking the “big brains” in our industry to talk about the benefits of using AI in #BehavioralHealthcare. Drs. Steve Hollon from Vanderbilt; Kate Kellogg from MIT; Donna Sheperis, Joe Ruzek, and Dr. Eduardo Bunge from Palo Alto University; and Hanni B. Flaherty from Yeshiva University have all published peer-reviewed papers outlining why and how an AI solution like Eleos can improve the quality of care. Level 2 – Case Reports: We look at one example of our platform in action — one therapist, one client — to show how our tech works in the real world. Level 3 – Process Research: Using data from hundreds of thousands of therapy sessions, we’ve done 6 studies with researchers like Dr. Simon A. Rego of Montefiore and Ariel Goldstein and Paz Naim from the Hebrew University to understand what happens in therapy. We’re already using what we’ve learned to improve Eleos, and sharing it out to the larger therapist community so that everyone can benefit. Level 4 – Case Series: This is where we track a group of clients with similar conditions and treatments (but no control group). There’s a paper currently in the peer-review process outlining Palo Alto University’s (encouraging) results with Eleos. Level 5 – Randomized Controlled Trials (RCTs): RCTs are the gold standard in research. We compare outcomes from people using Eleos with those getting standard care.
The goal is to determine the efficacy of using AI in clinical practice. We’ve already published one RCT with our friends at the Ozark Center, and two additional Eleos RCTs are currently underway! Level 6 – Systematic Reviews & Meta-Analyses: Here, we combine data from multiple studies to get a bigger picture of AI’s impact in behavioral health. We’re just getting started with this. 📈 At Eleos, we’re committed to backing up everything we do with solid evidence. If you want to see more about our research, you can find it here: https://lnkd.in/d3Nb9KWE None of this would be possible without our stellar Data Science and Clinical teams, including Samuel Jefroykin 🎗️, Lidor Bahar, Amit Spinrad, Natalia Szapiro, Yocheved Katz, Israela Feleke, Daniel Sand, PhD, and many, many more. ✨ Also, I’d love to hear from you. How much does scientific evidence factor into your healthcare decisions? Tell me in the comments! #EvidenceBasedPractice #BehavioralHealth #LeadingWithScience
-
The Council of Europe’s Steering Committee for Human Rights in the fields of Biomedicine and Health (CDBIO) has issued a new report on the application of Artificial Intelligence (AI) in healthcare and its impact on the ′patient-doctor′ relationship. The report focuses on selected human rights principles of particular relevance to the relationship, namely patient autonomy, professional standards, self-determination regarding health data, and equitable access to health care. Read the full report: https://lnkd.in/d32YZX8E Without a doubt, AI also offers many opportunities in our field of child and adolescent psychiatry. It is crucial, though, to have a framework in place that also addresses the potential threats that AI can bring. At the upcoming World Health Summit in the session "Accessible Support in Mental Health: Facing the Silent Epidemic", AI will be part of our discussion. The session will be broadcast live on YouTube on Oct. 15, 2024, CEST: 11:00 AM - 12:30 PM / UTC: 09:00 AM - 10:30 AM https://lnkd.in/dnWQD3Qj #mentalhealth #digitalmentalhealthcare #worldhealthsummit2024 #WHS2024
Artificial Intelligence - Human Rights and Biomedicine - www.coe.int
coe.int
-
⭐Citizen Science for health - and societal impact ⭐ In recent years, citizen and patient involvement in health research has been growing. Citizen Science has great potential to contribute to society and to innovative research. Citizen Science is one of the concrete initiatives in the brand-new SDU-OUH Vision, aiming to reduce the distance and increase the interaction between citizens and researchers, improving both quantitative and qualitative data, enhancing dissemination, and improving societal impact in general. At #SDU (University of Southern Denmark), we are fortunate to have a Citizen Science Knowledge Centre based in our research library, supporting researchers who wish to use Citizen Science methods in their research, benefiting both scientific and societal impact. At Odense University Hospital (OUH), there is the Centre for Research with Patients and Relatives, which works to promote research for and with patients and their families. The Centre for Shared Decision-Making at Sygehus Lillebælt is tasked with implementing shared decision-making across all hospital units and in psychiatry throughout the Region of Southern Denmark. This means strong support for researchers in the Region of Southern Denmark who work with various forms of Citizen Science. Citizen Science in the health sciences can contribute to both scientific and societal impact. By linking Citizen Science with a focus on societal impact, the value for society and end-users can be enhanced as stakeholders and end-users help to define which impacts are important and relevant, particularly in a local context - ranging from welfare innovation and technology, prevention, and mental health to climate change. Increased awareness and use of #CitizenScience within health-related research has also led to interesting new articles in the field. One such article is Remmers et al. (2023), which investigates specific factors and conditions to be aware of when working with Citizen Science in health.
You can find their paper, ‘Citizen Science for Health: an international survey on its characteristics and enabling factors’, here: https://lnkd.in/dqhx75TV #SDU_SUND #sdulibrary Thomas Kaarsted Bertil Fabricius Dorch Uffe Holmskov Ole Skøtt Lone Ladegaard Laursen Bjarne Dahler-Eriksen Kim Brixen Kurt Espersen Bastian Greshake Tzovaras Gaston Remmers Sabine Wildevuur Sebastian H. Mernild Region Syddanmark
Citizen Science for Health: an international survey on its characteristics and enabling factors
osf.io