Interesting piece on the importance of thoughtful integration of #AI in #peerreview: https://lnkd.in/ej7SFQdE
Anna O'Brien’s Post
More Relevant Posts
-
AI Definitions: AI model collapse - The idea that AI can eat itself by running out of fresh data, so that it begins to train on its own output or the output of another AI. This would magnify errors and bias and make rare data more likely to be lost. More AI definitions here: https://lnkd.in/gr5d-SWT
Generative AI — Goforth Solutions, LLC
stephengoforth.com
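The feedback loop in the definition above can be illustrated with a toy simulation (hypothetical code, not from any of the linked posts): each "generation" re-estimates category frequencies from a finite sample of the previous generation's output, and rare categories tend to drift toward zero and stay there.

```python
import random

def train_on_own_output(freqs, sample_size, generations, seed=0):
    """Toy model of recursive training: each generation fits new
    frequencies to a finite sample drawn from the previous model.
    Once a rare category draws zero samples, it is lost for good."""
    rng = random.Random(seed)
    cats = list(freqs)
    for _ in range(generations):
        draws = rng.choices(cats, weights=[freqs[c] for c in cats],
                            k=sample_size)
        freqs = {c: draws.count(c) / sample_size for c in cats}
    return freqs

# Start with one common and one rare category; after many
# generations of self-training, the rare one usually vanishes.
start = {"common": 0.99, "rare": 0.01}
end = train_on_own_output(start, sample_size=200, generations=50)
```

This is only a sketch of the sampling-noise mechanism; real model collapse involves far more complex training dynamics, but the absorbing-state intuition (lost data never comes back) is the same.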
-
The widespread adoption of artificial intelligence (AI) in various research fields is driven by specialized AI models designed for specific tasks, though their biases are shaped by the limited data and parameters they are built on. The use of large language models (LLMs) and generative AI tools like ChatGPT in research is less well understood; these models, trained on vast datasets and human feedback, face challenges with bias identification, goal misgeneralization, hallucination, and vulnerability to adversarial attacks. Incorporating these tools into the writing of research manuscripts introduces context-induced algorithmic biases and other unintended negative consequences for academia and knowledge dissemination. #ethicalai #aiethics #ai #ethics #responsibleai https://lnkd.in/gDGwqpXR
Generative AI in Writing Research Papers: A New Type of Algorithmic Bias and Uncertainty in Scholarly Work | Montreal AI Ethics Institute
https://montrealethics.ai
-
An overview of things to consider when engaging with #AI in your research (also includes presentation slides to download)
Academic Writing & Researcher Development Specialist | Building Academic Communities Through Writing
Here is my rough guide for the responsible use of AI in research. As artificial intelligence (AI) becomes increasingly prevalent in academic research, it's crucial for researchers to understand how to use these tools responsibly. This guide, based on my presentation for the GCU Academic Writing Centre and GCU Graduate School, outlines key considerations for using AI in research... #AI
My rough guide for the Responsible Use of AI in Research
http://stephaniezihms.blog
-
Check out this week's International Center for Academic Integrity #IntegrityMatters blog written by Thomas Lancaster about the use of Generative AI in the research process. Some great advice about engaging with AI not just for text production, but focusing instead on different ways to use AI in research. And a great example study about the correlation between pizza preferences and academic conduct! #academicintegrity #artificialintelligence https://lnkd.in/eHG4cpza
Is Generative AI your solution to becoming productive as an academic integrity researcher?
academicintegrity.org
-
“Generative AI has injected a lot of variables into the equation of scholarly publishing. And there’s not yet a shared framework for understanding what those implications are.” Dylan Ruediger shares key takeaways from a recent Ithaka S+R report exploring what #GenAI could mean for #ScholarlyPublishing, in conversation with Kathryn Palmer. Read more from Inside Higher Ed: https://lnkd.in/g42qBKm4
Scholarly publishing world slow to embrace generative AI
insidehighered.com
-
👀 Sharing this thought-provoking article from Semafor for our friends who manage a peer review process for their #conference or academic journal. Here's a quick overview... Researchers are increasingly turning to AI to review papers for conferences, creating debate in the industry on the merits of the practice. ❓ Should it be used to help automate the review process? ❓ Or, does it degrade the research process? ❓ Does your organization need to update its policies to address the use of AI/large language models in both the submission and review process? What do you think? https://lnkd.in/g8BSzbJK
Researchers warned against using AI to peer review academic papers | Semafor
semafor.com
-
🤔 Oh the evolving world of AI policies 🤯 ➡️ The whole world is grappling with understanding & instituting policies to deal with #GenerativeAI ➡️ In this newsletter, Chirag Jay Patel and I highlight the current policies ➡️ We begin by laying the groundwork based on the global updates in the tech world ➡️ We dive deep in the policy world in the #ScholarlyPublishing industry, specifically highlighting select perspectives from the publishers, journals, editors, authors, libraries, higher education, & such 👉🏽 In the comments below, please share other policies that you are familiar with to help us generate a roadmap of the current recommendations to further guide the #ScholCom community. 👉🏽 Your input will also facilitate current efforts of the Society for Scholarly Publishing's AI Community of Interest (AI CoIN) that Jay and I are co-facilitating. 🌟 A special shout-out to Avi Staiman for highlighting Perplexity as a responsible NLP tool in this newsletter https://lnkd.in/eCbCJvma
Augmenting Scholarly Publishing: Intelligent Emerging Tools & Trends
chiragjaypatel.substack.com
-
The #genAI policies are coming fast and furious: https://lnkd.in/e4RjDSMG In our most recent issue, Chhavi Chauhan and I take a quick look back at our future of work issue (https://lnkd.in/eqYyQwSt) with recent developments at Turnitin and Klarna around AI playing a larger role at work. The real focus is #ai policies from the EU, US, #scholarlypublishing, COPE, #editors, #libraries, #academia, ISMPP (International Society for Medical Publication Professionals), #ICMJE, and the Public Relations Society of America (PRSA). Avi Staiman spotlights Perplexity. Do read the latest from Claire Baytas and Dylan Ruediger at Ithaka S+R: https://lnkd.in/exKwzg6m Please take the polls: Which best describes your journal's policy on the use of generative AI tools like ChatGPT in manuscripts? How concerned are you about authors potentially misusing generative AI tools (e.g., passing off AI-generated text as their own writing)?