Hemant Bhargava’s Post

We academics, and journal editors, have had frequent conversations lately about the role of AI/LLMs in scholarly writing. This article in The Conversation provides some useful perspectives - it covers issues such as credit for the writing/ideas, bias, hallucination - and of course the challenge of enforcement if any bans are made. It ends with a good observation that the real goal should be to prevent sloppiness and maintain good quality. Would love to hear all your thoughts on this. https://lnkd.in/gxtfKPub #academia #journals #articles #genAI

AI-assisted writing is quietly booming in academic journals. Here’s why that’s OK

theconversation.com

Jeff Greenfeld

Venture Fellow @ Laconia | Consultant @ Innovation Strategy | Investment Analyst | MBA, Finance & Strategy

7mo

When I worked in a lab characterizing novel high-coordinate metal-metal complexes, the paper was a way to celebrate the efforts that went into the research. It was the feather in the cap where researchers could put their signature on their findings. There's also a great deal of garbage that gets pumped out and circulated, like that paper a year or two ago that the media picked up on (despite this claim being made several times a year for the last decade). I agree with the article's conclusion, which addresses an issue that's plagued academic journals for as long as I've been reading them. But that is a problem compounded by an onslaught of submissions enabled by LLMs. I don't think LLMs are particularly useful to people who are already talented communicators. Rather, they enable poor communicators to become better / faster. I can imagine benefits and drawbacks to that. I look forward to discriminative AI models that will detect generative AI. At some point maybe we'll just have a layer in society of AI models engaged in virtual combat to counteract each other. Seems like as efficient a use of energy as bitcoin.

Matthew Harris CSSBB PMP MBA

National Director Business Process Services & Transformation | Lean Six Sigma Master Black Belt

7mo

Interestingly, LinkedIn just tried to offer me a premium service to have AI write my response. No thank you, but I am not sure I want an ethical prohibition. "Don't be sloppy," is a good principle. On the flip side, common misspellings could be a subtle clue AI was not involved, so a norm of minor sloppyness could become a helpful principal😉 to find where the humans are.

Shiva Shekhar

Associate Professor, Tilburg University. Academic consultant at Compass Lexecon.

6mo

It’s hard to do that if the papers are 50 pages long (generated by AI) and reviewers are not given much time/credit for their work. Then it just becomes a spam problem and everyone is worse off.

