On October 24, 2024, Jane Hartsock, JD, MA, presented "The Sky is Falling! The Ethical Implications of Artificial Intelligence and Large Language Models" as part of our Bioethics Grand Rounds Series. Professor Hartsock contends that while the development of LLMs is unavoidable, and perhaps even desirable to some extent, there are serious concerns about the impact they will have on academia, truth, and our society that may outweigh their benefits.

AI and Bioethics: Professor Hartsock's View

Jane Hartsock, JD, MA, faculty investigator at the IU Center for Bioethics and System Director of Clinical and Organizational Ethics at IU Health, presented the first IU Bioethics Grand Rounds of the year last week on AI, a topic that has become an obsession for many of us in bioethics.  Check out the recording here.  It was a terrific talk, and it inspired me to write this blog post to summarize what she said and add some thoughts of my own.

Bioethics has, since its beginning, been a field that grapples with new technologies carrying both promise and peril, such as ventilators (in the 1960s), in vitro fertilization (in the 1970s), and stem cells and cloning (in the 1990s), just to name a few. Each technology promised advances in the battle against disease and death but also carried risks and challenges to our understanding of humanity and life. AI certainly fits.

Hartsock started, like all good bioethicists, by clarifying what she was talking about: not just "Narrow AI," such as a chess computer or an EKG machine that analyzes a heart rhythm, systems that do one thing very well, but rather Large Language Models (LLMs) like ChatGPT, which appear to some to reflect "Artificial General Intelligence," carrying out many cognitive tasks across multiple domains. Hartsock thinks that current LLMs do not achieve general intelligence, while I and others think they may, but we can agree that LLMs present radically new possibilities and challenges compared to what came before.

Here are my favorite points that she made. First, even though she thinks that AI will cause more harm than good (as I do), the United States has to continue developing this technology as fast as possible and remain a world leader, since if we don't, others will, and there is real risk in that. As with the industrial revolution, there is no future in simply ignoring the technology.

Second, the risk to the way we live our lives is real. Any increase in efficiency, which AI may certainly deliver, is not good or bad in itself. If AI lets you get more work done each hour, you might be able to cut your workday short, giving you more time with family, friends, and hobbies. Or, you might be given more work. Given our capitalist system, all signs point to more work in the same hours, not the same work in fewer hours. That means AI may help our bosses, not us.

Finally, the threat to education and knowledge is real. Current LLMs can already write passable student papers ("passable" in two senses: they can pass for the real thing, and they will get a passing mark). This is a threat to all of us teachers, who are trying to figure out how to educate and evaluate students when there is an easy and available way for them to cheat and avoid learning. Papers produced by the first generation of LLMs were easy to spot because the models didn't handle references well, but the latest versions have overcome that problem and can produce references that look real but often aren't. Often they cite something that doesn't exist, isn't legitimate, or doesn't support the claim it is cited for. It takes real work to check each reference, and most readers, even teachers and professors, won't.

This leads to the deeper and even more worrisome problem that Hartsock raised. Once AI generates text that is false but written believably, with fake citations, it can serve as a source for future writing and so become the basis for more false claims. And those claims will look correct even to readers who check the citations. Research becomes fully disconnected from reality, and "agnotology," the production and spread of ignorance, results, undermining all knowledge and understanding.

That’s a big and world-changing danger, perhaps not quite as bad as AI taking over the world and killing all humans (see The Terminator (1984) and other movies).  But it’s still extremely worrisome, at least to those of us who care about knowledge and truth, which should be everybody.  Check out Hartsock’s lecture here to hear her excellent talk, and stay tuned to this blog and the IU Center for Bioethics for more on AI.


Author

Peter Schwartz, MD, PhD

Peter H. Schwartz, MD, PhD, is the director of the IU Center for Bioethics and associate professor of medicine at IU School of Medicine.
The views expressed in this content represent the perspective and opinions of the author and may or may not represent the position of Indiana University School of Medicine.