College of Liberal Arts
University of Mississippi

News You Can Use: Pros and Cons of Using AI

UM experts share benefits, liabilities of artificial intelligence technologies

Marc Watkins (left), a lecturer in composition and rhetoric, discusses generative artificial intelligence during an AI Summer Institute on campus. While AI technologies can analyze huge amounts of data and generate reports, emails and other materials, users must tread carefully, Watkins warns. Photo by Eliot Parker

AUGUST 24, 2023 BY EDWIN B. SMITH

As artificial intelligence continues its global spread, two University of Mississippi experts advise the public to be aware of both the benefits and liabilities of this trendy technology.

Generative AI, such as ChatGPT, can help with productivity by automating certain tasks in the workplace, said Marc Watkins, a lecturer in composition and rhetoric. The technology can analyze massive amounts of data quickly to generate emails, memos and reports, but users must apply it carefully.

“While generative AI can save time, it also hallucinates material, inventing facts and then conveying that material confidently,” Watkins said. “We need to use caution and not put our complete faith and trust into automated systems that predict the best possible answer.”

Robert Cummings

Most productivity applications – such as word processors, spreadsheet programs and presentation tools – soon will include generative AI features, said Robert Cummings, UM executive director of academic innovation and associate professor of writing and rhetoric.

“Thus, we know that in order to be productive members of the workforce, our students will also need to incorporate AI generator technologies into their workflows while remaining aware of the emerging best practices around their ethical uses,” Cummings said.

The technology can be used in educational settings to personalize learning and assist students and faculty members with writing, reading, research and speech recognition, Watkins said.

“To be AI literate is to understand how generative AI systems function, noting what affordances they can offer and being wary of the perils associated with their misuse,” he said. “Everyone, from students, faculty and staff to employers and administrators, will need AI literacy.”

The ethical challenges posed by generative AI are numerous, but Watkins said deepfakes top his list of liabilities.

“I think generative AI beyond text that is used to create deepfake images, videos and voices poses challenges to election integrity, enables harassment and even distorts our sense of reality,” Watkins said. “There’s also the moral and ethical issue posed by scraping data from people without their consent and using them to train new generative AI models.”

AI technologies have also been used to create facial recognition software, but those systems have been shown to disproportionately misidentify people of color, he said.

Marc Watkins

Cummings expects AI will have a dramatic impact on workplace productivity systems.

“For example, writing on word processors will change,” he said. “Rather than starting with a blank screen when writing, say, a sales report, many users may prefer to start with an AI-generated sales report and edit that draft. This will dramatically change our writing experiences.

“More and more, writing skills will rely on reviewing and revising AI output, rather than inventing one’s own text.”

As these practices become more accepted, expectations for individual productivity will shift to account for the wise and efficient use of generative AI tools, Cummings predicted.

“At this stage, almost all AI-generated content needs to be read and evaluated thoroughly,” he said. “There is no sentience behind its output. Rather, tokens are put together based on probabilities, and human readers assign meaning to the tokens when they read them.”
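Cummings' point about probability-driven token selection can be illustrated with a short sketch. The Python snippet below is a toy example only: the vocabulary, probabilities and prompt are invented for demonstration, and a real language model computes its probabilities with a neural network over a vocabulary of many thousands of tokens rather than a hand-written table.

import random

# Toy illustration of next-token sampling: a model assigns a probability
# to each candidate next token, and one token is drawn by weighted chance.
# The tokens and probabilities below are made up for demonstration.
next_token_probs = {
    "report": 0.55,
    "memo": 0.25,
    "summary": 0.15,
    "banana": 0.05,  # low-probability tokens can still be chosen
}

prompt = "Please draft the quarterly sales"
chosen = random.choices(
    list(next_token_probs),
    weights=list(next_token_probs.values()),
    k=1,
)[0]
print(prompt, chosen)  # e.g. "Please draft the quarterly sales report"

The program has no sense of whether "report" is the right word; it only selects a likely token, which is why Cummings stresses that human readers are the ones who assign meaning to the output.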

Cummings noted that the use of generative AI falls into a legal and ethical gray area.

“Most of these large language models are built on massive databases, so large that we have to assume that the data in them are stolen data,” he said. “What this means for the legality of their outputs is up for debate.”