AI-Assisted Emails May Put Trustworthiness at Risk in Workplace Communications
New research from Professor Peter Cardon suggests managers may undermine their trustworthiness when they use medium to high levels of AI assistance in email communication.
Generative artificial intelligence (AI) has taken on several roles in the workplace: calculator, personal assistant, and data analyst, to name a few. Yet, what are the consequences when technology replaces interpersonal communication itself?
New research from Peter Cardon, the Warren Bennis Chair in Teaching Excellence and professor of clinical business communication, and Anthony Coman (University of Florida) investigates the drawbacks of AI-assisted emails and the resulting perceptions of the managers who send them. The study — “Professionalism and Trustworthiness in AI-Assisted Workplace Writing: The Benefits and Drawbacks of Writing With AI,” published in the International Journal of Business Communication — presented 1,100 participants with manager emails in which the sections of text written or edited with the help of AI were highlighted.
The researchers found that when managers used only a small amount of AI in business communications, participants still considered them caring and competent at their jobs. Emails largely composed by programs such as ChatGPT and Claude, on the other hand, were judged inauthentic and damaged participants’ perceptions of the managers.
“If employees think the manager used AI for a little bit of editing, there’s almost no penalty to that … but as soon as they think the manager did most of the writing with AI, they think that manager’s uncaring,” Cardon said. “They feel like the manager doesn't deserve the job in the first place because they're not competent enough and they fundamentally don't trust the manager.”
According to Cardon’s other research, this penalty widens for emails that deal with emotions compared with those that are strictly informational. Workers will tolerate routine AI-written messages, but for communications that require more “emotional labor” — such as persuasive writing, performance reviews, or sympathy emails — employees judge the use of the technology more harshly.
“[People] might draft something out using AI … but they haven’t gone through the emotional labor,” Cardon said. “That’s where I think people are wanting more and more of the verbal conversations and even the in-person get-togethers where they can make the real judgements about the warmth of the relationship.”
By contrast, employees were less critical of their own use of AI, even in the same situations where they harshly criticized managers for it.
“[Employees] assume that they’re reading through their own email, ensuring that it reflects their own voice,” Cardon explained. “Whereas when the manager does it, they’re imagining that manager not wanting to take the time to carefully think through the message … [Employees will] generally read into the fact that [managers] didn’t take the time as a reflection that [managers] don’t care about the person.”
It is going to be to the point where AI is indistinguishable from what a person writes, but that also becomes a situation where the level of skepticism grows.
— Peter Cardon
Warren Bennis Chair in Teaching Excellence / Professor of Clinical Business Communication
In this study, Cardon and his team pointed out to participants which sections had been written or edited by AI. The professor noted, however, that in past research, subjects struggled to distinguish AI-generated text from text written by humans. Cardon speculates that as the gap between AI and human communication narrows, readers’ trust in all written communication will erode along with it.
“It is going to be to the point where AI is indistinguishable from what a person writes, but that also becomes a situation where the level of skepticism grows,” Cardon said. “Just by virtue of having the skepticism that someone may have used AI for most of the message is pretty problematic in terms of interpersonal communication, and it poses some pretty significant challenges for leaders and managers.”
Cardon explained that managers who rely exclusively on written communication may cultivate an “air of suspicion” among their subordinates and be viewed as inauthentic, untruthful, and nontransparent. The impact isn’t solely social, the professor pointed out; it could affect the bottom line.
“There are hundreds and hundreds of studies that show that when there’s low trust, employees don’t perform as well at the team level,” Cardon said. “Moving all the way up to the organizational level, if you look at measures of trust and transparency, you have lower company-wide performance in terms of revenues and ROI.”
As AI progresses at a rapid rate, Cardon predicts that AI-generated text will only grow less distinguishable from human communication, especially as new memory advancements allow the technology to write seamlessly in a user’s voice. With each step forward, the professor posits, employees and supervisors alike must become more conscious of how AI can erode interpersonal trust.
“We literally have millions of people in the workplace right now who are engaged in AI-mediated communication,” Cardon said. “As the tools increasingly gain the capability to speak in our individual voices, that’s where I think we’re going to see the implications in terms of interpersonal relationships. People need to be aware of what may happen if they only represent themselves in ways that AI may have generated.”
RELATED
Understanding Hidden Interaction Codes that Drive Business Success
A new study from Associate Vice Dean Peer Fiss and a former Marshall PhD student found that “category interaction codes” influence how consumers evaluate, buy, and use products.
MBV Education Helps Veteran Transition From Military into Defense Career at Anduril
With the backing of his degree and the support of the Trojan Network, Randall Parkes MBV ’21 seized an opportunity in cutting-edge defense technology.
Cited: Emily Nix in Financial Times
The Financial Times cites Nix’s study showing that women suffer falls in earnings and employment after cohabiting with an abusive partner.
Marshall Faculty Publications, Awards, and Honors: October 2025
We are proud to highlight the many accomplishments of Marshall’s exceptional faculty recognized for recently accepted and published research and achievements in their field.
Knight Foundation and USC Marshall Commit $4 Million to Advance Purpose-Driven AI Research
The research initiative aims to create ethical, human-forward outcomes for cutting-edge technology like AI.