JQCO, Ph.D. [in training]

Commentary from a communications perspective

Generative AI: Eliminating barriers to entry for information disorder

Misinformation (or information disorder, as some leading industry voices call it) has become a serious concern for its potential to disrupt civility and damage democracy, with its power used to sway elections and sow public discord in an increasingly polarized world (Kandel, 2020). Layering the unexplored potential of generative AI onto this already massive problem further complicates the issue. In Misinformation reloaded?, Simon et al. (2023) counter several arguments for AI's role in causing more information disorder in the digital age, but those arguments have merit on the surface. Creating and publishing inaccurate information has certainly become easier to do at scale with AI, and the personalization the technology makes available increases the potential for engagement and spread.

Increased quantity

Simon et al. (2023) argue that despite the abundance of misinformation and the ease of access to it, it is consumed only by a small group of vocal users, which makes AI-generated false information less of a threat. While numbers matter in determining the true direction of public discourse, the vocality and intense partisanship of these few but very active consumers cannot be discounted. When this relatively small group feeds more content into the ecosystem, it gives the appearance that more people support the agenda. Silence from the majority does nothing to balance political conversation when only one side is actively engaged. Add to that the rising isolation within groups that share the same values and ideals (because people find it increasingly difficult to talk to others they disagree with politically) and you have the Spiral of Silence theory working at full steam against healthy democratic processes (Khanna, 2022).

Increased quality

Another argument the authors put forth is that technologies for increasing the quality of misinformation were around long before AI entered the public imagination. What AI has done, however, is eliminate the need for technical knowledge to accomplish this. Photoshop has long been used to doctor images and create false narratives, but producing a convincing fake required some skill from the bad actor. With generative AI, that skill requirement is gone. It has become easier to blur the line between fact and fiction and manipulate public opinion (Gonzales, 2023), and the most important part is that today, anyone can do it.

Increased personalization

The authors also bring up a good point in noting that the technology for targeting people with tailor-made content to increase the potential for engagement is not directly tied to advancements in AI. However, this highlights the need for policy to protect the public from having their data used by bad actors to improve the success rate of their targeting. Online behavioral advertising is used all over the web, most prominently in e-commerce, to improve the effectiveness of digital marketing. That application sounds benign enough (useful, even), but when people's online behaviors are leveraged against entire groups to influence their democratic will, it becomes a much more serious conversation. The practice has been blamed almost single-handedly for the worst privacy problems today, and it has steered the development of hardware that improves our own devices' capability to spy on us (Cyphers & Schwartz, 2022).

Overall, the danger generative AI poses to democratic society lies not just in its technical capability, but in its power to lower, or even completely eliminate, the barrier to entry for bad actors looking to capitalize on misinformation opportunities.

References

Cyphers, B., & Schwartz, A. (2022, November 17). Ban online behavioral advertising. Electronic Frontier Foundation. https://www.eff.org/deeplinks/2022/03/ban-online-behavioral-advertising 

Gonzales, O. (2023, November 8). AI misinformation: How it works and ways to spot it. CNET. https://www.cnet.com/news/misinformation/ai-misinformation-how-it-works-and-ways-to-spot-it/

Kandel, N. (2020). Information disorder syndrome and its management. Journal of Nepal Medical Association, 58(224). https://doi.org/10.31729/jnma.4968 

Khanna, K. (2022, January 6). Can we talk? Most feel Americans get along, but vocal minority more active, divided – CBS News poll. CBS News. https://www.cbsnews.com/news/americans-political-dialogue-opinion-poll/

Simon, F. M., Altay, S., & Mercier, H. (2023). Misinformation reloaded? Fears about the impact of generative AI on misinformation are overblown. Harvard Kennedy School Misinformation Review. https://doi.org/10.37016/mr-2020-127

