Harmful or risky AI-driven products – digital programmatic advertising

Programmatic advertising is the automated serving of digital advertisements in real time, based on individual advertisement impression opportunities (Busch, 2016). In 2024, UK programmatic ad spend reached an estimated £37 billion and is forecast to approach £54 billion by 2028. More than four in every five pounds invested in digital advertising in the United Kingdom (UK) are transacted programmatically, and this share is projected to keep increasing (Statista, 2024).
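To illustrate the underlying mechanism, the short sketch below shows, in deliberately simplified form, how a single ad impression might be auctioned in real time between competing bidders. The field names, bidder labels and second-price rule are illustrative assumptions only, not a description of any particular ad exchange.

```python
# Illustrative sketch of a simplified real-time bidding (RTB) auction for one
# ad impression opportunity. Field names and the second-price rule are
# assumptions for illustration, not a description of any specific exchange.
from dataclasses import dataclass

@dataclass
class BidRequest:
    user_segment: str    # audience segment inferred from tracking data
    page_context: str    # the page or app where the ad would appear
    floor_price: float   # minimum acceptable price (CPM)

@dataclass
class Bid:
    bidder: str
    price: float         # offered price (CPM)

def run_auction(request: BidRequest, bids: list[Bid]) -> Bid | None:
    """Return the winning bid for one impression, or None if no bid clears the floor."""
    eligible = [b for b in bids if b.price >= request.floor_price]
    if not eligible:
        return None
    eligible.sort(key=lambda b: b.price, reverse=True)
    winner = eligible[0]
    # Under a second-price rule the winner pays the runner-up's price (or the floor).
    clearing_price = eligible[1].price if len(eligible) > 1 else request.floor_price
    return Bid(bidder=winner.bidder, price=clearing_price)

if __name__ == "__main__":
    request = BidRequest(user_segment="teen_gaming", page_context="video_feed", floor_price=1.00)
    bids = [Bid("dsp_a", 2.40), Bid("dsp_b", 3.10), Bid("dsp_c", 0.80)]
    print(run_auction(request, bids))  # dsp_b wins, paying 2.40 CPM
```

The point of the sketch is that each impression is traded individually, in milliseconds, on the basis of data held about the user and the viewing context; this is what enables the targeting, and the harms, discussed below.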

The rise of digital programmatic advertising has profoundly reshaped the landscape of consumer interaction, yet it has also precipitated significant online harms, particularly among vulnerable populations such as children, and women and girls. The use of AI, big data and machine learning algorithms to serve advertising content exploits consumer data, usually without sufficient consent or transparency, leading to privacy violations and the manipulation of users (Yuan et al., 2024; Chen et al., 2019). The digital advertising ‘attention economy’ uses AI-driven media placement that prioritises user engagement over user well-being to drive advertising revenues. This system incentivises sensationalist, polarising and harmful content and misinformation that captures user attention but may detrimentally impact mental health and social behaviour (Jones, 2024; Liang, 2022).
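As a stylised illustration of this incentive structure (and not a description of any platform's actual ranking system), the sketch below shows how a placement rule that scores items purely by predicted engagement and expected ad revenue systematically surfaces the most provocative content; the items, probabilities and revenue figures are invented for the example.

```python
# Stylised illustration of engagement-optimised content/ad placement.
# The items, engagement probabilities and revenue-per-engagement figures
# are invented; this is not any platform's actual ranking model.

items = [
    # (description, predicted engagement probability, ad revenue per engagement)
    ("measured news explainer",              0.02, 0.010),
    ("sensational outrage post",             0.09, 0.010),
    ("eating-disorder 'thinspiration' post", 0.12, 0.010),
]

def expected_revenue(item):
    _, p_engage, revenue_per_engagement = item
    return p_engage * revenue_per_engagement

# Ranking purely by expected ad revenue pushes the most provocative or harmful
# content to the top, because it is the most engaging, not the most beneficial.
for description, *_ in sorted(items, key=expected_revenue, reverse=True):
    print(description)
```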

Having long existed in a legal ‘blind spot’ (Wu, 2018), programmatic advertising operates with few risk assessment or mitigation obligations. Combined with the absence of robust online age verification mechanisms, this allows children to access inappropriate content in the first place, a problem often exacerbated by ad-revenue-prioritisation algorithms that expose them to further harmful material. Examples include content that promotes or glorifies self-injury, suicide or eating disorders (Ofcom, 2024a), children’s illegal access to pornography (Ofcom, 2024c), and content featuring child sexual exploitation and abuse (CSEA). Exposure to online CSEA content may in turn drive a range of harmful behaviours, including the sharing and circulation of child sexual abuse material, online grooming of children (where perpetrators may coerce children into sending sexual images of themselves), sexual extortion and, in some cases, the arranging of in-person child sexual abuse (Ofcom, 2024b). Indeed, reported cases of child sexual abuse material have risen by an alarming 87% since 2019, with over 32 million reports globally (WeProtect Global Alliance, 2023), and, shockingly, ‘self-generated’ sexual imagery of 7-10-year-olds increased by 360% between 2020 and 2022 (Internet Watch Foundation, 2023).

 Author 

Karen Middleton

 

 References 

Busch, O. (2016). The programmatic advertising principle. In O. Busch (Ed.), Programmatic advertising (pp. 3-15). Springer.

Chen, G., Xie, P., Dong, J., & Wang, T. (2019). Understanding programmatic creative: The role of AI. Journal of Advertising, 48(4), 347-355.

Internet Watch Foundation. (2023). ‘Self-generated’ child sexual abuse. https://www.iwf.org.uk/annual-report-2023/trends-and-data/self-generated-child-sex-abuse/

Jones, J. (2024). Don’t fear artificial intelligence, question the business model: How surveillance capitalists use media to invade privacy, disrupt moral autonomy, and harm democracy. Journal of Communication Inquiry, 01968599241235209.

Liang, M. (2022). The end of social media? How data attraction model in the algorithmic media reshapes the attention economy. Media, Culture & Society, 44(6), 1110-1131.

Ofcom. (2024a). Tech firms must tame toxic algorithms to protect children online. https://www.ofcom.org.uk/online-safety/protecting-children/tech-firms-must-tame-toxic-algorithms-to-protect-children-online/

Ofcom. (2024b). Tackling child sexual abuse under the online safety regime. https://www.ofcom.org.uk/online-safety/protecting-children/tackling-child-sexual-abuse-under-the-online-safety-regime/

Ofcom. (2024c). Implementing the Online Safety Act: Protecting children from online pornography. https://www.ofcom.org.uk/online-safety/protecting-children/implementing-the-online-safety-act-protecting-children/

Statista. (2024). Programmatic advertising spending in the United Kingdom (UK) from 2017 to 2028. https://www.statista.com/statistics/1147800/programmatic-advertising-spend-forecast-uk

WeProtect Global Alliance. (2023). Global Threat Assessment 2023. https://www.weprotect.org/global-threat-assessment-23/

Wu, T. (2018). Blind spot: The attention economy and the law. Antitrust Law Journal, 82, 771.

Yuan, Q. M., Fletcher-Brown, J., Middleton, K., & Lliyanaarachchi, G. (2024). Navigating gendered impacts of corporate surveillance in the digital advertising ecosystem: Privacy and consumer behaviour. Manuscript in preparation.