The AI Act needs a practical definition of ‘subliminal techniques’ (because those used in advertising aren’t enough)

While the draft EU AI Act prohibits harmful ‘subliminal techniques’, it doesn’t define the term – we suggest a broader definition that captures problematic manipulation cases without overburdening regulators or companies, write Juan Pablo Bermúdez, Rune Nyrup, Sebastian Deterding and Rafael A. Calvo.

Juan Pablo Bermúdez is a Research Associate at Imperial College London; Rune Nyrup is an Associate Professor at Aarhus University; Sebastian Deterding is a Chair in Design Engineering at Imperial College London; Rafael A. Calvo is a Chair in Engineering Design at Imperial College London.

If you have ever worried that organisations use AI systems to manipulate you, you are not alone. Many fear that social media feeds, search engines, recommender systems, or chatbots can affect our emotions, beliefs, or behaviours without our awareness.

The EU’s draft AI Act articulates this concern, mentioning “subliminal techniques” that impair autonomous choice “in ways that people are not consciously aware of, or even if aware not able to control or resist” (Recital 16, EU Council version). Article 5 prohibits systems using subliminal techniques that modify people’s decisions or actions in ways likely to cause significant harm.

This prohibition could helpfully safeguard users. But as written, it runs the risk of being inoperable. Everything depends on how we define ‘subliminal techniques’ – which the draft Act does not yet do.

Why narrow definitions are bound to fail

The term ‘subliminal’ traditionally refers to sensory stimuli that are weak enough to escape conscious perception but strong enough to influence behaviour; for example, showing an image for less than 50 milliseconds.

Defining ‘subliminal techniques’ in this narrow sense presents several problems. First, experts agree that subliminal stimuli have very short-lived effects at best, and at most nudge people towards things they are already motivated to do – hardly the kind of influence likely to cause significant harm.

Second, a narrow definition would not cover most of the problematic cases motivating the prohibition: when an online ad influences us, we are aware of the sensory stimulus (the visible ad).

Third, such legal prohibitions have historically proven ineffective, because stimuli that by definition escape conscious perception are hard to detect. As Neuwirth’s historical analysis shows, Europe prohibited subliminal advertising more than three decades ago, yet regulators have hardly ever pursued cases.

Thus, narrowly defining ‘subliminal techniques’ as subliminal stimulus presentation is likely to miss most manipulation cases of concern and end up as a dead letter.

A broader definition can align manipulation concerns with practical ones

We agree with the AI Act’s starting point: AI-driven influence is often problematic due to lack of awareness.

However, unawareness of sensory stimuli is not the key issue. Rather, as we argue in a recent paper, manipulative techniques are problematic if they hide any of the following:

  • The influence attempt. Many internet users are not aware that websites adapt based on personal information to optimise “customer engagement”, sales, or other business goals. Web content is often tailored to nudge us towards certain behaviours, while we remain unaware that such tailoring occurs.
  • The influence methods. Even when we know that some online content seeks to influence, we frequently don’t know why we are presented with a particular image or message – was it chosen through psychographic profiling, nudges, something else? Thus, we can remain unaware of how we are influenced.
  • The influence’s effects. Recommender systems are meant to learn our preferences and suggest content that aligns with them, but they can end up changing our preferences. Even if we know how we are influenced, we may still not know how the influence has changed our decisions and behaviours.

To see why this matters, ask yourself: as a user of digital services, would you rather not be informed about these influence techniques?

Or would you prefer knowing when you are targeted for influence; how influence tricks push your psychological buttons (that ‘Only 1 left!’ sign targets your loss aversion); and what consequences the influence is likely to have (the sign makes you more likely to purchase impulsively)?

We thus propose the following definition:

Subliminal techniques aim at influencing a person’s behaviour in ways such that the person is likely to remain unaware of (1) the influence attempt, (2) how the influence works, or (3) the influence attempt’s effects on decision-making or value- and belief-formation processes.

This definition is broad enough to capture most cases of problematic AI-driven influence, but not so broad as to become meaningless or excessively hard to put into practice. Our definition specifically targets techniques: procedures that predictably produce certain outcomes.

Such techniques are already being catalogued, for example in lists of nudges and dark patterns, so companies can check those lists and ensure that they either avoid such techniques or disclose their use.
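To illustrate how such a check might work in practice – a minimal, hypothetical sketch, not part of our proposal or any official taxonomy; the pattern names, data model, and function are all invented for illustration:

```python
# Hypothetical compliance sketch: audit UI features against a published
# dark-pattern list and flag uses that are not disclosed to the user.
# The pattern names below are illustrative stand-ins, not an official list.
from dataclasses import dataclass

DARK_PATTERNS = {"fake_scarcity", "forced_continuity", "confirmshaming"}

@dataclass
class Feature:
    name: str         # internal name of the UI feature
    technique: str    # influence technique the feature relies on
    disclosed: bool   # whether its use is disclosed to the user

def audit(features: list[Feature]) -> list[str]:
    """Return known dark patterns that are used but not disclosed."""
    return [
        f"{f.name}: uses '{f.technique}' without disclosure"
        for f in features
        if f.technique in DARK_PATTERNS and not f.disclosed
    ]

if __name__ == "__main__":
    ui = [
        Feature("only_1_left_banner", "fake_scarcity", disclosed=False),
        Feature("newsletter_opt_out", "confirmshaming", disclosed=True),
    ]
    for issue in audit(ui):
        print(issue)
```

The point of the sketch is simply that, once techniques are named and listed, checking for undisclosed use becomes a routine audit rather than an open-ended judgement call.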

Moreover, the AI Act does not prohibit subliminal techniques per se, but only those that may cause significant harm. Thus, the real (self-)regulatory burden lies in testing whether a system increases the risk of significant harm, which is arguably already part of standard user-protection due diligence.

Conclusion

The default interpretation of ‘subliminal techniques’ would render the AI Act’s prohibition irrelevant to most forms of problematic manipulative influence, and toothless in practice.

Therefore, ensuring the AI Act is legally practicable and reduces regulatory uncertainty requires a different, explicit definition – one that addresses the underlying societal concerns over manipulation while not over-burdening service providers.

We believe our definition achieves just this balance.

(The EU Parliament’s draft added prohibitions on “manipulative or deceptive techniques”, which present challenges worth discussing separately. Here we claim that prohibitions on subliminal techniques, properly defined, could tackle manipulation concerns.)

Source: The AI Act needs a practical definition of ‘subliminal techniques’ – EURACTIV.com
