As personal data, behavioural patterns, social networks, emotions, preferences, and even digital replicas increasingly become resources that can be digitised, commodified, packaged, traded, and monetised, a fundamental question arises: should individuals hold enforceable rights to know, control, and benefit from these digital extensions of the self?
The ‘Bulk Sale’ of a Ukrainian Woman’s Digital Replica
In 2023, Olga Loiek, a 21-year-old woman from Ukraine, discovered that her face and voice had appeared in dozens of videos circulating on Chinese social media platforms, including RedNote (Xiaohongshu) and Bilibili. In these videos, ‘she’ was made into a ‘Russian girl’ promoting China–Russia friendship and advertising products in Mandarin, a language she had never learned.
These ‘girls,’ bearing Loiek’s facial features and voice and appearing under names such as Sofia, Natasha, and April, were generated and deployed at scale through AI systems, entirely without her knowledge or consent.
This wholesale exploitation of Loiek’s digital replica stripped her of the sovereignty she ought to retain over her own appearance and personality. Beyond the absence of any economic compensation, her nationality, political stance, and identity were deliberately reconstructed and publicly disseminated. Such conduct constitutes not merely unauthorised commercial use, but a profound interference with personal dignity and the right to self-determination of identity.
Structural Gaps in Personality Rights Protection
Under existing legal frameworks, claims based on unjust enrichment or tortious liability may, in principle, be available in cases involving the unauthorised use of another’s likeness. Victims may theoretically seek restitution of commercial profits generated from the conduct; however, such claims are often constrained by significant evidentiary and doctrinal hurdles.
More fundamentally, the scope and clarity of personality rights protection remain markedly insufficient. Taking Taiwan as an example, Article 18 of the Civil Code permits claims for damages or emotional distress only where specific statutory authorisation exists, and protection extends expressly only to traditional categories such as bodily integrity, health, reputation, liberty, privacy, and sexual autonomy. It does not, however, expressly address the unauthorised use of facial features, voice, or behavioural patterns as digital identity markers, compelling courts to rely on expansive interpretations of ‘other personality interests’ to accommodate such harms.
This interpretive approach, while pragmatic, produces legal uncertainty, as the scope of protection, elements of liability, and available remedies remain unstable and unpredictable.
‘Digital Human Trafficking’
In the Web 2.0, and increasingly Web 3.0, era, digital replicas comprising appearance, voice, accounts, and virtual avatars have not only become integral components of personal identity; they also function as the ‘molecules’ and ‘assets’ from which the digital world is built.
Within this structure, collecting, packaging, exploiting, and reselling an individual’s digital replica, under conditions where the person is neither fully informed nor able to refuse, becomes structurally and ethically indistinguishable from human trafficking: both rest on the same essential logic of commodifying, pricing, and circulating human beings as objects of trade.
In this sense, the unauthorised use of deepfake technologies to generate and manipulate a person’s face or voice for commercial or political purposes may properly be characterised as a new form of ‘Digital Human Trafficking.’
Power Asymmetry and Digital Exploitation
The imbalance of power and information between individuals and platforms is particularly stark in the current digital ecosystem. Digital replicas can be rapidly exploited and monetised while the original subject remains unaware, unable to exercise meaningful control, and excluded from the value generated through their own identity.
As digital replicas are gradually acknowledged both as components of personality and as assets in social and economic interaction, legal systems should correspondingly recognise rights of knowledge, control, and benefit-sharing vested in the individual.
From Product Design to Legal Recognition
Recent developments suggest an emerging shift towards institutional recognition of these concerns. Certain generative video AI products now allow users to designate themselves or others as ‘reusable characters’ and to manage the scope of use, sharing permissions, and whether others may access the character.
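To make the idea concrete, a consent and permission model of this kind might resemble the following minimal sketch. It is a hypothetical illustration rather than the schema of any actual product: the type names, fields, and the `canUse` helper are all assumptions introduced here for clarity.

```typescript
// Hypothetical data model for a 'reusable character': the subject records
// explicit consent, limits the permitted uses, and can revoke at any time.
type UsageScope = "personal" | "commercial" | "political";

interface ReusableCharacter {
  characterId: string;
  subjectId: string;            // the person whose likeness and voice are captured
  allowedScopes: UsageScope[];  // uses the subject has expressly authorised
  sharedWith: string[];         // accounts permitted to reuse the character
  consentGivenAt: Date;
  revokedAt?: Date;             // set when the subject withdraws consent
}

// Check whether a requested use is covered by the subject's current consent.
function canUse(character: ReusableCharacter, requesterId: string, scope: UsageScope): boolean {
  if (character.revokedAt !== undefined) return false;           // consent withdrawn
  if (!character.sharedWith.includes(requesterId)) return false; // requester not authorised
  return character.allowedScopes.includes(scope);                // use must fall within scope
}
```

Even this minimal structure reflects the elements emphasised throughout this article: express authorisation, a bounded scope of use, and the ability to withdraw consent.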
More explicitly, in 2024 the US state of Tennessee enacted the Ensuring Likeness, Voice, and Image Security Act (also known as the ‘ELVIS Act’), which directly addresses unauthorised AI-generated imitation of a person’s voice or likeness and imposes both civil and criminal liability. Such efforts can serve as valuable reference points for future regulatory design.
From Physical Persons to Digital Personality
Since the nineteenth century, struggles for personal liberty and against human trafficking have led to international law and human rights frameworks designed to prevent human beings from becoming objects of trade. In today’s digital world, we face the very same fight for freedom now that ‘personality has been digitised.’
Within Taiwan’s current legal framework, Article 18 of the Civil Code adopts a ‘special provision reservation,’ which significantly limits effective protection and leaves violations of digital identity in a legal grey area. Even where victims know that their facial features, voice, behavioural style, and other ‘digital identity characteristics’ have been deepfaked, misappropriated in bulk, or commercially monetised, they often lack a clear, predictable, and readily applicable path to a remedy, and face systemic disadvantages in evidence collection, jurisdiction, damage calculation, and timely takedown procedures.
From Principles to Enforceable Rights
The recent passage of Taiwan’s Artificial Intelligence Basic Act has established foundational governance principles grounded in human-centred values and fundamental rights protection. The critical challenge now lies in translating these abstract principles into operable rights and procedures.
Specifically, future legal development should address:
- Institutionalisation of digital identity rights: the formal recognition of digital identity rights, including informed consent, authorisation, withdrawal, traceability, and benefit-sharing;
- Rapid remedy mechanisms: effective and rapid remedies, such as takedown mechanisms, evidence preservation, and accountability tracing;
- Responsibility allocation standards: a clear division of responsibilities among platforms, model providers, and users.
Conclusion
As the virtual world increasingly permeates everyday life, the legal protection of ‘digital personality’ must evolve in step with the times.
To what extent should individuals hold rights over their digital replicas? How should such rights be granted, revoked, and traced? And what remedies should be available when digital replicas are violated?
These questions are no longer merely matters of technical design or ethical discussion, but constitute one of the most pressing rule of law challenges of the digital age.
This article is also published on 張蓉菁’s Substack.