At a two-day conference on “Artificial Intelligence and Justice”, held at Spain’s Supreme Court and organised by the Asociación de Letrados del Tribunal Supremo (ALTS), Supreme Court judge Manuel Marchena warned that artificial intelligence (AI), deepfakes and the data economy are transforming how societies understand truth, democracy and justice. Speaking in Madrid before an audience of students and professionals, he outlined the legal and ethical challenges posed by deepfakes, algorithmic profiling and the concentration of digital power, challenges that Europe, he argued, must urgently confront.
Why “Free” Platforms Are Not Really Free
Marchena began by asking a simple question: if platforms like Twitter (now X), WhatsApp or Google charged one euro a year, would people pay? Most likely yes, he argued. And even at ten euros, millions would continue using the services.
Yet these companies — some valued in the tens or hundreds of billions — charge nothing. For Marchena, this reveals that their true business model is personal data, often described as “the oil of the 21st century”. That data, he said, has geopolitical value and gives enormous influence to the companies that control it.
A New Digital Class: Harari’s “Irrelevants”
Citing historian Yuval Noah Harari, the judge warned that digital systems may create a new social category: the irrelevants. These are people whose data do not interest algorithms and whose voices risk being excluded from political and economic processes.
Marchena suggested that this emerging divide is already visible in how algorithms determine what citizens see, read and discuss online.
Deepfakes and the “Right Not to Be Deceived”
One of Marchena’s strongest warnings concerned the spread of AI-generated deepfakes. Today, he said, it is possible to produce videos or audio recordings in which public figures appear to say things they never said. By the time a person proves the content is fake, the political or reputational damage may already be irreversible.
This is why many institutions, including the European Parliament, are developing regulatory tools to protect what Marchena called the emerging “right not to be deceived.”
He linked these concerns to the new EU Artificial Intelligence Act, which imposes strict requirements on high-risk AI systems and bans certain biometric surveillance practices that threaten citizens’ rights.
Nanotechnology, Quantum Computing and a Robotised Society
Marchena also referenced ideas published by Mustafa Suleyman, co-founder of DeepMind, describing technologies that will radically transform society: quantum computing, nanotechnology, and robotics.
He imagined a future where large buildings could be constructed entirely by autonomous robots operating continuously, “365 days a year, 24 hours a day”, with profound implications for labour markets and public services.
Kasparov vs Deep Blue: When Machines Surpassed Human Intuition
To illustrate the evolution of machine capability, the judge revisited the historic matches between Garry Kasparov and IBM’s Deep Blue. While a human chess master can think a few moves ahead, Deep Blue evaluated hundreds of millions of possible positions per second.
For Marchena, this symbolic moment still matters: it marks the point at which machines undeniably surpassed human intuition in certain domains.
Predictive Algorithms and the End of Privacy?
The lecture also referenced a University of Cambridge study showing how algorithms can infer personality traits and preferences from social-media activity. With enough data, the system may understand a person better than their partner — or even their parents.
Marchena warned that this raises not only privacy risks but also questions about autonomy: what happens if individuals start trusting algorithms more than their own judgment?
AI, Human Rights and Legal Safeguards
The judge acknowledged that experts disagree: some fear AI’s existential risks, while others argue that AI will never replicate the depth of human intuition or moral reasoning.
He criticised transhumanist ideas of “digital immortality”, describing them as technological myths that confuse a digital copy with genuine personal identity.
These concerns echo recent European debates on human rights and AI, including issues explored in The European Times’ reporting on AI and fundamental rights.
Democratic Accountability and Deepfake Protection
Marchena’s warnings align with a broader European trend: new rules against deepfakes, image manipulation and abuses of biometric data. Denmark, for example, has been drafting laws to protect citizens against ultra-realistic digital impersonations created without consent.
At EU level, investigations such as the European Ombudsman’s inquiry into AI standards reflect growing concern about accountability, non-discrimination and algorithmic transparency.
Justice in an Algorithmic Age
According to Marchena, justice systems cannot remain neutral or passive in the face of such profound changes. Courts already deal with digital evidence, algorithmic policing tools and media narratives that evolve faster than judicial procedures.
The challenge, he argued, is to ensure legal transparency, non-discrimination and effective remedies for those harmed by automated decisions. Technology must remain accountable to democratic principles.
A Call for Critical Thinking
Marchena concluded by urging the public — especially younger generations — to maintain critical distance from algorithmic recommendations. The future of democracy and justice, he suggested, depends on the capacity of citizens to question digital systems rather than surrender to them.