Electronic Signatures, ChatGPT, and Deepfakes: Can We Still Trust the Source?

After the initial excitement came disillusionment. Many early users of ChatGPT quickly realized how confidently it produces incorrect information, often with convincing detail, simply to please its user. A similar case of misplaced trust was Google Bard’s costly demo failure, in which the chatbot falsely claimed that the James Webb Space Telescope had captured the first image of an exoplanet.

Unintentional misinformation generated by AI algorithms is concerning enough, but the rise of deepfakes represents a much more alarming threat. Manipulated audio and video could transform “fake news” from an online annoyance into a weapon of mass deception. Among the most famous examples is the fabricated video of Barack Obama hurling insults at Donald Trump.

Meanwhile, Twitter, now rebranded as X, added fuel to the fire with the failure of its “blue badge” system, which allowed impersonation because it lacked true identity verification.

The Common Thread: Trust in the Source

All these examples share a core issue: the problem of trust in the information’s origin. With ChatGPT, the source is effectively unknown. With deepfakes, the identity of the real author is deliberately falsified. And on platforms like Twitter, verification is often superficial and devoid of rigor.

Establishing a reliable and secure way to authenticate the source of information or digital content has long challenged archivists and regulatory bodies. The eIDAS regulation addresses part of this issue by setting rules for digital trust across Europe. The underlying principle is straightforward: archiving digital documents is good; signing them electronically during archiving is better; verifying their authenticity before archiving is best.

That final step relies on the complex process of electronic signature verification, including certificate validation and non-revocation checks, which must take place before a document enters an archive.
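To make that pre-archiving step concrete, here is a minimal sketch of verifying an electronic signature before accepting a document into an archive. It uses the third-party `cryptography` package; the function name is illustrative, and a real eIDAS-compliant validation would also include certificate-chain and revocation (CRL/OCSP) checks, which are omitted here.

```python
# Minimal sketch: reject any document whose signature does not verify
# before it enters the archive. Not a full eIDAS validation.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

def verify_before_archiving(document: bytes, signature: bytes, public_key) -> bool:
    """Return True only if the signature over the document is valid."""
    try:
        public_key.verify(signature, document, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

# Demo with a freshly generated key pair standing in for a signer
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
document = b"Report to be archived"
signature = private_key.sign(document, padding.PKCS1v15(), hashes.SHA256())
```

With this gate in place, a tampered document fails verification and never reaches the archive, which is exactly the ordering the principle above demands.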

The Paradox of Authenticity

Curiously, even archivists have debated how far they should go in verifying authenticity. A striking observation from the study Integrity, Signature, and Archiving Processes by Françoise Banat-Berger and Anne Canteaut encapsulates the dilemma:

“No one asks an archivist to verify the handwritten signatures on incoming documents. The archivist’s duty is to guarantee the integrity of the archiving process even for forgeries. In fact, the archivist must be able to prove that false documents were perfectly preserved.”

This paradox extends naturally to today’s digital challenges. Imagine being able to prove, without any possibility of manipulation, that a specific piece of information truly originated from NASA or that a video of Barack Obama was indeed produced by the real U.S. president.

A 2019 paper, Protecting World Leaders Against Deep Fakes, explored biometric-based solutions for identity verification in such contexts. Similarly, the certificate-chain mechanism of electronic signatures could inspire authenticity-verification frameworks for digital media.
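The certificate-chain idea can be illustrated with a deliberately simplified toy model (not real X.509): each “certificate” binds a subject name to a public key and carries its issuer’s signature, so trust flows down from a single trusted root key. The class and function names below are hypothetical, and the sketch uses the third-party `cryptography` package.

```python
# Toy chain of trust: root signs the intermediate's key, which signs
# the leaf's key. Verification walks the chain from the trust anchor.
from dataclasses import dataclass
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa, padding

PAD, HASH = padding.PKCS1v15(), hashes.SHA256()

def key_bytes(public_key):
    # Canonical DER encoding of a public key, used as part of the signed payload
    return public_key.public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo)

@dataclass
class ToyCert:
    subject: bytes
    public_key: object  # the subject's public key
    signature: bytes    # issuer's signature over subject + key bytes

def issue(issuer_private_key, subject, subject_public_key):
    payload = subject + key_bytes(subject_public_key)
    return ToyCert(subject, subject_public_key,
                   issuer_private_key.sign(payload, PAD, HASH))

def verify_chain(chain, trust_anchor):
    """chain is ordered leaf-first; trust_anchor is the root public key."""
    issuer_key = trust_anchor
    for cert in reversed(chain):  # check from the root downward
        try:
            issuer_key.verify(cert.signature,
                              cert.subject + key_bytes(cert.public_key),
                              PAD, HASH)
        except InvalidSignature:
            return False
        issuer_key = cert.public_key  # this cert now vouches for the next
    return True

root = rsa.generate_private_key(public_exponent=65537, key_size=2048)
intermediate = rsa.generate_private_key(public_exponent=65537, key_size=2048)
leaf = rsa.generate_private_key(public_exponent=65537, key_size=2048)

chain = [issue(intermediate, b"media-outlet", leaf.public_key()),
         issue(root, b"trust-service", intermediate.public_key())]
```

An authenticity framework for media could follow the same shape: a publisher’s key is vouched for by an authority, whose key is vouched for by a trusted root, and a video’s signature only checks out if the whole chain does.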

Authenticity Alone Is Not Enough

Even if authenticity could be established with certainty, that would not guarantee truthfulness. Authentic documents can still contain false information. History is filled with examples of trusted authorities spreading misleading or incorrect claims. The so-called “Nobel disease,” where some Nobel Prize winners later advocate pseudoscientific theories, illustrates how the argument from authority can fail.

Thus, an additional layer, a proof of veracity, is needed alongside proof of authenticity. Fact-checking plays a crucial role here, but the scientific study of “veracity assessment” is still in its infancy, as recent research has highlighted.

Unlike electronic signature validation, truth verification remains largely undeveloped in digital archiving. The question is: how long can we afford to ignore it?

Human Oversight and the Role of Workflows

One potential safeguard lies in human validation within archiving workflows. Requiring human approval before any document enters a digital archiving system could provide an extra layer of trust, especially for sensitive or high-impact content.

This approach may not scale perfectly, but it reinforces accountability. By combining human oversight, electronic certification, and automated verification, the information ecosystem could begin to rebuild trust from the ground up.
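The two-gate combination described above, an automated check followed by explicit human approval, can be sketched as a small workflow. Everything here is hypothetical: the class names, the states, and the boolean signature flag all stand in for a real system.

```python
# Hypothetical archiving workflow: documents pass an automated
# signature gate, then wait for human approval before being archived.
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"

@dataclass
class Submission:
    doc_id: str
    signature_valid: bool
    status: Status = Status.PENDING

class ArchiveWorkflow:
    def __init__(self):
        self.queue = []    # awaiting human review
        self.archive = []  # accepted documents

    def submit(self, sub: Submission) -> None:
        # Automated gate first: invalid signatures never reach a reviewer
        if not sub.signature_valid:
            sub.status = Status.REJECTED
            return
        self.queue.append(sub)

    def approve(self, doc_id: str) -> bool:
        # Human gate: only an explicit approval moves a document in
        for sub in self.queue:
            if sub.doc_id == doc_id:
                sub.status = Status.APPROVED
                self.archive.append(sub)
                self.queue.remove(sub)
                return True
        return False

wf = ArchiveWorkflow()
wf.submit(Submission("doc-1", signature_valid=True))
wf.submit(Submission("doc-2", signature_valid=False))
```

The design choice worth noting is the ordering: the cheap automated check filters the queue so that human attention, the scarce resource, is spent only on documents that already pass the mechanical test.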

Toward a New Foundation of Digital Trust

In the long term, meaningful progress in digital information management, whether in AI, archiving, or content dissemination, will depend on restoring trust. Establishing verifiable authenticity mechanisms and developing reliable truth-assessment methods could form the cornerstone of this renewed trust model.

Such a cross-disciplinary effort may well be the key to the survival and credibility of the digital information sector itself.
