
Potential Perils of Deepfake Tech: A CEO’s Stark Warning


By Talia M.

- Dec 1, 2025

In a surprising move, Sam Altman, chief executive of OpenAI, the firm behind the breakthrough AI chatbot ChatGPT, is issuing stark warnings about the potential impact of his own products and similar technologies. On a podcast with venture capital firm Andreessen Horowitz earlier this year, Altman predicted that "really bad stuff" could come from such technology.

This caution doesn't spring from baseless speculation. Videos generated by OpenAI's new Sora app rapidly spread across social media platforms, catapulting the application to the top of Apple's U.S. App Store. Among them were deepfakes of historical figures, and of Altman himself, shown in unseemly situations. In response, OpenAI barred users from creating Sora videos depicting Martin Luther King Jr.

This raises a question: why would Altman's firm, while urging caution, also contribute to the worrying situation? Altman's view is that if society is going to face "incredible video models that can deepfake anyone," it needs to be prepared. Rather than developing such pivotal technology behind closed doors, society and AI must adapt together: guardrails and norms can only be established through early exposure and co-evolution.

AI-generated deepfakes can produce convincingly real-looking videos, which may make it hard to trust anything you see on social media, whether news footage, financial advice, or anything else. Scammers are already exploiting similar tools for fraud. Altman urges a cautious approach: always question the authenticity of content encountered on social media before accepting it as real.

The impact of these deepfakes goes beyond falsified videos. When algorithms few people understand heavily influence our decisions, the consequences are often unpredictable and daunting. Altman warns this could trigger unexpected chain reactions, sparking rapid shifts in information, politics, and societal trust.

Despite these daunting prospects, Altman rejects regulating the technology at this stage, arguing the drawbacks would outweigh the benefits. He does, however, advocate "very careful safety testing" for what he calls "extremely superhuman" models. "I think we'll develop some guardrails around it as a society," he concluded.