China’s RealAI Launches World’s First Real-Time AI Content Detection Tool Amid Rising AI Safety Concerns

TMTPOST--Chinese AI safety company RealAI has launched RealBelieve, the world's first product capable of detecting AI-generated content in real time.

Unlike earlier AI-generated content (AIGC) detection products, which required users to upload files for analysis, RealBelieve, released last Friday, is designed for end users and offers proactive real-time detection. It can verify facial authenticity in video streams and alert users to potentially AI-generated content while they browse the web.

RealBelieve provides text, image, video, and audio detection through file uploads, plus real-time alerts via a browser plugin, turning passive defense into active protection. The product is now open for beta testing.

Tian Tian, CEO of RealAI, said the company has already served over 100 clients in government, finance, and other sectors. As AI technology advances, he noted, safety is becoming an ever higher priority in AI development: if AI is to progress toward superintelligence, a comprehensive safety system must be established first to ensure that superintelligence technology develops safely.

"If we don’t address AI safety now, we may not get another chance," Tian warned.

At the 2024 BAAI Conference, Zhang Hongjiang, Chairman of the Academic Advisory Committee of the Beijing Academy of Artificial Intelligence (BAAI), pointed to the rapid development of large models over the past year and stressed that artificial intelligence (AI) safety issues are both serious and urgent.

"When we examine AI safety issues from different perspectives, apart from understanding societal biases, misinformation, potential job displacement, large-scale automation by autonomous robots, and the resulting economic impacts, we must also focus on the potential catastrophic risks or misuse incidents that could lead to human risks," Zhang said. "There are many AI safety statements and petitions, but it’s more important to set clear goals, invest resources, take actions, and jointly address these risks."

Huang Tiejun, Chairman of BAAI, added that AI has entered a high-risk phase in terms of safety. He proposed a five-level classification of artificial general intelligence (AGI) capabilities as a framework for addressing safety risks, and urged humanity to commit to resolving AI safety issues and to deepen international cooperation so that AI technology remains controllable and the arrival of safe AGI can be welcomed.

A string of incidents in recent years has kept AI safety in the spotlight. In January, malicious AI-generated fake photos of an American pop singer spread rapidly on social media, causing her considerable distress. In February, a Hong Kong company lost HK$200 million to fraud after an employee was deceived during a video conference by a "leader" who was in fact an AI deepfake: the fraudsters used publicly available information to recreate the CFO's image and voice, staging a realistic but entirely fake video conference.

Ilya Sutskever, OpenAI’s co-founder and former chief scientist, has started a new AI company focused on safety. In a post on June 19, Sutskever revealed Safe Superintelligence Inc. (SSI), a startup with “one goal: creating a safe and powerful AI system.”

The announcement describes SSI as a startup that “approaches safety and capabilities in tandem,” letting the company quickly advance its AI system while still prioritizing safety. It also calls out the external pressure AI teams at companies like OpenAI, Google, and Microsoft often face, saying the company’s “singular focus” allows it to avoid “distraction by management overhead or product cycles.”

“Our business model means safety, security, and progress are all insulated from short-term commercial pressures,” the announcement reads. “This way, we can scale in peace.” In addition to Sutskever, SSI is co-founded by Daniel Gross, a former AI lead at Apple, and Daniel Levy, who previously worked as a member of technical staff at OpenAI.

Sutskever describes SSI as the world's first dedicated safe superintelligence lab, with one goal and one product: a safe superintelligence. However, the company has yet to disclose its shareholders, research team, or business model.

Sutskever's departure from OpenAI stemmed largely from disagreements with CEO Sam Altman and the core management team, especially over how to manage the safe development of superintelligent AI and AGI.

Tian told TMTPost AGI that the conflict between Altman and Sutskever centers on their differing approaches to AI safety, and that Sutskever established his new company to pursue those safety objectives.

Tian highlighted that Sutskever, like Turing Award winner Geoffrey Hinton and others, believes AI safety issues are now "extremely urgent": if they are not addressed immediately, the window may close and later remediation may prove impossible.

"The same applies to large models. Although we have high expectations for large models across many fields, there are currently very few typical applications in serious scenarios due to AI safety concerns. Without solving the safety and controllability issues, no one will trust or use AI in serious contexts," Tian explained. If companies neglect safety and push forward recklessly, it could lead to a series of safety hazards, potentially posing risks to humanity as a whole.

Before the release of ChatGPT, Sutskever had already warned about the potential threats AGI poses to human society.

He compared the relationship between AGI and humans to that between humans and animals, noting, "I think a good analogy would be the way humans treat animals. It's not that we hate animals. I think humans love animals and have a lot of affection for them, but when the time comes to build a highway between two cities, we are not asking the animals for permission. We just do it because it's important for us. And I think by default that's the kind of relationship that's going to be between us and AGIs which are truly autonomous and operating on their own behalf."

AGI, which refers to AI possessing intelligence equal to or surpassing that of humans, once seemed a distant prospect. However, with OpenAI's launch of its fourth-generation model, GPT-4, and its ongoing training of GPT-5, AGI now appears within reach.