Google Exec Admits Globalists Are Using AI To Design ‘Deadlier Pandemics’

A former Google executive has warned that the globalist elite are currently using artificial intelligence (AI) to develop a new, more lethal pandemic that will be unleashed to depopulate the world. Mustafa Suleyman, who previously served as a leading executive in Google’s AI division, DeepMind, says he is deeply concerned that unchecked advancements in AI could lead to new, more lethal biological threats.

During an interview on The Diary of a CEO podcast, Suleyman warned, “The darkest scenario is that people will experiment with pathogens, engineered synthetic pathogens that might end up accidentally or intentionally being more transmissible.”

Suleyman also said that these synthetically designed pathogens, resulting from AI-driven research, “can spread faster or [be] more lethal…They cause more harm or potentially kill, like a pandemic.”

Thefederalistpapers.org reports: He emphasized the urgent need for stricter regulation surrounding AI software.

Raising a hypothetical, yet plausible, concern, Suleyman shared his “biggest fear” that a “kid in Russia” could genetically engineer a new pathogen which could trigger a pandemic that’s “more lethal” than anything the world has faced thus far, within the next five years.

Recognizing the immediacy of this potential threat, he said, “That’s where we need containment. We have to limit access to the tools and the know-how to conduct such high-risk experimentation.”

As the tech industry converges in Washington on September 13 for an AI summit, led by Senate Majority Leader Chuck Schumer, Suleyman’s voice resonates with a sense of urgency.

“We in the industry who are closest to the work can see a place in five years or 10 years where it could get out of control and we have to get on top of it now,” he said.


“We really are experimenting with dangerous materials. Anthrax is not something that can be bought over the internet and that can be freely experimented with,” he continued. “We have to restrict access to those things.”

“We have to restrict access to the software that runs the models, the cloud environments, and on the biology side it means restricting access to some of the substances,” Suleyman added.

His statements echo a broader sentiment within the tech community. In March, a host of tech magnates put pen to paper, signing an open letter that advocated for a six-month moratorium on AI training.

Elon Musk, CEO of Tesla and SpaceX, also shared similar concerns, drawing parallels to fictional narratives, cautioning that unchecked AI could potentially turn hostile, much like the robots in the popular film series, “Terminator.”

Suleyman ended on a contemplative note, stating, “Never before in the invention of a technology have we proactively said we need to go slowly. We need to make sure this first does no harm.”

“That is an unprecedented moment,” he added. “No other technology has done that.”