munkle
Diamond Member
- Dec 18, 2012
- 5,528
- 9,632
- 2,130
So these madmen are creating something smarter than they are, something they cannot control and cannot predict. Why not legalize building rocket launchers in your garage if you are smart enough to make one?
NDTV World: "AI Robot Attacks Crowd At China Festival, Internet Says 'So It Begins'"

"Shocking footage has captured the moment a humanoid robot went rogue and charged at a crowd of festival-goers. The video, taken on February 9 at the Spring Festival Gala in Tianjin, northeast China, shows the robot, clad in a vibrant jacket, suddenly lunging towards a group of stunned onlookers gathered behind a barricade. Security personnel swiftly intervened, dragging the erratic robot away from the crowd to prevent any potential harm.
Event organizers downplayed the incident, attributing it to a "robotic failure." They assured that the robot had successfully passed safety tests before the event and emphasised that measures have been taken to prevent such an incident from occurring again, Metro reported.
The robot in question is a "humanoid agent AI avatar" made by Unitree Robotics. According to reports, a software glitch is believed to have triggered the robot's erratic behaviour.
This incident is not an isolated one, as there have been previous cases of rogue AI making headlines, including an instance where a robot attacked an engineer at Tesla's Texas factory. The machine pinned him down and inflicted injuries with its claws on his back and arm, leaving behind a trail of blood, according to an official incident report.
In many of these cases, software malfunctions have been identified as the underlying cause, highlighting the importance of robust testing and quality control in AI development.
Growing concerns have been raised about the potential risks and consequences of machines on human life. Reacting to the video, one user wrote, "So it begins... An AI-controlled robot attacked a human."
Another commented, "Just a little preview of our bright future." A third said, "Can we please work out ALL the glitches before being released upon the public?" A fourth added, "Should we be worried that AIs and robots can turn dangerous against humans due to glitch.""
MORE: The Economic Times: "Video of robot hitting people in China goes viral, Internet asks 'should we be worried?'"
Stop AI Group
AI Gurus Including Elon Musk and "Godfather" of AI Call for Moratorium on Super-Intelligent AI
In March 2023, barely noticed in the media, the biggest names behind AI called for a pause in the progress of their own work, out of concern that it might go out of control and endanger humanity. But the call was ignored by governments and business and quickly forgotten.
Le Monde reported, in "Elon Musk and hundreds of experts call for 'pause' in AI development":
"We must "pause" the advance of artificial intelligence, say over a thousand experts and researchers in the sector, including Tesla CEO Elon Musk, in an open letter published Tuesday, March 28. They want to suspend research for a period of six months on systems more powerful than GPT-4, the new language processing model launched in mid-March by OpenAI. This is the company behind the ChatGPT chatbot, a business co-founded by Musk himself. The "pause" would serve to develop better safeguards for such software, deemed a "risk to humanity."
"Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control," wrote the signatories, referring to announcements by OpenAI and its partner Microsoft, but also those of Google and Meta, as well as numerous start-ups.
"Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?" asked the authors of the letter."
Also in 2023, the "Godfather" of AI, Dr. Geoffrey Hinton, quit his job at Google and said he "regretted his life's work."
The New York Times reported in May 2023, in "'The Godfather of A.I.' Leaves Google and Warns of Danger Ahead":
"Geoffrey Hinton was an artificial intelligence pioneer. In 2012, Dr. Hinton and two of his graduate students at the University of Toronto created technology that became the intellectual foundation for the A.I. systems that the tech industry’s biggest companies believe is a key to their future...
Dr. Hinton said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life’s work....
Somewhere down the line, tech’s biggest worriers say, it could be a risk to humanity.
“It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said."
Expert shows AI doesn't want to kill us, it has to
MORE: The Last American Vagabond: The Stargate Project: What You’re Not Being Told: Trump is partnering with technocrats to promote mRNA injections, AI, and transhumanism.