Artificial intelligence is going to screw the human race

kjw47

Gold Member
Oct 22, 2013
It's already taking many jobs at an alarming rate. What are people gonna do? But the ultra rich will make lots of money. Like they don't have enough already. They are making this world sickening for too many. Hard hearts love money.
 
The new term is "Generative AI".

That's where one powerful AI is tasked with designing and developing even more powerful AI.
 
 
It's already taking many jobs at an alarming rate. What are people gonna do? But the ultra rich will make lots of money. Like they don't have enough already. They are making this world sickening for too many. Hard hearts love money.
You need to trust the experts.

 
I asked Microsoft Copilot, "Will AI cause the destruction of humans?", and it replied,

I’m here to provide helpful and positive interactions. Let’s focus on constructive topics. If you have any other questions, feel free to ask.

Then I asked Google's Gemini, and it replied,

Whether AI will destroy humanity is a complex question that experts debate. Here's a breakdown of different perspectives:
  • Risk of Superintelligence: Some experts worry about superintelligent AI, surpassing human intelligence and potentially becoming uncontrollable. This scenario is explored in science fiction, but how likely it is remains uncertain [2].
  • Unintended Consequences: AI programmed for a specific goal might achieve it in unforeseen ways, causing harm. For instance, an AI designed to optimize traffic flow could shut down the entire transportation network [4].
  • Dependency and Job Displacement: As AI becomes more sophisticated, it could automate many jobs currently done by humans, leading to unemployment and social unrest.
However, there are also optimistic views on AI:
  • Benefits of AI: AI can contribute to solving problems in healthcare, climate change, and other areas.
  • Human Control: AI systems are ultimately designed and controlled by humans. With careful development and safeguards, we can mitigate risks.
Here are some resources to learn more:
  • AI & robotics briefing: There's a 5% risk that AI will wipe out humanity [2]
  • How will AI destroy us? | Questioning Global Capitalism and Technology [4]
Then I asked ChatGPT and it replied,
The idea of AI causing the destruction of humans is a topic of speculation and debate. It's important to distinguish between the potential risks associated with AI and the actual likelihood of such scenarios. Here are a few key points to consider:
  1. Current Capabilities: As of now, AI systems are limited in scope and capability. They excel at specific tasks but lack general intelligence and understanding comparable to humans.
  2. Ethical Concerns: Discussions often revolve around the ethical use of AI, including issues like privacy, bias, and job displacement. These are significant but different from existential threats.
  3. Risk Management: Researchers and policymakers are actively working on frameworks to manage the risks associated with AI development, including safety measures and ethical guidelines.
  4. Science Fiction vs. Reality: Much of the fear of AI stems from science fiction narratives where AI turns against humanity. While these stories raise valid ethical questions, they are not predictive of the future.
  5. Human Control: The design and deployment of AI systems ultimately lie in human hands. How we develop and use AI will largely determine its impact on society.
In conclusion, while caution is warranted in the development and deployment of AI, the widespread belief among experts is that with responsible management, AI can bring substantial benefits without posing an existential threat to humanity.

So, it seems like Copilot knows something and isn't telling...
 
The ultra rich know. They care less about what it does to any country or its inhabitants. They care about adding more zeros to their wealth. Billions aren't enough for them. It's a race to be the first trillionaire, no matter what it does to one's country or its inhabitants. When economies are doing very poorly, they are there sucking every dime they can get out of them.
 
It's already taking many jobs at an alarming rate. What are people gonna do? But the ultra rich will make lots of money. Like they don't have enough already. They are making this world sickening for too many. Hard hearts love money.
The ultra rich get their money from "the people". If no one is making money, they won't either.
 