JimBowie1958
Old Fogey
- Sep 25, 2011
Lol, seems perfectly logical if you are basing them on normal human thinking patterns, but they are not doing that, as the algorithms are randomly developed.
Study: Robots Capable Of Developing Prejudice On Their Own - Study Finds
Embracing stereotypes or even forming a simple opinion about others may seem like a trait exclusive to humans, but a recent study shows that robots can develop prejudice and even discriminate in similar ways to people, too.
You might think that’s because they’re programmed that way, but the research by computer science and psychology experts at Cardiff University shows that robots and machines using artificial intelligence are capable of generating prejudice on their own.
Joined by researchers from MIT, the Cardiff team explained this discriminatory behavior by suggesting robots could identify, copy, and learn this behavior from one another. Previous research has shown that computer algorithms have exhibited prejudiced behaviors and attitudes, such as racism and sexism, but researchers believe the algorithms learned it from public records and other data created by humans. The Cardiff and MIT researchers wanted to see if AI could evolve prejudicial groups on its own.
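The copy-from-the-successful mechanism the article describes can be sketched as a toy agent-based simulation. This is my own minimal illustration, not the researchers' actual code: agents play a simplified donation game (all names and parameters here are hypothetical), and each generation every agent copies the out-group donation rate of a higher-scoring agent, so whichever strategies earn more, biased or not, spread through the population.

```python
# Toy illustration (NOT the Cardiff/MIT study's code) of agents copying
# strategies from more successful agents in a simplified donation game.
import random

random.seed(1)

N_AGENTS = 40      # two groups of 20 agents
ROUNDS = 30        # donor/recipient pairings per generation
GENERATIONS = 50

# Each agent has a group id and a probability of donating to an
# OUT-group partner; agents always donate in-group in this toy setup.
agents = [{"group": i % 2, "out_donate_p": random.random()}
          for i in range(N_AGENTS)]

def play_generation(agents):
    """Pair agents at random; donating costs the donor and pays the recipient."""
    scores = [0.0] * len(agents)
    for _ in range(ROUNDS):
        donor, recipient = random.sample(range(len(agents)), 2)
        same_group = agents[donor]["group"] == agents[recipient]["group"]
        willing = same_group or random.random() < agents[donor]["out_donate_p"]
        if willing:
            scores[donor] -= 1       # donation costs the donor...
            scores[recipient] += 2   # ...and benefits the recipient more
    return scores

for _ in range(GENERATIONS):
    scores = play_generation(agents)
    # Social learning: each agent copies the out-group donation rate of a
    # randomly chosen agent that outscored it this generation.
    new_rates = []
    for i in range(len(agents)):
        better = [j for j in range(len(agents)) if scores[j] > scores[i]]
        model = random.choice(better) if better else i
        new_rates.append(agents[model]["out_donate_p"])
    for agent, rate in zip(agents, new_rates):
        agent["out_donate_p"] = rate

avg = sum(a["out_donate_p"] for a in agents) / len(agents)
print(f"avg out-group donation rate after {GENERATIONS} generations: {avg:.2f}")
```

The point of the sketch is only the mechanism: no agent is programmed to be prejudiced, yet group-biased strategies can propagate purely by imitation of whoever is doing well, which is the behavior the study attributes to the AI agents.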