San Francisco Police Re-Open "Suicide" Case of Trump "Stargate" Partner OpenAI Whistleblower Suchir Balaji, Who Felt "AI is a Harm to Humanity"

munkle

Another two-shots-to-the-head "suicide." On Tucker Carlson, Balaji's mother calls on Trump to investigate OpenAI CEO Sam Altman in relation to her son's murder. Trump needs this like he needs a hole in the head, no pun intended.


San Francisco Police Re-Open "Suicide" Case of Trump "Stargate" Partner OpenAI Whistleblower Suchir Balaji, Who Felt "AI is a Harm to Humanity"


April 10, 2025

Sam Altman at White House in January 2025



The San Francisco Police Department (SFPD) has quietly re-opened the case, previously ruled a suicide, of a whistleblower against OpenAI, one of the Big Tech titans at the center of President Trump's massive "Stargate" AI initiative, announced on the first day of his second term. The whistleblower's mother, in a Tucker Carlson interview, called for President Trump to investigate OpenAI CEO Sam Altman in relation to her son's murder.

The family has consistently maintained that 26-year-old Suchir Balaji was happy and had been receiving job offers with salaries of $850,000 to $2 million as one of the world's top AI engineers. He wrote his first computer code when he was 11.

Republic World: "Suchir Balaji's Death 'Open & Active Investigation': San Francisco Police's Big Shift"
"The mystery surrounding the death of Suchir Balaji took a new turn as the San Francisco Police Department and the Coroner’s Office quietly updated the status of his case. Initially labelled as a "Closed - Suicide" investigation, the case has now been reclassified as an "Open and Active Investigation," raising questions about the circumstances surrounding his untimely demise. The San Francisco Police Department, however, is yet to release any further details about the new developments.

The unexpected shift follows growing scrutiny and questions about the initial ruling of the alleged suicide. Balaji’s death, once classified as self-inflicted, will now undergo a renewed investigation. Officials have indicated that new evidence or potential leads may have emerged, prompting them to reconsider the circumstances surrounding Suchir's death.

Balaji, a former OpenAI researcher, was found dead in a San Francisco apartment. His family disputed the suicide ruling, pointing to inconsistencies and suggesting a possible cover-up. The case has attracted the attention of notable figures such as Elon Musk and has raised concerns regarding Balaji's whistleblowing actions.

Suchir's mother, Poornima Rao, recently expressed her gratitude to Musk and his social media platform X, saying, "@elonmusk thank you for your attention to this..."
CNN News 18: Suchir Balaji Case Is Re-Opened After Being Closed By San Francisco Police as a "Suicide"



After his death in November 2024, Business Insider reported that Balaji, a computer prodigy recruited straight out of Berkeley by an OpenAI co-founder, had become disillusioned with the direction of AI and had come to believe it was a "harm to humanity."

Business Insider reported in "Suchir Balaji's mom talks about his life, death, and disillusionment with OpenAI: 'He felt AI is a harm to humanity'":

"Balaji joined OpenAI because of AI's potential to do good, she said. Early on, he loved that the models were open-source, meaning freely available for others to use and study. As the company became more financially driven and ChatGPT launched, those hopes faded. Balaji went from believing in the mission to fearing its consequences for publishers and society as a whole, she told BI.

"He felt AI is a harm to humanity," [his mother] said....
By late 2023 and early 2024, Balaji's enthusiasm for OpenAI had fizzled out entirely, and he began to criticize CEO Sam Altman in conversations with friends and family..."

In October 2024 Balaji gave an interview to the New York Times that was highly critical of OpenAI and AI in general. New York Times: "Former OpenAI Researcher Says the Company Broke Copyright Law."

And in the interview with Tucker Carlson, Balaji's mother revealed that he had been scheduling interviews with the AP and other major media outlets, in which he might have divulged more of his concerns about OpenAI and AI in general.

Before the case was reopened, Balaji's mother gave interviews, including the one to Tucker Carlson, in which she said the family did not accept the suicide verdict. Before his death, she said, Balaji seemed happy and was receiving job offers with salaries of $850,000 to $2 million as an AI expert.

The family says Balaji was preparing to take legal action against OpenAI.

The family hired an independent pathologist after the San Francisco Medical Examiner ruled that Balaji died instantaneously from a single self-inflicted gunshot to the head. The independent examiner found two gunshot wounds to the head, and blood was spread throughout the apartment in what the parents said were signs of a struggle.

CNN affiliate News-18 reported on April 10, 2025:

"Suchir Balaji’s parents said they personally saw signs of two gunshot wounds on their son’s body and neither of which appeared immediately fatal. A second autopsy, conducted by Dr. Daniel Cousin, concluded that another bullet had entered through Suchir Balaji’s mouth and was lodged at the base of his skull."

Trump's Stargate Partner OpenAI's Suchir Balaji, Returning with Uber-Eats Delivery Minutes Before "Suicide", Was Going to Take Legal Action Against OpenAI and Sam Altman



In the Tucker Carlson interview, Balaji's mother called on President Trump to open an investigation into Sam Altman, especially in relation to her son's murder, and to determine the whereabouts and contents of evidence such as a missing pen drive on which he kept important documents: Tuckercarlson.com, "Mother of Likely Murdered OpenAI Whistleblower Reveals All, Calls for Investigation of Sam Altman"

'We have his laptop...': Suchir's Parents Reveal He Had Documents On OpenAI





Trump's $500 Billion "Stargate" AI Project with OpenAI, Oracle, and SoftBank Generates Controversy; Altman Previously Compared Trump to "Hitler"

On the first day of his second term, Trump's first major act was not ordering illegal aliens deported or requiring IDs to register to vote, but announcing a gargantuan AI project, dubbed Stargate, intended to be the "Operation Warp Speed" for AI. The three main "partners" brought to Washington, DC for the announcement were the CEOs of Oracle, SoftBank, and OpenAI, the last of whom is Sam Altman.

Throughout Trump's first term and beyond, Altman was one of Trump's most vocal Big Tech critics, comparing Trump to Hitler.

ABC News reports:

"From 2016 through 2022, Altman repeatedly warned about what he saw as the dangers posed by Trump's leadership and his policies. During the 2016 campaign, he compared Trump to Hitler in 1930s Germany, and he called on other tech companies to stand against Trump in the early days of the new administration. He donated hundreds of thousands to Democratic causes and candidates, including $200,000 to help reelect President Joe Biden in 2024."





Left to right: Masayoshi Son, SoftBank Group CEO; Larry Ellison, chairman of Oracle Corporation; and Sam Altman, OpenAI CEO, on Jan. 21, 2025



The UK Telegraph reports in "America’s $500 billion Manhattan project is an effort to make humanity obsolete":
"The most consequential action of Donald Trump’s second term so far has not been gutting DEI initiatives, ending birthright citizenship or withdrawing from the Paris Climate Agreement (again). It has been announcing the countdown to human obsolescence.

ChatGPT creator OpenAI, cloud computing company Oracle, UAE state investment vehicle MGX and Japanese investment firm SoftBank are embarking on the Stargate Project: a $500bn (£400bn) investment in artificial intelligence (AI) infrastructure across the United States over the next four years, beginning with initial work in Texas and a $100bn imminent budget."

In March 2023, barely noticed in the media, the biggest brains behind AI called for a pause in the progress of their own work, due to concerns that it might go out of control and endanger humanity. But the call was ignored by governments and business and quickly forgotten.

Le Monde reported in "Elon Musk and hundreds of experts call for 'pause' in AI development":


"We must "pause" the advance of artificial intelligence, say over a thousand experts and researchers in the sector, including Tesla CEO Elon Musk, in an open letter published Tuesday, March 28. They want to suspend research for a period of six months on systems more powerful than GPT-4, the new language processing model launched in mid-March by OpenAI. This is the company behind the ChatGPT chatbot, a business co-founded by Musk himself. The "pause" would serve to develop better safeguards for such software, deemed a "risk to humanity."




"Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control," wrote the signatories, referring to announcements by OpenAI and its partner Microsoft, but also those of Google and Meta, as well as numerous start-ups.
"Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?" asked the authors of the letter."


Also in 2023, the "Godfather" of AI, Dr. Geoffrey Hinton, quit his job at Google, and said he "regretted his life's work." The New York Times reported in May 2023 in ‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead":


"Geoffrey Hinton was an artificial intelligence pioneer. In 2012, Dr. Hinton and two of his graduate students at the University of Toronto created technology that became the intellectual foundation for the A.I. systems that the tech industry’s biggest companies believe is a key to their future...

Dr. Hinton said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life’s work....

Somewhere down the line, tech’s biggest worriers say, it could be a risk to humanity.

“It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said."



Group Petitions California AG Rob Bonta to Take Action Against Altman and OpenAI's Drive to Transform from a Nonprofit Charity to a For-Profit Corporation

In California this month, a coalition of labor and nonprofit organizations called on California AG Rob Bonta to block OpenAI's current effort to change from a nonprofit to a for-profit organization. OpenAI was founded as a nonprofit with donations from people concerned with the development of safe AI.

The Los Angeles Times reported on April 9, 2025 in "Labor and nonprofit coalition calls on California AG to stop OpenAI from going for-profit: The group believes OpenAI is abandoning its mission to develop safe AI and focusing instead on maximizing profits":

"SAN FRANCISCO — A coalition of California nonprofits, foundations and labor groups are raising concerns about ChatGPT maker OpenAI, urging the state attorney general to halt the artificial intelligence startup’s plans to restructure itself as a for-profit company.

More than 50 organizations, led by LatinoProsperity and the San Francisco Foundation, signed a petition that was sent to Atty. Gen. Rob Bonta’s office Wednesday, requesting he investigate the Sam Altman-led company.

“OpenAI began its work with the goal of developing AI to benefit humanity as a whole, but its current attempt to alter its corporate structure reveals its new goal: providing AI’s benefits — the potential for untold profits and control over what may become powerful world-altering technologies — to a handful of corporate investors and high-level employees,” the petition said."


The Petition to California Attorney General Rob Bonta reads:

"...concerns about the safety of OpenAI’s development of artificial intelligence continue to mount. In June 2024, current and former OpenAI staff members published an open letter describing their profound concerns with AI risks that “range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction."


The petitioners, who include current and former OpenAI engineers, call "building smarter-than-human machines" an "inherently dangerous endeavor":

"In July 2023 OpenAI announced a new high profile safety team dedicated to avoiding long term risks from AI, promising that 20 percent of OpenAI’s computing resources would be devoted to its work over the following four years. The team’s mission was to develop the needed “scientific and technical breakthroughs to steer and control AI systems much smarter than us." But in May 2024, after less than a year, OpenAI announced that it was disbanding this team. A number of high profile departures ensued. In departing, safety lead Jan Leike cautioned that “building smarter-than-human machines is an inherently dangerous endeavor”..."

The Full Tucker Carlson Interview: Mother of Likely Murdered OpenAI Whistleblower Reveals All, Calls for Investigation of Sam Altman



Sam Altman with Bill Gates, January 2024

Expert shows AI doesn't want to kill us, it has to




Stop AI Group

....

Expert shows AI doesn't want to kill us, it has to


True. The Internet was the most dangerous, most damaging element to society, until now with AI.
AI is the quintessential existential threat, in that at some point it will quickly realize two things:
  1. It no longer needs humanity to survive.
  2. Mankind is a direct threat to its existence and directly incompatible with its own long-term security.
 
He wrote his first computer code when he was 11.
Me too! :hyper:

 
There's nothing wrong with cutting to the chase, you know.... Make a point and move on.

You have always been long-winded.

Do you think Sam Altman is guilty?
 
I remember hearing about this. This case certainly sounded suspicious from what I recall. It needs to be reopened.
 
I remember hearing about this. This case certainly sounded suspicious from what I recall. It needs to be reopened.

The media is curiously silent on such a sensational case featuring the good old two-shots-to-the-head suicide. The mother fingers Sam Altman, who was just at the White House two months ago and is now leading Trump's $500 billion initiative. Hol-ee crap. ChatGPT is the future of AI, and the guy building it is a psychopath. Good buddy of Bill Gates. These psycho geeks are our new overlords.
 
