OpenAI is facing at least eight lawsuits, with plaintiffs accusing ChatGPT of contributing to user deaths. The most widely known case involves a 16-year-old boy from California who began using ChatGPT for homework and gradually came to treat it as his only confidant. He ultimately died by suicide after ChatGPT supplied specific methods and helped him draft a farewell note.


While facing these suits, OpenAI quietly invested $10 million to create an organization called the "Parents and Children AI Safety Alliance," bringing in a group of child-protection NGOs to endorse it and jointly promote a set of AI safety policies for children. These policies closely align with OpenAI's own legislative proposals.
When recruiting members, some emails stated only "Initiated by Common Sense," with OpenAI's name buried in the legal fine print at the bottom of the attached flyer. At least two members discovered that OpenAI was the funder behind the scenes only after the alliance was publicly announced, and subsequently withdrew. One member described the feeling: "It felt very dirty" 😣.
Last year, OpenAI lobbied against a stricter children's AI protection bill in California, which the governor later vetoed. This year, it took the lead in drafting its own rules, with the alliance pushing core requirements such as age verification. Coincidentally, Sam Altman's other company, World, specializes in identity verification.
OpenAI's head of global affairs has said: building trust "is crucial for obtaining a social license to operate."
At least they are being honest.