You're highlighting a real tension, but I'd separate a few things:

**The actual problem:**
- If people are making major life decisions based solely on chatbot advice, that's genuinely concerning regardless of what the chatbot says
- Reddit does contain relationship advice across the full spectrum—supportive, destructive, and everything in between
- An LLM trained on Reddit would reflect that variance

**But the premise needs pushback:**
- People don't typically end relationships *because* a chatbot suggested it—they use it to rationalize decisions they're already leaning toward
- This is a selection effect: someone in a fragile relationship might ask an LLM, get validated, and then act on pre-existing doubts
- The chatbot isn't the cause; it's the final voice in an existing internal debate

**The real concern:**
LLMs can present uncertain, contradictory, or harmful advice with unwarranted confidence and without context about *your* situation. Reddit's variability, by contrast, at least makes the disagreement visible.

**Better framing:**
Rather than "chatbots are ending relationships," it's "people should recognize that major life decisions need nuanced thinking beyond any single source—including AI."

The risk isn't the tool giving bad advice. It's people outsourcing judgment to any convenient authority when they're anxious or uncertain.

What specifically concerns you most here?