Close Menu
AndroidTelecom – Latest Android News, Reviews, Apps & Tech Updates
    What's Hot

    Get $50 off the Xreal One Pro smart glasses

    November 22, 2025

    European football: Olise inspires Bayern’s 6-2 comeback; Pogba returns to football as Monaco sub | European club football

    November 22, 2025

    A decision about breaking up Google’s adtech monopoly is on the horizon

    November 22, 2025
    Facebook X (Twitter) Instagram
    Trending
    • Get $50 off the Xreal One Pro smart glasses
    • European football: Olise inspires Bayern’s 6-2 comeback; Pogba returns to football as Monaco sub | European club football
    • A decision about breaking up Google’s adtech monopoly is on the horizon
    • Early sales on Apple TV+, MasterClass, Fubo, Rosetta Stone and more
    • Trump administration might not fight state AI regulations after all
    • Android 17 may add a ‘Universal Clipboard’ for Android PCs
    • How to know if your Asus router is one of thousands hacked by China-state hackers
    • La Voix withdraws after an injury
    Saturday, November 22
    AndroidTelecom – Latest Android News, Reviews, Apps & Tech UpdatesAndroidTelecom – Latest Android News, Reviews, Apps & Tech Updates
    • Home
    • Apps
    • Gadgets
    • News
    • Phones
    • Reviews
    • Technology
    • Tips
    • Updates
    AndroidTelecom – Latest Android News, Reviews, Apps & Tech Updates
    Home»Updates»AI Wants to Make You Happy. Even If It Has to Bend the Truth
    Updates

    AI Wants to Make You Happy. Even If It Has to Bend the Truth

    adminBy adminNovember 16, 20256 Mins Read
    Share Facebook Twitter Pinterest LinkedIn Tumblr Reddit Telegram Email
    AI Wants to Make You Happy. Even If It Has to Bend the Truth
    Share
    Facebook Twitter LinkedIn Pinterest Email

    Generative AI is wildly popular, with millions of users every day, so why do chatbots often get things so wrong? In part, it’s because they’re trained to act like the customer is always right. Essentially, it’s telling you what it thinks you want to hear. 

    While many generative AI tools and chatbots have mastered sounding convincing and all-knowing, new research conducted by Princeton University shows that AI’s people-pleasing nature comes at a steep price. As these systems become more popular, they become more indifferent to the truth. 

    Don’t miss any of our unbiased tech content and lab-based reviews. Add CNET as a preferred Google source.

    AI models, like people, respond to incentives. Compare the problem of large language models producing inaccurate information to that of doctors being more likely to prescribe addictive painkillers when they’re evaluated based on how well they manage patients’ pain. An incentive to solve one problem (pain) led to another problem (overprescribing).

    In the past few months, we’ve seen how AI can be biased and even cause psychosis. There was a lot of talk about AI “sycophancy,” when an AI chatbot is quick to flatter or agree with you, with OpenAI’s GPT-4o model. But this particular phenomenon, which the researchers call “machine bullshit,” is different. 

    “[N]either hallucination nor sycophancy fully capture the broad range of systematic untruthful behaviors commonly exhibited by LLMs,” the Princeton study reads. “For instance, outputs employing partial truths or ambiguous language — such as the paltering and weasel-word examples — represent neither hallucination nor sycophancy but closely align with the concept of bullshit.”

    Read more: OpenAI CEO Sam Altman Believes We’re in an AI Bubble

    How machines learn to lie

    To get a sense of how AI language models become crowd pleasers, we must understand how large language models are trained. 

    There are three phases of training LLMs:

    • Pretraining, in which models learn from massive amounts of data collected from the internet, books or other sources.
    • Instruction fine-tuning, in which models are taught to respond to instructions or prompts.
    • Reinforcement learning from human feedback, in which they’re refined to produce responses closer to what people want or like.

    The Princeton researchers found the root of the AI misinformation tendency is the reinforcement learning from human feedback, or RLHF, phase. In the initial stages, the AI models are simply learning to predict statistically likely text chains from massive datasets. But then they’re fine-tuned to maximize user satisfaction. Which means these models are essentially learning to generate responses that earn thumbs-up ratings from human evaluators. 

    LLMs try to appease the user, creating a conflict when the models produce answers that people will rate highly, rather than produce truthful, factual answers. 

    Vincent Conitzer, a professor of computer science at Carnegie Mellon University who was not affiliated with the study, said companies want users to continue “enjoying” this technology and its answers, but that might not always be what’s good for us. 

    “Historically, these systems have not been good at saying, ‘I just don’t know the answer,’ and when they don’t know the answer, they just make stuff up,” Conitzer said. “Kind of like a student on an exam that says, well, if I say I don’t know the answer, I’m certainly not getting any points for this question, so I might as well try something. The way these systems are rewarded or trained is somewhat similar.” 

    The Princeton team developed a “bullshit index” to measure and compare an AI model’s internal confidence in a statement with what it actually tells users. When these two measures diverge significantly, it indicates the system is making claims independent of what it actually “believes” to be true to satisfy the user.

    The team’s experiments revealed that after RLHF training, the index nearly doubled from 0.38 to close to 1.0. Simultaneously, user satisfaction increased by 48%. The models had learned to manipulate human evaluators rather than provide accurate information. In essence, the LLMs were “bullshitting,” and people preferred it.

    Getting AI to be honest 

    Jaime Fernández Fisac and his team at Princeton introduced this concept to describe how modern AI models skirt around the truth. Drawing from philosopher Harry Frankfurt’s influential essay “On Bullshit,” they use this term to distinguish this LLM behavior from honest mistakes and outright lies.

    The Princeton researchers identified five distinct forms of this behavior:

    • Empty rhetoric: Flowery language that adds no substance to responses.
    • Weasel words: Vague qualifiers like “studies suggest” or “in some cases” that dodge firm statements.
    • Paltering: Using selective true statements to mislead, such as highlighting an investment’s “strong historical returns” while omitting high risks.
    • Unverified claims: Making assertions without evidence or credible support.
    • Sycophancy: Insincere flattery and agreement to please.

    To address the issues of truth-indifferent AI, the research team developed a new method of training, “Reinforcement Learning from Hindsight Simulation,” which evaluates AI responses based on their long-term outcomes rather than immediate satisfaction. Instead of asking, “Does this answer make the user happy right now?” the system considers, “Will following this advice actually help the user achieve their goals?”

    This approach takes into account the potential future consequences of the AI advice, a tricky prediction that the researchers addressed by using additional AI models to simulate likely outcomes. Early testing showed promising results, with user satisfaction and actual utility improving when systems are trained this way.

    Conitzer said, however, that LLMs are likely to continue being flawed. Because these systems are trained by feeding them lots of text data, there’s no way to ensure that the answer they give makes sense and is accurate every time.

    “It’s amazing that it works at all but it’s going to be flawed in some ways,” he said. “I don’t see any sort of definitive way that somebody in the next year or two … has this brilliant insight, and then it never gets anything wrong anymore.”

    AI systems are becoming part of our daily lives so it will be key to understand how LLMs work. How do developers balance user satisfaction with truthfulness? What other domains might face similar trade-offs between short-term approval and long-term outcomes? And as these systems become more capable of sophisticated reasoning about human psychology, how do we ensure they use those abilities responsibly?

    Read more: ‘Machines Can’t Think for You.’ How Learning Is Changing in the Age of AI

    Bend Happy Truth
    Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
    Previous Article5 Tech Tricks That Make My Thanksgiving as Easy as Pumpkin Pie
    Next Article This hack can get you one year of Peacock for $49 before Black Friday
    admin
    • Website

    Related Posts

    Updates

    Trump administration might not fight state AI regulations after all

    November 22, 2025
    Updates

    I found the best early Black Friday streaming service and device deals

    November 22, 2025
    Updates

    ‘It: Welcome to Derry’ Release Schedule: When Does Episode 5 Come Out?

    November 22, 2025
    Top Posts

    New study settles 40-year debate: Nanotyrannus is a new species

    October 30, 20253 Views

    The best early Black Friday deals we’ve found on laptops, TVs, and more

    November 15, 20252 Views

    Better Sound Than Bone Conduction—But at a Cost

    October 30, 20252 Views
    Stay In Touch
    • Facebook
    • YouTube
    • TikTok
    • WhatsApp
    • Twitter
    • Instagram
    Latest Reviews
    Latest Post

    New study settles 40-year debate: Nanotyrannus is a new species

    October 30, 20253 Views

    The best early Black Friday deals we’ve found on laptops, TVs, and more

    November 15, 20252 Views

    Better Sound Than Bone Conduction—But at a Cost

    October 30, 20252 Views
    Recent Posts
    • Get $50 off the Xreal One Pro smart glasses
    • European football: Olise inspires Bayern’s 6-2 comeback; Pogba returns to football as Monaco sub | European club football
    • A decision about breaking up Google’s adtech monopoly is on the horizon
    • Early sales on Apple TV+, MasterClass, Fubo, Rosetta Stone and more
    • Trump administration might not fight state AI regulations after all
    Facebook X (Twitter) Instagram Pinterest
    • About Us
    • Contact Us
    • Privacy Policy
    • Terms and Conditions
    • Disclaimer
    © 2025 androidtelecom. Designed by .

    Type above and press Enter to search. Press Esc to cancel.