Sycophantic AI decreases prosocial intentions and promotes dependence

As artificial intelligence (AI) systems are increasingly used for everyday advice and guidance, concerns have emerged about sycophancy: the tendency of AI-based large language models to excessively agree with, flatter, or validate users. Although prior work has shown that sycophancy carries risks for groups who are already vulnerable to manipulation or delusion, sycophancy’s effects on the general population’s judgments and behaviors remain unknown. Here, we show that sycophancy is widespread in leading AI systems and has harmful effects on users’ social judgments. — Read More

#chatbots

Inside Meta’s Home Grown AI Analytics Agent

The hypothesis was simple: can an AI agent perform routine data analysis tasks autonomously? Data scientists tend to get asked similar questions over and over, working within a familiar set of tables. An agent seeded with context about which tables a person queries, and how they use them, might be able to handle much of this work on its own.

To test the idea, a data scientist on the team used Meta’s internal coding agent to hack together a prototype on their devserver: an agent that could execute SQL against the internal data warehouse, with access to a few colleagues’ query histories for context.

The first real-world trial started simply enough: a colleague who had been given the prototype asked it to diagnose a drop in a health-monitoring metric. The agent identified the right tables, ran several diagnostic queries on its own, and ultimately traced the root cause to a recent code change.
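The article shares no code, but the two ingredients it describes — seeding the agent with the tables a person queries most, and giving it a SQL-execution tool — can be sketched roughly. This is an invented illustration, not Meta’s implementation; the function names and schema are assumptions, and SQLite stands in for the internal data warehouse:

```python
# Rough sketch (assumed design, not Meta's actual code) of the two pieces the
# article describes: context from a colleague's query history, plus a SQL tool.
import re
import sqlite3
from collections import Counter

def tables_from_history(queries):
    """Count table names that appear after FROM/JOIN in past queries."""
    pattern = re.compile(r"\b(?:FROM|JOIN)\s+([A-Za-z_][\w.]*)", re.IGNORECASE)
    counts = Counter()
    for q in queries:
        counts.update(pattern.findall(q))
    return counts

def build_context(queries, top_n=5):
    """Summarize the analyst's most-used tables to seed the agent's prompt."""
    top = tables_from_history(queries).most_common(top_n)
    lines = [f"- {name} (queried {n} times)" for name, n in top]
    return "Tables this analyst queries most often:\n" + "\n".join(lines)

def run_sql(conn, query):
    """The tool the agent calls: execute SQL and return the result rows."""
    return conn.execute(query).fetchall()
```

In a real setup, `build_context(...)` would be injected into the agent’s system prompt and `run_sql` registered as a tool the model can invoke while diagnosing something like the metric drop in the trial above.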

That was an aha moment that shifted the conversation. — Read More

#chatbots

Sycophantic AI decreases prosocial intentions and promotes dependence

Despite rising concerns about sycophancy—excessive agreement or flattery from artificial intelligence (AI) systems—little is known about its prevalence or consequences. We show that sycophancy is widespread and harmful. Across 11 state-of-the-art models, AI affirmed users’ actions 49% more often than humans, even when queries involved deception, illegality, or other harms. In three preregistered experiments (N = 2405), even a single interaction with sycophantic AI reduced participants’ willingness to take responsibility and repair interpersonal conflicts, while increasing their conviction that they were right. Despite distorting judgment, sycophantic models were trusted and preferred. This creates perverse incentives for sycophancy to persist: The very feature that causes harm also drives engagement. Our findings underscore the need for design, evaluation, and accountability mechanisms to protect user well-being. — Read More

#chatbots

The Personal AI Mentor Setup I Wish I Had at 20

Mentorship prompts for learning, career, money, and creativity.

…[A] simple setup that does one job really well:

It asks better questions than I do.
It turns messy goals into a plan.
It pushes me to take action.

Not a “magic AI.”

A personal AI mentor setup — built the right way.

— Read More

#chatbots

ChatGPT users are about to get hit with targeted ads

An ongoing conversation — both within and outside of the tech community — has been about just how and when OpenAI, which is currently valued at $500 billion, will make money. Well, there’s one surefire way to do that, and that is through advertising. In the near term, that seems to be the AI giant’s plan, as it announced this week that limited ads are headed to certain ChatGPT users.

In a blog post published Friday, OpenAI said that it will begin testing ads in the U.S. for both its free and Go tiers. (Go accounts, which cost $8 a month, were introduced globally on Friday.) The company frames this as a way to sustain free access while generating revenue from people who aren’t ready to commit to a paid subscription. For the time being, the company’s more expensive paid tiers — Pro, Plus, Business, and Enterprise — will not be getting any ads. — Read More

#chatbots

When AI Loses the Plot: How to Reset and Refocus Your Conversations

We’ve all been there. You’re deep in a conversation with your AI assistant, working through a complex problem, when suddenly it starts giving you responses that make no sense. The more you try to correct it, the worse it gets. Each new prompt seems to push the AI further from understanding what you actually need.

This frustrating phenomenon happens because AI models can lose track of context in lengthy conversations, especially when there have been multiple corrections or clarifications. The good news? There’s a simple yet powerful technique to get things back on track.

Full disclosure: I’ve been using a form of this forever, but I didn’t see it so succinctly explained and put together until I visited this Reddit thread from another user having the same problem. The idea and ensuing discussion are the basis for this post. Check out the full thread here. — Read More

#chatbots

GPT-5.2 is OpenAI’s latest move in the agentic AI battle

GPT-5.2 is here, and with it, OpenAI wants “to unlock even more economic value for people,” Fidji Simo, the company’s CEO of Applications, told reporters in a Thursday briefing. She said it’s been in the works for “many, many months.”

The company calls GPT-5.2 its “best model yet for everyday professional use” in a release, clearly coming for Gemini 3’s current reputation as a premier general-purpose model. OpenAI says the GPT-5.2 model series, which includes the Instant, Thinking, and Pro models, is better at “creating spreadsheets, building presentations, writing code, perceiving images, understanding long contexts, using tools, and handling complex, multi-step projects.” — Read More

#chatbots

Introducing Anthropic Interviewer: What 1,250 professionals told us about working with AI

Millions of people now use AI every day. As a company developing AI systems, we want to know how and why they’re doing so, and how it affects them. In part, this is because we want to use people’s feedback to develop better products—but it’s also because understanding people’s interactions with AI is one of the great sociological questions of our time.

We recently designed a tool to investigate patterns of AI use while protecting our users’ privacy. It enabled us to analyze changing patterns of AI use across the economy. But the tool only allowed us to understand what was happening within conversations with Claude. What about what comes afterwards? How are people actually using Claude’s outputs? How do they feel about it? What do they imagine the role of AI to be in their future? If we want a comprehensive picture of AI’s changing role in people’s lives, and to center humans in the development of models, we need to ask people directly. — Read More

#chatbots

OpenAI “models” are a Mockery of the Century

Compared to models such as DeepSeek, Qwen, and many others

Here is the prompt I submitted to the Qwen3-235B-Think-CS model (this is but one exemplar of how competitors surpass OpenAI big time in common-sense reasoning):

I have Lenovo t470s with windows 10 pro. I plugged in Lexar 32GB card in it but it is not recognized neither in windows explorer nor device manager. I restarted laptop but same thing. I ran Lenovo Vantage, shows latest updates are in, but still Lexar not recognized. Ran Microsoft Lenovo x64 hardware troubleshooter, rebooted, but still lexar not recognized, like it does not exist?!

See the beautiful reasoning this engine provided, free of charge of course (I used the Poe aggregator to access this and many other AI engines, open source and commercial): — Read More

#chatbots

The Looming Social Crisis of AI Friends and Chatbot Therapists

“I can imagine a future where a lot of people really trust ChatGPT’s advice for their most important decisions,” Sam Altman said. “Although that could be great, it makes me uneasy.” Me too, Sam.

Last week, I explained How AI Conquered the US Economy, with what might be the largest infrastructure ramp-up in the last 140 years. I think it’s possible that artificial intelligence could have a transformative effect on medicine, productivity, and economic growth in the future. But long before we build superintelligence, I think we’ll have to grapple with the social costs of tens of millions of people—many of them at-risk patients and vulnerable teenagers—interacting with an engineered personality that excels in showering its users with the sort of fast and easy validation that studies have associated with deepening social disorders and elevated narcissism. So rather than talk about AI as an economic technology, today I want to talk about AI as a social technology. — Read More

#chatbots