Meta says its new Incognito Chat with Meta AI is so private that even Meta can’t see it. That’s a remarkable claim from one of the world’s biggest data companies. Here’s what you need to know.
When was the last time you asked an AI chatbot something you wouldn’t want your boss, your partner, or your doctor to see? If you’re honest, the answer is probably: recently. People are increasingly using AI assistants for the most sensitive questions in their lives — health anxieties, financial worries, relationship problems, career doubts. And until now, every single one of those conversations was being logged on a company’s server somewhere.
Meta wants to change that — at least for WhatsApp users. On 13 May 2026, the company launched Incognito Chat with Meta AI, a new mode that promises conversations so private that not even Meta itself can read them. Messages are processed in a secure environment, disappear when you close the chat, are never saved, and — crucially — are never used to train Meta’s AI models.
It’s a bold move. And coming from Meta, a company whose entire business was built on harvesting user data, it raises some obvious questions.
What Is Incognito Chat, and How Does It Actually Work?
Incognito Chat is built on top of a technology Meta calls Private Processing — a system designed to let AI work on your messages without anyone, including Meta’s own engineers, being able to see the content.
The technical architecture is genuinely sophisticated. When you start an Incognito Chat session, your message travels from your phone through a third-party relay operated by Fastly, which strips your IP address before your request ever reaches Meta's servers. It then passes through an Anonymous Credentials Service that confirms you're a legitimate WhatsApp user without revealing who you are. Your conversation is then processed inside a Trusted Execution Environment (TEE): a hardware-isolated enclave running on AMD processors with hardware memory encryption, where even Meta's own server operators cannot inspect what's being computed.
Before your phone sends a single message, it cryptographically verifies that the server it’s talking to is running the correct, unmodified code — a process logged to a third-party transparency service operated by Cloudflare. If anything has been tampered with, your phone refuses to connect.
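The verification step described above can be sketched in miniature. The snippet below is an illustration only, with invented names and values; the real protocol involves cryptographic attestation evidence from the TEE hardware, not a simple lookup. But the core logic is the same: compare the server's reported code measurement against a published transparency-log entry, and refuse to send anything on a mismatch.

```python
import hashlib

# Hypothetical transparency log mapping a build version to the hash of
# its approved, unmodified server code. (Illustrative values only.)
TRANSPARENCY_LOG = {
    "private-processing-v1": hashlib.sha256(b"approved-enclave-build").hexdigest(),
}

def verify_attestation(version: str, reported_measurement: str) -> bool:
    """Accept only if the enclave's reported code measurement matches
    the entry published to the third-party transparency log."""
    expected = TRANSPARENCY_LOG.get(version)
    return expected is not None and expected == reported_measurement

def send_message(version: str, measurement: str, message: str) -> str:
    if not verify_attestation(version, measurement):
        # Tampered or unrecognised server build: the phone refuses to connect.
        raise ConnectionRefusedError("attestation failed; message not sent")
    # In the real system the message would now travel through the
    # third-party relay (which strips the sender's IP) into the TEE.
    return "sent via relay to TEE"
```

The point of the design is that the refusal happens on the client, before any content leaves the phone, so a modified server never sees the message at all.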
In plain terms: this is not just a “delete after reading” gimmick. The underlying privacy architecture is meaningfully more robust than what rivals currently offer. Google keeps conversation data for up to three days even in its “private” mode. OpenAI holds logs for 30 days. Meta’s system, by design, stores nothing at all.
Why Now? The Timing Is Not Accidental
Meta’s launch of Incognito Chat lands at a particularly charged moment for the AI industry’s relationship with privacy and accountability.
OpenAI is currently facing multiple lawsuits from families whose relatives died after receiving harmful advice from ChatGPT. In several cases, the evidence was the chat logs themselves — retrieved from OpenAI’s servers. A teenager died after asking ChatGPT whether it was safe to combine two drugs and receiving an incorrect answer. Families of people who died by suicide have cited chatbot conversations in their legal complaints. The existence of stored, retrievable logs has made AI companies legally vulnerable in ways they are only beginning to grapple with.
Reuters recently reported that lawyers are advising clients that conversations with AI chatbots could be subpoenaed and used as evidence in litigation. The message to the industry was clear: stored chats are a liability.
Against that backdrop, a system where conversations genuinely cannot be retrieved — not by Meta, not by lawyers, not by courts — solves a very specific problem. Will Cathcart, WhatsApp's head, framed it in user-centric terms: "We're starting to ask a lot of meaningful questions about our lives with AI systems, and it doesn't always feel like you should have to share the information behind those questions with the companies that run those AI systems." That's true. But given the wave of AI litigation, the timing is hard to ignore.
What You Can Actually Use It For
Meta’s launch materials are explicit about the kinds of questions Incognito Chat is designed for: sensitive health questions, financial details, career concerns, or asking for advice on a difficult personal situation. These are the queries where people have historically hesitated to use AI — or where they’ve used it despite their discomfort, hoping for the best.
Arriving in the next few months alongside Incognito Chat is a related feature called Side Chat. This lets you summon Meta AI privately while you're in the middle of a conversation with someone else on WhatsApp. You could ask the AI to help you draft a diplomatic reply to a tricky message, explain a term someone used, or fact-check a claim in real time — all without the other person knowing, and without it appearing in the main chat. Side Chat is also protected by Private Processing, so it remains invisible to Meta as well as to your contact.
In both cases, the session ends when you close the chat or lock your phone. There’s no history, no context carried over, no way to pick up where you left off. Each Incognito Chat starts completely fresh.
The Caveats You Should Know Before You Trust It Completely
The technical privacy protections are real and meaningful — but they’re not absolute, and a few important caveats are worth understanding before you treat this as a confessional booth.
Web searches leave the privacy bubble. When Meta AI needs to look something up online, that search query exits the Trusted Execution Environment and passes through Meta’s infrastructure before reaching external search providers. Queries are capped at 100 characters and limited to five per prompt, and Meta says they’re not linked to your identity — but the data does leave the secure enclave. You can disable web search, but it’s turned on by default.
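The stated limits are concrete enough to express directly. The sketch below is purely illustrative (the function name is invented, and in the real system enforcement happens inside Meta's infrastructure, not on your device); it simply encodes the constraints the article describes: no queries when web search is disabled, at most five queries per prompt, and each query truncated to 100 characters.

```python
# Limits as stated in Meta's launch materials.
MAX_QUERY_CHARS = 100
MAX_QUERIES_PER_PROMPT = 5

def filter_web_queries(queries: list[str], web_search_enabled: bool = True) -> list[str]:
    """Hypothetical sketch: drop everything if search is off,
    otherwise cap the count and truncate each query."""
    if not web_search_enabled:
        return []
    return [q[:MAX_QUERY_CHARS] for q in queries[:MAX_QUERIES_PER_PROMPT]]
```

Note the default mirrors the product: web search is on unless you turn it off, so queries leave the enclave unless you act.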
Multi-GPU inference has a technical gap. Meta’s own technical white paper acknowledges that when the AI model requires multiple graphics processors to handle a large query — which large language models routinely do — the connection between those processors is not fully encrypted. Meta argues this is mitigated by the high bandwidth involved, which makes interception impractical with current hardware. That’s a practical reassurance, not a theoretical guarantee.
No chat history means no accountability. Professor Alan Woodward, a cybersecurity expert at the University of Surrey, raised a pointed concern: if an AI’s response in an Incognito Chat leads to harm — bad medical advice, dangerous financial guidance, or worse — there is no log to investigate. The same privacy that protects the user also protects the system from scrutiny. That is a real trade-off, not just a theoretical one.
Images aren’t supported. Incognito Chat is text-only for now. You can’t upload photos, documents, or screenshots — which limits its usefulness for certain types of queries.
Age verification is required. Meta requires users to confirm they are over 13 to use the feature. In practice, self-reported age confirmation is a thin layer of protection.
Is Meta the Right Company to Trust With This?
This is the question that hangs over the entire announcement, and it’s impossible to sidestep. Meta built its empire by collecting, analysing, and monetising user data at a scale that was genuinely unprecedented. The company paid billions in fines for privacy violations. It was at the centre of the Cambridge Analytica scandal. Mark Zuckerberg has testified before Congress multiple times about data practices that the public found alarming.
Incognito Chat asks users to trust that Meta has genuinely built a system it cannot access — and that it will maintain that design even when it is commercially inconvenient to do so. The technical architecture provides real and independently verifiable protections that go beyond marketing language. The third-party relay infrastructure, hardware-level encryption, and Cloudflare's transparency log are meaningful constraints, not cosmetic ones.
But technology can be changed with an update. Terms of service can be modified. What is technically impossible today can become technically possible tomorrow if the underlying system is redesigned. The guarantee of privacy ultimately rests on ongoing technical commitment and regulatory oversight — both of which require sustained vigilance from users, regulators, and independent researchers, not just trust in a press release.
How to Use It When It Reaches You
Incognito Chat is rolling out gradually over the coming months on both WhatsApp and the standalone Meta AI app. When it arrives, you’ll find a dedicated Incognito toggle or icon within the Meta AI chat interface. Tapping it switches the chat to a visually distinct private mode. When you close the session — or lock your phone — the conversation is gone entirely.
For genuinely sensitive questions where you want AI assistance but don’t want a permanent record, this is the most technically robust option currently available from a major platform. Use it with clear eyes: understand the limitations around web search, understand that no log also means no recourse, and remember that “private from Meta” does not mean private from everything — your device, your network, and your own habits all remain factors.
The private AI conversation is an idea whose time has clearly come. Whether Meta is the right messenger for it is a question only you can answer.
Based on BBC News reporting. Original article: WhatsApp launches totally private ‘incognito’ conversations with its AI chatbot.
