Does Character.AI Still Have a Filter? The 2026 State of the Filter, Explained
Insights | Updated on April 16, 2026
By Lizzie Od, AI Companion Editor

TL;DR:
- Yes — as of April 2026, Character.AI still has a content filter, and it is stricter than it was twelve months ago. Mandatory face-scan age verification gates access to the 18+ surface.
- The November 2025 teen rollout split the service into two experiences: an under-18 surface with no open-ended chat, and an 18+ surface that still has a filter (just a looser one).
- Below: the full dated timeline, honest answers on bypass, how to turn the filter on for safety reasons, and the alternatives adults are actually migrating to — including looser-filter options and safer options.
Disclosure: ourdream.ai publishes this article, and ourdream.ai is one of the platforms we cover in the alternatives section below. We've flagged it as “our platform” where it appears.
The Character.AI filter has been asked about, argued about, and worked around more than any other piece of AI companion policy — and the answer keeps moving.
A 174,882-signature Change.org petition, years of Reddit complaints about the c.ai filter, and a federal product-liability ruling all sit behind this one question.
The strangest part? After every wave of organised pressure, the filter did not loosen. It got stricter.
Three audiences keep asking the question, for three very different reasons. Adults who built long-running roleplay bots over two years and watched the tone quietly narrow. Teens whose daily tool was reshaped overnight in November 2025. And safety-conscious parents who still don't trust that the filter is strict enough.
None of those audiences should feel silly for landing on this page. Each of them is tracking a real change, and each of them deserves a real answer.
So: yes — as of April 2026, Character.AI still has a content filter, and the filter is more restrictive than it was twelve months ago.
What follows is the actual answer. The dated timeline from 2022 through this month, under-18 vs. 18+ separated out, a short and honest bypass discussion (spoiler: it barely works), how to tighten the filter if you want to, the Brainiac model rumour, and the alternatives adults are sorting themselves into.
Does Character.AI Still Have a Filter in 2026?
Yes — Character.AI still has a filter in 2026, and the filter is stricter than it was a year ago. The old model-picker days when the filter felt like a dial you could nudge are gone.
Today there are two distinct modes: an under-18 surface locked down to teen-safe chat flows, and an 18+ surface that still has an adult-focused moderation layer — looser than the teen surface, noticeably tighter than Character.AI's 2024 defaults.
As of April 2026, the platform requires mandatory face-scan age verification before you can access the 18+ surface at all. That single change is the most consequential filter-adjacent move of the year.
Some people pass the scan and land on the adult surface with an experience that feels broadly similar to 2025. Others fail the scan or get mis-sorted onto the teen surface, and there is no quick fix for that.
The filter itself — the thing deciding which words and scenarios make it through — then operates inside whichever surface you land on. The gap between those surfaces is now larger than most community discussion admits.
To understand why the filter is where it is, you have to walk the timeline.
Why Does Character.AI Have a Filter in the First Place?
Character.AI has a filter because three overlapping pressures forced one into existence: legal liability, regulatory mandate, and public-safety scrutiny.
None of these pressures went away after the filter was built. All three tightened it.
The first pressure is legal. In October 2024, Megan Garcia sued Character Technologies over the suicide of her 14-year-old son, Sewell Setzer III. On May 21 2025, U.S. District Judge Anne C. Conway ruled that Character.AI is a “product” subject to product-liability law, rejecting Character.AI's First Amendment defense — a ruling the Transparency Coalition covered in depth.
Once a federal judge categorises your chatbot as a product, content moderation stops being optional.
Regulatory mandate is the second pressure. California Senate Bill 243 — the first U.S. state law mandating AI companion chatbot safeguards — was signed by Governor Gavin Newsom on October 13 2025 and took effect January 1 2026, per TechCrunch's coverage and the California Legislative Information primary text. SB 243 prohibits companion chatbots from producing sexually explicit content for minors and from generating self-harm content.
The EU AI Act adds similar pressure for people in Europe. That is a lot of law pointed at one chatbot. These obligations are why c.ai's NSFW restrictions look the way they do right now.
The third pressure is public — and it's the one that made the other two politically inevitable.
According to a December 2025 Pew Research survey of 1,458 U.S. teens, 64% of teens 13–17 have used AI chatbots and 9% have specifically used Character.AI. Two-thirds of American teenagers. That number alone turns moderation into a political question.
The paradox: despite the Change.org petition, despite years of adults asking why Character.AI removed NSFW content, policy moved toward safety, not away.
What Does the Character.AI Filter Timeline Actually Look Like?
The short version is that the filter has only ever gotten stricter.
The long version is a sequence of legal rulings, regulatory bills, and policy reversals that most coverage summarises in two sentences — losing most of the nuance in the process.
2022 — Launch and the First Filter Backlash
Character.AI has had a filter since launch, which means the filter was never "removed" because it was never absent. The "did old Character.AI have a filter" meme is mostly a trick of memory — the 2022 filter was looser on suggestive content, but it existed.
The proof is organisational: a Change.org petition titled “Remove Character.AI nsfw filters,” launched December 30 2022 by Tobias Blanco, now sits at 174,882 verified signatures — one of the largest organised protests ever mounted against a chatbot content policy. You do not organise a petition against something that isn't there.
2023 — The NSFW Crackdown and the “Golden Age” People Mourn
2023 is the year the filter hardened on explicit content specifically. What the community calls “the golden age of c.ai” is really the brief window before this tightening. The shift went from “creative euphemism sometimes got through” to “nothing explicit gets through.”
This is the period that produced the now-famous Quora advice: “Be creative and slightly vague in your wording, and patient. [...] You have to talk around what you want, sort of like how old Victorian novels did around sensitive topics.”
The lesson people learned was not that Character.AI removed NSFW content — it was that they had to learn to write like Victorians to keep playing. That is not a loosening. That is an inflection.
2024 — The Google Licensing Deal and the Model Shuffle
2024 was a corporate-structure year more than a filter-policy year. Google struck a licensing deal with Character.AI, and the model picker began quietly consolidating under the hood.
Nothing dramatic happened to the filter itself — but once Google money is part of the architecture, every downstream decision runs through a much larger legal posture. The hard filter moves that followed in 2025 make more sense if you remember that 2024 set the stage.
October 2024 – May 2025 — The Garcia Lawsuit and the Conway Ruling
This is the legal inflection that changed everything. In October 2024, Megan Garcia filed suit against Character Technologies in the Middle District of Florida over the suicide of her 14-year-old son, Sewell Setzer III.
The case forced a direct test: is a chatbot platform more like a newspaper (protected speech) or more like a consumer product (liable for harm)?
On May 21 2025, Judge Anne C. Conway ruled Character.AI is a product. Once that ruling landed, the filter's future was fixed — every design choice is now reviewable in court, and the safer design always wins in deposition.
October 2025 — SB 243 Signed
California Senate Bill 243 was signed into law by Governor Gavin Newsom on October 13 2025 and took effect January 1 2026. It was the first U.S. state law targeting AI companion chatbots specifically, and its two core mandates — no sexually explicit content produced for minors, no self-harm content — map directly to the moderation posture Character.AI had already begun building post-Conway.
SB 243 did not cause the filter changes. It ratified them into law and raised the cost of rolling them back.
November 2025 — The Teen Chat Shutdown
This is the watershed. According to Character.AI's Help Center announcement, the phased rollout began November 24 2025, with a hard cutover on November 25 2025.
For under-18 accounts, open-ended chat was eliminated. A 2-hour daily chat cap was imposed during the transition and ramped down over the following weeks. Character.AI paired this with an in-house age assurance model plus third-party verification through Persona to decide which people land on which surface.
Parental Insights launched alongside it. When NBC News asked Megan Garcia for reaction, they led their story with her line that the new teen policy came “too late” — the mother who sued over her son's suicide framing the shutdown as a change that arrived after the harm it was trying to prevent.
January 2026 — Settlement and SB 243 Takes Effect
Two things happened in the same week. Character.AI and Google agreed to settle five teen-suicide and self-harm lawsuits filed in Florida, New York, Colorado, and Texas — including the Garcia case — with financial terms undisclosed, per CNN Business's reporting.
SB 243 also took binding effect on January 1 2026. A platform that just settled five safety lawsuits and a state law that just criminalised specific failure modes arrived on the same calendar. That combination locks in the filter posture for years — possibly permanently.
February 18, 2026 — The “Moderatedpocalypse”
On February 18 2026, Character.AI ran a mass automated copyright/IP compliance sweep that removed thousands of community-built bots — a wave the r/CharacterAI community quickly nicknamed “the Moderatedpocalypse.”
Disney sent a cease-and-desist in late September 2025; NBC and DreamWorks reportedly followed. Moderators told people, in the language surfaced by PiunikaWeb's coverage, “We have to follow the law, and that means taking Characters down when IP owners ask us to.”
Community reaction was blunt: “All of my House bots are gone again. I WAS VERY INVESTED IN EVERY SINGLE ONE OF THOSE CHATS!” and the one-line summary that stuck: “Moderatedpocalypse 2026 is a thing.”
April 2026 — Face-Scan Age Verification
This month's event is shaping the current experience. Character.AI rolled out mandatory face-scan age verification — every account must scan their face to confirm 18+ before the adult surface opens up.
Some people can't complete the scan. Others pass but get locked out anyway. Published failure rates remain qualitative only (read: nobody official is sharing numbers). This is the current frontier, and it's the single largest friction point for adults who logged in fine last week and cannot this week.
Where the Timeline Lands
The filter has not been removed. It has been rebuilt around adults and children as separate surfaces, and the adult surface is itself stricter than it was a year ago.
Is the Character.AI filter gone? No. The petition didn't work. The lawsuits did — and that tells you everything about what moves policy in this space.
Why Is the Character.AI Filter So Sensitive Now?
The Character.AI filter feels sensitive now because its moderation system is operating under three simultaneous pressures that did not all exist at once before: a settlement-era liability posture, automated IP sweeps, and a content classifier retrained more conservatively than its predecessors.
Why is Character.AI filtering everything? Because filtering everything is what a liability-era classifier does by default.
The first mechanic is posture. After settling five lawsuits, Character.AI has no margin for an edge-case failure — so the classifier is tuned to err on the side of blocking borderline content, even when it looks harmless.
IP enforcement is the second mechanic. The Disney / NBC / DreamWorks sweep trained operators to be fast and broad, not careful. Community bots caught in that net were collateral damage, which is why moderation feels heavier-handed now than it did in 2024.
Third — and this is the one that frustrates people most — late-2025 and early-2026 classifier updates over-correct on emotional-intensity cues, not just sexual ones. Grief, conflict, sometimes even intense affection get flagged.
In our own hands-on testing of Character.AI in April 2026, the filter triggered on emotional roleplay that had nothing to do with explicit content. Memory degraded noticeably after about 20 messages — the bot began forgetting details it had held five messages prior — and full-screen ads interrupted chats mid-conversation.
Character.AI sits at 3.3 stars on Google Play Store across more than 2 million reviews, and the most common complaints cite filters, ads, and feature removals. An arXiv study of Reddit narratives found 5.4% of teens reduced or quit Character.AI specifically because responses became “too censored.”
What's Different Between the Under-18 and 18+ Character.AI Experience?
The under-18 experience and the 18+ experience on Character.AI are now effectively two different products. Since November 2025, they share a login flow and a brand — and almost nothing else.
The under-18 surface does not offer open-ended chat. Teen accounts interact with a much smaller, teen-safe surface — no long-running roleplay bots, no adult AvatarFX or Imagine Chat features, and a 2-hour chat cap during the rollout period.
Parental Insights gives parents a monitoring view. Age is confirmed via Character.AI's in-house age assurance model combined with Persona, the third-party verification partner named in the Help Center announcement. With 9% of U.S. teens using Character.AI according to Pew, this surface now serves a genuinely large audience — one that had to adjust overnight.
The 18+ surface still offers open-ended chat. AvatarFX and Imagine Chat are available, and the c.ai+ subscription lives here. The filter on this surface is looser than the teen surface and stricter than a year ago.
As of April 2026, you reach this surface through the face-scan age verification flow — passing the scan is the gate. Is Character.AI's NSFW filter gone on the 18+ surface? No — the surface simply allows a wider range of adult-coded roleplay within the constraints set by the classifier and SB 243.
If you're an adult and the face-scan does not work, or the 18+ surface still blocks what you want to do, your options are threefold: accept the current surface, make the filter as strict as your account allows (the section below), or migrate to a platform whose adult surface is actually built for adults (the alternatives section after that).
None of those are morally weighted — they're just three real paths.
Can You Remove or Bypass the Filter on Character.AI?
No — you cannot officially remove or bypass the Character.AI filter. Creative workarounds — Victorian-style euphemism, OOC prompts, jailbreak prompts — exist but are unreliable, get patched within weeks, and can trigger account actions.
On Mepis, AndarilhoNoturno put it plainly: “You really can't fully ‘get around’ the filter on Character AI — it's pretty locked down.”
How to remove the filter on Character.AI the easy way? You don't. If your roleplay needs exceed what the filter allows, the practical answer is a different platform, not a smarter prompt.
For context on the broader filter policy and what Character.AI does and doesn't allow, see our guide on does character ai allow nsfw.
How Do I Turn the Character.AI Filter ON — or Stricter?
You turn the Character.AI filter “on” by confirming you're under 18 — not by flipping a toggle. The filter's strictness is determined by which surface you're on, and which surface you're on is determined by age verification.
Character.AI no longer exposes a Settings → Content switch the way older community guides describe.
For parents, the practical path is Parental Insights. It is not a filter toggle — it's a monitoring view that gives parents visibility into the teen's activity on the under-18 surface.
Combined with Persona age verification, Parental Insights is the mechanism Character.AI now recommends for families who want the strictest setting. For the detailed setup, the Character.AI Help Center article is the primary source.
For safety-conscious adults, the honest answer is that you cannot manually tighten your own filter beyond what the 18+ surface gives you, but you can make sure your account is registered correctly via the age verification flow — and you can choose not to use the features most likely to drift.
Worth naming: the face-scan verification has failure cases. Some people end up on the wrong surface by accident, and appeals are slow. Pretending that doesn't happen does nobody any good — but knowing it does gives you leverage when you appeal.
What About the Rumoured “Brainiac” Model — Does It Have a Looser Filter?
No — the rumoured “Brainiac” model with a looser filter does not exist as a looser-filter product. The rumour spread because Character.AI genuinely was shuffling models in the same window: the model picker consolidated into Chat Styles (Meow / Roar / Nyan), and a new model called PipSqueak shipped in late 2025, with PipSqueak 2 (PSQ2) arriving in April 2026.
Community reports describe PipSqueak as more emoji-heavy and less creative — not more permissive. (If anything, the emoji overload makes serious roleplay harder, which is its own kind of filter.)
Character.AI has not publicly confirmed or denied a “Brainiac” codename. The model shuffle that did happen is PipSqueak / PSQ2 and Chat Styles, per the Character.AI blog's PipSqueak 2 announcement.
If you saw a Facebook post promising Brainiac had loosened the filter — that post is, as far as anyone publicly tracking this can tell, based on a misread of a real model shuffle, not a real policy change.
What Are My Alternatives if I'm 18+ and Want Fewer Filter Walls?
The adults leaving Character.AI are not disappearing — they're sorting themselves into platforms built for different needs. Character.AI's content moderation posture has pushed a real audience outward, and the alternatives have organised into clear lanes. For a ranked head-to-head across the category, see our character ai alternative roundup.
According to AI character platform rankings, Janitor AI and SpicyChat together pull tens of millions of monthly visits — evidence that this migration is happening at scale, not in a niche.
If you write fiction and the visual side doesn't matter, NovelAI is the one to look at. Straightforward and narrower by design — no images, no video, no character wizard, just prose you control. Its filter posture is the most permissive of any paid mainstream option, which is exactly what long-form writers keep coming back for.
Janitor AI is where most of the displaced c.ai community lands first. A casual, accessible free-tier option for quick roleplay — the filter is loose, the UX is serviceable, and the free tier holds up well enough for scene-length chats. Decent for casual use, but it is not trying to be a creative toolset. For a deeper look at options beyond Janitor, see our janitor ai alternative guide.
You'd think a mobile-first companion app would cut corners on depth, and Kindroid mostly does — but that's the trade-off it's making on purpose. Simple character setup, decent memory for light use, soft-RP friendly. Might be worth a look if your need is “a chat app on my phone,” not “a platform for building bots.”
If you liked the c.ai format but wanted fewer walls, PolyBuzz is probably the closest thing. Character.AI-adjacent in interface and model behaviour, which is exactly its pitch — decent for people who want the familiar UX without the same level of restriction.
Local open-source (KoboldAI / Pygmalion / SillyTavern) gives you full technical control and no filter unless you set one. The trade-off is real: you need dedicated hardware, at least a passing comfort with the command line, and a willingness to spend a weekend on setup before your first chat.
ourdream.ai (our platform — flagged earlier in the disclosure) is built for creators who want depth and continuity across sessions. There are no NSFW restrictions and no mid-scene censorship; the conversation goes wherever you take it.
The memory architecture is the part that matters most for anyone who's felt the c.ai memory-wipe: a four-layer system (Auto Memory Log, Pinned Memories, Custom Instructions, User Personas) designed to keep continuity across long roleplay arcs, not just the last twenty turns.
Character creation uses a 6-step AI girlfriend wizard with 46 personality traits, 18 voices, 135 occupations, and a narrative field that holds up to 100,000 characters — enough depth that the character you build on Tuesday still sounds like itself on Friday.
An honest concession: the 6-step wizard has a learning curve if you've never built a character from scratch, and video generation is still slow (expect 30–60 seconds per clip). There is no native mobile app yet either — browser only.
The free tier is 55 dreamcoins, one-time, no monthly renewal — a generous trial, not an ongoing free plan, and worth saying plainly. Premium is $9.99/month billed annually, which opens up unlimited Regular and Lively messages and 1,000 dreamcoins per month that roll over indefinitely.
If you're shopping for a Character.AI alternative, the right pick depends on what you actually missed. Missed the bots you built? Look at ourdream.ai's character creator or a local open-source setup. Missed casual chat? Janitor AI or PolyBuzz. Missed long-form writing? NovelAI. Different tools, different jobs. For the full ranked comparison of unfiltered chat options, see our best nsfw ai chat roundup.
Will Character.AI Ever Remove the Filter?
No — Character.AI will not remove the filter. Everything about the current legal and regulatory environment points in the opposite direction. The honest answer to "will Character.AI remove the NSFW filter" is the same as the answer to "will Character.AI remove the filter" at all: not in any foreseeable product cycle.
Three reasons it stays. First, Judge Conway's May 2025 ruling means product-liability exposure is now a permanent part of the company's calculus. A product-classified chatbot cannot ship a looser filter without inviting the next lawsuit.
Second, California SB 243 is binding as of January 1 2026, and several other state legislatures are already drafting similar bills — the California standard will become the national floor faster than most people expect.
Third, Character.AI has settled the open cases on terms that almost certainly include ongoing safety commitments. Why won't Character.AI remove the filter? Because the incentive structure now punishes every move in that direction.
For adults hoping for a filter rollback, the practical read is to plan for the rollback not happening, rather than waiting for it. Has Character.AI removed the filter? No. It is not coming.
What Does This All Mean for the Future of AI Companions?
The Character.AI story is not really about a filter. It is about who gets to decide what an AI companion is allowed to be — and what happens when the people building the technology and the people using it have fundamentally different answers.
Two real answers exist, and they do not cancel each other out. Safety advocates see a company that prioritised engagement over teen welfare and had to be forced into responsibility by a federal judge and a state legislature. Adults using the platform see a creative tool that keeps getting narrower in the name of protecting children, a group those adults are not part of. Both are true. Neither invalidates the other.
Megan Garcia is the central figure in how the policy actually shifted, and she deserves a serious acknowledgment here, not a footnote: she fought for a specific, concrete change after a loss no parent should carry, and that change happened. The fact that adults now face a stricter 18+ surface is downstream of that loss, not a reason to discount it.
The quieter observation is harder — and it's the one most coverage skips entirely. Adults leaving Character.AI for platforms with fewer walls is not a villain arc. It is adults looking for tools that treat them as adults.
What's worth noticing is the broader pattern: the filter debate is rehearsing the same disagreement playing out across generative AI more broadly. Who is the person affected, who is the protected party, and who decides when those two overlap?
The Change.org petition that reached 174,882 signatures did not move policy. The Conway ruling did. The lesson is not that people's voices do not matter. It's that public advocacy and legal liability operate on different clocks; the legal clock runs faster now.
FAQ
Does Character.AI still have a filter in 2026?
Yes. As of April 2026, Character.AI has a filter, and it is the strictest it has ever been. Mandatory face-scan age verification now gates the 18+ surface, and the under-18 surface no longer offers open-ended chat at all.
When will Character.AI remove the filter?
They are not planning to. After Judge Anne C. Conway’s May 2025 ruling classified Character.AI as a product subject to product-liability law, California SB 243 taking effect January 1 2026, and the January 2026 settlement of five teen-suicide and self-harm lawsuits, the incentive structure runs in the opposite direction. Every pressure on the company pushes the filter up, not down.
Can you turn off the Character.AI filter?
You cannot toggle the filter off. What you can do is influence which surface your account lands on via the age verification flow. An 18+ account lands on the adult surface with its looser filter; an under-18 account lands on the teen-safe surface with the strictest filter. The face-scan verification decides where you end up.
Is the Character.AI NSFW filter gone on Reddit or other places?
Not gone, just misread. Reddit threads periodically claim the filter has been removed. Two patterns explain almost all of them: pre-November-2025 archival threads people mistake for current posts, and rumour cycles that spin up around model releases. As of April 2026 the filter is not gone, and it is stricter on the 18+ surface than it was a year ago.
Why is the Character.AI filter so strict now?
Three pressures compound: a post-settlement liability posture that rewards blocking edge cases, California SB 243’s binding mandates, and classifier retuning that over-corrects on emotional-intensity cues as well as sexual content.
Is there a way around the Character.AI filter that actually works?
Not really. Creative rewording sometimes gets a soft scene through, but bypass methods are unreliable, patched regularly, and risk account actions. The practical answer for people whose needs exceed the 18+ surface is a different platform, not a smarter prompt.
So Where Does This Leave You?
The Character.AI filter question has a simple answer and a complicated one.
Simple: yes, it's still there, and it is stricter than last year. Complicated: the filter is the visible edge of a much larger renegotiation of what AI companions are allowed to be, for whom, and under whose supervision. The petition did not move it. The lawsuits did.
Whatever brought you here — a shutdown bot you were invested in, a teen in the house, a creative project the filter blocked, or just a general sense that the ground is shifting — the honest answer is that the ground is shifting.
If ourdream.ai or any other alternative ends up being the right fit, that's worth testing quietly for yourself; if Character.AI's adult surface still serves what you need, that is also fine.
The filter is not a verdict on you. It's a product of a legal moment — and legal moments are the only things in this story that ever really moved.
