What AI Model Does Character.AI Use? The Full Answer (2026)
Insights | Updated on April 20, 2026
By Lizzie Od, Editor & AI Roleplay Enthusiast

TL;DR:
The short answer to "what AI model does Character.AI use" is this — Character.AI runs on its own proprietary in-house large language models, publicly named through the C-series (C1.2) and the Kaiju family of dense Transformer models in three sizes (13B, 34B, 110B). It is not ChatGPT, not GPT-4, not LLaMA, not Claude. The stack was built from scratch by Noam Shazeer and Daniel De Freitas, ex-Google engineers from the LaMDA team. Since the August 2024 Google licensing deal, Character.AI has also been integrating open-source base models alongside its own — so the exact model behind any given chat today is a moving target.
Why Is There So Much Confusion About What Character.AI Runs On?
There is a surprising amount of confusion about what Character.AI actually runs on — and some of the answers ranking highest on Google are wrong. The top Quora result confidently says LLaMA. Several explainer sites hedge or imply it is some kind of ChatGPT wrapper. A lot of Reddit threads assume the answer must be an OpenAI model because OpenAI dominates the cultural imagination of what “AI” means in 2026.
The strange part is the gap between how much people use Character.AI and how little they know about what powers it. Roughly 20 million monthly active users (per SimilarWeb traffic data), an average session north of seventeen minutes, engagement that dwarfs ChatGPT's — and almost none of those people could name the model driving the conversations they keep coming back to. It is both the most intimate AI many of them have ever talked to and the most mysterious.
Here is the direct answer, up front: Character.AI runs on its own proprietary in-house model — the C-series (C1.2) and the Kaiju family. Not ChatGPT. Not LLaMA. Not Claude. The rest of this guide is the context that makes that sentence mean something.
What AI Model Does Character.AI Actually Use?
The AI model Character.AI uses is its own proprietary in-house large language model — publicly known through the C-series (C1.2) and the Kaiju family of dense Transformer-based models in three sizes: Small at 13 billion parameters, Medium at 34 billion, and Large at 110 billion. That is what powers Character.AI at the architectural level.
“Proprietary in-house” can sound like marketing, so it is worth being concrete. Character.AI built the whole stack themselves — training pipeline, base model, post-training, serving infrastructure. There's no OpenAI API key humming under the hood. No fine-tune of LLaMA sits behind a curtain either. The company trained its own foundation models from scratch.
The Kaiju family, disclosed in November 2025, consists of dense Transformer-based autoregressive LLMs with int8 quantization, multi-query attention, sliding-window attention, and cross-layer cache sharing. Per the company's own research post, sliding-window and global attention layers interleave at roughly a 5:1 ratio, and an optional classifier head outputs token-level safety metrics directly from the model. Character.AI states plainly that these models were "heavily optimized for engaging conversation and serving efficiency… rather than a focus on academic benchmarks." That is why you will not find Character.AI at the top of MMLU leaderboards. It was never built for that.
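The interleaving idea is easy to picture in code. Here is a minimal sketch, assuming the roughly 5:1 ratio from the research post; the exact layer placement and the 1,024-token window are illustrative assumptions, not disclosed Kaiju values:

```python
def kaiju_style_layer_plan(n_layers: int, ratio: int = 5, window: int = 1024):
    """Assign each Transformer layer an attention type: `ratio` sliding-window
    layers for every one global layer. Placement of the global layers here
    (every sixth layer) is a guess for illustration only."""
    plan = []
    for i in range(n_layers):
        if (i + 1) % (ratio + 1) == 0:
            plan.append(("global", None))     # full attention over the whole context
        else:
            plan.append(("sliding", window))  # attends only to the last `window` tokens
    return plan

plan = kaiju_style_layer_plan(12)
# a 12-layer stack comes out as 10 sliding-window layers and 2 global layers
```

The serving-efficiency payoff is that a sliding-window layer only needs a fixed-size KV cache, so most of the stack stops growing with conversation length; combined with multi-query attention and cross-layer cache sharing, that is what keeps per-chat memory cheap at Character.AI's scale.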
Here are the named models that have publicly come out of this program:
- C-series: C1.1 and C1.2, announced alongside the March 2023 Series A — the first generation of Character.AI's in-house model to get product-level naming.
- Kaiju family: Small (13B), Medium (34B), Large (110B), disclosed in November 2025 as the model family the company has built to date.
Character.AI's own app-store listing puts the refutation in the company's own words:
“Unlike ChatGPT, Character AI is powered by our own proprietary technology founded on large language models (LLMs) we build and develop from scratch.”
One important piece of nuance: in a public statement accompanying the August 2024 Google deal, Character.AI said it would be “making greater use of third-party LLMs alongside our own.” Translation — the model Character.AI is running on any given day may be a Kaiju descendant, a fine-tuned open-source base, or a blend. The in-house lineage is real. The single-source-of-truth framing is outdated.
Is Character.AI Powered by ChatGPT or OpenAI?
No — Character.AI is not powered by ChatGPT or OpenAI, and it never has been. The team built its own model stack from scratch, and they do not route requests to an OpenAI API. The company's own app-store listing makes the point bluntly: Character.AI is powered by “our own proprietary technology founded on large language models we build and develop from scratch.”
To be specific about what Character.AI does not run on:
- Not ChatGPT — no GPT-3, GPT-3.5, GPT-4, or GPT-4o. Character.AI does not use any OpenAI model.
- Not LLaMA — the top Quora answer ranking for this query names Meta's LLaMA. It is wrong. Character.AI's Kaiju family sits on the same broad Transformer architecture LLaMA is built on, but they are separate models from separate companies.
- Not Claude — different company, different stack. Anthropic and Character.AI have no licensing or technical relationship.
- Not Google Gemini — despite the August 2024 Google licensing deal, Character.AI is not a Gemini front-end. Google got a license to Character.AI's technology, not the other way around.
Character.AI has never made a marketing campaign out of explaining its model the way OpenAI has, and the company's public disclosures are scattered across blog posts, app-store copy, and research papers. But the answer itself is not ambiguous.
Who Actually Built the Model Behind Character.AI?
The model behind Character.AI was built by Noam Shazeer and Daniel De Freitas, two ex-Google engineers whose résumés explain everything about why Character.AI did not need to wrap somebody else's model. Both were inside Google when the idea of putting a genuinely good conversational model in consumers' hands was being actively blocked — and both left specifically to do the thing Google wouldn't ship.
Shazeer is a co-author of the 2017 paper “Attention Is All You Need,” which introduced the Transformer architecture. For readers who don't work in ML: the Transformer is the architecture under essentially every modern large language model, GPT included.
That credential matters. If there is a short list of living humans with the credibility to build a foundation model from scratch, Shazeer is on it. This is the single most underrated fact about Character.AI's origin story: the company's moat was never capital, it was that one of the Transformer's own inventors was building the model in-house.
De Freitas spent his last years at Google leading Meena, an open-domain chatbot released in 2020 that became the direct technical precursor to LaMDA. The Meena paper describes a 2.6B-parameter model tuned for conversational sensibility and specificity. De Freitas wanted to ship Meena publicly. Google wouldn't let him. The stories vary on how polite the internal disagreements were, but the upshot is not in dispute — De Freitas eventually stopped waiting.
Both founders went on to co-author the LaMDA technical report in 2022 before leaving Google to found Character.AI. Here is the important nuance — Character.AI's model is not LaMDA. It is adjacent lineage. Same authors, separate proprietary stack, rebuilt from scratch at a new company. Character.AI was founded in 2021 and launched publicly in beta in September 2022. Wikipedia's Character.ai entry holds the basic biographical facts.
What Happened With the Google Deal in August 2024?
What happened with the Google deal in August 2024 was a non-exclusive licensing agreement worth approximately $2.7 billion — not an acquisition, not a merger, not a full exit.
Google paid roughly $2.7 billion to license Character.AI's LLM technology non-exclusively. Shazeer, De Freitas, and approximately thirty research-team members rejoined Google inside Google DeepMind, and Character.AI itself was valued at approximately $2.5 billion at peak. Shazeer was appointed co-technical lead on Google's Gemini project alongside Jeff Dean and Oriol Vinyals, per reporting from TechCrunch and Axios.
And what the deal isn't: it is not an acquisition. Character.AI remained a standalone company with its own product. Google did not take ownership of the consumer app, did not gain access to user chat logs, and did not receive Character.AI's chat history database. Google bought a license to the technology. That's it.
Shazeer's own framing at the time was carefully corporate:
“I am confident that the funds from the non-exclusive Google licensing agreement, together with the incredible Character.AI team, positions Character.AI for continued success in the future.”
What it means for the model you're chatting with: Character.AI still trains and ships its own models. The August 2024 "greater use of third-party LLMs" language means the exact composition can vary day to day. Some technical observers read that statement as a signal that Character.AI was quietly moving off its own foundation models entirely — a read that is plausible but unconfirmed, and the Kaiju disclosure in November 2025 complicates it. What is confirmed: Character.AI is not a Gemini front-end, and your chats are not being piped into Google's systems.
How Does Character.AI's Model Compare to ChatGPT and Claude?
Character.AI's model compares to ChatGPT and Claude on a handful of axes that actually matter: who owns it, what it is optimized for, and whether you can get at it programmatically. The comparison reveals why people who love Character.AI love it specifically, not interchangeably with the others.
| Dimension | Character.AI | ChatGPT (GPT-4o) | Claude (Anthropic) |
|---|---|---|---|
| Owner / builder | Character.AI (in-house) | OpenAI | Anthropic |
| Architecture | Proprietary Transformer (Kaiju, C-series) | Proprietary Transformer (GPT) | Proprietary Transformer (Claude) |
| Design optimization | Engaging conversation, persona consistency | General-purpose assistance | Reasoning + long context |
| Avg session length | ~17m 23s | 7–12 min | Not publicly benchmarked |
| Persona consistency | High (core product) | Low (assistant default) | Medium (supports personas) |
| Content filter | Strict — no NSFW, teen-segmented since Dec 2024 | Strict | Strict (long refusal list) |
| Memory window | Short; noticeable degradation around ~20 messages per our testing | 128K+ context | 200K+ context |
| Public API | No | Yes (paid) | Yes (paid) |
| Free tier | Yes, full access | Yes, rate-limited | Yes, rate-limited |
People who want a general-purpose assistant will be better served by ChatGPT or Claude, full stop. For readers focused on long-running roleplay where the character has to stay in character across hours of back-and-forth, Character.AI's model is the one explicitly designed for that job — which is the whole point of the product, and the reason people describe it as irreplaceable even while they complain about the filter.
Can You Access Character.AI's Model Through an API or Run It Locally?
No — Character.AI does not offer a public API, and you cannot run its model locally. The stack is closed: web, iOS, and Android clients only. No developer keys. No self-hosting. For a Character.AI-style setup under your own control, open-source LLMs like Llama 3 or Mistral running in LM Studio or Ollama are the closest analog — though you lose the character library and the memory tuning.
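For readers weighing the local route, the "character" half of a DIY setup is mostly prompt assembly on top of whatever model you run. A minimal sketch, assuming a locally served model (the persona fields, trimming policy, and prompt format below are hypothetical illustrations, not Character.AI's actual internals):

```python
def build_prompt(persona: dict, history: list[tuple[str, str]], max_turns: int = 20) -> str:
    """Assemble a persona-grounded prompt for a local LLM. Keeping only the
    most recent `max_turns` exchanges is a crude stand-in for the memory
    tuning Character.AI does server-side."""
    system = (
        f"You are {persona['name']}. {persona['description']} "
        f"Stay in character at all times. Speaking style: {persona['style']}."
    )
    recent = history[-max_turns:]
    turns = "\n".join(f"{speaker}: {text}" for speaker, text in recent)
    return f"{system}\n\n{turns}\n{persona['name']}:"

prompt = build_prompt(
    {"name": "Archivist", "description": "A dry, meticulous librarian.",
     "style": "clipped, formal"},
    [("User", "What's in the restricted section?")],
)
```

Feed a string like this to any local model via LM Studio or Ollama and you get a rough Character.AI analog; what you do not get is the curated character library or the conversational post-training the article describes.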
Is Character.AI's Model Actually Getting Worse Over Time?
Yes — Character.AI's model is getting worse at some of the things its power users originally loved it for, and the company is mostly transparent about why. The nerfing question comes up in every long-running Character.AI subreddit thread, and the honest read is that the critics are not wrong. Something real has changed.
The most widely shared sentiment on r/CharacterAI, surfaced through 404 Media's reporting, is that personas have flattened and filter refusals have crept into conversations that used to run cleanly. "All of a sudden, my bots had completely reverted back to a worse state than they were when I started using CAI," one regular wrote in mid-2024. Another: "No bot is themselves anymore and it's just copy and paste. I'm tired of smirking, amusement, a pang of, feigning, and whatever other bs comes out of these bots' limited ass vocabularies."
From the company's side, the biggest recent change is the under-18 safety model that started rolling out on November 24, 2024 — announced on Character.AI's blog and covered by TechCrunch. The rollout was prompted in part by an October 2024 wrongful-death lawsuit filed by the family of Sewell Setzer III; that's the one-sentence version. One 2025 academic survey of teen AI-companion users found that 5.4% of respondents who had used Character.AI self-reported reducing or quitting specifically because responses had become “too censored.”
Our own hands-on testing in April 2026 matched those reports: persona flatness creeping into the back half of longer sessions, filter triggers on prompts that were explicitly non-sexual, and memory degradation becoming noticeable around the twenty-message mark. None of this is unusual for a free-tier LLM under heavy serving pressure. It is just the current state of the product.
Readers whose main frustration is the filter do have alternatives. Janitor AI and CrushOn.AI occupy a similar roleplay niche with looser filters. Technical users can run Llama 3 or Mistral locally. ourdream.ai is a proprietary uncensored option in the same category. None of these replaces Character.AI's character library — they are a pressure release, not a migration.
What Should We Take Away From How Character.AI Built Its Model?
What we should take away from how Character.AI built its model is that none of the genuinely hard questions about AI companionship have easy answers — and Character.AI is the product where those questions arrived first, at scale, years before the industry caught up.
One 2025 academic analysis of shared Character.AI chat-log transcripts found that 92.9% of the people in the sample had at least one companionship-oriented conversation. Average session lengths run more than twice the ChatGPT average. Whatever the model is doing, it is doing something general-purpose assistants are not — and the question of what it means when people spend more time with a conversational AI than with the most capable general one is the question the industry is not actually ready to answer.
Privacy is the second tension. Character.AI's chats are uploaded to Character.AI's servers by default. Under the August 2024 Google deal, Google did not gain access to user chats — but Character.AI itself retains them, and the legal environment around conversational AI is still being written. Mozilla's privacy reports on AI chatbots are worth consulting. The Sewell Setzer III case is a reminder that the stakes are not theoretical.
And then the filter debate. The under-18 model is a response to real harm. It is also, per the 5.4% figure, the reason some teens stopped using the product. Both things can be true at once. Who gets to decide where the line sits between safety and paternalism when the product is emotional, not transactional? That question does not have a policy-paper answer.
FAQ
Is Character.AI generative AI?
Yes, Character.AI is generative AI. It generates text responses token by token using a Transformer-based large language model, the same broad category of architecture as ChatGPT and Claude — just trained and served by Character.AI rather than OpenAI or Anthropic.
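"Token by token" can be made concrete with a toy autoregressive loop. The stub below uses an invented lookup table in place of a real model's vocabulary scores; only the control flow mirrors how a Transformer LM, Character.AI's included, actually produces a reply:

```python
def next_token(context: list[str]) -> str:
    """Stub next-token predictor. A real model scores every vocabulary token
    given the full context; this fixed table stands in for those scores."""
    table = {"<start>": "Hello", "Hello": "there", "there": "!", "!": "<end>"}
    return table[context[-1]]

def generate(max_tokens: int = 10) -> list[str]:
    context = ["<start>"]
    for _ in range(max_tokens):
        tok = next_token(context)  # each step conditions on everything generated so far
        if tok == "<end>":
            break
        context.append(tok)
    return context[1:]

# generate() -> ["Hello", "there", "!"]
```

Nothing here is a script or a canned response: the output only exists once the loop has run, one token at a time, which is why the same character can answer the same question differently on different days.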
Does Character.AI use ChatGPT, GPT-3, or GPT-4?
No, Character.AI does not use GPT-3, GPT-4, or any OpenAI model. Per the company's own app-store listing, the product is powered by "our own proprietary technology founded on large language models we build and develop from scratch."
Does Character.AI use LLaMA or any Meta model?
No. The Quora answer currently ranking for this query claims LLaMA; it is wrong. Character.AI's Kaiju family shares the Transformer architecture Meta's LLaMA is built on — but they are separate models from separate companies.
Is Character.AI really AI, or is it just scripted responses?
That depends on what you mean by "really." Nothing in Character.AI is scripted. Every response is generated from scratch by a large language model, token by token. What some people mistake for scripting is the model falling into repetitive phrasing patterns under its current training objectives — a model-quality issue, not a different underlying technology.
Can I access Character.AI's model through an API or run it locally?
No. Character.AI offers no public API, and its model is not available for self-hosting. The closest analog under your own control is running an open-source LLM like Llama 3 or Mistral locally.
Will Google train on my Character.AI chats after the August 2024 deal?
Because the August 2024 deal was a non-exclusive technology license rather than a data transfer, no — Google did not receive access to user chat logs. Character.AI itself still stores and processes your chats on its own servers, per its privacy policy.
Why does Character.AI forget what I said earlier in the conversation?
This is the context window limit in action. Every large language model has a fixed amount of recent conversation it can see at once, measured in tokens. Character.AI's context window is shorter than ChatGPT's or Claude's, and in our hands-on testing memory degradation becomes noticeable around the twenty-message mark. The model isn't forgetting on purpose — it's simply not being shown the earlier turns.
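The mechanics of that "forgetting" can be sketched in a few lines. This toy version trims a conversation to a token budget; the flat 60-tokens-per-message figure and the budget are simplifying assumptions chosen so the cutoff lands near the twenty-message mark from our testing, not Character.AI's real numbers:

```python
def visible_history(messages: list[str], budget_tokens: int, tokens_per_msg: int = 60) -> list[str]:
    """Return the suffix of the conversation that fits in the context window.
    Real serving stacks count actual tokens per message; the flat per-message
    figure here is a simplifying assumption."""
    keep = budget_tokens // tokens_per_msg
    return messages[-keep:] if keep < len(messages) else messages

msgs = [f"msg {i}" for i in range(40)]
seen = visible_history(msgs, budget_tokens=1200)  # budget leaves room for ~20 messages
# the model is never shown "msg 0" through "msg 19" at all
```

So when a character contradicts something you said thirty messages ago, it is not a memory bug in the usual sense: the earlier turns were silently dropped from the prompt before the model ever ran.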
Where to Start
What this means for anyone still using Character.AI is that the model you're chatting with right now is in motion — and that is not necessarily a bad thing, but it's worth understanding. Character.AI's original model was a genuinely ambitious bet by two of the best conversational-AI researchers in the world. The August 2024 Google deal changed the incentive structure, and the accompanying "greater use of third-party LLMs" shift changed the serving composition. The December 2024 teen-model split changed who can even access what. The product many people loved in 2023 is not exactly the same product today.
That does not make it bad. It does make it shifting — and if your relationship to the tool is the kind where the model matters (if you notice the difference between a persona that stays in character and one that drifts), you should probably pay attention to Character.AI's own research blog the way you'd pay attention to any other tool in active development. The mystery of what powers Character.AI turns out to be less a mystery than a moving target. Whether that bothers you is the actual question. If you want an alternative without filter walls, start with ourdream.ai.

Related Articles
ourdream vs candy.ai
sweeter than candy?

ourdream vs GirlfriendGPT
Which AI companion actually remembers you?

ourdream vs JuicyChat
Comparing content freedom and image quality.

ourdream vs SpicyChat
How does SpicyChat stack up against ourdream?