Introducing Wu Ultra (Experimental Preview)

Published November 10, 2025

We're releasing Wu Ultra, our largest translation model to date, and this one really is experimental.

Unlike our other models in preview, Wu Ultra is intentionally undertrained, using only a fraction of our existing data. We're using it as a testbed for some pretty significant changes to how we approach modeling and data curation, and we wanted to get it in your hands early to see how it performs in the real world.

Our main goal with Wu Ultra is to make translations feel more human. More idiomatic and fluent while staying accurate. This turns out to be surprisingly hard. The path of least resistance for any translation model is to mirror the original sentence structure and translate things fairly literally. It's safer, more predictable, and generally leads to fewer outright errors. But it also doesn't sound like how a native speaker would actually write. That's what we're trying to change.

So far, the results are promising but mixed. Wu Ultra shows a meaningful 4.06% improvement in our new idiomaticity metric compared to Wu Max. The trade-off is that it currently trails Wu Max by roughly 2% on our updated LiTERatE benchmark. This gap comes from two factors: Wu Ultra being undertrained (which is intentional at this stage), and the model prioritizing more natural phrasing that sometimes introduces minor accuracy issues. Both are things we're actively working on.

A heads up: Because Wu Ultra is undertrained, it may not follow our formatting guidelines (like system boxes) or custom instructions as reliably as our other models. If precise formatting or adherence to custom instructions is critical for your workflow, we'd recommend sticking with Wu Max for now.


Here's what the difference looks like in practice:

Sample 1:

Wu Max:

"The answers align?" Liu Xie's smile faded slightly, but surprisingly, he didn't press the matter further. Instead, he beckoned Yang Qi to sit beside his bed, clearly regarding him as a confidant. His tone remained gentle as he said, "Palace Attendant, come closer. I recently spoke with Palace Attendant Ma about an incident from your past during Emperor Ling's reign."

"Back then, Emperor Ling asked you which emperor, himself or Emperor Huan, was superior," Liu Xie continued, not waiting for Yang Qi's response, like a child eager to show off a newly learned character. "You replied that comparing them was like comparing Yu Shun to Tang Yao."

Wu Ultra:

"His answer remains the same?"

Liu Xie's smile faded slightly. Surprisingly, he didn't press the matter. Instead, he beckoned Yang Qi to sit beside him on the couch, clearly treating him as a trusted confidant. His tone remained gentle as he said, "Come closer, Palace Attendant. I was just speaking with Palace Attendant Ma, and he recounted a story about you from the time of Emperor Ling."

Without waiting for Yang Qi to respond, Liu Xie continued, like a child eager to show off a newly learned word. "Emperor Ling asked you to compare him to Emperor Huan—who was superior and who was inferior. You replied that comparing them was like comparing Yu Shun to Tang Yao."

Sample 2:

Wu Max:

Old Master Cheng had already passed away. Why would his descendants be coming to his door now?

Could it be... that after Old Master Cheng's death, his family discovered the exorbitant sums he'd spent on Lin Zhishui's calligraphy, suspecting their elder had been swindled? Was that why they'd come?

After all, Lin Zhishui was just an obscure folk calligrapher, still so young, yet he sold his pieces for tens of thousands of yuan...

That was a bit excessive.

But still, Lin Zhishui reasoned, as a distinguished calligraphy master—even if I'm not famous—selling a piece for tens of thousands of yuan isn't particularly outrageous, is it?

Wu Ultra:

Old Man Cheng had already passed away. Why would his descendants show up now?

Could it be... because Old Man Cheng had spent so much money on his calligraphy? After his death, his family discovered it, thought their elder had been scammed, and came to confront him?

After all, he was just an obscure folk calligrapher, and so young. Selling a single piece for tens of thousands of yuan...

It was a tad excessive.

But still, he reasoned, I'm a master calligrapher, even if I'm not famous. Selling a piece for tens of thousands isn't that outrageous, is it?

In both samples, you can see Wu Ultra making structural choices that prioritize natural flow. Shorter sentences, more conversational phrasing ("show up" instead of "be coming to his door"), and rhythm that feels closer to how someone would actually think or speak.

But Sample 2 also shows the trade-off we're still working on. Notice how Wu Ultra's pronoun usage gets a bit ambiguous ("his calligraphy" and "his family" referring to different people). Wu Max is clearer here by explicitly naming "Lin Zhishui" and "Old Master Cheng." This is exactly the kind of thing we're iterating on—keeping the naturalness while tightening up clarity.


We're planning to push updates to Wu Ultra at least every two weeks as we continue training and refining this approach. Our goal is to keep improving idiomaticity while narrowing (and eventually closing) that accuracy gap.

A quick note on pricing: Wu Ultra is our first Premium model, which means it runs on credits rather than your daily/weekly quota. You can read more about how our new credit and quota system works in this announcement. Pro-tier users get 10,000 credits on the 1st of each month, and all existing users received 1,000 bonus credits at launch to try things out.

This is where you come in. During this experimental phase, we really want to understand your preferences. Do you prefer the more idiomatic approach, even if it occasionally means a small accuracy trade-off? Or would you rather stick with something closer to the source structure?

When you try Wu Ultra, we’d love to hear:

  • Where it feels more like a native speaker than Wu Max.

  • Any spots where it sounds great but gets details wrong.

Please post your examples and feedback in our Discord's #feedback channel.

Your feedback will directly help us refine this new method, allowing us to bring these fluency and quality gains to our other models as well. More updates coming soon.