Zencoder, the Silicon Valley startup that builds AI-powered coding agents, released a free desktop application on Monday that it says will fundamentally change how software engineers interact with artificial intelligence — moving the industry beyond the freewheeling era of "vibe coding" toward a more disciplined, verifiable approach to AI-assisted development.
The product, called Zenflow, introduces what the company describes as an "AI orchestration layer" that coordinates multiple AI agents to plan, implement, test, and review code in structured workflows. The launch is Zencoder's most ambitious attempt yet to differentiate itself in an increasingly crowded market dominated by tools like Cursor, GitHub Copilot, and coding agents built directly by AI giants Anthropic, OpenAI, and Google.
"Chat UIs were fine for copilots, but they break down when you try to scale," said Andrew Filev, Zencoder's chief executive, in an exclusive interview with VentureBeat. "Teams are hitting a wall where speed without structure creates technical debt. Zenflow replaces 'Prompt Roulette' with an engineering assembly line where agents plan, implement, and, crucially, verify each other's work."
The announcement arrives at a critical moment for enterprise software development. Companies across industries have poured billions of dollars into AI coding tools over the past two years, hoping to dramatically accelerate their engineering output. Yet the promised productivity revolution has largely failed to materialize at scale.
Why AI coding tools have failed to deliver on their 10x productivity promise
Filev, who previously founded and sold the project management company Wrike to Citrix, pointed to a growing disconnect between AI coding hype and reality. While vendors have promised tenfold productivity gains, rigorous studies — including research from Stanford University — consistently show improvements closer to 20 percent.
"If you talk to real engineering leaders, I don't remember a single conversation where somebody vibe coded themselves to 2x or 5x or 10x productivity on serious engineering production," Filev said. "The typical number you would hear would be about 20 percent."
The problem, according to Filev, lies not with the AI models themselves but with how developers interact with them. The standard approach of typing requests into a chat interface and hoping for usable code works well for simple tasks but falls apart on complex enterprise projects.
Zencoder's internal engineering team claims to have cracked the problem with a different approach. Filev said the company now operates at roughly twice the velocity it achieved 12 months ago, not primarily because AI models improved, but because the team restructured its development processes.
"We had to change our process and use a variety of different best practices," he said.
Inside the four pillars that power Zencoder's AI orchestration platform
Zenflow organizes its approach around four core capabilities that Zencoder argues any serious AI orchestration platform must support.
Structured workflows replace ad-hoc prompting with repeatable sequences (plan, implement, test, review) that agents follow consistently. Filev drew parallels to his experience building Wrike, noting that individual to-do lists rarely scale across organizations, while defined workflows create predictable outcomes.
Spec-driven development requires AI agents to first generate a technical specification, then create a step-by-step plan, and only then write code. The approach became so effective that frontier AI labs including Anthropic and OpenAI have since trained their models to follow it automatically. The specification anchors agents to clear requirements, preventing what Zencoder calls "iteration drift," or the tendency for AI-generated code to gradually diverge from the original intent.
Multi-agent verification deploys different AI models to critique each other's work. Because AI models from the same family tend to share blind spots, Zencoder routes verification tasks across model providers, asking Claude to review code written by OpenAI's models, or vice versa.
"Think of it as a second opinion from a doctor," Filev told VentureBeat. "With the right pipeline, we see results on par with what you'd expect from Claude 5 or GPT-6. You're getting the benefit of a next-generation model today."
Parallel execution lets developers run multiple AI agents simultaneously in isolated sandboxes, preventing them from interfering with each other's work. The interface provides a command center for monitoring this fleet, a significant departure from the current practice of managing multiple terminal windows.
How verification solves AI coding's biggest reliability problem
Zencoder's emphasis on verification addresses one of the most persistent criticisms of AI-generated code: its tendency to produce "slop," or code that appears correct but fails in production or degrades over successive iterations.
The company's internal research found that developers who skip verification often fall into what Filev called a "death loop." An AI agent completes a task successfully, but the developer, reluctant to review unfamiliar code, moves on without understanding what was written. When subsequent tasks fail, the developer lacks the context to fix problems manually and instead keeps prompting the AI for solutions.
"They literally spend more than a day in that death loop," Filev said. "That's why the productivity is not 2x, because they were running at 3x first, and then they wasted the whole day."
The multi-agent verification approach also gives Zencoder an unusual competitive advantage over the frontier AI labs themselves. While Anthropic, OpenAI, and Google each optimize their own models, Zencoder can mix and match across providers to reduce bias.
"This is a rare situation where we have an edge on the frontier labs," Filev said. "Most of the time they have an edge on us, but this is a rare case."
Zencoder faces steep competition from AI giants and well-funded startups
Zencoder enters the AI orchestration market at a moment of intense competition. The company has positioned itself as a model-agnostic platform, supporting major providers including Anthropic, OpenAI, and Google Gemini. In September, Zencoder expanded its platform to let developers use command-line coding agents from any provider within its interface.
That strategy reflects a pragmatic acknowledgment that developers increasingly maintain relationships with multiple AI providers rather than committing exclusively to one. Zencoder's universal platform approach lets it serve as the orchestration layer regardless of which underlying models a company prefers.
The company also emphasizes enterprise readiness, touting SOC 2 Type II, ISO 27001, and ISO 42001 certifications along with GDPR compliance. These credentials matter for regulated industries like financial services and healthcare, where compliance requirements can block adoption of consumer-oriented AI tools.
But Zencoder faces formidable competition from multiple directions. Cursor and Windsurf have built dedicated AI-first code editors with devoted user bases. GitHub Copilot benefits from Microsoft's distribution muscle and deep integration with the world's largest code repository. And the frontier AI labs continue expanding their own coding capabilities.
Filev dismissed concerns about competition from the AI labs, arguing that smaller players like Zencoder can move faster on user experience innovation.
"I'm sure they will come to the same conclusion, and they're smart and moving fast, so I'm sure they will catch up fairly quickly," he said. "That's why I said in the next six to 12 months, you're going to see a lot of this propagating through the whole space."
The case for adopting AI orchestration now instead of waiting for better models
Technical executives weighing AI coding investments face a difficult timing question: Should they adopt orchestration tools now, or wait for frontier AI labs to build these capabilities natively into their models?
Filev argued that waiting carries significant competitive risk.
"Right now, everybody is under pressure to deliver more in less time, and everybody expects engineering leaders to deliver results from AI," he said. "As a founder and CEO, I do not expect 20 percent from my VP of engineering. I expect 2x."
He also questioned whether the major AI labs will prioritize orchestration capabilities when their core business remains model development.
"In the ideal world, frontier labs should be building the best-ever models and competing with each other, and Zencoders and Cursors need to build the best-ever UI and UX application layer on top of those models," Filev said. "I don't see a world where OpenAI will offer you our code verifier, or vice versa."
Zenflow launches as a free desktop application, with updated plugins available for Visual Studio Code and JetBrains integrated development environments. The product supports what Zencoder calls "dynamic workflows," meaning the system automatically adjusts process complexity based on whether a human is actively monitoring and on the difficulty of the task at hand.
Zencoder said internal testing showed that replacing standard prompting with Zenflow's orchestration layer improved code correctness by approximately 20 percent on average.
What Zencoder's bet on orchestration reveals about the future of AI coding
Zencoder frames Zenflow as the first product in what it expects to become a significant new software category. The company believes every vendor focused on AI coding will eventually arrive at similar conclusions about the need for orchestration tools.
"I think the next six to 12 months will be all about orchestration," Filev predicted. "A lot of organizations will finally reach that 2x. Not 10x yet, but at least the 2x they were promised a year ago."
Rather than competing head-to-head with frontier AI labs on model quality, Zencoder is betting that the application layer (the software that helps developers actually use these models effectively) will determine winners and losers.
It is, Filev suggested, a familiar pattern from technology history.
"This is very similar to what I observed when I started Wrike," he said. "As work went digital, people relied on email and spreadsheets to manage everything, and neither could keep up."
The same dynamic, he argued, now applies to AI coding. Chat interfaces were designed for conversation, not for orchestrating complex engineering workflows. Whether Zencoder can establish itself as the essential layer between developers and AI models before the giants build their own solutions remains an open question.
But Filev seems comfortable with the race. The last time he spotted a gap between how people worked and the tools they had to work with, he built a company worth over a billion dollars.
Zenflow is available immediately as a free download at zencoder.ai/zenflow.
Zoom Video Communications, the company best known for keeping remote workers connected during the pandemic, announced last week that it had achieved the highest score ever recorded on one of artificial intelligence's most demanding tests — a claim that sent ripples of surprise, skepticism, and genuine curiosity through the technology industry.
The San Jose-based company said its AI system scored 48.1 percent on Humanity's Last Exam, a benchmark designed by subject-matter experts worldwide to stump even the most advanced AI models. That result edges out Google's Gemini 3 Pro, which held the previous record at 45.8 percent.
"Zoom has achieved a new state-of-the-art result on the challenging Humanity's Last Exam full-set benchmark, scoring 48.1%, which represents a substantial 2.3% improvement over the previous SOTA result," wrote Xuedong Huang, Zoom's chief technology officer, in a blog post.
The announcement raises a provocative question that has consumed AI watchers for days: How did a video conferencing company — one with no public history of training large language models — suddenly vault past Google, OpenAI, and Anthropic on a benchmark built to measure the frontiers of machine intelligence?
The answer reveals as much about where AI is headed as it does about Zoom's own technical ambitions. And depending on whom you ask, it's either an ingenious demonstration of practical engineering or a hollow claim that appropriates credit for others' work.
How Zoom built an AI traffic controller instead of training its own model
Zoom did not train its own large language model. Instead, the company developed what it calls a "federated AI approach" — a system that routes queries to multiple existing models from OpenAI, Google, and Anthropic, then uses proprietary software to select, combine, and refine their outputs.
At the heart of this system sits what Zoom calls its "Z-scorer," a mechanism that evaluates responses from different models and chooses the best one for any given task. The company pairs this with what it describes as an "explore-verify-federate strategy," an agentic workflow that balances exploratory reasoning with verification across multiple AI systems.
"Our federated approach combines Zoom's own small language models with advanced open-source and closed-source models," Huang wrote. The framework "orchestrates diverse models to generate, challenge, and refine reasoning through dialectical collaboration."
In simpler terms: Zoom built a sophisticated traffic controller for AI, not the AI itself.
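The traffic-controller pattern can be sketched in a few lines. Zoom has not published its Z-scorer, so the model callables and scoring hook below are placeholder assumptions; the sketch shows only the general fan-out-and-select idea of a federated router.

```python
# Illustrative only: fan a query out to several models, score each
# candidate answer, and return the best one. The models dict and the
# score function are supplied by the caller; nothing here reflects
# Zoom's actual internals.
from typing import Callable

def federate(query: str,
             models: dict[str, Callable[[str], str]],
             score: Callable[[str, str], float]) -> tuple[str, str]:
    """Return (model_name, answer) with the highest score for this query."""
    candidates = {name: ask(query) for name, ask in models.items()}
    best = max(candidates, key=lambda name: score(query, candidates[name]))
    return best, candidates[best]
```

A trivial usage: with two stub "models" and answer length as the scoring heuristic, the router picks whichever stub produced the longer response.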
This distinction matters enormously in an industry where bragging rights — and billions in valuation — often hinge on who can claim the most capable model. The major AI laboratories spend hundreds of millions of dollars training frontier systems on vast computing clusters. Zoom's achievement, by contrast, appears to rest on clever integration of those existing systems.
Why AI researchers are divided over what counts as real innovation
The response from the AI community was swift and sharply divided.
Max Rumpf, an AI engineer who says he has trained state-of-the-art language models, posted a pointed critique on social media. "Zoom strung together API calls to Gemini, GPT, Claude et al. and slightly improved on a benchmark that delivers no value for their customers," he wrote. "They then claim SOTA."
Rumpf did not dismiss the technical approach itself. Using multiple models for different tasks, he noted, is "actually quite smart and most applications should do this." He pointed to Sierra, an AI customer service company, as an example of this multi-model strategy executed effectively.
His objection was more specific: "They did not train the model, but obfuscate this fact in the tweet. The injustice of taking credit for the work of others sits deeply with people."
But other observers saw the achievement differently. Hongcheng Zhu, a developer, offered a more measured assessment: "To top an AI eval, you will most likely need model federation, like what Zoom did. An analogy is that every Kaggle competitor knows you have to ensemble models to win a contest."
The comparison to Kaggle — the competitive data science platform where combining multiple models is standard practice among winning teams — reframes Zoom's approach as industry best practice rather than sleight of hand. Academic research has long established that ensemble methods routinely outperform individual models.
Still, the debate exposed a fault line in how the industry understands progress. Ryan Pream, founder of Exoria AI, was dismissive: "Zoom are just creating a harness around another LLM and reporting that. It is just noise." Another commenter captured the sheer unexpectedness of the news: "That the video conferencing app ZOOM developed a SOTA model that achieved 48% HLE was not on my bingo card."
Perhaps the most pointed critique concerned priorities. Rumpf argued that Zoom could have directed its resources toward problems its customers actually face. "Retrieval over call transcripts is not 'solved' by SOTA LLMs," he wrote. "I figure Zoom's users would care about this much more than HLE."
The Microsoft veteran betting his reputation on a different kind of AI
If Zoom's benchmark result seemed to come from nowhere, its chief technology officer did not.
Xuedong Huang joined Zoom from Microsoft, where he spent decades building the company's AI capabilities. He founded Microsoft's speech technology group in 1993 and led teams that achieved what the company described as human parity in speech recognition, machine translation, natural language understanding, and computer vision.
Huang holds a Ph.D. in electrical engineering from the University of Edinburgh. He is an elected member of the National Academy of Engineering and the American Academy of Arts and Sciences, as well as a fellow of both the IEEE and the ACM. His credentials place him among the most accomplished AI executives in the industry.
His presence at Zoom signals that the company's AI ambitions are serious, even if its methods differ from the research laboratories that dominate headlines. In his tweet celebrating the benchmark result, Huang framed the achievement as validation of Zoom's strategy: "We have unlocked stronger capabilities in exploration, reasoning, and multi-model collaboration, surpassing the performance limits of any single model."
That final clause — "surpassing the performance limits of any single model" — may be the most significant. Huang is not claiming Zoom built a better model. He is claiming Zoom built a better system for using models.
Inside the test designed to stump the world's smartest machines
The benchmark at the center of this controversy, Humanity's Last Exam, was designed to be exceptionally difficult. Unlike earlier tests that AI systems learned to game through pattern matching, HLE presents problems that require genuine understanding, multi-step reasoning, and the synthesis of information across complex domains.
The exam draws on questions from experts around the world, spanning fields from advanced mathematics to philosophy to specialized scientific knowledge. A score of 48.1 percent might sound unimpressive to anyone accustomed to school grading curves, but in the context of HLE, it represents the current ceiling of machine performance.
"This benchmark was developed by subject-matter experts globally and has become a crucial metric for measuring AI's progress toward human-level performance on challenging intellectual tasks," Zoom’s announcement noted.
The company's improvement of 2.3 percentage points over Google's previous best may appear modest in isolation. But in competitive benchmarking, where gains often come in fractions of a percent, such a jump commands attention.
What Zoom's approach reveals about the future of enterprise AI
Zoom's approach carries implications that extend well beyond benchmark leaderboards. The company is signaling a vision for enterprise AI that differs fundamentally from the model-centric strategies pursued by OpenAI, Anthropic, and Google.
Rather than betting everything on building the single most capable model, Zoom is positioning itself as an orchestration layer — a company that can integrate the best capabilities from multiple providers and deliver them through products that businesses already use every day.
This strategy hedges against a critical uncertainty in the AI market: no one knows which model will be best next month, let alone next year. By building infrastructure that can swap between providers, Zoom avoids vendor lock-in while theoretically offering customers the best available AI for any given task.
The announcement of OpenAI's GPT-5.2 the following day underscored this dynamic. OpenAI's own communications named Zoom as a partner that had evaluated the new model's performance "across their AI workloads and saw measurable gains across the board." Zoom, in other words, is both a customer of the frontier labs and now a competitor on their benchmarks — using their own technology.
This arrangement may prove sustainable. The major model providers have every incentive to sell API access widely, even to companies that might aggregate their outputs. The more interesting question is whether Zoom's orchestration capabilities constitute genuine intellectual property or merely sophisticated prompt engineering that others could replicate.
The real test arrives when Zoom's 300 million users start asking questions
Zoom titled its announcement section on industry relations "A Collaborative Future," and Huang struck notes of gratitude throughout. "The future of AI is collaborative, not competitive," he wrote. "By combining the best innovations from across the industry with our own research breakthroughs, we create solutions that are greater than the sum of their parts."
This framing positions Zoom as a beneficent integrator, bringing together the industry's best work for the benefit of enterprise customers. Critics see something else: a company claiming the prestige of an AI laboratory without doing the foundational research that earns it.
The debate will likely be settled not by leaderboards but by products. When AI Companion 3.0 reaches Zoom's hundreds of millions of users in the coming months, they will render their own verdict — not on benchmarks they have never heard of, but on whether the meeting summary actually captured what mattered, whether the action items made sense, whether the AI saved them time or wasted it.
In the end, Zoom's most provocative claim may not be that it topped a benchmark. It may be the implicit argument that in the age of AI, the best model is not the one you build — it's the one you know how to use.
Presented by SAP
In an era where anyone can spin up an LLM, the real differentiator isn’t the AI technology itself, but the institutional knowledge it’s grounded in. Internal and partner consultants leading operational transformation can’t risk hallucinated guidance when their recommendations impact integrated processes across supply chain, manufacturing, finance, and other core functions.
"Grounded AI is non-negotiable, because accuracy isn’t optional when we’re doing million-dollar transformation projects within the SAP ecosystem, for example," says Natalie Han, VP and chief product officer, gen AI at SAP Business AI. "Retrieval-augmented generation technology, and the ability to anchor responses in trusted enterprise knowledge, helps ensure accurate code interpretation, best-practice guidance, and clean-core decision support. It's how we bring real trust into AI-powered consulting."
A fully grounded AI assistant like SAP Joule for Consultants delivers tremendous value in production use cases, she adds. Joule draws on terabytes of institutional data that is continuously curated and updated, so consultants relying on it can be confident they're getting up-to-the-minute SAP best practices and methodologies while accelerating project delivery.
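The retrieval-augmented pattern Han describes can be sketched generically. This is a minimal illustration of grounding under stated assumptions, not SAP's implementation; the retrieve and generate hooks are placeholders for a document index and an LLM call.

```python
# Generic retrieve-then-answer loop: the model is only allowed to answer
# from retrieved passages, and refuses when nothing relevant is found.
# retrieve() and generate() are hypothetical hooks, not SAP APIs.

def grounded_answer(question: str, retrieve, generate) -> str:
    """Answer only from retrieved passages; refuse when nothing matches."""
    passages = retrieve(question, k=3)
    if not passages:
        return "No grounded answer available."
    context = "\n\n".join(passages)
    prompt = (f"Answer using ONLY the context below.\n"
              f"Context:\n{context}\n\nQuestion: {question}")
    return generate(prompt)
```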
"We’re saving rework time by 14%, and saving consultants 1.5 hours per day per user, which is huge when you consider how expensive consultants are now," Han says. "Early adopters like Wipro have estimated they've saved 7 million hours on a manual basis for their consultants."
The foundation of SAP Joule
SAP Joule is as certified as any consultant, says Sachin Kaura, chief architect, SAP Business AI. The tool was born in 2023, when GPT-4 famously passed a simulated bar exam and ignited buzz around what LLMs could do with large amounts of context. The SAP ecosystem, with its associated domain ontology and taxonomy, is vast and can be complex to navigate. The question became: how could an AI co-pilot help navigate that complexity if it were grounded in the SAP ecosystem itself?
Kaura began experimenting with frontier LLMs by putting them through the same certification exams SAP consultants take. The early results were poor, but after extensive context tuning and a focus on delivering value to the partner ecosystem, Joule now consistently scores 95% or higher.
"Not only were we testing from a data perspective, but we were able to work with all of our consultants to get what we call the golden data set," Han added. "It’s non-deterministic, language-based, and thoroughly grounded in human consultant expertise. We partnered with the whole consulting organization to manually label the golden data set across all of the products. That’s become the foundation for everything we do even now."
A state-of-the-art indexing pipeline
Joule for Consultants stays up-to-date in real time. A state-of-the-art indexing pipeline pushes new SAP documentation and release content into the model as soon as it’s published, giving consultants confidence that every answer reflects the most current guidance.
"This is pure engineering work done by our data scientists and engineers, using a lot of underlying SAP technology," Kaura explains. "We leverage the SAP business foundation layer, document grounding services, and a lot of purpose-built systems to stay on top of current events in the system."
SAP Business AI also has board-level alignment, ensuring this isn’t just a one-team effort but a company-wide priority. They’ve built strong internal partnerships with content owners across SAP — including SAP Learning, SAP Community, SAP Help, product teams, and consultant teams. Together, they continuously update proprietary content such as SAP Notes, Knowledge Base Articles (KBAs), and other domain-specific guidance that reflects SAP’s evolving best practices.
All of this means Joule for Consultants can take that continuously refreshed data and deliver answers in near real time, the kind of research that would otherwise take a consultant hours. Information pulled directly from the source gives consultants the most current and authoritative guidance available, helping eliminate the early-stage missteps that can derail a project months later when scoping wasn't aligned with the latest capabilities.
Ensuring enterprise-grade security
SAP is building a product that is relevant, reliable, and responsible, Han says. As a company founded in Europe, it takes data privacy seriously, adhering to the GDPR and other EU regulations. At the core of SAP Business AI is the AI Foundation, an AI operating system that governs AI with built-in security, ethics, and orchestration, using automation and intelligence to manage lifecycles, optimize resources, and boost resilience.
All the LLMs SAP and its customers use operate within the AI foundation, which protects private and proprietary data from being leaked. Beyond data protection, SAP treats bias, ethics, and security at an enterprise level as well, with humans in the loop to run checks and balances.
"We have an enterprise-grade security framework as well as prompt injection and guardrail testing," Kaura says. "The orchestration layer, built within the AI Foundation, anonymizes inputs as well as moderates them to prevent malicious content. That ensures that the output we give to our customers is relevant to the SAP ecosystem, relevant to the domain they’re asking about, and not just generic LLM excess. This set of tools, from the framework layer to the application layer to the product standards, and also the very thorough testing is critical to securing our product. Then and only then can it reach our customers and partners."
Pushing the limits of Joule for Consultants
"We’re barely scratching the surface of what LLMs and agentic AI can offer," Han says. "Accessing knowledge is just the beginning. We’re going to have a much deeper understanding of customers’ SAP systems and be able to help them implement and transform their journey. The product team and our engineers are working to make the tool more transformative, able to unearth more insights, connect with customers’ systems, and understand and optimize their processes, including generating code and handling customer code migration."
The next step is adding a second layer of grounding. SAP’s customer base is vast, and its partner ecosystem has implemented countless business scenarios. Grounding Joule in SAP’s institutional knowledge was the first milestone; the next is layering in each customer’s own proprietary context — historical system data, process designs, implementation blueprints, and internal documentation. This turns Joule from SAP-aware to customer-aware, delivering guidance that aligns with how a business actually operates.
“Think of it as grounding your knowledge on top of SAP knowledge — giving you more accurate and relevant guidance,” Kaura says. “Information that might otherwise be lost can sit on top of Joule for Consultants. Our system processes it and ensures it comes to you in the right manner and at the right time.”
This expanded grounding also lets Joule adjust its guidance to the consultant’s role — whether they’re working as an architect, a functional consultant, or a technical consultant.
"We deliver the information they need for a particular customer configuration," Han explains. "Then we can not only answer generic questions, but we can answer their particular configuration. From there it’s one step ahead to generating more insights and taking more actions."
Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact sales@venturebeat.com.
We've heard (and written, here at VentureBeat) lots about the generative AI race between the U.S. and China, as those have been the countries with the groups most active in fielding new models (with a shoutout to Cohere in Canada and Mistral in France).
But now a Korean startup is making waves: last week, Motif Technologies released Motif-2-12.7B-Reasoning, a small-parameter open-weight model with impressive benchmark scores. According to independent benchmarking lab Artificial Analysis, it quickly became the most performant model from that country, beating even regular GPT-5.1 from U.S. leader OpenAI.
But more importantly for enterprise AI teams, the company has published a white paper on arxiv.org with a concrete, reproducible training recipe that exposes where reasoning performance actually comes from — and where common internal LLM efforts tend to fail.
For organizations building or fine-tuning their own models behind the firewall, the paper offers a set of practical lessons about data alignment, long-context infrastructure, and reinforcement learning stability that are directly applicable to enterprise environments. Here they are:
1. Reasoning gains come from data distribution, not model size
One of Motif’s most relevant findings for enterprise teams is that synthetic reasoning data only helps when its structure matches the target model’s reasoning style.
The paper shows measurable differences in downstream coding performance depending on which “teacher” model generated the reasoning traces used during supervised fine-tuning.
For enterprises, this undermines a common shortcut: generating large volumes of synthetic chain-of-thought data from a frontier model and assuming it will transfer cleanly. Motif’s results suggest that misaligned reasoning traces can actively hurt performance, even if they look high quality.
The takeaway is operational, not academic: teams should validate that their synthetic data reflects the format, verbosity, and step granularity they want at inference time. Internal evaluation loops matter more than copying external datasets.
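One way to make that validation operational is a simple acceptance filter over candidate traces before they enter the fine-tuning set. The thresholds and the "Step N:" line convention below are assumptions for illustration, not values from Motif's paper.

```python
# Hypothetical trace filter: accept synthetic reasoning traces only when
# their step count and per-step verbosity fall inside the band you want
# the target model to produce at inference time.

def trace_ok(trace: str,
             min_steps: int = 2,
             max_steps: int = 12,
             max_words_per_step: int = 60) -> bool:
    """Accept traces whose step count and verbosity match the target format."""
    steps = [s for s in trace.split("\n") if s.strip().startswith("Step")]
    if not (min_steps <= len(steps) <= max_steps):
        return False
    return all(len(s.split()) <= max_words_per_step for s in steps)
```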
2. Long-context training is an infrastructure problem first
Motif trains at 64K context, but the paper makes clear that this is not simply a tokenizer or checkpointing tweak.
The model relies on hybrid parallelism, careful sharding strategies, and aggressive activation checkpointing to make long-context training feasible on Nvidia H100-class hardware.
For enterprise builders, the message is sobering but useful: long-context capability cannot be bolted on late.
If retrieval-heavy or agentic workflows are core to the business use case, context length has to be designed into the training stack from the start. Otherwise, teams risk expensive retraining cycles or unstable fine-tunes.
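A back-of-envelope calculation shows why. The numbers below are illustrative rules of thumb, not Motif's actual configuration, but they make clear that naive 64K-context activation memory dwarfs a single H100's 80 GiB, and only sharding plus checkpointing bring it back into range:

```python
# Back-of-envelope sketch (illustrative numbers, not Motif's setup):
# estimate per-GPU activation memory for long-context training, and how
# tensor-parallel sharding plus activation checkpointing change it.

def activation_gib(seq_len, hidden, layers, batch=1,
                   bytes_per_val=2, tp=1, checkpointing=False):
    """Rough activation footprint in GiB.

    Assumes ~16 activation values per layer per token position (a common
    transformer rule of thumb); tensor parallelism of degree `tp` splits
    activations across GPUs; full activation checkpointing keeps only one
    layer's activations live at a time, recomputing the rest on backward.
    """
    per_layer = 16 * batch * seq_len * hidden * bytes_per_val
    live_layers = 1 if checkpointing else layers
    return per_layer * live_layers / tp / 2**30

cfg = dict(seq_len=64 * 1024, hidden=4096, layers=40)

naive = activation_gib(**cfg)                       # one GPU, no tricks
sharded = activation_gib(**cfg, tp=8)               # 8-way tensor parallel
ckpt = activation_gib(**cfg, tp=8, checkpointing=True)

print(f"naive: {naive:.0f} GiB, tp=8: {sharded:.0f} GiB, "
      f"tp=8 + checkpointing: {ckpt:.1f} GiB")
```

With these toy numbers, the naive footprint is 320 GiB per GPU, far beyond any single accelerator, which is why context length has to shape the parallelism strategy from day one.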
3. RL fine-tuning fails without data filtering and reuse
Motif’s reinforcement learning fine-tuning (RLFT) pipeline emphasizes difficulty-aware filtering — keeping tasks whose pass rates fall within a defined band — rather than indiscriminately scaling reward training.
This directly addresses a pain point many enterprise teams encounter when experimenting with RL: performance regressions, mode collapse, or brittle gains that vanish outside benchmarks. Motif also reuses trajectories across policies and expands clipping ranges, trading theoretical purity for training stability.
The enterprise lesson is clear: RL is a systems problem, not just a reward model problem. Without careful filtering, reuse, and multi-task balancing, RL can destabilize models that are otherwise production-ready.
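The difficulty-aware filtering idea is simple to sketch. The band (0.2 to 0.8) and toy tasks below are illustrative, not Motif's values: sample each candidate task several times with the current policy and keep only tasks whose empirical pass rate falls inside the band.

```python
# Sketch of difficulty-aware task filtering: drop tasks the policy always
# solves (no learning signal) or never solves (pure noise), keeping the
# informative middle band. Band and tasks are illustrative.
import random

def pass_rate(solve_fn, task, n_samples=16):
    """Empirical pass rate of the current policy on one task."""
    return sum(solve_fn(task) for _ in range(n_samples)) / n_samples

def filter_tasks(tasks, solve_fn, low=0.2, high=0.8, n_samples=16):
    """Keep (task, rate) pairs whose pass rate lies in [low, high]."""
    kept = []
    for task in tasks:
        rate = pass_rate(solve_fn, task, n_samples)
        if low <= rate <= high:
            kept.append((task, rate))
    return kept

# Toy stand-in policy: solves each task with the probability baked into
# the task definition itself.
random.seed(0)
tasks = {"trivial": 1.0, "easy": 0.7, "hard": 0.4, "unsolved": 0.0}
solve = lambda name: random.random() < tasks[name]

for task, rate in filter_tasks(tasks, solve, n_samples=200):
    print(f"kept {task!r} with pass rate {rate:.2f}")
```

In this run, "trivial" and "unsolved" are filtered out while "easy" and "hard" survive, which is exactly the curriculum shape the reward training needs.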
4. Memory optimization determines what is even possible
Motif’s use of kernel-level optimizations to reduce RL memory pressure highlights an often-overlooked constraint in enterprise settings: memory, not compute, is frequently the bottleneck. Techniques like loss-function-level optimization determine whether advanced training stages are viable at all.
For organizations running shared clusters or regulated environments, this reinforces the need for low-level engineering investment, not just model architecture experimentation.
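One core trick behind loss-level memory savings can be shown in miniature. This is an illustrative sketch, not Motif's kernels: a chunked log-sum-exp that never materializes the full exponentiated score vector, which is the idea underlying memory-efficient cross-entropy over large vocabularies.

```python
# Illustrative sketch (not Motif's implementation): chunked log-sum-exp,
# the core of memory-efficient loss computation. Rather than exponentiate
# the whole vocabulary at once, stream over chunks and fold each into a
# running (max, sum) pair.
import math

def chunked_logsumexp(scores, chunk_size=4):
    """Numerically stable log(sum(exp(scores))), computed chunk by chunk,
    so peak memory scales with chunk_size instead of len(scores)."""
    running_max = float("-inf")
    running_sum = 0.0
    for i in range(0, len(scores), chunk_size):
        chunk = scores[i:i + chunk_size]
        m = max(chunk)
        if m > running_max:
            # Rescale the accumulated sum to the new maximum.
            running_sum *= math.exp(running_max - m)
            running_max = m
        running_sum += sum(math.exp(s - running_max) for s in chunk)
    return running_max + math.log(running_sum)

def cross_entropy(scores, target_idx, chunk_size=4):
    """-log p(target) without ever holding exp(scores) in full."""
    return chunked_logsumexp(scores, chunk_size) - scores[target_idx]

scores = [0.1 * i for i in range(12)]       # toy "vocabulary" logits
print(cross_entropy(scores, target_idx=11, chunk_size=4))
```

Production kernels fuse this streaming pattern into the GPU loss computation; the arithmetic, though, is exactly this.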
Why this matters for enterprise AI teams
Motif-2-12.7B-Reasoning is positioned as competitive with much larger models, but its real value lies in the transparency of how those results were achieved. The paper argues — implicitly but persuasively — that reasoning performance is earned through disciplined training design, not model scale alone.
For enterprises building proprietary LLMs, the lesson is pragmatic: invest early in data alignment, infrastructure, and training stability, or risk spending millions fine-tuning models that never reliably reason in production.
Presented by Capital One Software
Tokenization is emerging as a cornerstone of modern data security, helping businesses separate the value of their data from its risk. During this VB in Conversation, Ravi Raghu, president, Capital One Software, talks about the ways tokenization can help reduce the value of breached data and preserve underlying data format and usability, including Capital One’s own experience leveraging tokenization at scale.
Tokenization, Raghu asserts, is a far superior technology to traditional encryption. It converts sensitive data into a nonsensitive digital replacement, called a token, that maps back to the original, which is secured in a digital vault. The token placeholder preserves both the format and the utility of the sensitive data, and can be used across applications — including AI models. Because tokenization removes the need to manage encryption keys or dedicate compute to constant encrypting and decrypting, it offers one of the most scalable ways for companies to protect their most sensitive data, he added.
"The killer part, from a security standpoint, when you think about it relative to other methods, if a bad actor gets hold of the data, they get hold of tokens," he explained. "The actual data is not sitting with the token, unlike other methods like encryption, where the actual data sits there, just waiting for someone to get hold of a key or use brute force to get to the real data. From every angle this is the ideal way one ought to go about protecting sensitive data."
The tokenization differentiator
Most organizations are just scratching the surface of data security, adding security at the very end, when data is read, to prevent an end user from accessing it. At minimum, organizations should focus on securing data on write, as it’s being stored. But best-in-class organizations go even further, protecting data at birth, the moment it’s created.
At one end of the safety spectrum is a simple lock-and-key approach that restricts access but leaves the underlying data intact. More advanced methods, like masking or modifying data, permanently alter its meaning — which can compromise its usefulness. File-level encryption provides broader protection for large volumes of stored data, but when you get down to field-level encryption (for example, a Social Security number), it becomes a bigger challenge. It takes a great deal of compute to encrypt a single field, and then to decrypt it at the point of usage. And still it has a fatal flaw: the original data is still right there, only needing the key to get access.
Tokenization avoids these pitfalls by replacing the original data with a surrogate that has no intrinsic value. If the token is intercepted — whether by the wrong person or the wrong machine — the data itself remains secure.
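In miniature, vault-based tokenization looks like the sketch below. This is an illustrative toy, not Capital One's implementation: the sensitive value is swapped for a random surrogate with the same format, and only the vault can map the token back.

```python
# Minimal sketch of vault-based tokenization (illustrative only): replace
# a sensitive value with a random, format-preserving surrogate, keeping
# the real value solely in a vault.
import secrets

VAULT = {}  # token -> original value; in practice a hardened service

def tokenize_ssn(ssn):
    """Swap a Social Security number for a same-format random token."""
    token = "-".join(
        "".join(secrets.choice("0123456789") for _ in range(len(part)))
        for part in ssn.split("-")
    )
    VAULT[token] = ssn
    return token

def detokenize(token):
    """Only callers with vault access can recover the original."""
    return VAULT[token]

token = tokenize_ssn("123-45-6789")
print("token:", token)          # looks like an SSN, but is worthless
print("recovered:", detokenize(token))
```

Because the token carries no mathematical relationship to the original, an intercepted token gives an attacker nothing to brute-force, which is the contrast with field-level encryption described above.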
The business value of tokenization
"Fundamentally you’re protecting data, and that’s priceless," Raghu said. "Another thing that’s priceless – can you use that for modeling purposes subsequently? On the one hand, it’s a protection thing, and on the other hand it’s a business enabling thing."
Because tokenization preserves the structure and ordinality of the original data, it can still be used for modeling and analytics, turning protection into a business enabler. Take private health data governed by HIPAA, for example: tokenization means that data can be used to build pricing models or for gene therapy research, while remaining compliant.
"If your data is already protected, you can then proliferate the usage of data across the entire enterprise and have everybody creating more and more value out of the data," Raghu said. "Conversely, if you don’t have that, there’s a lot of reticence for enterprises today to have more people access it, or have more and more AI agents access their data. Ironically, they’re limiting the blast radius of innovation. The tokenization impact is massive, and there are many metrics you could use to measure that – operational impact, revenue impact, and obviously the peace of mind from a security standpoint."
Breaking down adoption barriers
Until now, the fundamental challenge with traditional tokenization has been performance. AI requires a scale and speed that is unprecedented. That's one of the major challenges Capital One addresses with Databolt, its vaultless tokenization solution, which can produce up to 4 million tokens per second.
"Capital One has gone through tokenization for more than a decade. We started doing it because we’re serving our 100 million banking customers. We want to protect that sensitive data," Raghu said. "We’ve eaten our own dog food with our internal tokenization capability, over 100 billion times a month. We’ve taken that know-how and that capability, scale, and speed, and innovated so that the world can leverage it, so that it’s a commercial offering."
Vaultless tokenization is an advanced form of tokenization that does not require a central database (vault) to store token mappings. Instead, it uses mathematical algorithms, cryptographic techniques, and deterministic mapping to generate tokens dynamically. This approach is faster, more scalable, and eliminates the security risk associated with managing a vault.
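The deterministic-mapping idea can be sketched with a keyed hash. To be clear, this is a stand-in for the format-preserving cryptography a production system like Databolt would use, and unlike real vaultless tokenization a one-way hash cannot be detokenized; the key name is hypothetical. It does show the key property: the same input always yields the same token, with no vault to store or query.

```python
# Sketch of the vaultless idea (illustrative, not Databolt's algorithm):
# derive the token deterministically from the value with a keyed hash, so
# no mapping database is needed and equal inputs yield equal tokens.
import hmac
import hashlib

SECRET_KEY = b"rotate-me-in-a-real-system"  # hypothetical key material

def vaultless_token(value, digits=9):
    """Deterministic numeric token from a keyed hash of the input.
    Same value -> same token, so joins and analytics still work, but
    recovering the input from the token requires the key plus brute
    force. (Production systems use reversible format-preserving
    encryption instead of a one-way hash.)"""
    digest = hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).digest()
    number = int.from_bytes(digest, "big") % 10**digits
    return str(number).zfill(digits)

print(vaultless_token("123-45-6789"))
print(vaultless_token("123-45-6789") == vaultless_token("123-45-6789"))
```

Determinism is what keeps tokenized columns joinable across systems, and removing the vault removes both a latency hop and a high-value target.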
"We realized that for the scale and speed demands that we had, we needed to build out that capability ourselves," Raghu said. "We’ve been iterating continuously on making sure that it can scale up to hundreds of billions of operations a month. All of our innovation has been around building IP and capability to do that thing at a battle-tested scale within our enterprise, for the purpose of serving our customers."
While conventional tokenization methods can involve some complexity and slow down operations, Databolt seamlessly integrates with encrypted data warehouses, allowing businesses to maintain robust security without slowing performance or operations. Tokenization occurs in the customer’s environment, removing the need to communicate with an external network to perform tokenization operations, which can also slow performance.
"We believe that fundamentally, tokenization should be easy to adopt," Raghu said. "You should be able to secure your data very quickly and operate at the speed and scale and cost needs that organizations have. I think that’s been a critical barrier so far for the mass scale adoption of tokenization. In an AI world, that’s going to become a huge enabler."
Don't miss the whole conversation with Ravi Raghu, president, Capital One Software, here.
Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. For more information, contact sales@venturebeat.com.
Enterprises that want tokenizer-free multilingual models are increasingly turning to byte-level language models to reduce brittleness in noisy or low-resource text. To tap into that niche and make it practical at scale, the Allen Institute for AI (Ai2) introduced Bolmo, a new family of models that builds on its Olmo 3 models by “byteifying” them, reusing their backbone and capabilities.
The company launched two versions, Bolmo 1B and Bolmo 7B; Ai2 calls Bolmo “the first fully open byte-level language model.” The company said the two models performed competitively with, and in some cases surpassed, other byte-level and character-based models.
Byte-level language models operate directly on raw UTF-8 bytes, eliminating the need for a predefined vocabulary or tokenizer. This allows them to handle misspellings, rare languages, and unconventional text more reliably — key requirements for moderation, edge deployments, and multilingual applications.
For enterprises deploying AI across multiple languages, noisy user inputs, or constrained environments, tokenizer-free models offer a way to reduce operational complexity. Ai2’s Bolmo is an attempt to make that approach practical at scale — without retraining from scratch.
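The representation itself is the whole trick, and it fits in a few lines. The sketch below (illustrative, not Ai2's code) shows why there is no out-of-vocabulary case: every string, including typos and non-Latin scripts, maps onto the same fixed set of 256 byte values.

```python
# Sketch of the byte-level input representation: raw UTF-8 bytes serve as
# model input IDs, so the "vocabulary" is always exactly 0-255 and no
# string is ever out of vocabulary.

def to_byte_ids(text):
    """UTF-8 bytes of `text` as input IDs in the range 0-255."""
    return list(text.encode("utf-8"))

for s in ["hello", "helllo", "안녕하세요"]:  # typo and Korean both fine
    ids = to_byte_ids(s)
    print(s, "->", len(ids), "byte ids, max id", max(ids))
```

The trade-off is sequence length: non-Latin scripts expand to several bytes per character (three per Hangul syllable here), which is why architectures like Bolmo add local encoders and dynamic compression on top of the raw bytes.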
How Bolmo works and how it was built
Ai2 said it trained the Bolmo models using its Dolma 3 data mix, which helped train its Olmo flagship models, and some open code datasets and character-level data.
The company said its goal “is to provide a reproducible, inspectable blueprint for byteifying strong subword language models in a way the community can adopt and extend.” To meet this goal, Ai2 will release its checkpoints, code, and a full paper to help other organizations build byte-level models on top of its Olmo ecosystem.
Since training a byte-level model completely from scratch can get expensive, Ai2 researchers instead chose an existing Olmo 3 7B checkpoint to byteify in two stages.
In the first stage, Ai2 froze the Olmo 3 transformer and trained only certain parts, such as the local encoder and decoder, the boundary predictor, and the language modeling head. This stage was designed to be “cheap and fast” and required just 9.8 billion tokens.
The next stage unfreezes the model and trains it with additional tokens. Ai2 said the byte-level approach allows Bolmo to avoid the vocabulary bottlenecks that limit traditional subword models.
Strong performance among its peers
Byte-level language models are not as mainstream as small language models or LLMs, but this is a growing field in research. Meta released its BLT architecture research last year, aiming to offer a model that is robust, processes raw data, and doesn’t rely on fixed vocabularies.
Other research models in this space include ByT5, Stanford’s MrT5, and Canine.
Ai2 evaluated Bolmo using its evaluation suite, covering math, STEM reasoning, question answering, general knowledge, and code.
Bolmo 7B showed strong performance, scoring well on character-focused benchmarks like CUTE and EXECUTE while also improving accuracy over the base Olmo 3 model.
Bolmo 7B outperformed models of comparable size in coding, math, multiple-choice QA, and character-level understanding.
Why enterprises may choose byte-level models
Many enterprises already find value in hybrid model stacks, mixing models and model sizes to match each workload.
Ai2 makes the case that organizations should also consider byte-level models, not only for robustness and multilingual understanding, but because the approach “naturally plugs into an existing model ecosystem.”
“A key advantage of the dynamic hierarchical setup is that compression becomes a toggleable knob,” the company said.
For enterprises already running heterogeneous model stacks, Bolmo suggests that byte-level models may no longer be purely academic. By retrofitting a strong subword model rather than training from scratch, Ai2 is signaling a lower-risk path for organizations that want robustness without abandoning existing infrastructure.