What if the most important AI release of 2025 isn’t an app dazzling consumers, but an agent quietly reshaping systems beneath the surface? Manus AI, launched by Chinese startup Butterfly Effect in March 2025, is that kind of release. It doesn’t aim for headlines or social media dominance. Instead, it positions itself as an infrastructural agent, sovereign and systemic.
This is why Manus matters: it marks a transformation in AI, from tools that attract attention to agents that embed silently, reconfiguring how systems work at scale. While companies like OpenAI and Anthropic focus on user-facing models and media buzz, Manus signals a different play: making AI a core part of logistics, governance, and state-backed operations.
I argue here that Manus AI is not just another product. It’s an early warning signal of how the next wave of AI competition will be fought: not over consumer apps, but over embedded systems.
Let’s start with the basics, not just the technical specs but the strategic meaning behind them.
Manus AI is an autonomous agent, not a chatbot or narrow assistant. It integrates multiple high-performing models, including Anthropic’s Claude 3.5 Sonnet and customized versions of Alibaba’s open-source Qwen. Still, its strength lies not just in combining models but in how it orchestrates them.
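Butterfly Effect has not published how Manus routes work between these models, but the basic pattern, dispatching each subtask to whichever backend is best suited to it, is easy to sketch. Everything below (the backend names, the strength tags, the `complete` stub) is a hypothetical stand-in rather than a real API:

```python
from dataclasses import dataclass

@dataclass
class ModelBackend:
    name: str
    strengths: set  # task categories this backend is assumed to handle well

    def complete(self, prompt: str) -> str:
        # Stand-in for a call to a hosted model endpoint (e.g., a Claude- or
        # Qwen-class model behind an API); returns a canned string here.
        return f"[{self.name}] response to: {prompt}"

BACKENDS = [
    ModelBackend("reasoning-model", {"planning", "analysis"}),
    ModelBackend("code-model", {"codegen", "tool-use"}),
]

def route(category: str, prompt: str) -> str:
    """Dispatch a subtask to the first backend that claims its category."""
    for backend in BACKENDS:
        if category in backend.strengths:
            return backend.complete(prompt)
    return BACKENDS[0].complete(prompt)  # default fallback

print(route("planning", "Decompose this shipping negotiation into steps."))
```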
The system’s multi-agent architecture is designed to break down complex, cross-domain tasks into subtasks that can be executed concurrently and asynchronously in the cloud. While traditional AI systems need detailed, step-by-step human prompts, Manus is built to plan, adapt, and self-correct across layers of decision-making. In one reported demonstration, the system autonomously negotiated shipping contracts in five languages, closing deals faster than competitors and handling more legal and logistical variations in real time. According to internal benchmarks on GAIA (the General AI Assistants benchmark), Manus scored 86.5% on basic tasks, 70.1% on intermediate tasks, and 57.7% on complex tasks, significantly outpacing OpenAI’s reported scores of 74.3%, 69.1%, and 47.6%, respectively. But again, these numbers only tell part of the story. The more important point is why this architecture matters.
Manus isn’t optimized for consumer engagement or enterprise app integrations. It’s optimized for workflows, the connective tissue of operations. Its asynchronous design allows it to operate without continuous human oversight, coordinating subtasks across domains like logistics, municipal planning, financial auditing, and even surveillance. This makes Manus a prototype of what I would call “embedded autonomy”: not an AI you call on occasionally, but one that lives inside your systems, executing processes, making micro-decisions, and integrating outcomes continuously.
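To make this pattern concrete, here is a minimal sketch of the planner/executor loop described above, written with Python’s asyncio. The task names, the simulated failure, and the retry policy are all hypothetical; this illustrates the architectural idea, not Manus’s actual implementation:

```python
import asyncio

async def execute_subtask(name: str, attempt: int) -> str:
    """Stand-in for a worker agent running one subtask in the cloud."""
    await asyncio.sleep(0.1)  # simulate I/O-bound work (API calls, lookups)
    if name == "verify-compliance" and attempt == 1:
        raise RuntimeError("transient failure")  # force one self-correction
    return f"{name}: done (attempt {attempt})"

async def run_with_retry(name: str, max_attempts: int = 3) -> str:
    """Self-correction loop: retry a failed subtask instead of asking a human."""
    for attempt in range(1, max_attempts + 1):
        try:
            return await execute_subtask(name, attempt)
        except RuntimeError:
            continue
    return f"{name}: escalated to a human operator"

async def main() -> None:
    # The "planner" decomposes one high-level goal into concurrent subtasks.
    plan = ["draft-contract", "check-logistics", "verify-compliance"]
    results = await asyncio.gather(*(run_with_retry(task) for task in plan))
    print("\n".join(results))

asyncio.run(main())
```

The key design property is visible in `asyncio.gather`: subtasks proceed without a human in the loop, and failures are absorbed by the agent’s own retry logic rather than surfaced as prompts for the user.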
Where Western companies prioritize modularity (AI as a tool you plug in and control), Manus prioritizes self-directed orchestration. That’s a profound architectural choice, because it shifts power away from the user’s direct control and toward the system’s internal logic. It’s not just offering you predictions or answers; it’s running parts of your operations. And this is what makes Manus strategically different: it’s not designed to serve as a flexible tool at the edge; it’s designed to become part of the core.
To understand Manus AI’s significance, we need to step back and ask: What kind of AI strategy does it embody? The answer, I argue, is this: Manus AI isn’t just a tool; it’s a tactic within China’s larger game of infrastructural entrenchment.
Where much of the Western AI race centers on modular applications (chatbots, search assistants, image generators, productivity plugins), the Manus approach is fundamentally different. It’s not trying to own the attention layer. It’s trying to own the operational layer. This is why we see Manus aligned so closely with China’s Digital Silk Road initiative. While most headlines in the West focus on AI model performance or API competition, China is embedding its AI systems, like Manus, directly into the digital backbones of emerging markets.
While China pushes forward with cohesive industrial policy and streamlined deployment, Western developers are constrained by fractured regulatory landscapes. For example, the EU AI Act (effective August 2024) introduces a risk-based framework, but full enforcement won’t begin until August 2026. The United States, meanwhile, faces a patchwork of over 550 state-level AI bills introduced across 45 states just in 2024. This fragmented approach slows down infrastructural AI deployments. China, by contrast, faces fewer internal regulatory delays and supports coordinated industrial policy to push AI like Manus directly into foreign systems. This is not about chasing market share for revenue’s sake. It’s about embedding standards and shaping dependencies.
If we think back to the 20th century, the most consequential technologies were not the most visible consumer products. They were the underlying infrastructures: the oil pipelines, the telecommunication cables, the satellite networks. Manus AI suggests that the 21st-century equivalent may well be autonomous software agents, systems designed not for attention, but for control.
We need to be precise here: no one is claiming that Manus AI has already redrawn the global map of AI power. What I am saying is that Manus serves as an early signal, a clear indicator of where China’s AI strategy is heading and what kinds of systems may define the next phase of global AI competition. Let’s unpack that.
First, consider where Manus has already shown up: it’s been prominently featured on China’s state broadcaster CCTV, a sign that the Chinese government sees it not just as a commercial product, but as a strategic asset worth publicizing. This suggests an alignment between state-level goals and private-sector innovation, a dynamic often absent in the more fragmented Western AI ecosystem.
Manus points to the emergence of a blueprint for embedded autonomy. This blueprint doesn’t depend on dominating headlines or social media cycles. It depends on embedding quietly, accumulating operational dependencies, and gradually positioning itself as indispensable. In other words, it’s not trying to be the loudest player in the room; it’s trying to be the one you can’t remove from the building. Why does this matter? Because most Western narratives about AI competition still focus on consumer-facing battles: Which chatbot is the most engaging? Which model produces the best images? Which API generates the most revenue? But the Manus playbook suggests we should be looking elsewhere…
The real competition may not be over user attention, but over control layers: the invisible layers that manage supply chains, coordinate infrastructure, and execute cross-border decisions. In this framing, Manus is not just a tool doing tasks; it’s a signal that AI’s future lies in quiet, systemic integration. We are at a turning point where AI no longer just augments human decision-making at the edges but begins to anchor itself deep within operational systems.
And it’s worth emphasizing: this analysis is still speculative. Manus is young; it was only launched in March 2025. But the signals it emits (its strategic positioning, its deployment patterns, its government amplification) all point toward a future where the most powerful AI agents are not the ones we interact with directly, but the ones embedded invisibly into the systems we depend on.
If Manus AI signals a transformation toward embedded autonomy, we must ask: what are the risks of such a transformation?
First, let’s address the most immediate challenge: opacity. Manus operates as a “black box” system. Its multi-agent architecture, while effective, makes its decision path difficult to trace. For the municipalities, logistics hubs, or enterprises relying on it, this creates a serious problem: they can see what the system outputs but often can’t explain how it got there. This undermines user autonomy. When decision-making systems are unreadable, users have limited capacity to challenge, audit, or adjust the AI’s behavior. Over time, this can shift control away from human operators and into the hands of automated logic loops that only a handful of engineers truly understand.
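One concrete mitigation is a decision ledger: every agent action is logged as a structured record of which agent acted, on what inputs, and with what rationale, so operators can reconstruct a decision path after the fact. The toy sketch below, with invented field names and example values, shows the idea:

```python
import json
import time

LEDGER: list = []

def record(agent: str, action: str, inputs: dict, rationale: str) -> None:
    """Append an auditable record of one agent decision."""
    LEDGER.append({
        "ts": time.time(),
        "agent": agent,
        "action": action,
        "inputs": inputs,
        "rationale": rationale,
    })

# Example: a negotiation agent logs why it accepted a carrier's quote.
record("negotiator", "accept-quote",
       {"carrier": "X", "price_usd": 1200},
       "lowest quote meeting the delivery deadline")

# An auditor can later replay the chain of decisions behind an outcome.
print(json.dumps(LEDGER, indent=2))
```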
Second, we face the risk of soft alignment and value capture. Even without explicit bias or manipulation, embedded AI systems like Manus carry implicit assumptions. Research on value alignment in autonomous systems suggests that AI agents can steer user behavior, shaping decisions in ways that align with the system’s programmed preferences rather than the user’s goals or cultural context. In an international setting, where Manus is deployed across regions with different political, social, and ethical norms, this raises uncomfortable questions: whose values get embedded in the system? Are local actors unknowingly adopting foreign operational frameworks through their reliance on Manus?
Third, there’s the structural risk of technocratic drift. As autonomous agents take over more complex tasks (logistics optimization, municipal planning, financial analysis), there’s a growing temptation to replace public deliberation with technical decision-making. While efficiency improves, the space for democratic debate shrinks. Decisions that were once the subject of political negotiation become automated outputs from embedded systems. This isn’t an argument against AI. It’s an argument for resilient governance frameworks that evolve alongside these new capabilities. Existing AI governance debates often focus on LLMs or generative systems, but Manus-style agents introduce a new category of risk: governing invisible, embedded AI layers that shape critical systems from the inside.
To meet all these challenges, we need not just better explainability tools, but clear accountability mechanisms, transparent deployment protocols, and multi-stakeholder engagement in how such agents are designed, installed, and overseen.
If there’s one message to take from Manus AI’s emergence, it’s this:
We are entering a phase where the most consequential AI systems will not be the ones we see, interact with, or talk about publicly. They will be the ones that quietly become infrastructure, the ones that operate beneath the surface, reshaping how systems function without ever needing public attention.
Manus AI is neither flashy nor viral. It is not designed to win headlines or app store downloads. It is designed to embed, to integrate into layers of logistics, municipal governance, cross-border trade, and operational control systems. It marks a strategic shift in AI development: from models to agents, from consumer apps to systemic automation. And here’s the provocation I leave for Western AI firms, policymakers, and strategic planners:
Are you sure that you are watching the right battlefield?
While OpenAI, Anthropic, Google, and others race for breakthroughs in generative models, product integrations, and monetization strategies, what’s embedding itself beneath their feet? What invisible systems are taking hold in the operational layers (the ports, the grids, the financial exchanges) that determine how economies and states actually run? We may be too focused on the spectacle of AI to see the silent shift in agency. Manus AI challenges us to reconsider what matters most in the global AI race: attention or entrenchment?
If models were the first wave of AI innovation, agents like Manus represent the second, and they are arriving with sovereign backing, systemic ambitions, and a blueprint for reshaping how AI integrates into modern life. The challenge now is not just technical. It is strategic: will we recognize the silent power of infrastructural AI agents in time to shape their governance, balance their influence, and decide what values get embedded into the systems we will all soon depend on?
Disclaimer: The views and opinions expressed in this article are solely those of the writer and do not reflect the views or positions of the OneStart team or its affiliates.
I hold a PhD in Computer Science and Electrical Engineering and currently serve as an associate professor at the Faculty of Electronic Engineering. My academic background and professional experience have provided me with expertise in Data Science and Intelligent Control, which I actively share with students through teaching and mentorship. As the Chief of the Laboratory for Intelligent Control, I lead research in modern industrial process automation and autonomous driving systems.
My primary research interests focus on the application of artificial intelligence in real-world environments, particularly the integration of AI with industrial processes and the use of reinforcement learning for autonomous systems. I also have practical experience working with large language models (LLMs), applying them in various research, educational, and content generation tasks. Additionally, I maintain a growing interest in AI security and governance, especially as these areas become increasingly critical for the safe and ethical deployment of intelligent systems.