Winning the AI Race Won’t Mean Winning the Future

Introduction

The global AI race is often compared to the Space Race of the 20th century. Countries are pushing hard to build the most innovative and capable AI. Headlines use phrases like “AI race” or “race to dominance.” It sounds exciting. But it’s also misleading. AI isn’t a moon to be conquered. It’s not a prize to be won. It’s a tool that will reshape everything, from healthcare to national defense, education, and labor markets. And rushing to “win” may cost us more than we imagine.

Today, over 70 countries have adopted more than 930 national AI policy initiatives. Governments are racing. But what are they racing toward? The idea of an “AI race” implies a finish line. A point at which one country pulls ahead and achieves dominance, taking all the rewards. But AI isn’t a sprint. It’s a marathon with no fixed ending. And the risks of treating it like a sprint are real.

According to the Council on Foreign Relations, AI is now one of the most disruptive forces in geopolitics and geoeconomics. The technology already influences defense strategies, supply chains, energy systems, and political stability. A 2023 Brookings article reports that 49% of U.S. adults believe AI’s risks outweigh its benefits, while 52% say they feel “more concerned than excited” about the rise of AI. Even so, racing ahead without caution might do more harm than good. History shows that high-speed tech races often lead to oversight gaps. In AI, that could mean biased algorithms, unsafe automation, or misuse in surveillance and warfare. Rushed megaprojects offer a cautionary precedent: Flyvbjerg et al. analyzed 210 transportation infrastructure projects and found systematic forecast errors exceeding 20%, with rail projects overestimating demand by an average of 106% and road projects off by ±20%.

So, here’s the real question: What if the true victory in AI isn’t about finishing first but finishing right?

While different nations have introduced AI guidelines, these efforts remain fragmented, often voluntary, and rarely backed by enforcement. The race mentality persists, not due to a lack of regulation but due to a lack of alignment, coordination, and shared purpose.

What if the real danger isn’t that we’re doing nothing but that we’re moving too fast without moving together? Even well-intentioned laws and frameworks can’t fix what fragmented ambition breaks. The risks of biased automation, exploitative surveillance, and global inequality aren’t caused only by inaction but by uncoordinated action. And this is the challenge we must now confront.

The Problem with the “Race” Mentality

The moment AI becomes a race, safety takes a back seat. Nations aren’t just developing AI. They’re competing to weaponize it, regulate it last, and dominate first. This nationalistic drive creates a world where collaboration is seen as weakness and secrecy as strategy.

Geopolitical tensions are already rising. Countries see AI as a path to global influence. The U.S., China, and the EU each have parallel plans to dominate AI. But without coordination, that ambition turns into fragmentation. According to the Council on Foreign Relations, AI is now a primary factor in military planning and even trade negotiations, and its trajectory will define not only economic strength but geopolitical clout. Yet even as countries move to regulate AI, these efforts remain disconnected. There is no shared timeline, no unified support mechanism, and no collective standard. Nations are responding to the same challenge, but in isolation. The result is a patchwork of ethics codes and pilot programs that cannot address the global scale of AI risks.

Most countries still favor deployment over deliberation. Of more than 930 global AI policy initiatives, fewer than 20% include any mandatory oversight or real-time risk controls. Even when pilot programs emerge, such as Spain’s regulatory sandbox, they often remain isolated and non-binding. This fragmentation is the deeper issue: while the language of ethics has spread, the infrastructure of accountability has not.

Surveillance misuse represents a growing concern. In the rush to utilize AI for national security, many governments are turning to facial recognition and predictive analytics. For instance, a study of 1,636 AI export deals found that 250 came from China, making it the top exporter of facial recognition technology. The U.S. followed with 215 deals. Much of this technology lands in autocracies or vulnerable democracies. When AI is treated as a defense asset, ethics become optional. And inequality keeps growing. The AI race is not a level playing field.

Countries with less computing power, data infrastructure, or research funding are left behind. Many rely on foreign models they can’t thoroughly inspect or modify. According to the Carnegie Endowment, nearly 60% of the world’s top AI researchers are based in the U.S., a sign of brain drain and of limited local capacity in the Global South.

If AI development continues in isolation, we won’t build shared prosperity; we’ll build silos. And the AI race won’t unite us.

The Alternative: A Cooperative Model

Cooperation is emerging, but it’s not yet the default. Early frameworks show it is possible to slow the pace, but they remain the exception, not the norm, in a world still dominated by competitive ambition. Promising, but still fragile. What if the answer isn’t to win the AI race but to change the game entirely?

Instead of pushing to be first, countries can choose to move forward together. Collaboration can lead to safer, more inclusive, and more human-centered AI. And it’s not just a dream: it’s already happening.

In 2019, the OECD introduced the world’s first intergovernmental framework for trustworthy AI. These OECD AI Principles promote five values: human rights, fairness, transparency, safety, and accountability.

Today, they are endorsed by more than 46 countries, including the EU, the U.S., Japan, and several non-OECD members. Since then, the movement has grown fast. By mid‑2024, the OECD’s AI policy database had expanded to over 1,000 policy initiatives across approximately 70 countries. This growth shows something important. We may be moving from national competition to global cooperation.

Countries are also developing new approaches to govern AI collectively.

France has established a National Consultative Committee on Digital Ethics and AI, bringing together diverse voices. Yet the committee holds no legislative power; its influence depends on voluntary uptake by regulators and industry.

Canada established an Advisory Council on AI to inform policy, but its recommendations remain non-binding and are not always reflected in legislation.

Japan developed ethical guidelines for developers and users of AI systems, but these remain voluntary, and there’s no centralized body to monitor them. These aren’t isolated efforts. They are signs of a shared mindset.

Beyond government, we’re also seeing joint initiatives at the global level. The Global Partnership on AI comprises 29 member countries and supports research, policy development, and the safe deployment of AI, all under the auspices of an OECD-hosted secretariat. But coordination across jurisdictions remains limited, no unified support structure exists, and much of the partnership’s work focuses on research and recommendations rather than binding commitments.

And this new model doesn’t slow innovation; it can accelerate it. When countries share data, computing power, and best practices, they remove roadblocks to progress. They allow smaller nations and communities to participate. That’s not just fair. It’s efficient. The future of AI doesn’t have to be a race. It can be a relay, where we pass knowledge forward together.

Without institutional convergence and global alignment, cooperation remains a hopeful experiment rather than an established norm.

Redefining What “Winning” Means

So, if we can’t stop the race entirely, perhaps we can change the rules. What if the real competition were about who can build the most inclusive and accountable systems, not just the fastest? This change doesn’t mean giving up on innovation or leadership. It means redefining what success looks like so that progress is measured by value to society, not just velocity.

For decades, “winning” in tech meant building faster, scaling bigger, and always staying ahead. But AI is different. Bigger isn’t always better. And faster can be dangerous. What if winning meant something else? Today, many nations are beginning to rethink success in AI, not just as a race to build faster models but as a mission to ensure systems are inclusive, fair, and trustworthy. But these changes are not yet the norm; they are early signals, not outcomes. If we want inclusion to define the future, we must ensure that these efforts become systemic standards.

What if the goal was inclusion, not just innovation? In 2025, Japan updated its AI guidelines to prioritize human dignity and fairness over market share. Singapore’s 2024 AI governance frameworks now rank systems based on how well they protect human rights, not just how well they perform. These are signs of intent, not yet widespread leadership. Until these priorities are mainstreamed and supported, inclusion will remain a principle rather than a practice.

We may see a future where AI success is measured in trust. Trust in AI has declined in many places, dropping from 50% in 2019 to just 35% in 2025, according to the Edelman Trust Barometer. Winning back trust means slowing down, asking harder questions, and building with transparency.

However, success may also mean sustainability. Germany’s “AI lighthouses for the environment, climate, nature, and resources” initiative, a model for broader European efforts, funds AI projects fighting energy inefficiency and biodiversity loss. By 2024, the German Federal Environment Ministry had allocated significant funding to these programs. Canada funds AI with social missions like health equity through its “Pan‑Canadian AI for Health” principles, which emphasize AI in public systems, with a significant focus on underserved populations. Even industry is rethinking priorities. Open-source platforms like Hugging Face are building decentralized AI infrastructure to give smaller players access to compute and models. Their vision? A more democratic AI future, not one ruled by just a few firms.

We may soon live in a world where the best AI isn’t the most powerful. It’s the most useful. The fairest. The most trusted. And maybe that’s what real “winning” looks like.

This isn’t about slowing down AI; it’s about steering it toward outcomes that matter. Winning the future means making sure the future works for everyone.

From Soft Laws to Smart Regulation

So far, regulation has been largely reactive rather than proactive. But recent steps suggest this may finally be changing. Still, gaps remain, and true regulatory maturity is a work in progress.

For a long time, AI development was driven by good intentions. Guidelines were published. Principles were signed. But nothing stopped bad actors. There were no real consequences, just promises. That’s changing. We may soon see the rise of real-time, risk-aware AI regulation. UNESCO’s “Recommendation on the Ethics of AI” was adopted by all 193 member states in November 2021, creating a global standard for AI ethics. This marks a shift: ethics are moving out of white papers and into law books.

These developments represent necessary steps, but they are often experimental, uneven, or even aspirational. While regulation is gaining momentum, it remains dependent on political will, national capacity, and market pressure.

More governments are regulating by design. Rather than reacting to problems after the fact, countries like Brazil, South Korea, and Mexico are experimenting with public procurement as a tool for ethical AI governance: if a system isn’t transparent, it doesn’t get bought. That’s smart governance. For example, Brazil’s 2024 AI law and South Korea’s 2025 AI Basic Act introduce risk-based frameworks; however, both remain in their early stages and lack substantial penalties for noncompliance. Mexico’s e-procurement system enhances transparency and lays the groundwork for AI-specific governance, but it is not yet tailored to AI risks or misuse, and its impact remains mainly procedural.

These steps hint at what future governance could look like, but they are not yet scalable or coordinated across borders. Without that, they risk becoming isolated experiments rather than systemic safeguards.

We may also see regulations built into contracts and codes. In January 2025, India released a proposed AI governance framework; it is promising but still under public consultation, and like many global efforts, it lacks binding timelines. In 2020, Amsterdam and Helsinki launched public AI registers that disclose the algorithms each city uses, but participation is voluntary, and most cities worldwide still lack comparable tools. These registers are among roughly 83 city- and national-level algorithm repositories mapped in 2024 by the OECD and GPAI; they allow public feedback and help build trust in municipal AI. It’s like food labeling for algorithms.

We may see more city-level AI laws where trust is built from the ground up. The big picture? Regulation doesn’t have to be about saying “no.” It can be about setting the right conditions to say “yes” safely and responsibly. We’re entering a new phase.

Not the age of rules for rules’ sake but of regulation that adapts, anticipates, and protects. And that’s the kind of law AI needs.

In short, the direction is promising, but the pace, reach, and coordination are lacking. The real risk is that we mistake progress for completion. Without collective intent and binding governance, the AI race will continue, not just in labs and boardrooms, but in regulation that is perpetually outpaced by innovation.

Principles Over Profits and Concluding Remarks

Across the world, there’s growing interest in making AI not just more capable but more meaningful. That means creating systems that serve people, not just markets.

We may see new ways to measure the progress of AI. Instead of tracking how fast or how large a model is, we might ask how fair it is. Or how well it protects privacy. Or how clearly it explains its decisions. These are the metrics that matter to real users.

Companies may be judged more by how responsibly they act than by how much they disrupt. Responsible design, ethical leadership, and transparency could soon become the qualities investors and customers value most. In a crowded AI field, trust might become the most significant differentiator.

And people are watching. Users are becoming more thoughtful. They’re asking questions. They want AI that respects them and doesn’t exploit their data. That’s not slowing innovation. It’s steering it.

This race won’t end at a single finish line. However, if we redefine what it means to lead by prioritizing trust and transparency, we may still change the outcome. It’s not about slowing down. It’s about making the pace sustainable and the destination worthwhile.

Miroslav Milovanovic

I hold a PhD in Computer Science and Electrical Engineering and currently serve as an associate professor at the Faculty of Electronic Engineering. My academic background and professional experience have provided me with expertise in Data Science and Intelligent Control, which I actively share with students through teaching and mentorship. As the Chief of the Laboratory for Intelligent Control, I lead research in modern industrial process automation and autonomous driving systems.

My primary research interests focus on the application of artificial intelligence in real-world environments, particularly the integration of AI with industrial processes and the use of reinforcement learning for autonomous systems. I also have practical experience working with large language models (LLMs), applying them in various research, educational, and content generation tasks. Additionally, I maintain a growing interest in AI security and governance, especially as these areas become increasingly critical for the safe and ethical deployment of intelligent systems.