
Op-ed: “If China Doesn’t Regulate, Why Should We?” A Lazy Excuse for Inaction


Written by Yvette Hastings


You’ve probably heard it before, almost like background noise at this point: “What’s the point of regulating AI if China isn’t doing it?”


On the surface, it sounds like a reasonable concern. Nobody wants to fall behind in the tech race. But dig a little deeper and it starts to unravel. It’s a bit like saying, “Why bother with a fire alarm if the neighbour’s house is already on fire?” It misses the point entirely.


Skyline of the city of Shanghai, China

First of all, China is regulating AI, just not in the way some seem to assume. The focus isn’t primarily on ethics, fairness or protecting citizens’ rights; it’s on security: control, stability and managing the flow of information. It’s a digital infrastructure built with national security and governance in mind, rather than transparency or public trust. Think less “data dignity” and more “data discipline.” So if you’re pointing to China’s approach as a reason not to pursue thoughtful regulation, you might be staring at the wrong map.


More importantly, we’re no longer at the starting line. Pandora’s box is well and truly open. AI is out in the world, but no one’s entirely sure what actually came out. Power? Opportunity? Bias? Surveillance? A cocktail of all of the above? Something as yet unidentified? The uncertainty is real, and ignoring it won’t make it disappear. If anything, it’s a reason to step up, not step back.


AI’s unpredictability means regulation isn’t about curbing progress; it’s about setting a direction. When systems reinforce social biases, serve misinformation in personalised wrappers, or help oppressive regimes refine their tactics, the problem isn’t just faulty code. It’s a lack of rules, foresight and collective responsibility.


Some countries and corporations are deploying AI as fast as possible, chasing influence and market share. It’s a bit of a digital gold rush: fast, competitive, and with few safeguards. But if history has taught us anything, it’s that unregulated booms often end in busts. And when you’re dealing with technology that can reshape democracies, markets and everyday life, the stakes are simply too high to leave it all to chance.


That said, regulation should not be a solo effort. AI does not recognise national borders, and neither should our attempts to govern it. What we are likely to see, and in some cases already do, is the groundwork for international cooperation: think nuclear or climate treaties rather than one-size-fits-all laws. The geopolitics of AI is unfolding, and serious discussions are already happening about shared standards, risk thresholds and responsible deployment.


So rather than shrug and say, “Well, if they’re not doing it, why should we?”, a better question might be: “What kind of world do we want to help shape?” One that races ahead with no brakes, or one that builds smart, resilient guardrails along the way?


Let’s be clear. No one is suggesting we attempt to lock AI in a cupboard and then throw away the key. But we do need to treat it with the caution it demands. This is not about stifling innovation. It is about giving innovation the space to thrive without trampling over rights, ethics, global stability or our mental and physical health.


Because without regulation, we’re just passengers in a racecar we didn’t design, with no idea where it’s heading. And frankly, a seatbelt sounds pretty good right about now.