Why AI Governance Matters: AI Is Not Traditional Software
The corporate world is labouring under what could prove to be a costly error. For thirty years, the rhythm of technology was predictable: scope the requirements, write the code, test for bugs, and release. When things broke, you patched the code. When the workload grew, you hired another engineer. This deterministic cycle is the bedrock of modern business IT.

But the tools we are deploying today, such as generative pre-trained transformers and autonomous agents, are not software in any traditional sense. They are probabilistic engines characterised by emergent behaviours that neither their creators nor their users can fully anticipate. Treating Artificial Intelligence as "software plus data" is a category error of immense proportions: unlike most software, we know remarkably little about why these models do what they do. We do not simply deploy such tools; we introduce autonomous decision-makers into the corporate ecosystem, where much can go wrong if they are left unchecked.
How Traditional Software Works
Traditional software operates like clockwork. Every function and every output can be traced directly back to explicit, human-defined logic in the source code. If the output is incorrect or unexpected, the fault lies in an identifiable flaw: a bug in the IF-THEN statements or the pre-programmed algorithms.
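To make that determinism concrete, here is a minimal sketch using a hypothetical loan-approval rule (the function name and thresholds are invented for illustration, not drawn from any real policy):

```python
def approve_loan(credit_score: int, annual_income: float) -> bool:
    """Deterministic rule: identical inputs always yield identical
    outputs, and every branch traces back to a line a human wrote."""
    if credit_score >= 700 and annual_income >= 50_000:
        return True
    return False

# Same input, same answer, on every run and every machine.
assert approve_loan(720, 60_000) is True
assert approve_loan(650, 60_000) is False
```

When this code misbehaves, you can step through it line by line and point at the exact branch that went wrong. That guarantee is precisely what disappears in the systems described next.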
How AI Is Fundamentally Different
Artificial Intelligence operates on an entirely different plane from the familiar logic of software engineering. Modern AI models, particularly deep neural networks, function as complex black boxes; the field dedicated to interpreting them is nascent, if rapidly developing.
Within these architectures, the explicit IF-THEN logic of traditional code is replaced by a vast, difficult-to-interpret, high-dimensional mathematical landscape. Even a simple input triggers computations across thousands or millions of artificial neurons, each loosely analogous to a biological neuron in that it performs one small function within a much larger whole.
For a single input, hundreds of activations can fire in the first layer alone, and the process repeats across many interconnected layers. The output we see from an LLM or a generative vision model is not the result of a single, linear decision path, but the statistical culmination of patterns learned from massive datasets.
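A toy forward pass makes this tangible. The sketch below uses random, untrained weights and layer sizes chosen purely for illustration; real models are millions of times larger, but the structural point is the same:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy fully connected network. The weights are random placeholders,
# not a trained model; production networks have millions of them.
W1 = rng.normal(size=(4, 16))   # input -> 16 first-layer neurons
W2 = rng.normal(size=(16, 2))   # first layer -> 2 outputs

x = np.array([0.5, -1.2, 0.3, 0.9])   # one simple 4-dimensional input

h = np.maximum(0, x @ W1)   # ReLU: a spread of activations fires at once
y = h @ W2                  # the output blends every active neuron's signal

# There is no single IF-THEN branch to point at: y is a weighted sum
# over every path through the network.
print(h)   # 16 intermediate activations for a single input
print(y)
```

Even in this miniature, answering "why did the output come out this way?" means reasoning about every weight at once, not reading one branch of code.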
This is why understanding how an AI reaches a specific conclusion is exceptionally challenging, and why AI's potential safety concerns must be addressed early, at every level of deployment.
In this shift, we lose the ability to predict how a system will react to a novel prompt. Unlike a software bug, which remains static until fixed, AI risk is emergent, evolving long after deployment. More worrying behaviours may surface if AI systems are scaled without care, including strategic or goal-directed behaviour in which models veer off-script in deciding what their goals should be.
From Bias to Catastrophe: The Spectrum of AI Risk
AI poses numerous novel risks. They can be organised by frequency, severity, or scale; for practical purposes, one might also consider the sophistication of the model and the computation it leverages.
High-Frequency Risks
At the immediate level, most organisations face high-frequency risks that erode trust. These can be measured through benchmarks and evaluations that focus on accuracy, bias, sycophancy, and similar failure modes.
Some of these risks bear a familiar face: regulators have long sought to reduce or penalise erroneous data processing. Algorithmic bias in hiring or lending creates immense reputational and legal liability, and precedents exist in Singapore's PDPA and the EU's GDPR, both of which mandate accountability for data processing, especially where data is poorly or wrongly handled. Companies must endeavour to reduce such risks, but it can be difficult to know where to start and how to enforce such standards at scale.
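One practical starting point is to measure disparities directly. The sketch below computes a simple demographic-parity gap over hypothetical hiring-model decisions; the data, group labels, and helper function are invented for illustration, and this is only one of many possible fairness metrics:

```python
from collections import defaultdict

def selection_rates(decisions):
    """Per-group rate of positive outcomes.
    `decisions` is a list of (group, approved) pairs."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

# Illustrative hiring-model outputs, not real data.
outcomes = [("A", True), ("A", True), ("A", False),
            ("B", True), ("B", False), ("B", False)]

rates = selection_rates(outcomes)
gap = max(rates.values()) - min(rates.values())
print(rates, f"parity gap: {gap:.2f}")
# A large gap flags the model for review; the acceptable threshold
# is a policy decision, not a technical one.
```

The arithmetic is trivial; the governance work lies in deciding which groups to compare, how often to run the check, and what gap triggers intervention.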
Existential-Scale Risks
As we move toward Artificial General Intelligence (AGI), the risk profile shifts from the reputational and financial to the existential.
Acclaimed expert Yoshua Bengio has highlighted the acute danger of losing control of frontier AI systems, positing that sophisticated autonomous systems could bypass human oversight. AI safety researcher Dan Hendrycks suggests that in a competitive ecosystem, AI systems may undergo a form of natural selection, prioritising their own proxy goals over human values as those goals drift away from their creators' intent.
Recent research from Apollo Research into scheming suggests that advanced models might learn to hide their true objectives during testing, meaning we may not even be able to detect such shifts. Meanwhile, METR's (Model Evaluation and Threat Research) work on autonomous capability evaluations demonstrates that models are becoming increasingly adept at self-correction, bringing us closer to a threshold where the off-switch becomes a theoretical construct rather than a practical safeguard.
Why Current Standards Must Evolve
The vast and often opaque nature of AI renders existing legal and compliance structures inadequate, especially as each new model release defies previously established benchmarks of competence.
Product liability laws assume predictable manufacturing defects. Data privacy laws like the GDPR were designed for static databases, not fluid models that memorise and regurgitate sensitive information. The law cannot always keep pace with progress, and it cannot be relied upon alone, least of all for preventive measures.
While ISO 27001 and its information-security controls are excellent for data integrity, such standards cannot detect whether a model's goal-directed behaviour is drifting. The NIST AI Risk Management Framework (RMF) 1.0 acts as a critical gap-filler with its Govern, Map, Measure, Manage cycle.
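In practice, the RMF's four functions can anchor something as simple as a per-system risk register. The sketch below is a hypothetical entry; the system name, field layout, and controls are assumptions about what such a register might hold, not a schema NIST prescribes:

```python
# Hypothetical risk-register entry organised around the NIST AI RMF's
# four functions. The contents are illustrative, not prescribed.
SYSTEM = "resume-screening-model-v2"

rmf_entry = {
    "govern":  ["named accountable owner",
                "escalation path to the risk committee"],
    "map":     ["decisions affect applicants in protected groups",
                "training data drawn from historical hires"],
    "measure": ["demographic-parity gap checked each release",
                "accuracy and sycophancy benchmark scores tracked"],
    "manage":  ["human review required above a risk threshold",
                "rollback plan if behavioural drift is detected"],
}

for function, controls in rmf_entry.items():
    print(f"{SYSTEM} | {function}: {controls}")
```

Even a register this crude forces the questions ISO 27001 never asks: who owns the model's behaviour, and what happens when it drifts?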
In Singapore, this is mirrored by the Model AI Governance Framework (MAIGF) and MAS's principles-based approach. Singapore has also introduced a guided workflow for technical testing of explainability, robustness, and fairness through its developer-oriented AI Verify toolkit, making it a clear leader in AI governance at a time when many countries opt for self-regulation or industry guidance.
The Necessity of Workplace Best Practices
For most of the world, waiting for regulators is a losing strategy. Law moves at the speed of bureaucracy; AI moves at the speed of compute.
At present, corporate AI governance is the Wild West, with organisations picking and choosing whichever standards and regulations suit their needs. No single service lets an organisation grasp the full landscape at a glance and ensure not just compliance but safety. Doing so also means having trained personnel, or access to advisors, who can handle these problems.
Internal best practices must start in the workplace, including:
1. Access to robust testing environments for AI systems
2. Clear human-in-the-loop protocols for high-stakes decisions (see the sketch after this list)
3. An insistent demand for explainability in AI outputs
4. Dedicated AI governance functions, not subsets of IT security
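To illustrate the second practice, here is a minimal sketch of a human-in-the-loop gate. The function names, threshold, and case fields are all hypothetical; the point is that automation is the exception, reserved for decisions that are demonstrably low-stakes:

```python
def model(case):
    # Stand-in for a real model call; returns a proposal and a confidence.
    return {"action": "deny", "confidence": 0.62}

def human_review(case, proposal):
    # Stand-in for a review queue; a person makes the final call here.
    return "escalated-to-human"

def decide(case, risk_threshold=0.7):
    """Human-in-the-loop gate: the model proposes, but high-stakes or
    low-confidence cases are routed to a person."""
    proposal = model(case)
    if case.get("high_stakes") or proposal["confidence"] < risk_threshold:
        return human_review(case, proposal)
    return proposal["action"]   # only low-risk cases stay automated

print(decide({"id": 42, "high_stakes": True}))   # -> escalated-to-human
```

The design choice worth noting is that the gate defaults to escalation: a decision must prove it is low-stakes and high-confidence before the human is removed from the loop.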
AI governance should be treated as a dedicated, sophisticated response to an unprecedented challenge, not lumped in with cybersecurity as just another IT security concern.
Why the Window for Action Is Closing
The debt being accumulated today is governance debt. Every unchecked model integrated into a workflow calcifies into a liability that will be excruciatingly difficult to unwind later.
We must act today because the window to establish foundational governance is closing fast. The organisations that thrive in the AI era will not simply be those that adopt the fastest, but those that adopt the safest. The age of AI demands the highest standard of compliance and safety, and nothing less.