By Reven Singh, Sales Engineer, InterSystems South Africa
Most South African organisations we engage with say they are exploring artificial intelligence (AI), but far fewer can explain how their existing data architecture will sustain it.
Yet in many environments, the core data estate remains fragmented, duplicated across systems, inconsistently governed and slow to reconcile. That is where most AI initiatives quietly encounter difficulty, and in some cases simply fail. AI does not fail because the model is insufficiently sophisticated. It fails because the data beneath it is incomplete, delayed, contradictory or poorly contextualised. When intelligent systems rely on untrusted foundations, their outputs become unreliable. In regulated or high-stakes sectors such as finance and healthcare, that is not a minor inconvenience; it is a material risk.
If organisations are serious about AI, they need to go back to basics.
From fragmentation to usable context
Layered systems, data exports, and batch processing are often criticised, but in reality they are a necessary consequence of how organisations have evolved, and they are not going away. Core systems will continue to operate independently, legacy environments will remain in place, and data will still move across platforms in different formats and at different speeds. The challenge, therefore, is not to eliminate this complexity, but to make it usable.
What has changed is how newer AI capabilities interact with data. Generative and agentic systems are able to work across different types of data, not just structured records but also unstructured inputs such as documents, images, and text. That creates a different architectural opportunity. Instead of continuing to optimise for fragmented pipelines, organisations should be thinking about consolidating their data onto a platform that supports both structured and unstructured data and accommodates both batch and real-time processing.
This is less about simplifying the architecture and more about creating flexibility in how data can be accessed and used.
The hidden cost of fragmentation
The significance of this shift lies in what can be described as context. AI systems are only as effective as the information they have access to at the point a decision is made. That means having the right data, whether structured or unstructured, available at the right time, which in turn requires both speed and the ability to process events as they occur, not just after the fact. If data is incomplete, delayed, or inconsistent across systems, outputs become harder to trust regardless of how advanced the model may be.
This is where the idea of context engineering becomes useful, although in practice it is less about introducing something new and more about getting the fundamentals right. It requires that data from different sources can be brought together without constant rework, that identities are resolved consistently across systems, and that both batch and real-time processing can coexist without creating further fragmentation. The objective is not to design a perfect architecture, but to ensure that context can be assembled when it is needed, so that decisions can be made with confidence based on what is known at that moment.
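To make that concrete, the sketch below illustrates one simplified way context might be assembled at decision time: records arriving from a batch extract and a real-time feed are matched on a shared identifier and merged into a single view of a customer. The source names, fields and matching rule are hypothetical, standing in for the far richer identity-resolution and merge logic a production platform would apply.

```python
# Illustrative sketch only: sources, fields and the matching rule are assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any

@dataclass
class CustomerContext:
    """Everything known about one customer at the moment a decision is made."""
    customer_id: str
    attributes: dict[str, Any] = field(default_factory=dict)
    sources: list[str] = field(default_factory=list)
    assembled_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def resolve_identity(record: dict[str, Any]) -> str:
    """Map source-specific identifiers onto one canonical key.

    Real identity resolution is far more involved (fuzzy matching, survivorship
    rules); a shared national ID stands in for that process here.
    """
    return record["national_id"].strip().upper()

def assemble_context(records_by_source: dict[str, list[dict[str, Any]]],
                     customer_key: str) -> CustomerContext:
    """Merge the latest view of one customer from every contributing source."""
    ctx = CustomerContext(customer_id=customer_key)
    for source, records in records_by_source.items():
        for record in records:
            if resolve_identity(record) != customer_key:
                continue
            ctx.attributes.update({k: v for k, v in record.items() if k != "national_id"})
            ctx.sources.append(source)
    return ctx

# A batch extract and a real-time event describing the same person.
feeds = {
    "core_banking_batch": [{"national_id": "za-001", "balance": 15200.50}],
    "realtime_events":    [{"national_id": "ZA-001", "last_login": "2024-05-01T08:30:00Z"}],
}
print(assemble_context(feeds, "ZA-001"))
```

The point of the sketch is not the code itself but the shape of the requirement: batch and real-time inputs meeting at a single, consistently resolved identity, so that whatever is known can be assembled on demand.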
Trust as an architectural property
As AI systems begin to influence operational decisions, trust becomes central. While trust is often discussed in ethical or regulatory terms, it is first and foremost an engineering question. Can the system explain how it arrived at a recommendation? Is there an audit trail? Are data sources known and validated? Are roles and permissions enforced consistently?
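Framed as an engineering question, those requirements translate into checks that can sit directly in the decision path. The sketch below is a minimal illustration rather than a prescribed design: the roles, permissions, source list and placeholder "model call" are all assumptions, but it shows how permission enforcement, source validation and an audit record can be produced at the moment a recommendation is made instead of being reconstructed afterwards.

```python
# Illustrative sketch only: roles, permissions and record fields are assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone

APPROVED_SOURCES = {"core_banking", "crm"}                      # validated data sources
PERMISSIONS = {"analyst": {"read"}, "underwriter": {"read", "decide"}}

@dataclass
class AuditRecord:
    """One entry in the trail that explains how a recommendation was produced."""
    actor: str
    action: str
    inputs: list[str]
    outcome: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def recommend(actor: str, role: str, sources_used: list[str],
              trail: list[AuditRecord]) -> str:
    # Enforce roles and permissions consistently, not per application.
    if "decide" not in PERMISSIONS.get(role, set()):
        raise PermissionError(f"{actor} ({role}) may not trigger decisions")
    # Reject inputs from sources that have not been validated.
    unknown = set(sources_used) - APPROVED_SOURCES
    if unknown:
        raise ValueError(f"unvalidated data sources: {unknown}")
    outcome = "approve"                                         # placeholder for the model call
    trail.append(AuditRecord(actor, "loan_recommendation", sources_used, outcome))
    return outcome

trail: list[AuditRecord] = []
print(recommend("thandi", "underwriter", ["core_banking", "crm"], trail))
print(trail[0])
```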
In environments with strict data protection requirements and hybrid estates, the stakes are higher. Systems must maintain integrity across cloud and on-premises infrastructure. They must support secure access while preserving performance. They must function reliably even where connectivity is constrained.
Trust cannot be added after deployment. It must be designed into the way data is stored, processed and orchestrated from the outset.
A quieter race
The broader conversation around AI has become preoccupied with model comparisons and benchmark scores, as though performance differentials alone determine value. Far less attention is given to whether the underlying systems are coherent, resilient and well-governed enough to carry those capabilities into production.

Organisations that align transactional systems, analytics and AI around a unified data foundation will be able to scale intelligent capabilities steadily. Those that continue to layer AI onto fragmented estates will remain in pilot mode, revisiting the same integration and data quality challenges with each new initiative.
The difference will not always be obvious at first. Both groups may produce impressive demonstrations. The distinction emerges over time, in reliability, in governance, and in the ability to move from isolated projects to enterprise-wide adoption. Ask yourself: will your data architecture sustain intelligent systems as part of everyday operations?
If AI is to become embedded in core operations rather than remain confined to experimentation, the effort must start with system design, data integrity and architectural discipline. Algorithms on their own are insufficient; it is the structure around them that determines whether they can be trusted, governed and sustained over time.

































