
Elffar Analytics Blog

by Joel Acha

The Shift from Augmented to Agentic Analytics

19/4/2026

Introduction

Analytics has never stood still.

Over time, different modes of analytics have emerged to meet different business needs and different levels of user maturity. First came governed analytics, where trusted reporting and centrally managed definitions were the priority. Then came self-service analytics, where the focus shifted towards wider access, exploration and user-driven insight discovery.

More recently, analytics entered another phase: augmented analytics. This was the point at which AI and machine learning began to assist with activities such as pattern detection, insight surfacing, explanation and natural language generation and processing. In many ways, augmented analytics marked the beginning of AI becoming part of the analytics experience.

I think we are now seeing the next progression of that idea: agentic analytics.

For me, agentic analytics is not separate from augmented analytics but can be seen more as a natural evolution. If augmented analytics introduced AI assistance into the analytics workflow, agentic analytics extends that further by applying AI agents across more of the data-to-insight process, helping to orchestrate steps, interpret intent, retrieve context, explain findings and increasingly recommend or support actions.

This is not simply self-service analytics with a chatbot bolted on. Nor is it a replacement for the strands that came before it. Governed analytics still matters. Self-service analytics still matters. Augmented analytics still matters. But AI assistants and agents are beginning to create a further mode of interaction with analytics, one that is more conversational, context-aware and increasingly action-oriented.
Governed Analytics

Governed analytics was built around trust, consistency and control.

This was the world of centrally managed reporting, curated dashboards, semantic models, governed metrics and carefully controlled access to data. The emphasis was on ensuring that when people asked for revenue, margin, headcount or customer numbers, they were looking at the same definitions.

That model solved an important problem. It created confidence in enterprise reporting and gave organisations a stable analytical foundation.
Self-Service Analytics

Self-service analytics expanded the audience for data.

Instead of relying entirely on centrally built reports, business users were increasingly able to explore data for themselves, build their own dashboards and answer their own questions more directly.

This changed analytics significantly. It increased agility, broadened access to insight and reduced some of the bottlenecks associated with traditional BI delivery models.

At the same time, it introduced its own tensions around governance, consistency and control. That is why governed analytics did not disappear when self-service analytics arrived. The two strands have continued to coexist.
Augmented Analytics

Augmented analytics marked the point at which AI and machine learning began to play a more active role in the analytics experience.

Instead of simply presenting reports or dashboards, analytics platforms began to assist with activities such as pattern detection, insight surfacing, explanation, natural language interaction and automated visual suggestions.

This was an important shift because it moved analytics beyond static consumption and self-service exploration into a more assisted mode of working. The system was no longer just something users queried directly. It was beginning to participate more actively in helping users find and interpret insight.

In that sense, augmented analytics became the bridge between traditional analytics models and the more agentic capabilities now starting to emerge.
Why a New Strand Is Emerging

AI is now changing the way users interact with analytics once again.

Users are no longer limited to navigating dashboards, building visualisations or writing queries. Increasingly, they can ask questions conversationally, request explanations, explore follow-up questions and receive suggested next steps.

In some cases, AI can also retrieve relevant knowledge, explain business context, simulate scenarios, recommend actions or even interact with downstream tools and workflows.

That is a different interaction model from either governed analytics or self-service analytics.

It is not simply about giving users more charts. It is about allowing analytics systems to behave more like intelligent collaborators.

What Agentic Analytics Is

For me, analytics becomes agentic when the platform can do more than simply return a report, render a chart or answer a natural language query.

Agentic analytics begins to emerge when the system can:
  • interpret business intent rather than just keywords
  • use governed definitions and business context
  • retrieve supporting knowledge at runtime
  • explain results conversationally
  • chain together multiple analytical steps
  • recommend next actions or scenarios to consider
  • interact with downstream tools, systems or workflows

This is where analytics begins to move from static consumption and even self-service exploration towards a more active, guided and potentially action-oriented experience.

"Agentic analytics represents the evolution of augmented analytics by applying AI agents... for data analysis. It involves software used for the process of data analysis that applies AI agents across the data-to-insight workflow, orchestrating tasks semi-autonomously or autonomously toward stated goals that support, augment, or automate insights." - Gartner
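As a purely illustrative sketch, the capabilities listed above can be expressed as a minimal flow. None of these function names correspond to a real product API; the point is only the shape: interpret intent, resolve a governed definition, retrieve context, then propose a next step.

```python
# Illustrative agentic-analytics flow. All names here are hypothetical,
# not a real platform API.

GOVERNED_METRICS = {"revenue": "SUM(order_total)"}  # governed definition store


def interpret_intent(question: str) -> dict:
    """Map a natural-language question to a metric and an explanation flag."""
    metric = "revenue" if "revenue" in question.lower() else None
    return {"metric": metric, "wants_explanation": "why" in question.lower()}


def resolve_metric(name: str) -> str:
    """Use the governed definition rather than guessing a formula."""
    return GOVERNED_METRICS[name]


def retrieve_context(metric: str) -> list:
    """Stand-in for runtime retrieval (e.g. RAG over policies and documents)."""
    return [f"Business definition of {metric} approved by finance"]


def run_agentic_query(question: str) -> dict:
    """Chain the steps: intent, governed definition, context, next action."""
    intent = interpret_intent(question)
    expression = resolve_metric(intent["metric"])
    context = retrieve_context(intent["metric"])
    # In a real platform the agent would now execute the query, explain
    # the result conversationally and suggest follow-up scenarios.
    return {
        "metric": intent["metric"],
        "expression": expression,
        "context": context,
        "next_step": "Compare against the prior quarter",
    }


result = run_agentic_query("Why did revenue fall last quarter?")
```

Even this toy version shows why the underlying layers matter: the useful behaviour comes from the governed definitions and retrieved context, not from the conversational surface.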
What Sits Beneath Agentic Analytics

This new strand does not remove the need for the foundations that came before it. In fact, it often depends on them even more.

Agentic analytics still needs:
  • trusted enterprise data
  • governed metrics and semantic structure
  • contextual knowledge
  • retrieval mechanisms such as RAG
  • connectivity to tools and systems
  • governance, observability and guardrails

Without these layers, so-called agentic experiences risk becoming superficial, inconsistent or untrustworthy.

This is why agentic analytics is not just an interface change. It is also an architectural one.

Why It Does Not Replace Governed or Self-Service Analytics

It would be a mistake to think of agentic analytics as the end of governed analytics or self-service analytics.

Governed analytics still matters because organisations still need trusted reporting, shared definitions and control over critical metrics.

Self-service analytics still matters because many users will continue to want the freedom to explore data visually and directly.

Agentic analytics should instead be seen as an additional strand, one that sits alongside the others and introduces a new interaction model for certain types of analytical work.

In practice, most organisations will likely operate across all three.
How Oracle Can Enable Agentic Analytics

Oracle already has many of the building blocks needed to support this emerging strand.

Oracle Analytics provides governed metrics, semantic models, dashboards, self-service exploration and AI-driven user experiences. Oracle AI Data Platform extends that picture by bringing together structured and unstructured data, catalog capabilities, governed discovery, notebooks, AI tooling and an emerging agent platform story.

Within that broader platform, Autonomous AI Lakehouse adds an important data foundation by combining lakehouse-style data management with Autonomous Database capabilities, helping to make governed structured and unstructured data more accessible for analytics and AI workloads. In that sense, Oracle AI Data Platform provides the wider platform layer, while Autonomous AI Lakehouse helps provide one of the core data and storage layers that agentic analytics can build upon.

Recent announcements and summit discussions have also pointed towards a broader Oracle direction that includes AI assistants, Agent Studio, Agent Hub, Agent Registry, AI-powered applications and stronger orchestration between agents, tools and enterprise systems.

Taken together, this suggests that Oracle is not just adding AI features into analytics. It is increasingly putting in place the wider architecture needed to support agentic analytics in practice, including:
  • governed enterprise data
  • semantic structure and business context
  • retrieval and knowledge grounding
  • agent development and orchestration capabilities
  • governance, observability and guardrails

That does not mean every organisation is there yet. But it does suggest that Oracle is assembling many of the components required to move analytics beyond static reporting and dashboard consumption towards more conversational, guided and action-oriented experiences.

Why This Matters

Naming this shift matters because it helps separate a real change in the analytics experience from a vague sense that AI is being added everywhere.

If agentic analytics is becoming a distinct strand, then organisations need to think more clearly about:
  • where it fits in their analytics strategy
  • what architectural foundations it depends on
  • which use cases are best suited to it
  • how it should be governed
  • how it should coexist with governed and self-service analytics

That is a much more useful conversation than simply asking whether AI will replace dashboards.

Conclusion

We have already seen major shifts in analytics through governed, self-service and augmented models.

I think we are now beginning to see the third strand evolve: from augmented analytics to agentic analytics.

This strand is characterised by conversational interaction, contextual grounding, guided exploration, recommendation and, increasingly, the ability to support actions as well as insights.
It does not replace what came before it. But it does introduce a genuinely different way of engaging with analytics.

If that proves to be true, then agentic analytics may become one of the most important ways of thinking about the next phase of analytics evolution.

Connecting OAC to AIDP is one thing. Governing access by real user identity is the bigger challenge

10/4/2026

Introduction

When I recently wrote about how to connect Oracle Analytics Cloud to the Oracle AI Data Platform catalogue, the focus was on getting the integration working end to end. That matters, because it helps bring analytics closer to the governed enterprise data estate and makes trusted data assets more discoverable to analytics users. However, the more I thought about it, the more I felt that connectivity is only part of the story.

In enterprise environments, a successful connection is not the same thing as governed access. If data access is ultimately mediated by the credentials configured in the connection rather than the identity of the actual Oracle Analytics Cloud user, there is a risk that users may be able to see or reach data in ways that do not fully reflect their own entitlements.

That matters even more now that analytics and AI are increasingly converging. The challenge is no longer just how a dashboard author connects to data. It is how a platform ensures that access policies follow the real user or agent making the request, and that those policies are enforced consistently, centrally, and with audit traces.
Why the issue matters

At first glance, this can sound like a technical detail about connection setup. In reality, it goes to the heart of enterprise governance.

If a shared connection identity is used to broker access, then the logical policy boundary may sit at the connection rather than at the individual user. That can be acceptable for some narrow scenarios, but it becomes much harder to defend in environments where access to data should be based on role, attribute, business context, geography, or sensitivity classification.

A simple example helps illustrate the point. Imagine an OAC workbook built over a catalogue-discovered sales dataset exposed through a connection that has been configured with a technical identity. A regional sales manager in the UK should only see UK records, while a counterpart in Germany should only see German records. If the access path is governed primarily by the shared connection identity rather than the runtime identity of the actual OAC user, then the policy boundary risks being applied too broadly. Even if the workbook itself is shared appropriately, the underlying data access model may still not be enforcing the correct row-level boundaries for each individual viewer.
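A toy sketch makes the contrast concrete. The identities and the entitlement table below are purely illustrative, not any real Oracle mechanism: the only point is that whichever identity the filter is evaluated against becomes the effective policy boundary.

```python
# Illustrative contrast between a broad shared connection identity and a
# propagated end-user identity for row-level access. All names are hypothetical.

SALES = [
    {"region": "UK", "amount": 100},
    {"region": "DE", "amount": 200},
]

ENTITLEMENTS = {
    "uk_manager": {"UK"},
    "de_manager": {"DE"},
    "svc_connection": {"UK", "DE"},  # broad technical identity on the connection
}


def visible_rows(identity: str) -> list:
    """Filter rows by the entitlements of the requesting identity."""
    allowed = ENTITLEMENTS[identity]
    return [row for row in SALES if row["region"] in allowed]


# If the shared connection identity brokers access, every viewer
# effectively inherits its broad entitlement:
shared_view = visible_rows("svc_connection")

# If the real user identity is propagated, the row-level boundary is correct:
uk_view = visible_rows("uk_manager")
```

The shared view returns both regions regardless of who is looking at the workbook; only the propagated user identity yields the intended UK-only result.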

It also becomes harder to explain cleanly to security and audit stakeholders. They do not just want to know that data was accessed through an approved tool. They want confidence that the person or agent requesting the data only saw what they were genuinely entitled to see, and that the control point enforcing that decision was robust.
Why this matters beyond dashboards

This is not only an Oracle Analytics Cloud question. It is increasingly an enterprise AI question too.

As organisations move towards agentic workflows, the number of system-to-system interactions grows. Requests may be assembled dynamically, delegated across components, or executed by agents acting on behalf of users. In that sort of world, relying on broad service identities can quickly become uncomfortable.

The long-term target should be a consistent model in which analytics users, applications, and AI agents are all governed by the same underlying identity-aware policy framework. Otherwise, there is a risk of ending up with one access model for dashboards, another for APIs, and yet another for agents.
Why Oracle’s recent messaging is interesting

That is why Oracle’s recent direction is worth paying attention to. In Oracle’s recent Enterprise-Ready AI webinar for Oracle AI Data Platform, the messaging was not just about clever prompts or isolated demos. The emphasis was on building dependable, governed AI with the right data foundation, tooling, architecture, and guardrails. That framing matters, because it places governance and control at the centre of the platform story rather than treating them as afterthoughts.

That same direction appears in Oracle’s recent Deep Data Security announcement for Oracle AI Database 26ai. Based on Oracle’s public positioning, Deep Data Security looks set to become a database-native authorisation layer that sits beneath consuming tools such as analytics platforms, applications, APIs, and AI agents. Rather than trusting each consuming layer to implement access rules correctly, Oracle is describing a model in which the database evaluates verified identity and runtime context, then applies declarative SQL policies to enforce row, column, and even cell-level access boundaries centrally. In enterprise AI terms, that places Deep Data Security in the control layer of the stack: below the user experience, below the agent or application orchestration layer, and directly alongside the enterprise data itself.

That matters because one of the biggest risks in both analytics and agentic AI is the use of broad service identities. If an analytics connection, application tier, or agent framework connects with more privilege than the end user should actually have, then a mistake, weak application logic, or even prompt injection can expose data too broadly. Oracle’s stated answer is to push least-privilege enforcement down into the database so that agents, analytics workloads, and normal application workloads are all constrained by the same underlying policy model. In other words, even if the request is assembled dynamically by an agent or arrives through a shared connection path, the database should still be able to determine who the real requester is, what context applies, and what subset of data that requester is genuinely allowed to see or act upon.

If that model is realised as Oracle is signalling, it would help resolve the exact issue discussed earlier in this article. Instead of relying solely on the credentials configured in the OAC connection, the longer-term pattern would be to propagate end-user or agent identity and let database-enforced policies decide access at execution time. That would make it far easier to ensure that human users only see authorised records, that AI agents act within tightly bounded privileges, and that audit trails show which real identity was behind the request.

The broader significance is that Deep Data Security is not just another security feature. It has the potential to become a foundational trust layer for enterprise AI, ensuring that both agentic and non-agentic workloads can only access the data they are entitled to, regardless of which tool, interface, or autonomous workflow initiated the request.

What good looks like

For me, a strong enterprise pattern should aim for a few clear outcomes:
  1. The requesting identity should be preserved end to end as far as possible. The system should know who is asking, not just which shared connection is being used.
  2. Policy enforcement should happen at the data layer as well as in the consuming platform. Catalogue visibility and connection controls are useful, but durable governance depends on the place where the data is actually being served.
  3. Audit trails should remain meaningful. Security teams should be able to understand which person or agent requested access, which policy was evaluated, and what data was returned.
  4. The same model should be capable of supporting both traditional analytics and newer agentic AI use cases. It would be a mistake to build a strong user access model for analytics and then bypass it for AI-driven access paths.
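The four outcomes above can be sketched conceptually. Everything here (the policy table, `data_layer_query` and so on) is illustrative rather than any specific Oracle API; the point is that the real identity travels with the request, enforcement happens where the data is served, and the audit entry records who actually asked.

```python
# Conceptual sketch of identity propagation with data-layer enforcement
# and a meaningful audit trail. All names are hypothetical.

AUDIT_LOG = []

# One row-filter policy per table, evaluated against the requesting identity.
POLICIES = {"sales": lambda user: user.get("region")}

DATA = {"sales": [{"region": "UK"}, {"region": "DE"}]}


def data_layer_query(table: str, identity: dict) -> list:
    """Enforcement happens where the data is served, using the real identity."""
    region = POLICIES[table](identity)
    rows = [r for r in DATA[table] if r["region"] == region]
    AUDIT_LOG.append({"who": identity["user"], "table": table, "rows": len(rows)})
    return rows


def analytics_request(table: str, identity: dict) -> list:
    # The consuming platform passes the requesting identity through
    # instead of substituting a shared service credential.
    return data_layer_query(table, identity)


rows = analytics_request("sales", {"user": "anna", "region": "DE"})
```

The same `analytics_request` path could serve a dashboard viewer or an AI agent acting on a user's behalf, which is exactly the fourth outcome: one model for both.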

The following conceptual view shows how that future-state pattern could fit together across Oracle structured data, non-Oracle structured data, and unstructured content.
[Figure: conceptual future-state view across Oracle structured data, non-Oracle structured data and unstructured content]
In this model, AIDP acts as the discovery, orchestration, and semantic coordination layer across the wider estate. It carries identity and policy context from the consuming workload, whether that workload is a standard analytics flow in OAC or an agentic AI workflow.

The key point is that both analytics and agentic AI follow the same governance path. Identity is propagated, policy intent is preserved, enforcement happens as close to the data as possible, and audit trails remain tied to the real requester rather than a broad shared connection.
Where this could lead

I think this is where the OAC and AIDP story starts to become much more interesting. The initial connection between Oracle Analytics Cloud and the Oracle AI Data Platform catalogue is useful because it improves discoverability and brings analytics closer to a governed data estate. But the broader enterprise question is how that access path evolves into a model where the real requesting user identity and runtime context drive policy decisions consistently.

A forward-looking view is that AIDP could increasingly act as the orchestration and governance-aware access layer across a much wider enterprise data landscape, while Deep Data Security becomes the database-resident policy enforcement layer beneath it. In that sort of model, AIDP would not just catalogue Oracle-native assets. It could also organise access to non-Oracle platforms, federated structured sources, and unstructured content stores, while carrying identity context, workload context, and policy intent across analytics, applications, and agentic AI workflows.

That matters because enterprise AI is rarely confined to a single vendor estate. Valuable business context may sit in Oracle databases, third-party cloud platforms, SaaS applications, object storage, document repositories, and unstructured sources such as PDFs, transcripts, emails, and reports. For Oracle-managed structured data, Deep Data Security could provide the strongest final trust boundary by evaluating propagated identity and context at execution time and enforcing least-privilege access directly in the database. For non-Oracle structured data and unstructured repositories, the same principle should still apply, with AIDP acting as the policy-aware coordination layer and source-native controls enforcing access locally. In that sense, Deep Data Security could become the Oracle data-plane enforcement pattern, while AIDP provides the broader control-plane and orchestration pattern across heterogeneous enterprise sources.

This future-state view directly addresses the limitation observed in the current OAC connection pattern. Today, the concern is that the configured connection credentials can become the effective access identity. In the model described here, that burden shifts away from the shared connection and back towards propagated user or agent identity, with policy enforcement happening at execution time as close to the data as possible. Seen this way, AIDP and Deep Data Security are not competing controls but complementary parts of the same enterprise AI stack: AIDP organises, exposes, and orchestrates access across structured and unstructured assets, while Deep Data Security provides the final database-enforced trust boundary for Oracle-managed data and federated controls play the equivalent role for external sources.

Closing thoughts

Connecting OAC to AIDP is one thing. Extending that into a model where access is governed by real user identity across analytics and AI workflows is the next important step.

That does not reduce the value of the integration. If anything, it reinforces why it matters. As analytics platforms become more tightly connected to broader enterprise data and AI ecosystems, the quality of the security and governance model becomes even more important.

From Oracle’s recent messaging, it is clear that this bigger picture is very much on the radar. The direction of travel appears to be towards a more identity-aware, policy-driven, and enterprise-ready model. We are not fully there yet, but it is an important space to watch as the platform continues to evolve.

Reference points used in this post
  • Oracle’s webinar messaging around dependable, governed AI and configurable guardrails
  • Oracle Deep Data Security
  • Oracle’s published Deep Data Security positioning for Oracle AI Database 26ai
  • Previous technical post on connecting OAC to the AIDP catalogue

How to Connect Oracle Analytics Cloud to the Oracle AI Data Platform Catalog

1/4/2026

Introduction

Oracle Analytics Cloud is often discussed in the context of semantic models, dashboards and AI assistants, while Oracle AI Data Platform is increasingly discussed in the context of data platform, governance and catalog capabilities.

Connecting Oracle Analytics Cloud to the Oracle AI Data Platform catalog helps bring these worlds together. It allows analytics users to discover and work with catalogued data assets more directly, while aligning Oracle Analytics more closely with the broader governed enterprise data ecosystem.

In this post, I will walk through the process of connecting Oracle Analytics Cloud to the Oracle AI Data Platform catalog, show what the connection looks like in practice, and highlight a few observations along the way.

What This Connection Enables

At a practical level, connecting OAC to the AIDP catalog can help with several things:
  • discovery of catalogued data assets from within Oracle Analytics Cloud
  • closer alignment between analytics and the governed data platform
  • improved visibility of trusted data assets for analytics users
  • a more integrated workflow between catalog, governance and analytics

This is not just a connection exercise. It is also part of bringing Oracle Analytics more directly into the wider enterprise data and AI landscape.

Prerequisites

Before starting the connection, make sure the following are in place:
  • access to Oracle Analytics Cloud
  • access to an Oracle AI Data Platform environment
  • appropriate permissions to the AIDP catalog
  • any required identity, network or tenancy prerequisites
  • the endpoint or connection details required by OAC

Depending on your environment, you may also need to confirm whether there are region-specific, network or policy constraints that could affect connectivity.

Step 1: Start the Connection in OAC

Begin in Oracle Analytics Cloud and open the Create menu in the top-right corner. From there, select Connection.
[Figure: Create a connection from the OAC home page]
This is the entry point for creating new source and platform connections in OAC.

Step 2: Retrieve the Connection Details from AIDP

Before completing the connection in OAC, go to Oracle AI Data Platform Workbench and navigate to the relevant workspace and compute instance. Open the Connection details tab and, under Connect with BI Tool, select Oracle Analytics Cloud.
[Figure: AIDP Workbench showing workspace, compute, Connection details tab and Oracle Analytics Cloud option]
This is where AIDP exposes the connection details needed by OAC. In practice, this step is easy to overlook, but it is central to the whole setup.

Step 3: Select the Oracle AI Data Platform Connection Type in OAC

Return to Oracle Analytics Cloud and create a new connection. In the connection type dialog, search for Oracle AI Data Platform and select it.
[Figure: Select the Oracle AI Data Platform connector type]
Once selected, OAC opens the connection form for the AIDP catalog connector.

Step 4: Complete the Connection Form and Save

Populate the connection form using the details obtained from AIDP.

This includes:
  • a connection name and description
  • the downloaded connection details file from AIDP
  • authentication type
  • DSN
  • user OCID
  • tenancy OCID
  • region
  • private API key
  • API key fingerprint
  • catalog
[Figure: Enter Oracle AI Data Platform connection credentials here]
Two details are especially worth noting here. First, you need to select the connection details file exported from AIDP. Second, you also need to provide the private API key separately. Once everything has been entered correctly, save the connection.

This is one of the most important stages because even small mistakes in endpoints, permissions, fingerprints or authentication settings can prevent the connection from succeeding.

Step 5: Create a Dataset from the New Connection

After the connection has been created successfully, locate it in OAC, open the action menu, and select Create Dataset.
[Figure: In Oracle Analytics Cloud, create a dataset]
This is the point where the connection moves from simple setup into practical use.

Step 6: Select the AIDP Connection and Use the Exposed Assets

In the Create Dataset dialog, select the newly created AIDP connection.
From there, OAC can begin to expose the catalogued assets available through the connection. This is where the setup becomes meaningful from an analytics perspective, because the governed assets surfaced through AIDP are now available for use in Oracle Analytics workflows.
[Figure: An OAC workbook created from an AIDP catalog-sourced dataset]
What Caught Me Out

A few things to look out for:
  • permissions may be correct in one platform but still not sufficient end to end
  • endpoint details may need to be copied carefully and exactly
  • you may need to edit the JSON configuration file from AIDP to add the API key fingerprint. If you do not have access to the OCI Console, you can request the fingerprint from an OCI administrator, who can retrieve it from the IAM user management section. Oracle also documents the relevant OCI credential steps here: Locating OCI credentials
  • the region in the downloaded JSON file may default to the tenancy’s default OCI region rather than the actual region of the AIDP instance. If your AIDP environment is provisioned in a different region, you will need to manually update the region entry in the JSON file
  • region, tenancy or network configuration may affect what works
  • the AIDP compute associated with the catalog must be running for the catalog to be accessible in OAC
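If you want to script the two manual JSON edits described above (adding the fingerprint and correcting the region), a small helper can apply them to the parsed file contents. The key names `"fingerprint"` and `"region"` and the sample values are assumptions for illustration; check the actual structure of your downloaded file before relying on them.

```python
# Hypothetical helper for the two manual edits to the AIDP connection file.
# Key names and sample values are assumptions, not a documented schema.
import json


def patch_connection_config(config: dict, fingerprint: str, region: str) -> dict:
    """Return a copy of the parsed JSON with the fingerprint added and
    the region overridden (the download may default to the tenancy's
    default OCI region rather than the AIDP instance's region)."""
    patched = dict(config)
    patched["fingerprint"] = fingerprint  # add the API key fingerprint
    patched["region"] = region            # correct the defaulted region
    return patched


# Example: a downloaded file whose region defaulted to the tenancy home region.
downloaded = json.loads('{"region": "uk-london-1", "user": "ocid1.user.example"}')
fixed = patch_connection_config(downloaded, "aa:bb:cc", "eu-frankfurt-1")
```

In practice you would `json.load` the downloaded file, apply the patch, and `json.dump` it back before selecting it in the OAC connection form.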

Why This Matters Architecturally

Although this is a technical connection exercise, it matters for a broader reason.

Connecting Oracle Analytics Cloud to the Oracle AI Data Platform catalog is part of bringing analytics closer to the wider governed enterprise data ecosystem. It strengthens the relationship between:
  • analytics consumption
  • governed discovery
  • metadata visibility
  • enterprise data platform capabilities

​That matters because enterprise analytics increasingly needs to sit within a broader data and AI architecture, not outside it.

Conclusion

Connecting Oracle Analytics Cloud to the Oracle AI Data Platform catalog is a practical step towards a more integrated analytics and data platform experience.


It allows OAC to participate more directly in the governed discovery and visibility of enterprise data assets, while bringing analytics users closer to the broader AIDP ecosystem.

The Architecture Beneath Enterprise AI

29/3/2026

Introduction

Enterprise AI is one of the most widely used terms in technology today, but it is also one of the most loosely defined.

In many discussions, enterprise AI is reduced to a model, a chatbot or an assistant connected to company data. In practice, the reality is far more complex. Reliable enterprise AI depends on a layered architecture that brings together data, business meaning, retrieval, tools, agents, collaboration and governance.

That distinction matters because enterprise AI is not simply about generating responses. It is about generating responses and outcomes that are grounded in trusted data, aligned with business definitions, connected to the right systems and governed appropriately.

A useful way to think about this is as a stack of architectural layers sitting beneath the user-facing AI experience.
[Figure: the layered enterprise AI architecture stack]
Enterprise Data Sources

At the base of the stack sit the enterprise data sources. These include:

  • databases
  • operational applications
  • SaaS platforms
  • APIs
  • documents and content repositories
  • events and streaming data
  • external data sources

This is the raw material of enterprise AI. It includes both structured and unstructured data and is often fragmented across multiple systems, business domains and platforms.

Without access to this landscape, enterprise AI has no business context to work with.
Semantic Layer and Ontologies

Above the data sources sits the semantic layer and, in some cases, broader ontologies.

This layer provides the shared business meaning that allows enterprise data to be interpreted consistently.

It can include:
  • business definitions
  • metrics and calculations
  • hierarchies and relationships
  • rules and constraints
  • lineage and provenance

This layer is critical because data on its own does not explain what it means. Enterprise AI needs more than access to records and documents. It needs to understand how the business defines concepts such as revenue, margin, customer value, supplier performance or attrition risk.

Without this semantic grounding, enterprise AI risks generating answers that may sound plausible but are not aligned with the organisation’s own definitions.
RAG as the Evidence Layer

Retrieval-augmented generation, or RAG, adds another important layer.

If semantics provide the meaning, RAG provides the evidence.

RAG allows AI systems to retrieve relevant context at runtime from trusted enterprise sources. This may include documents, policies, reports, knowledge bases or other forms of business content.

The role of RAG is not to replace semantics. It is to complement it by ensuring that AI agents and assistants can draw on the most relevant supporting information when generating responses.

In practice, this means RAG helps ground responses in enterprise knowledge and reduces the risk of answers being produced without supporting context.
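The retrieval step can be sketched as follows. Production RAG systems use embeddings and a vector store; in this toy example, simple word overlap stands in for semantic search, and the documents and helper functions are invented for illustration.

```python
# A minimal sketch of the RAG pattern: retrieve relevant evidence at
# runtime, then ground the model's answer in that evidence.
DOCUMENTS = [
    "Expense policy: claims above 500 GBP require director approval.",
    "Travel policy: economy class applies to flights under six hours.",
    "Security policy: customer data must not leave the EU region.",
]

def retrieve(question: str, k: int = 1) -> list[str]:
    """Rank documents by word overlap with the question (toy scoring)."""
    q_words = set(question.lower().split())
    scored = sorted(
        DOCUMENTS,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str) -> str:
    """Ground the model: answer only from the retrieved evidence."""
    evidence = "\n".join(retrieve(question))
    return (
        f"Answer using ONLY the context below.\n"
        f"Context:\n{evidence}\n"
        f"Question: {question}"
    )
```

The prompt constructed here is what constrains the model to enterprise evidence rather than its general training data, which is the grounding role described above.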
MCP as the Connectivity Layer

Another increasingly important layer is MCP, or Model Context Protocol.

If semantics provide meaning and RAG provides evidence, MCP provides connectivity.

MCP connects models and agents to tools, systems and actions. It allows AI-driven experiences to move beyond simply answering questions and into interacting with enterprise systems, invoking tools and participating in workflows.

This is an important distinction because enterprise AI is not just about information access. It is increasingly about enabling AI systems to work with enterprise applications, APIs, analytical tools and operational processes.

Without this connectivity layer, AI remains largely conversational. With it, AI begins to participate in the broader enterprise operating model.
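MCP itself is a JSON-RPC protocol between clients and servers, and the sketch below does not use the real SDK. Instead it illustrates, with invented names throughout, the core idea MCP standardises: tools are described declaratively, discovered at runtime, and invoked by name with structured arguments.

```python
import json

# Hypothetical tool registry illustrating the MCP idea: tools carry a
# machine-readable description so a model can discover and call them.
TOOLS = {}

def tool(name: str, description: str, schema: dict):
    """Register a function with a declarative, discoverable descriptor."""
    def wrap(fn):
        TOOLS[name] = {"description": description, "schema": schema, "fn": fn}
        return fn
    return wrap

@tool("get_open_invoices",
      "Return open invoices for a customer",
      {"customer_id": "string"})
def get_open_invoices(customer_id: str) -> list[dict]:
    # In a real server this would query an ERP or finance system.
    return [{"customer_id": customer_id, "invoice": "INV-001", "amount": 1200}]

def list_tools() -> str:
    """What a client shares with the model so it knows what it can do."""
    return json.dumps(
        {n: t["description"] for n, t in TOOLS.items()}, indent=2
    )

def call_tool(name: str, arguments: dict):
    """Dispatch a model-chosen tool call to the registered function."""
    return TOOLS[name]["fn"](**arguments)
```

This is the step that moves AI beyond conversation: once tools are discoverable and invocable in a standard way, agents can participate in workflows rather than only describe them.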
LLMs and AI Agents

Above these layers sit the LLMs and AI agents that most people associate directly with AI.

This layer includes:
  • conversational assistants
  • copilots
  • domain-specific agents
  • AI applications

The LLM provides the reasoning capability. The agents provide task orientation, domain-specific behaviour, orchestration and interaction patterns.

This is the layer that reasons over enterprise context, uses tools and generates outcomes.

However, this layer is only as effective as the layers beneath it. Without enterprise data, semantics, retrieval and connectivity, LLMs and agents have far less ability to operate reliably in enterprise settings.
A2A and Agent Collaboration

As enterprise AI evolves, it is increasingly clear that a single agent is often not enough.

This is where agent-to-agent collaboration becomes important. A2A, or Agent2Agent, is a Google-backed open protocol designed to allow specialised agents to communicate, coordinate and delegate tasks across systems.

Instead of relying on one general-purpose assistant to handle everything, enterprises can begin to orchestrate multiple specialised agents working together.

That creates new possibilities for:

  • delegation across agents
  • coordination of multi-step tasks
  • domain-specific specialisation
  • more scalable agentic architectures

This moves enterprise AI beyond isolated assistants towards a more collaborative agent ecosystem.
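The delegation pattern can be sketched as follows. The real A2A protocol exchanges agent cards and tasks over HTTP; this toy example borrows only the concept of an advertised capability card, and all agent names and skills are invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AgentCard:
    """Loosely modelled on A2A's idea of a published capability card."""
    name: str
    skills: list
    handle: Callable[[str], str]

# Two hypothetical specialised agents advertising their skills.
finance = AgentCard("finance-agent", ["invoices", "margin"],
                    lambda task: f"finance-agent handled: {task}")
hr = AgentCard("hr-agent", ["attrition", "headcount"],
               lambda task: f"hr-agent handled: {task}")

REGISTRY = [finance, hr]

def delegate(task: str) -> str:
    """Route a task to the first agent advertising a matching skill."""
    for agent in REGISTRY:
        if any(skill in task.lower() for skill in agent.skills):
            return agent.handle(task)
    return "no specialised agent found"
```

Even in this toy form, the value of the pattern is visible: no single agent needs to handle everything, and new specialisms can be added by registering new cards rather than rebuilding a monolithic assistant.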
Governance and Trust

Running across the entire stack is governance and trust.

This is not a separate add-on. It is a cross-cutting requirement that applies to every layer.

This includes:

  • policies and guardrails
  • lineage and provenance
  • security and privacy
  • compliance and audit
  • monitoring and observability

This is what turns AI into enterprise AI.

The enterprise requirement is not just to make AI useful. It is to make AI trustworthy, observable, compliant and aligned with organisational policy.
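As a simple illustration of guardrails as a cross-cutting layer, the sketch below checks an invented policy table before an agent action runs and records every decision in an audit log. It is a toy example under assumed names, not a description of any product's guardrail engine.

```python
from datetime import datetime, timezone

# Invented example policies: which agent actions are allowed, and why
# blocked ones are blocked. Real guardrails are far richer than this.
POLICIES = {
    "read_report": {"allowed": True},
    "export_customer_data": {"allowed": False, "reason": "privacy policy"},
}

AUDIT_LOG: list = []

def guarded_invoke(agent: str, action: str) -> str:
    """Check policy before acting, and audit every decision either way."""
    policy = POLICIES.get(action, {"allowed": False, "reason": "unknown action"})
    decision = "allowed" if policy["allowed"] else "blocked"
    AUDIT_LOG.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "action": action,
        "decision": decision,
    })
    if not policy["allowed"]:
        return f"{decision}: {policy['reason']}"
    return f"{decision}: {action} executed"
```

The key design point is that the policy check and the audit record sit in the invocation path itself, so every layer of the stack that triggers an action passes through the same governance gate.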
Bringing the Layers Together

When viewed as a whole, enterprise AI begins to look much less like a single technology and much more like an architecture.
  1. Enterprise data provides the source material.
  2. Semantics provide the meaning.
  3. RAG provides the evidence.
  4. MCP provides the connectivity.
  5. LLMs provide the reasoning.
  6. A2A provides the collaboration.
  7. Governance provides the trust.

That combination is what allows enterprise AI to move from generic model interaction towards grounded, connected and actionable outcomes.

Conclusion

Enterprise AI is far more than an LLM connected to company data.

It is a layered architecture that can combine enterprise data, semantics, retrieval, connectivity, agents, collaboration and governance.

Not every enterprise AI implementation will require every layer in the same way. Some use cases may not need RAG, MCP or agent-to-agent collaboration at all. However, certain foundations are far harder to compromise on. Trusted enterprise data, clear business meaning and appropriate governance are often non-negotiable if AI is to operate reliably in an enterprise setting.

Understanding that architecture matters because it helps explain why enterprise AI is not just a model problem. It is a data, meaning, systems and governance problem as well.

As organisations move beyond experimentation and towards operational AI, it is these underlying architectural layers, and the choices they make about which are essential for each use case, that will determine whether enterprise AI remains superficial or becomes truly useful, trusted and effective.

Inside the Oracle AI Data Platform Summit London: Building the Agentic Enterprise Stack

25/3/2026


 
Introduction

I attended the Oracle AI Data Platform Summit in London the day after attending Oracle AI World London 2026. While AI World provided the broader direction for Oracle’s agentic AI push across applications, infrastructure, database and analytics, the AIDP Summit offered an opportunity to hear directly from the product team and go much deeper into the platform thinking behind that message.

That made the summit particularly valuable. The previous day’s keynotes established the scale of Oracle’s AI ambitions, but the AIDP Summit began to explain how Oracle is thinking about the data, tooling, governance and runtime architecture required to support enterprise AI in practice.

In a recent blog post, I wrote about Oracle AI World London 2026 and the way Oracle’s messaging had clearly shifted from assistants and copilots towards more agentic workflows. At the summit, that same message continued, but with much more detail on the underlying platform components.

From Broad AI Messaging to Platform Detail

One of the most interesting aspects of the summit was how clearly it connected back to the themes from AI World.

At AI World, Oracle’s messaging framed OCI as the enterprise AI platform across models, agents and governance. Fusion Agentic Applications, Agent Studio and the broader move towards orchestrated AI workflows all pointed to a strategy that is now much bigger than standalone assistants.

At the AIDP Summit, that story was extended into the platform architecture needed to make enterprise AI operational.

AIDP was positioned not simply as a set of data services, but as an enterprise-ready data and AI foundation supporting:
  • AI-powered development tools
  • agentic business user experiences
  • governed access to data and models
  • orchestration of agents and tools
That framing is important because it reinforces the idea that enterprise AI is not just about access to models. It requires a platform foundation that can support the full lifecycle of building, exposing, governing and monitoring AI-driven experiences.

The Emerging Agent Architecture in AIDP

A particularly strong part of the summit was the way Oracle described the emerging agent architecture within AIDP.

Several components stood out.

Agent Studio

Agent Studio was presented as a new AIDP capability for building agents. What stood out here was the combination of a low-code visual experience with support for more code-first and pro-code approaches.

This is important because it suggests Oracle is aiming to support both business-oriented builder experiences and more technical developer workflows.

The ability to test and refine agents, monitor response quality and efficiency, and review detailed performance at the level of individual agents and tools also suggests a much more operational view of enterprise AI development than simple prompt experimentation.

Agent Registry

The Agent Registry was another significant part of the story. Oracle explained that as long as agents support the A2A protocol, they can be registered.

That is a notable architectural point because it opens the door to a broader ecosystem in which not all agents need to originate from the same platform. Instead, the registry becomes a governed directory of internal and external agent capabilities.

Agent Hub

Agent Hub was positioned as the consumer entry point, enabling discovery of agents, interaction and task orchestration. It works with the agent catalog when an agentic request is made in the hub, and Oracle also discussed the idea of agents calling other agents through A2A interactions.

Taken together, Agent Studio, Agent Registry and Agent Hub suggest a coherent pattern:
  • build agents
  • register agents and tools
  • expose them to users
  • orchestrate interactions across them

This begins to look much more like an enterprise agent platform than a standalone assistant feature.

Governance, Guardrails and Observability

Another strong theme throughout the summit was that Oracle is treating governance and observability as first-class capabilities.

That came through in several areas:
  • out-of-the-box blocking of toxic and malicious content
  • agent guardrails
  • monitoring of deployed agents
  • latency monitoring at thread level
  • token consumption visibility
  • cost estimation and optimisation for generative AI usage

​This matters because it reflects a more realistic enterprise view of AI adoption.
For many organisations, the challenge is no longer just whether they can build an AI agent. The bigger question is whether they can operate one responsibly, observe its behaviour, understand its cost profile and keep it aligned with governance requirements.

The summit sessions suggested that Oracle understands this well. The emphasis was not just on building agents, but on running them in a governed and observable way.

Flexibility in Models and Developer Tooling

Another interesting message was the platform’s support for a wide variety of foundational LLMs, with the flexibility to change models over time.

This flexibility was positioned not only as a technical advantage, but also as something that matters for regional compliance and changing enterprise requirements.

That is an important point because model choice is increasingly tied to policy, data residency, cost, performance and risk considerations.

On the development side, Oracle also discussed AI code assist in notebooks and noted that AIDP plugins for VS Code are in the pipeline.

This points to a broader trend within AIDP towards improving developer productivity as part of the platform experience, rather than treating notebooks and data tooling as isolated utilities.

FAIDP and the Changing Nature of Analytics Experiences

The summit also touched on FAIDP and the changing role of AI in insight discovery.

One of the clearest themes here was the shift away from traditional patterns in which IT teams built pre-defined dashboards for business users to consume. The message instead was that AI is changing insight discovery into a more conversational and interactive experience.

What was especially interesting was the suggestion that this evolution no longer stops at insights. It now increasingly moves towards:

  • recommendations
  • scenario simulation
  • actions alongside business users

That is a meaningful shift.

The implication is that analytics is moving from being a system of static outputs towards becoming an interactive layer that can explain, recommend and potentially act.

There was also an important positioning point around FAIDP as a SaaS version of AIDP, with no migration required to move from FDI to FAIDP. For organisations already invested in Fusion Analytics, that is likely to be a very significant message.

My Main Takeaway

What stood out to me most at the summit was that Oracle’s AIDP story is no longer just about assembling data services.

It is increasingly about providing the governed platform foundation for building, registering, exposing, orchestrating and monitoring enterprise AI agents.

That feels like an important shift.

AI World London set the strategic direction. The AIDP Summit then provided a much more detailed view of how Oracle is attempting to turn that strategy into an enterprise platform model.

For me, that was the value of attending both events. The combination made it much easier to connect the high-level messaging from the keynotes with the more practical architecture and product direction being discussed by the product team. There were also several open question sessions covering a wide range of areas, from the future state of analytics with the emergence of agentic AI to partner enablement and product pricing, all of which were answered candidly.

Conclusion

Oracle AI World London made it clear that Oracle is pushing strongly into agentic AI across applications, infrastructure and analytics.

The Oracle AI Data Platform Summit then added another layer of depth, showing how Oracle is thinking about the platform capabilities needed to support that vision in practice.

From Agent Studio, Agent Registry and Agent Hub, through to guardrails, observability, developer tooling and the evolution of analytics experiences, the summit painted a picture of AIDP as more than just a data platform. It is increasingly being positioned as the foundation for the next generation of governed enterprise AI.

Oracle AI World London 2026: Agentic AI Takes Centre Stage

24/3/2026


 
Today I attended Oracle AI World London, and this year’s message was clear from the outset:

This was the year of Agentic AI. Across every layer of the Oracle stack, from database to applications, the focus was on systems that can:
  • reason
  • decide
  • act
What stood out was not just individual announcements, but how consistently this idea showed up:

  • in the database, with private agent execution and embedded governance
  • in OCI, with a unified platform for models, agents, and control
  • in AI Data Platform, with semantic models grounding AI in business meaning
  • in Fusion Applications, with fully agentic, outcome-driven workflows

This was not a collection of AI features. It was a coordinated move towards agentic enterprise systems.
A joined-up Oracle AI story

One thing that stood out to me from the event was that Oracle was not presenting AI as a disconnected layer sitting above the enterprise stack. The keynote message felt much more integrated than that. Applications, database, infrastructure, and data platform were all presented as parts of the same broader enterprise AI story. From the stage content and panel discussions, Oracle’s direction looked clear: enterprise AI needs models, agents, governance, and trusted business data working together rather than as isolated components. 

That point matters because it helps explain the shape of the day’s announcements. Rather than one single headline, Oracle presented multiple pieces of an AI operating model: agentic applications at the business layer, agentic innovation in the database, OCI as the enterprise AI platform, and AI Data Platform as the foundation for unifying data and shared business semantics. 

Fusion Agentic Applications move beyond copilots

The main SaaS applications announcement was Oracle’s launch of Fusion Agentic Applications. Oracle described these as a new class of enterprise applications powered by co-ordinated teams of specialised AI agents that are outcome-driven, proactive, and reasoning-based. Built into Oracle Fusion Cloud Applications, they are designed to make and execute decisions inside business processes by securely accessing enterprise data, workflows, policies, approval hierarchies, permissions, and transactional context. 

For me, this is one of the most significant shifts in Oracle’s applications story. Oracle is now pushing a more ambitious proposition: Agentic AI that does not just help a user think, but can participate in the actual flow of work. That is a meaningful step forward because it moves AI closer to business outcomes rather than keeping it at the level of guidance and suggestion.

Oracle AI Database brings agentic AI and governance together

The database announcements were among the most strategically important parts of the keynote, not just because Oracle introduced new agentic AI capabilities, but because it made a strong case for governance being built directly into the data layer. The message was that AI and enterprise data should be architected together across operational databases and data lakehouses, rather than separated by additional layers of data movement and orchestration. 

For me, the most important part of that story was Oracle Deep Data Security. Oracle describes this as database-native, end-user-specific access control, where each end user, or AI agent acting on behalf of that user, can only see the data they are authorised to access. Oracle also positions it as a way to protect against AI-era threats such as prompt injection, while applying least-privilege access and centralising security away from application code. In other words, Oracle is arguing that governance for agentic AI is strongest when it is enforced at the source of the data, inside the database itself. 

That feels highly significant in an agentic AI world. If agents are going to query data dynamically and act within real business workflows, then security cannot just sit in the application tier. It needs to be embedded where the data is actually controlled. This was, in my view, one of the clearest and most important database messages of the day. 

Alongside that, Oracle also announced AI Database Private Agent Factory, which provides a no-code way to build and deploy data-driven agents and workflows, including prebuilt agents such as Database Knowledge Agent, Structured Data Analysis Agent, and Deep Data Research Agent. That is an important part of the story too, but to me it works best as an enabler within the wider message: Oracle wants the database not just to store business data, but to become a secure execution and control point for enterprise-grade agentic AI. 
AI Data Platform and the role of semantics

Another interesting part of the day was how Oracle AI Data Platform sat within the wider message. Oracle’s AI World content says Oracle AI Data Platform unifies enterprise data, applies shared business semantics, and embeds AI directly into workflows. It suggests that AI Data Platform is not just being positioned as storage or plumbing. It is being positioned as part of the grounding layer for enterprise AI, where business meaning is applied to enterprise data before AI is operationalised. 

I think this is a point worth dwelling on. If agentic AI is going to operate reliably in enterprise settings, then it needs more than access to raw data. It needs context. It needs meaning. It needs consistent business definitions. That is why semantics matter. In that sense, semantic models are not just an analytics concern. They are part of the enterprise grounding required to make AI outputs more trustworthy and more useful.
OCI Enterprise AI turns the OCI message into something more concrete

The OCI story at Oracle AI World London was not just that OCI underpins enterprise AI. It was that Oracle now has a more concrete way of packaging that story for customers.

With the general availability of OCI Enterprise AI, Oracle is bringing together models, agents, and governance in a single end-to-end developer offering designed to help teams move from experimentation to production faster. Oracle says the service combines AI intelligence, agentic execution, and built-in controls in one simplified environment, with support for both structured and unstructured data. 

That matters because one of the biggest barriers to enterprise AI is not access to models, but the complexity of stitching together tools, workflows, deployment patterns, and governance. Oracle’s answer is to package OCI Enterprise AI around three integrated layers: Models, Agents, and Governance. 
My takeaway from the day

My biggest takeaway from Oracle AI World London is that Oracle is trying to make AI practical for the enterprise by tying it closely to data, governance, and process.

The headline theme may well have been agentic AI, but the more important point is how Oracle has chosen to bring this to life. Fusion Agentic Applications bring agents into Fusion SaaS business workflows. Oracle AI Database brings agentic execution and governance closer to business data. OCI provides the wider platform layer. Oracle AI Data Platform helps unify enterprise data and apply shared semantics. Together the announcements feel less like a collection of disconnected AI announcements and more like an attempt to define a full enterprise AI stack. 

Column Swapping in Oracle Analytics: A Useful Enhancement in the March 2026 Update

18/3/2026


 
Introduction

Oracle Analytics updates often include small usability improvements that can significantly enhance how users interact with dashboards. While major capabilities such as AI agents tend to attract the most attention, incremental enhancements to data exploration can have an equally meaningful impact on day-to-day analytics workflows.

Long-time Oracle analytics users may remember the Column Selector feature in Oracle Business Intelligence Enterprise Edition (OBIEE). This capability allowed dashboard authors to define a set of columns that users could switch between dynamically. Instead of creating multiple charts or tables for different metrics, consumers could simply toggle between the available columns within a single visualisation.

This approach made dashboards far more flexible while keeping their design simple and manageable.


The March 2026 Oracle Analytics update introduces new capabilities that follow a similar principle. Users can now swap the columns used in overlay charts and map legends, allowing consumers to explore different perspectives of the data directly within a visualisation.

While these features may appear small at first glance, they represent an important step towards more interactive and exploratory analytics experiences.

Column Swapping in Visualisations

One of the key themes in the March 2026 update is enabling consumers to change the data being visualised without requiring modifications to the underlying analysis.

By allowing columns to be swapped directly within visualisations, Oracle Analytics enables dashboard designers to create fewer but more flexible charts. Consumers can then adjust the metrics or dimensions being displayed based on the analytical perspective they want to explore.

Two particularly useful examples of this capability are found in overlay charts and map visualisations.

Swapping Columns in Overlay Charts

Overlay charts allow multiple visualisation layers to be combined within a single chart. A common example is a bar chart showing one metric while a line overlay represents another measure.

For example, a visualisation might display:
  • Revenue by month as bars
  • Profit as a line overlay

Previously, the columns used for the axis and overlay layers were fixed once the visualisation was created. If users wanted to compare different metrics, dashboard authors often needed to create additional visualisations or redesign the chart.

With the March 2026 update, Oracle Analytics now allows users to swap the columns used in an overlay chart’s axis and legend labels directly within the visualisation.
This means consumers can dynamically switch the metrics being compared. For example, the same chart could easily be used to explore:
  • Revenue and profit
  • Profit and margin
  • Invoice amount and revenue

This capability significantly improves analytical flexibility while avoiding the need to duplicate charts for every metric combination.

Swapping Columns for Map Legends

A similar capability has also been introduced for map visualisations.

Maps commonly use colour legends to represent a business metric across geographic regions. For example, a map might highlight regions based on revenue performance or customer counts.

Previously, the metric used in the legend was fixed when the visualisation was designed. If users wanted to view the map using a different measure, authors typically needed to create separate map visualisations.

With the March 2026 update, Oracle Analytics now supports swapping the column used in map visualisation legends. Consumers can change the metric represented in the legend without modifying the underlying analysis.

For example, a single map could allow users to switch between:

  • Margin Percent by Country
  • Revenue by Country
  • Number of Customers by Region
​This provides a more flexible way to explore geographic performance while keeping dashboards simpler and easier to maintain.

Why This Matters for Dashboard Design

Although these enhancements may appear minor compared to larger platform features, they have important implications for dashboard design.

Traditionally, dashboard authors often needed to create multiple versions of the same chart in order to support different analytical perspectives. This could lead to crowded dashboards containing many similar visualisations that differed only by the metric being displayed.

By enabling columns to be swapped directly within visualisations, Oracle Analytics allows authors to design fewer but more flexible dashboards. Consumers can then explore the data more freely without requiring additional dashboard elements.

This approach offers several benefits:

  • dashboards remain cleaner and easier to navigate
  • fewer duplicate visualisations need to be maintained
  • consumers can explore multiple analytical perspectives within a single chart

In many ways, these capabilities bring back a familiar concept from OBIEE’s column selector feature, but applied to modern visualisations in Oracle Analytics.

Conclusion

While large platform capabilities often dominate release announcements, improvements to usability and exploration can have an equally significant impact on how organisations interact with their data.

The column swapping enhancements introduced in the March 2026 Oracle Analytics update make it easier for consumers to explore data directly within visualisations while allowing dashboard authors to keep designs simpler and more maintainable.

Features such as overlay chart column swapping and map legend swapping may appear small individually, but together they represent another step towards more interactive and user-driven analytics experiences in Oracle Analytics.

Oracle AI Data Platform as the Bridge Between Enterprise Data and AI Analytics

11/3/2026


 
Introduction

In a recent article, I explored why semantic models remain foundational in the age of AI-driven analytics. As conversational interfaces and AI agents become more common in analytics platforms, the need for governed business definitions and structured context underneath becomes even more important.

Natural language interfaces make analytics easier to use, but they also increase the importance of consistent definitions for metrics, hierarchies and relationships. Without that structure, even simple questions such as "Which definition of revenue should the AI use?" can lead to inconsistent answers.

While semantic models help provide this grounding within analytics tools, the challenge becomes more complex when we consider the broader enterprise data landscape.
The Fragmented Nature of Enterprise Data

Most organisations operate across a wide range of systems and data environments. These may include:
​​
  • ERP systems
  • CRM platforms
  • operational databases
  • SaaS applications
  • data warehouses
  • data lakes
  • unstructured documents and knowledge sources

​Each of these environments may have its own data structures, governance rules and access patterns. Connecting analytics and AI workloads across these systems is therefore not simply a technical problem of connectivity, but an architectural challenge of consistency and governance.

As organisations begin to adopt AI-driven analytics and AI agents, expectations change. Users increasingly expect systems to answer questions using data from across the enterprise ecosystem, often in near real time.
This raises an important question: how can organisations provide reliable and governed access to data across such a diverse landscape?
Oracle AI Data Platform as the Architectural Bridge

This is where Oracle AI Data Platform (AIDP) plays a pivotal role. Rather than being just another collection of services, AIDP can be viewed as a platform layer that helps bridge disparate enterprise data ecosystems. It provides a unified environment that supports data integration, governance, discovery and access across both structured and unstructured data sources.
​
Conceptually, AIDP sits as a bridging layer between the fragmented enterprise data sources beneath it and the analytics tools, AI agents and other workloads that consume them.
Governance and the Role of the Data Catalog

An important component of this architecture is governance. As data from different systems is brought together, organisations need mechanisms to ensure that the data being used by analytics tools and AI workloads remains trusted and discoverable.

Catalog capabilities play a key role here. They allow organisations to:
  • manage metadata
  • track lineage
  • support discovery of data assets
  • maintain governance across the platform

This helps create a "single pane of glass" through which enterprise data assets can be understood and accessed across the organisation.
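A catalog's contribution can be sketched in a few lines: each asset carries metadata and upstream lineage, so both discovery and provenance questions can be answered from one place. The asset names and structure below are invented purely for illustration.

```python
# A toy data catalog: each asset carries metadata plus upstream lineage,
# so consumers can discover assets and trace where figures came from.
CATALOG = {
    "sales.orders_raw": {"owner": "data-eng", "upstream": []},
    "sales.orders_clean": {"owner": "data-eng",
                           "upstream": ["sales.orders_raw"]},
    "finance.revenue_monthly": {"owner": "finance",
                                "upstream": ["sales.orders_clean"]},
}

def lineage(asset: str) -> list:
    """Walk upstream dependencies back to their sources (depth-first)."""
    sources = []
    for parent in CATALOG[asset]["upstream"]:
        sources.extend(lineage(parent))
        sources.append(parent)
    return sources

def discover(owner: str) -> list:
    """Find every asset a given team owns."""
    return [name for name, meta in CATALOG.items() if meta["owner"] == owner]
```

Lineage answered this way is what lets an analyst, or an AI agent, explain not just what a monthly revenue figure is, but which upstream datasets it was derived from.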

​Extending the Architecture with Business Semantics

However, governance and cataloging alone do not fully address the challenge of consistent business meaning.
Today, semantic models typically exist within analytics platforms such as Oracle Analytics Cloud. These models define business metrics, hierarchies and relationships that allow users to interpret structured datasets consistently.

In a previous blog post I discussed how these semantic models help ground AI-driven analytics and AI agents by providing clear definitions for business concepts. As AI workloads expand across the broader data platform, there is an opportunity to extend this concept further.

Recently I proposed an idea in the Oracle Analytics Idea Lab suggesting the introduction of a business semantic layer within the AIDP catalog that would complement the semantic models already available in Oracle Analytics Cloud.

The goal would not be to replace existing semantic models, but to make governed business definitions more broadly accessible across the data ecosystem. Such a layer could provide shared definitions for key metrics such as revenue, margin or customer value that could be consumed by analytics tools, AI agents, data science workloads and other applications.
​
Importantly, this approach could also extend semantic context beyond purely structured datasets, helping connect structured enterprise data with unstructured knowledge sources.

​Supporting the Next Generation of AI Analytics

As organisations move towards AI-driven analytics, the importance of strong data architecture becomes even clearer.

AI agents and conversational analytics change how users interact with data, but they do not remove the need for governance, cataloging and semantic structure. If anything, these architectural components become even more critical in ensuring that AI-generated insights remain consistent and trustworthy.

Platforms such as Oracle AI Data Platform help provide the foundation for this architecture by bridging disparate enterprise data ecosystems and providing a unified environment for governance, integration and access.

Combined with well-designed semantic models and governed catalogs, this creates a powerful foundation for reliable AI-powered analytics.
Conclusion

Enterprise data environments are by nature fragmented, but AI-driven analytics increasingly expects a unified view of that data.

Oracle AI Data Platform provides an architectural bridge that helps organisations consolidate access to data across their ecosystems while maintaining governance and control. By combining integration, cataloging and analytics capabilities within a single platform layer, AIDP creates the potential for a true "single pane of glass" across enterprise data assets.

As AI analytics continues to evolve, extending semantic context across the broader data platform could become an important next step in ensuring that AI-generated insights remain grounded in consistent business meaning.

Why the Semantic Model Still Matters in the Age of AI Analytics

9/3/2026

The rapid evolution of AI-driven analytics is changing how users interact with data. Instead of navigating dashboards or writing queries, users can now ask natural language questions and receive analytical insights instantly.

Oracle Analytics AI Agents are a good example of this shift. They allow users to explore data conversationally while combining structured analytics with contextual knowledge.
At first glance it may appear that traditional components of business intelligence architecture, such as the semantic model, are becoming less important in this new AI-driven world.
In reality, the opposite is true.

As organisations introduce AI agents into their analytics platforms, the semantic model becomes even more important because it provides the structure and governance required to interpret enterprise data correctly.

Conversational Analytics in Oracle Analytics

Oracle Analytics AI Agents allow users to ask analytical questions directly against governed datasets.
For example, the agent can analyse football club performance data and generate insights from natural language questions.
This conversational interface makes analytics far more accessible, but it also raises an important architectural question:

How does the AI agent understand what the data actually means?

The Semantic Model as the Foundation of Enterprise Analytics

Enterprise data is typically stored in structures optimised for storage and processing rather than business interpretation.

Tables may contain technical column names, encoded values or highly normalised structures that make sense to engineers but not necessarily to business users.
The semantic model solves this problem by defining the business meaning of the data.

Within Oracle Analytics, the semantic model provides:
  • business-friendly naming
  • defined metrics and calculations
  • hierarchies and relationships
  • governed data access rules
The semantic model effectively acts as the bridge between raw data and business understanding.
This structure allows the platform to interpret analytical questions consistently.
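As a rough illustration, the presentation role of a semantic model can be sketched as a mapping from technical column names to business-friendly labels. The column names and mapping below are made up for the example; they are not Oracle's.

```python
# Illustrative sketch of one thing a semantic model contributes: relabelling
# physical, engineer-oriented column names with business-friendly names so
# that questions and results are expressed in business terms.

PRESENTATION_LAYER = {
    "INV_AMT_LCL": "Invoice Amount",
    "CUST_SEG_CD": "Customer Segment",
    "ORD_DT":      "Order Date",
}

def to_business_terms(row: dict) -> dict:
    """Relabel a physical row using the presentation layer's names."""
    return {PRESENTATION_LAYER.get(col, col): val for col, val in row.items()}

physical_row = {"INV_AMT_LCL": 1250.0, "CUST_SEG_CD": "SMB"}
print(to_business_terms(physical_row))
# {'Invoice Amount': 1250.0, 'Customer Segment': 'SMB'}
```

In a real semantic model this layer also carries calculations, hierarchies and access rules, but the renaming alone shows why an AI agent needs it: "Invoice Amount" is answerable, "INV_AMT_LCL" is not.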

Governing Business Metrics

Another critical role of the semantic layer is defining the organisation’s key metrics.

Metrics such as invoice amount, revenue, order value or customer counts often require precise definitions and calculations.
These definitions are implemented directly within the semantic model.
By centralising metric definitions, Oracle Analytics ensures that dashboards, reports and AI agents all rely on the same authoritative calculations.
This prevents inconsistencies and ensures that analytical answers remain aligned with business definitions.

Knowledge Documents: Adding Context to AI Agents

While the semantic model defines the structure of enterprise data, Oracle Analytics AI Agents can also use knowledge documents to provide additional context.
These documents may contain:
  • definitions of business concepts
  • policy explanations
  • domain-specific knowledge
  • analytical guidance
Oracle recently introduced additional configuration options when uploading these documents.
Administrators can now specify both document priority and document language.
Document priority allows organisations to control which documents are treated as more authoritative when the AI agent retrieves knowledge.
For example, curated internal documentation may be prioritised over supplementary material.
The language setting allows organisations operating across multiple regions to maintain multilingual knowledge sources for a single AI agent.
This ensures that the agent retrieves the most relevant document based on the language of the user’s question.

Semantic Models and Knowledge Documents Working Together

The semantic model and knowledge documents play complementary roles in grounding AI-generated answers.

The semantic model provides:
  • structured data definitions
  • governed metrics
  • relationships between datasets
Knowledge documents provide:
  • contextual explanations
  • business policies
  • domain-specific guidance

Together they form two layers of grounding:

Structured grounding
Provided by the semantic model, ensuring that queries are interpreted correctly against governed datasets.

Contextual grounding
Provided by knowledge documents, helping the AI agent interpret business concepts and policies.
This combination helps ensure that AI-generated insights remain accurate and aligned with organisational definitions.
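A simplified sketch of the two layers working together might look like the following. The metric names, matching logic and policy text are all invented for illustration; real grounding in Oracle Analytics is far richer than a substring match.

```python
# Toy sketch of two-layer grounding: structured grounding resolves the
# question to a governed metric; contextual grounding attaches the relevant
# business definition from a knowledge document.

GOVERNED_METRICS = {"revenue", "margin", "order value"}

KNOWLEDGE_DOCS = {
    "revenue": "Revenue is recognised at invoice date, net of returns.",
    "margin":  "Margin excludes shipping subsidies per finance policy.",
}

def ground(question: str) -> dict:
    """Resolve a question to a governed metric and its business context."""
    q = question.lower()
    metric = next((m for m in GOVERNED_METRICS if m in q), None)
    return {
        "metric": metric,                       # structured grounding
        "context": KNOWLEDGE_DOCS.get(metric),  # contextual grounding
    }

result = ground("What was margin last quarter?")
print(result["metric"])   # margin
print(result["context"])  # the margin policy text
```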

Why This Matters for Enterprise AI

The introduction of AI agents does not eliminate the need for well-designed analytics architecture.
If anything, it reinforces its importance.

Conversational analytics may change the user interface, but the underlying principles of governed metrics, well-structured semantic models and curated knowledge remain essential.

For architects and data leaders, the lesson is clear:

Successful enterprise AI is not just about models and prompts. It is about grounding those models in trusted, well-structured organisational knowledge.

Subtle Improvements to Oracle Analytics AI Agent Knowledge Management

5/3/2026

When I first wrote about Oracle Analytics AI Agents a couple of months ago, one of the key capabilities I highlighted was the ability to upload documents that provide contextual knowledge to the Oracle Analytics AI Agent.
These documents allow the AI agent to interpret concepts and metrics correctly when answering analytical questions.
While working with the feature again recently, I noticed that Oracle has introduced, in the March 2026 Oracle Analytics Cloud update, a couple of additional configuration options when uploading knowledge documents.
Setting document priority when uploading external knowledge.
Selecting the language associated with the knowledge document.
There is more detail about these new configuration options in this Oracle Analytics video.

These include Priority and Language, which provide more control over how knowledge is retrieved and used by the AI agent.

Document Priority

When uploading a document, you can now assign a priority level.
The available options are:
  • High
  • Regular
  • Low
This setting influences how strongly the document is considered when the AI agent retrieves knowledge to answer a question.
In practice this allows administrators to guide the agent towards more authoritative sources. For example, organisations may choose to assign higher priority to internal documentation or curated domain knowledge while leaving supplementary material at a lower priority.
This is particularly useful when multiple documents are attached to an agent, as this setting can be used to give certain documents precedence.
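As an illustration only, priority-as-tie-breaker retrieval can be sketched like this. The scoring scheme, document names and relevance numbers are invented; this is not Oracle's actual retrieval algorithm.

```python
# Hypothetical sketch: when several uploaded documents could answer a
# question, the High/Regular/Low priority assigned at upload time is used
# before raw relevance, so authoritative sources win.

PRIORITY_RANK = {"High": 0, "Regular": 1, "Low": 2}

docs = [
    {"name": "vendor_whitepaper.pdf",  "priority": "Low",     "relevance": 0.82},
    {"name": "internal_kpi_guide.pdf", "priority": "High",    "relevance": 0.80},
    {"name": "faq.pdf",                "priority": "Regular", "relevance": 0.55},
]

def retrieve(candidates: list[dict]) -> list[dict]:
    """Order candidates by priority first, then by relevance."""
    return sorted(candidates,
                  key=lambda d: (PRIORITY_RANK[d["priority"]], -d["relevance"]))

print(retrieve(docs)[0]["name"])  # internal_kpi_guide.pdf
```

Note how the curated internal guide wins despite a slightly lower relevance score, which is exactly the behaviour the priority setting is meant to encourage.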

Document Language

Another useful addition is the ability to specify the language of the knowledge document.
Oracle Analytics supports multiple languages including English, Spanish, French, German, Japanese and several others.
Setting the language metadata helps the agent retrieve the most appropriate document when answering questions in different languages.
For organisations operating across international teams, this makes it easier to maintain multilingual knowledge sources for a single AI agent.
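A minimal sketch of language-aware document selection follows, assuming a simple exact match on language tags with English as a fallback. The fallback behaviour and tag format are my assumptions for the example, not documented Oracle behaviour.

```python
# Toy sketch: knowledge documents carry language metadata, and the agent
# prefers a document matching the language of the user's question, falling
# back to English if no match exists (an assumption for illustration).

docs = [
    {"name": "kpi_definitions_en.pdf", "language": "en"},
    {"name": "kpi_definitions_fr.pdf", "language": "fr"},
]

def pick_document(candidates: list[dict], question_language: str) -> dict:
    """Prefer a document in the user's language, else fall back to English."""
    for lang in (question_language, "en"):
        match = next((d for d in candidates if d["language"] == lang), None)
        if match:
            return match
    return candidates[0]

print(pick_document(docs, "fr")["name"])  # kpi_definitions_fr.pdf
print(pick_document(docs, "de")["name"])  # kpi_definitions_en.pdf
```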

Why These Enhancements Matter

At first glance, the Priority and Language settings may appear to be minor configuration options. In reality, they represent an important step towards better governance of enterprise AI.
As organisations begin to rely on AI agents to answer analytical questions, the quality and relevance of the underlying knowledge becomes critical. Without clear control over which documents should take precedence, an AI agent could retrieve information from less authoritative sources, potentially leading to inconsistent or misleading responses.

The introduction of document priority allows organisations to guide the retrieval process. Trusted internal documentation, curated knowledge bases, or officially governed metrics definitions can be prioritised over supplementary materials. This helps ensure that responses generated by the AI agent remain aligned with the organisation’s approved definitions and analytical standards.
The language setting also plays an important role in global organisations. Many enterprises maintain documentation in multiple languages across different regions. By tagging documents with the correct language metadata, Oracle Analytics can ensure that the AI agent retrieves the most relevant knowledge for the user’s query.

Taken together, these enhancements move Oracle Analytics AI Agents closer to a governed enterprise AI model, where organisations can not only provide contextual knowledge, but also control how that knowledge is prioritised and interpreted.

For architects and data leaders, this reinforces an important principle: successful AI is not just about models and prompts, but about governed, well-structured knowledge.

    Author

    A bit about me. I am an Oracle ACE Pro, Oracle Cloud Infrastructure 2023 Enterprise Analytics Professional, Oracle Cloud Fusion Analytics Warehouse 2023 Certified Implementation Professional, Oracle Cloud Platform Enterprise Analytics 2022 Certified Professional, Oracle Cloud Platform Enterprise Analytics 2019 Certified Associate and a certified OBIEE 11g implementation specialist.
