Connecting OAC to AIDP is one thing. Governing access by real user identity is the bigger challenge

10/4/2026

Introduction

When I recently wrote about how to connect Oracle Analytics Cloud to the Oracle AI Data Platform catalogue, the focus was on getting the integration working end to end. That matters, because it helps bring analytics closer to the governed enterprise data estate and makes trusted data assets more discoverable to analytics users. However, the more I thought about it, the more I felt that connectivity is only part of the story. In enterprise environments, a successful connection is not the same thing as governed access. If data access is ultimately mediated by the credentials configured in the connection rather than the identity of the actual Oracle Analytics Cloud user, there is a risk that users may be able to see or reach data in ways that do not fully reflect their own entitlements. That matters even more now that analytics and AI are increasingly converging. The challenge is no longer just how a dashboard author connects to data. It is how a platform ensures that access policies follow the real user or agent making the request, and that those policies are enforced consistently, centrally, and with audit traces.

Why the issue matters

At first glance, this can sound like a technical detail about connection setup. In reality, it goes to the heart of enterprise governance. If a shared connection identity is used to broker access, then the logical policy boundary may sit at the connection rather than at the individual user. That can be acceptable for some narrow scenarios, but it becomes much harder to defend in environments where access to data should be based on role, attribute, business context, geography, or sensitivity classification. A simple example helps illustrate the point. Imagine an OAC workbook built over a catalogue-discovered sales dataset exposed through a connection that has been configured with a technical identity.
A regional sales manager in the UK should only see UK records, while a counterpart in Germany should only see German records. If the access path is governed primarily by the shared connection identity rather than the runtime identity of the actual OAC user, then the policy boundary risks being applied too broadly. Even if the workbook itself is shared appropriately, the underlying data access model may still not be enforcing the correct row-level boundaries for each individual viewer. It also becomes harder to explain cleanly to security and audit stakeholders. They do not just want to know that data was accessed through an approved tool. They want confidence that the person or agent requesting the data only saw what they were genuinely entitled to see, and that the control point enforcing that decision was robust.

Why this matters beyond dashboards

This is not only an Oracle Analytics Cloud question. It is increasingly an enterprise AI question too. As organisations move towards agentic workflows, the number of system-to-system interactions grows. Requests may be assembled dynamically, delegated across components, or executed by agents acting on behalf of users. In that sort of world, relying on broad service identities can quickly become uncomfortable. The long-term target should be a consistent model in which analytics users, applications, and AI agents are all governed by the same underlying identity-aware policy framework. Otherwise, there is a risk of ending up with one access model for dashboards, another for APIs, and yet another for agents.

Why Oracle’s recent messaging is interesting

That is why Oracle’s recent direction is worth paying attention to. In Oracle’s recent Enterprise-Ready AI webinar for Oracle AI Data Platform, the messaging was not just about clever prompts or isolated demos. The emphasis was on building dependable, governed AI with the right data foundation, tooling, architecture, and guardrails.
That framing matters, because it places governance and control at the centre of the platform story rather than treating them as afterthoughts. That same direction appears in Oracle’s recent Deep Data Security announcement for Oracle AI Database 26ai. Based on Oracle’s public positioning, Deep Data Security looks set to become a database-native authorisation layer that sits beneath consuming tools such as analytics platforms, applications, APIs, and AI agents. Rather than trusting each consuming layer to implement access rules correctly, Oracle is describing a model in which the database evaluates verified identity and runtime context, then applies declarative SQL policies to enforce row, column, and even cell-level access boundaries centrally. In enterprise AI terms, that places Deep Data Security in the control layer of the stack: below the user experience, below the agent or application orchestration layer, and directly alongside the enterprise data itself. That matters because one of the biggest risks in both analytics and agentic AI is the use of broad service identities. If an analytics connection, application tier, or agent framework connects with more privilege than the end user should actually have, then a mistake, weak application logic, or even prompt injection can expose data too broadly. Oracle’s stated answer is to push least-privilege enforcement down into the database so that agents, analytics workloads, and normal application workloads are all constrained by the same underlying policy model. In other words, even if the request is assembled dynamically by an agent or arrives through a shared connection path, the database should still be able to determine who the real requester is, what context applies, and what subset of data that requester is genuinely allowed to see or act upon. If that model is realised as Oracle is signalling, it would help resolve the exact issue discussed earlier in this article. 
Instead of relying solely on the credentials configured in the OAC connection, the longer-term pattern would be to propagate end-user or agent identity and let database-enforced policies decide access at execution time. That would make it far easier to ensure that human users only see authorised records, that AI agents act within tightly bounded privileges, and that audit trails show which real identity was behind the request. The broader significance is that Deep Data Security is not just another security feature. It has the potential to become a foundational trust layer for enterprise AI, ensuring that both agentic and non-agentic workloads can only access the data they are entitled to, regardless of which tool, interface, or autonomous workflow initiated the request.

What good looks like

For me, a strong enterprise pattern should aim for a few clear outcomes:
The following conceptual view shows how that future-state pattern could fit together across Oracle structured data, non-Oracle structured data, and unstructured content. In this model, AIDP acts as the discovery, orchestration, and semantic coordination layer across the wider estate. It carries identity and policy context from the consuming workload, whether that workload is a standard analytics flow in OAC or an agentic AI workflow. The key point is that both analytics and agentic AI follow the same governance path. Identity is propagated, policy intent is preserved, enforcement happens as close to the data as possible, and audit trails remain tied to the real requester rather than a broad shared connection.

Where this could lead
I think this is where the OAC and AIDP story starts to become much more interesting. The initial connection between Oracle Analytics Cloud and the Oracle AI Data Platform catalogue is useful because it improves discoverability and brings analytics closer to a governed data estate. But the broader enterprise question is how that access path evolves into a model where the real requesting user identity and runtime context drive policy decisions consistently. A forward-looking view is that AIDP could increasingly act as the orchestration and governance-aware access layer across a much wider enterprise data landscape, while Deep Data Security becomes the database-resident policy enforcement layer beneath it. In that sort of model, AIDP would not just catalogue Oracle-native assets. It could also organise access to non-Oracle platforms, federated structured sources, and unstructured content stores, while carrying identity context, workload context, and policy intent across analytics, applications, and agentic AI workflows. That matters because enterprise AI is rarely confined to a single vendor estate. Valuable business context may sit in Oracle databases, third-party cloud platforms, SaaS applications, object storage, document repositories, and unstructured sources such as PDFs, transcripts, emails, and reports. For Oracle-managed structured data, Deep Data Security could provide the strongest final trust boundary by evaluating propagated identity and context at execution time and enforcing least-privilege access directly in the database. For non-Oracle structured data and unstructured repositories, the same principle should still apply, with AIDP acting as the policy-aware coordination layer and source-native controls enforcing access locally. In that sense, Deep Data Security could become the Oracle data-plane enforcement pattern, while AIDP provides the broader control-plane and orchestration pattern across heterogeneous enterprise sources. 
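To make the identity-propagation idea concrete, here is a deliberately simplified Python sketch. Every name and structure in it is invented for illustration (it is not Oracle's API, and Deep Data Security's actual policy syntax has not been published): the point is only the principle that the request carries the verified end-user or agent identity, and row-level policy is evaluated at the data plane against that identity rather than against a shared connection identity.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RequestContext:
    user: str      # verified end-user or agent identity, propagated with the request
    workload: str  # e.g. "oac-workbook" or "ai-agent" (hypothetical labels)

# Row-level entitlements keyed by the real requester (hypothetical data).
USER_REGIONS = {"uk_manager": {"UK"}, "de_manager": {"DE"}}

SALES = [
    {"order_id": 1, "region": "UK", "amount": 1200},
    {"order_id": 2, "region": "DE", "amount": 900},
    {"order_id": 3, "region": "UK", "amount": 450},
]

def query_via_shared_connection(rows):
    """Anti-pattern: the shared connection identity is the policy boundary,
    so every viewer sees every row the connection can see."""
    return rows

def query_with_propagated_identity(rows, ctx):
    """Future-state pattern: enforcement happens at the data plane using
    the propagated identity, regardless of how the request arrived."""
    allowed = USER_REGIONS.get(ctx.user, set())
    return [r for r in rows if r["region"] in allowed]

ctx = RequestContext(user="uk_manager", workload="ai-agent")
print(len(query_via_shared_connection(SALES)))          # 3 rows: too broad
print(len(query_with_propagated_identity(SALES, ctx)))  # 2 rows: UK only
```

Whether the requester is a UK sales manager in a workbook or an agent acting on that manager's behalf, the same entitlement lookup applies, which is exactly the "one policy model for dashboards, APIs, and agents" outcome argued for above.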
This future-state view directly addresses the limitation observed in the current OAC connection pattern. Today, the concern is that the configured connection credentials can become the effective access identity. In the model described here, that burden shifts away from the shared connection and back towards propagated user or agent identity, with policy enforcement happening at execution time as close to the data as possible. Seen this way, AIDP and Deep Data Security are not competing controls but complementary parts of the same enterprise AI stack: AIDP organises, exposes, and orchestrates access across structured and unstructured assets, while Deep Data Security provides the final database-enforced trust boundary for Oracle-managed data, with federated controls playing the equivalent role for external sources.

Closing thoughts

Connecting OAC to AIDP is one thing. Extending that into a model where access is governed by real user identity across analytics and AI workflows is the next important step. That does not reduce the value of the integration. If anything, it reinforces why it matters. As analytics platforms become more tightly connected to broader enterprise data and AI ecosystems, the quality of the security and governance model becomes even more important. From Oracle’s recent messaging, it is clear that this bigger picture is very much on the radar. The direction of travel appears to be towards a more identity-aware, policy-driven, and enterprise-ready model. We are not fully there yet, but it is an important space to watch as the platform continues to evolve.
The rapid evolution of AI driven analytics is changing how users interact with data. Instead of navigating dashboards or writing queries, users can now ask natural language questions and receive analytical insights instantly. Oracle Analytics AI Agents are a good example of this shift. They allow users to explore data conversationally while combining structured analytics with contextual knowledge. At first glance it may appear that traditional components of business intelligence architecture such as the semantic model are becoming less important in this new AI driven world. In reality, the opposite is true. As organisations introduce AI agents into their analytics platforms, the semantic model becomes even more important because it provides the structure and governance required to interpret enterprise data correctly.

Conversational Analytics in Oracle Analytics

Oracle Analytics AI Agents allow users to ask analytical questions directly against governed datasets. For example, the agent can analyse football club performance data and generate insights from natural language questions. This conversational interface makes analytics far more accessible, but it also raises an important architectural question: How does the AI agent understand what the data actually means?

The Semantic Model as the Foundation of Enterprise Analytics

Enterprise data is typically stored in structures optimised for storage and processing rather than business interpretation. Tables may contain technical column names, encoded values or highly normalised structures that make sense to engineers but not necessarily to business users. The semantic model solves this problem by defining the business meaning of the data. Within Oracle Analytics, the semantic model provides:
This structure allows the platform to interpret analytical questions consistently.

Governing Business Metrics

Another critical role of the semantic layer is defining the organisation’s key metrics. Metrics such as invoice amount, revenue, order value or customer counts often require precise definitions and calculations. These definitions are implemented directly within the semantic model. By centralising metric definitions, Oracle Analytics ensures that dashboards, reports and AI agents all rely on the same authoritative calculations. This prevents inconsistencies and ensures that analytical answers remain aligned with business definitions.

Knowledge Documents: Adding Context to AI Agents

While the semantic model defines the structure of enterprise data, Oracle Analytics AI Agents can also use knowledge documents to provide additional context. These documents may contain:
Administrators can now specify both document priority and document language. Document priority allows organisations to control which documents are treated as more authoritative when the AI agent retrieves knowledge. For example, curated internal documentation may be prioritised over supplementary material. The language setting allows organisations operating across multiple regions to maintain multilingual knowledge sources for a single AI agent.
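The selection behaviour just described can be sketched in a few lines of Python. The mechanics here are invented for illustration (the document store, field names, and priority convention are assumptions, with a lower number meaning more authoritative); only the behaviour — filter by the question's language, then prefer higher-priority documents — comes from the description above.

```python
# Hypothetical document store: each entry carries a priority and a language.
DOCS = [
    {"name": "glossary_en", "priority": 1, "language": "en"},
    {"name": "notes_en",    "priority": 3, "language": "en"},
    {"name": "glossary_de", "priority": 1, "language": "de"},
]

def select_documents(docs, question_language):
    """Keep documents matching the question's language, ordered with the
    most authoritative (lowest priority number) first."""
    matching = [d for d in docs if d["language"] == question_language]
    return sorted(matching, key=lambda d: d["priority"])

print([d["name"] for d in select_documents(DOCS, "en")])
# ['glossary_en', 'notes_en']
```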
This ensures that the agent retrieves the most relevant document based on the language of the user’s question.

Semantic Models and Knowledge Documents Working Together

The semantic model and knowledge documents play complementary roles in grounding AI generated answers. The semantic model provides:
Together they form two layers of grounding:

Structured grounding: provided by the semantic model, ensuring that queries are interpreted correctly against governed datasets.

Contextual grounding: provided by knowledge documents, helping the AI agent interpret business concepts and policies.

This combination helps ensure that AI generated insights remain accurate and aligned with organisational definitions.

Why This Matters for Enterprise AI

The introduction of AI agents does not eliminate the need for well designed analytics architecture. If anything, it reinforces its importance. Conversational analytics may change the user interface, but the underlying principles of governed metrics, well structured semantic models and curated knowledge remain essential. For architects and data leaders, the lesson is clear: Successful enterprise AI is not just about models and prompts. It is about grounding those models in trusted, well structured organisational knowledge.

At a recent Oracle Analytics Partner Meeting, one demo stood out to me (the others were great as well!) - the new AI Agent for Oracle Analytics Cloud (OAC). I’ve since spoken further with the product manager and been granted early access ahead of its LA (limited availability) in the November 2025 release, and I can already see the foundations of something significant taking shape. At first glance, the OAC AI Agent looks and feels similar to the Fusion AI Agent Studio - and that’s no coincidence. Oracle appears to be unifying its Redwood AI agent look and feel across platforms, enabling analytics, applications, and custom experiences to have a unified user experience. In OAC, this translates into an embedded conversational interface that sits directly within your analytics workspace. Ask a question, and the agent doesn’t just return a text summary - it understands your semantic model, data lineage, and context before generating a response.
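Mechanically, "understanding the semantic model" means that a business term in the question is resolved to its single governed definition before any query is generated. A toy Python sketch of that resolution step, with entirely invented metric names and column mappings (Oracle's actual semantic model resolution is far richer than this), looks like:

```python
# Hypothetical semantic model: business terms mapped to governed definitions.
SEMANTIC_MODEL = {
    "revenue":   {"column": "SALES_F.AMOUNT", "aggregation": "SUM"},
    "customers": {"column": "CUSTOMER_D.ID",  "aggregation": "COUNT_DISTINCT"},
}

def resolve_metric(question):
    """Match a governed metric mentioned in the question to its one
    authoritative definition in the semantic model."""
    for term, definition in SEMANTIC_MODEL.items():
        if term in question.lower():
            return term, definition
    return None  # no governed metric recognised

term, definition = resolve_metric("What was total revenue last quarter?")
print(term, definition["aggregation"])  # revenue SUM
```

Because every dashboard, report, and agent resolves "revenue" to the same definition, the conversational answer stays aligned with the governed calculation rather than a model's guess.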
From Chatty to Knowledgeable: The Librarian Analogy

To understand what makes this so important, it helps to think of the AI Agent as a librarian. A large language model (LLM) on its own is like a well-spoken librarian with an excellent memory but no access to your organisation’s archive. Ask them a question, and they’ll respond confidently and eloquently, but they’re drawing only on general world knowledge and patterns they’ve learned before. The result often sounds convincing, yet it may lack the precision or evidence that a business decision demands. The OAC AI Agent, on the other hand, gives that librarian the keys to your private archive. When you ask a question, they don’t just rely on memory and their extensive real-world knowledge; they walk into your own library of governed data, reports, and documents, retrieve the most relevant material, and then craft a response grounded in fact. That’s the power of Retrieval-Augmented Generation (RAG) - it lets Oracle’s AI Agent combine the fluency of language models with the factual grounding of your enterprise knowledge.

How the OAC AI Agent Works

Creating an AI Agent in Oracle Analytics Cloud

To begin creating an AI Agent, navigate to the menu and select the Create AI Agent option. This initiates the process and brings you directly to the AI Agent configuration. Immediately upon entering the configuration screen, you are prompted to add a dataset that will serve as the foundation for the AI Agent. It is essential to ensure that this dataset has already been indexed and appropriate synonyms for attributes have been configured. These preparatory steps are crucial for enabling the AI Agent to effectively leverage the dataset and provide meaningful, context-aware responses.
You are then taken to the configuration screen.

Configuring and Supplementing the OAC AI Agent

Step 1: Entering Supplemental Instructions

Begin by providing supplemental information that offers the agent additional context regarding its specific use case. Additional prompt instructions will help the agent better interpret user questions on a functional domain. This ensures the AI Agent is tailored to the unique requirements and environment it will operate within.

Step 2: Defining the First Message

The First Message serves as an introductory text displayed to users interacting with the agent. It describes the agent’s purpose and sets expectations for what the agent is designed to achieve.

Step 3: Saving the Agent

After all relevant information has been entered, proceed to save the agent. This action records the configuration and prepares the agent for further enhancement.

Step 4: Supplementing with Documents

Once the agent has been saved, you can enhance its capabilities by supplementing the previously entered contextual information with additional documents. Uploading these documents grounds the agent in your organisation’s custom enterprise knowledge, allowing it to provide more accurate and relevant responses.

OAC AI Agent: Technical Foundations

At its core, the OAC AI Agent leverages the vector search capabilities of the Oracle infrastructure which forms the backbone of OAC. This vector search enables the agent’s retrieval augmented generation (RAG) functionality, allowing it to efficiently surface relevant information in response to user queries. The OAC AI Agent achieves this by integrating three essential components, each playing a critical role in transforming natural-language questions into trustworthy, contextual insights.

1. Intent Recognition (LLM Layer)

The large language model (LLM) layer is responsible for interpreting what the user is seeking.
It analyses the natural-language query to determine the user’s intent and aligns this intent with relevant datasets, key performance indicators (KPIs), or dashboards available within OAC.

2. Retrieval Layer (RAG Engine)

Once the user’s intent has been established, the agent’s retrieval layer searches for pertinent content across a range of defined governed sources. This process begins with OAC’s own semantic model and expands to include external knowledge repositories. Examples include custom knowledge files that have been uploaded to the system or supplemental information defined in the AI agent.

3. Response Rendering (OAC Context)

After retrieving the necessary data and knowledge, the information passes through Oracle’s Analytics Visualisation framework. The agent then generates a natural-language response that is firmly rooted in verified data, ensuring that every response respects OAC’s metadata, data lineage, and security protocols.

Key Features and Considerations

Dataset Preparation and Management
How the OAC AI Agent Delivers Value

The OAC AI Agent produces responses that are designed to be highly effective for business users. This is achieved through a combination of generative AI capabilities, robust grounding in enterprise knowledge, and adherence to organisational standards.
This unique blend of conversational fluency and factual accuracy distinguishes the OAC AI Agent from standalone chat-based AI tools, delivering responses that are both engaging and trustworthy for enterprise use.

Early Days, Big Potential
Let’s be clear — this feature is in its infancy. The current build focuses on natural-language exploration incorporating Retrieval-Augmented Generation (RAG) and narrative generation, with a roadmap that will expand its reasoning and automation capabilities over time. What’s exciting isn’t just the interface, but the architecture that’s emerging beneath it. For the first time, Oracle Analytics is embracing Retrieval-Augmented Generation (RAG). That means the AI Agent won’t rely solely on a large language model to generate responses. Instead, it will retrieve and ground its output in enterprise data and knowledge — both structured and unstructured. In practical terms, this opens the door for analysts and business users to ask questions that blend internal data with documents, policies, reports, and contextual information stored across the organisation. Whether it’s sales performance data, a product specification PDF, or a customer-service transcript, the AI Agent will eventually be able to bring these sources together to deliver context-aware insights.

Bringing Unstructured Knowledge into the Analytics Conversation

Historically, analytics platforms have struggled to bridge the gap between structured data (tables, metrics, and KPIs) and unstructured information (documents, notes, images, or messages). With RAG, Oracle is moving to close that gap. This isn’t just about generating summaries — it’s about creating a richer, more informed analytical experience. Imagine asking: “What were the main factors behind last quarter’s decline in customer satisfaction?” Today, OAC might point you to a metric or dashboard. With RAG, the AI Agent could augment that response with context drawn from call-centre transcripts, customer feedback reports, or support documentation — all retrieved securely from enterprise knowledge stores. The result is a shift from data-driven insights to knowledge-driven understanding.
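The retrieve-then-ground loop behind that customer-satisfaction example can be shown with a deliberately tiny Python sketch. The content, source names, and word-overlap scoring are all invented stand-ins (a real RAG engine would use vector similarity over embeddings, as the technical-foundations section notes); the shape of the flow is the point: retrieve the most relevant governed snippets, then compose an answer only from what was retrieved.

```python
# Toy knowledge base spanning structured (dashboard) and unstructured sources.
KNOWLEDGE = [
    {"source": "dashboard",  "text": "customer satisfaction fell 8 percent in q3"},
    {"source": "transcript", "text": "callers mention slow support response times"},
    {"source": "policy",     "text": "travel expenses require manager approval"},
]

def retrieve(question, k=2):
    """Rank snippets by naive word overlap with the question — a crude
    stand-in for vector similarity search — and keep the top matches."""
    q_words = set(question.lower().split())
    scored = [(len(q_words & set(doc["text"].split())), doc) for doc in KNOWLEDGE]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:k] if score > 0]

def grounded_answer(question):
    """Compose the response only from retrieved, governed snippets,
    citing each snippet's source."""
    hits = retrieve(question)
    return " | ".join(f"[{d['source']}] {d['text']}" for d in hits)

print(grounded_answer("why did customer satisfaction decline"))
```

Because the answer is assembled from retrieved snippets with their sources attached, it stays explainable — which is precisely the governance property the next section emphasises.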
Governed Intelligence, Oracle Style

One of the key advantages here is governance. Unlike standalone chatbots, the OAC AI Agent inherits the same security, metadata, and lineage controls that underpin Oracle Analytics. Responses remain explainable, consistent, and aligned with the organisation’s governed data model — ensuring that insights stay reliable even as AI becomes more conversational. This approach also complements Oracle’s broader AI ecosystem. The same underlying framework powers Fusion Applications and APEX AI Agents. As these services evolve, we can expect deeper integration, shared prompt orchestration, and unified management of knowledge sources across the Oracle Cloud stack.

Looking Ahead

The OAC AI Agent represents a starting point, not a destination. It’s a glimpse into where analytics is heading — from dashboards and KPIs towards context-aware conversations grounded in enterprise knowledge. As I explore this feature further through early access, I’ll be focusing on:
For now, it’s early days — but the direction is clear. With the AI Agent, Oracle Analytics isn’t just adding generative AI to dashboards; it’s laying the foundation for a new class of governed, knowledge-aware analytics experiences. Stay tuned — I’ll share a deeper hands-on review once the November 2025 update goes live.

In the previous post, we traced how Fusion Data Intelligence (FDI) evolved from OBIA. In this second instalment of our FDI‑introductory series, you’ll explore the underlying technology and architecture that power FDI’s cloud-native analytics platform.

2. The FDI Architecture Ecosystem (The “Big Picture”)

At its core, Fusion Data Intelligence (FDI) is a fully managed, cloud-native analytics platform running on Oracle Cloud Infrastructure (OCI). It stitches together your Fusion Cloud Applications, Oracle-managed data pipelines, Autonomous Data Warehouse (ADW), and Oracle Analytics Cloud (OAC) into a seamless, scalable end-to-end analytics solution - one that Oracle deploys, operates, and continuously evolves for you (there is some configuration that administrators need to carry out). First, Fusion Cloud SaaS applications - including ERP, HCM, SCM and CX pillars - serve as the transactional data sources. Oracle provides prebuilt ingestion pipelines tailored to each functional pillar, handling everything from data extraction and change data capture (CDC) to transformation and consistent mapping into analytics-ready format. These pipelines write data directly into an OCI-hosted Autonomous Data Warehouse, which transforms and loads the Fusion data into a unified star-schema data model covering multiple functional domains. The schema is:
Once data arrives in the Autonomous Data Warehouse (ADW), Oracle Analytics Cloud takes over for semantic modelling and visualisation. A prebuilt semantic layer wraps the raw star schema into business-friendly subject-area views - covering finance, human resources, supply chain and customer experience - complete with standardised key metrics and dashboards. Through OAC, FDI delivers not just dashboards but intelligent, action-driven analytics, featuring natural-language querying, ML-based forecasting and anomaly detection to name just a few.

🔗 Summary Flow
This end-to-end ecosystem is fully managed by Oracle - covering provisioning, upgrades, performance tuning, and integration with Fusion App releases - offering a friction-free, scalable approach to enterprise analytics (there is some configuration that needs to be done by administrators).

3. Data Movement & Integration

FDI’s data movement layer is built around Oracle-managed, prebuilt pipelines that automate ELT and Change Data Capture (CDC) for Fusion Applications (ERP, HCM, SCM, CX). These pipelines are configured and controlled through the intuitive FDI Console, making it easy for administrators to activate, modify or schedule updates with minimal effort. You don’t need to build complex ETL processes - Oracle handles the heavy lifting, while you focus on business relevance and reporting needs. By default, data pipelines are incremental with zero downtime, keeping analytics up-to-date without interrupting service. You also have the flexibility to perform on-demand full reloads, useful for data corrections or model updates - all managed with just a few clicks in the Console. Crucially, the architecture supports extensibility in two key ways:
All pipelines and augmentations are managed through the FDI Console. As an administrator, you can configure initial parameters - such as extract start dates, currency preferences, and schedule frequency - directly in the console interface. Any subsequent edits to pipelines, functional areas, or augmentations are seamless, with Oracle handling deployment and execution behind the scenes.

✅ Summary: Core Benefits of FDI Pipelines
4. Lakehouse & Warehousing Foundation

At the heart of Fusion Data Intelligence lies a star-schema model deployed on Oracle’s Autonomous Data Warehouse (ADW) - a cloud-native, self-tuning database that underpins fast, enterprise-grade reporting and analytics. Here’s how it’s structured and why it matters:

⚙️ Prebuilt Star Schema in ADW

When FDI is provisioned, Oracle automatically creates a prebuilt star schema in ADW. This schema includes fact tables and a network of conformed dimensions - shared across multiple functional areas - that serve as the glue for cross-pillar analytics. Common dimensions include:
These shared dimensions enable users to analyse, for example, how procurement spend (SCM) impacts cash flow (finance), or how HR-driven workforce changes correlate with sales performance - a cross-functional insight made possible by a common semantic backbone.

🏗️ Support for External Data & Custom Schemas

FDI doesn’t just ingest Fusion source data - it enables easy integration of external datasets into the same ADW environment. Whether it’s non-Oracle systems, legacy data, purchased data feeds, or even weather information, FDI supports loading external tables into custom schemas that can extend the star schema and semantic model. This extensibility is key to bridging out-of-the-box analytics with bespoke business insights - enhancing customer segmentation, supplying additional cost drivers to per-product profitability, or blending external KPIs directly alongside Fusion metrics.

🔍 Benefits of the Lakehouse Foundation
Under the hood, FDI’s star-schema in ADW provides a robust, extensible greenfield analytics foundation. Built on conformed dimensions and a scalable data warehouse, it enables seamless mash-ups of Fusion data with external sources, supporting rich, multi-domain analytics that truly span the enterprise.

5. Semantic Layer & Pre‑Built Metrics

FDI abstracts hundreds of physical tables into logical business subject areas - finance (GL profitability, AP ageing, AR revenue, Trial Balance), HCM (talent acquisition, workforce core), procurement (spend, POs), and CX (campaign ROI, opportunity pipeline) - all underpinned by conformed dimensions. It includes a KPI library with over 2,000 standard metrics, accessible via Oracle Analytics Cloud’s intuitive key-metric editor and drag‑and‑drop visualisations. In essence, this semantic layer creates a unified business vocabulary that simplifies reporting and ensures consistency across the enterprise.

🔐 Complementing Fusion-Defined Security

FDI leverages Fusion’s built-in role-based security model, so the semantic layer inherits data roles, duty roles, and row/object-level filters defined in Fusion Cloud Applications. Access control is enforced through the Oracle Identity and Access Management (IAM) Service and the FDI Console, ensuring that users only see data they’re authorised to view. This unified approach simplifies administration and compliance by avoiding double entry of security definitions.

🧩 Hiding Complexity Through Logical Abstraction

Rather than exposing raw tables, FDI offers a logical semantic layer that shields users from underlying complexity. Here’s what it achieves:
✅ Summary: User Experience & Governance Wins
6. Visualisation and Intelligent Dashboards
7. Governance, Security & Lineage

Fusion Data Intelligence isn’t just about delivering insights - it’s built on a robust foundation of security, governance and data lineage that brings trust, safety, and compliance to the analytics lifecycle.

🔐 Security Inherited from Fusion & Managed via OCI IAM

FDI inherits its security framework directly from Fusion Cloud Applications. Role-based access, including data roles and duty roles configured in Fusion, is seamlessly enforced within the FDI semantic layer and Autonomous Data Warehouse (ADW). This ensures that users can access only the data they are authorised to see - without duplicating access definitions in multiple systems. User and group management within FDI is handled through OCI’s Identity and Access Management Service (IAM). You can sync your Fusion App users and roles into OCI IAM or manage them natively via OCI, and then assign access through system and job-specific groups tailored to FDI. This 1:1 mapping ensures governance is inherited and consistent across both transactional and analytics layers. Oracle also manages infrastructure-level security - covering upgrades, patching, encryption, IAM policy enforcement, key management, and auditing - helping to maintain compliance and relieve the operational burden on your team.

🧭 Data Lineage & Quality Built-In

Trusted analytics demand transparency - and FDI delivers that through built-in data lineage and validation mechanisms. The system tracks the flow of data from source tables in Fusion Apps, through ingestion pipelines, into curated star schemas, and finally into Semantic Layer metrics and dashboards. Fusion SCM Analytics documentation provides end‑to‑end lineage spreadsheets that detail column‑ and table-level mappings, making it easy to trace every KPI back to its source fields.
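Conceptually, those lineage spreadsheets describe a chain of mappings that can be walked from a KPI back to its originating Fusion field. The toy Python sketch below illustrates that walk; the KPI name, table names, and column mappings are all invented for illustration and do not come from Oracle's actual lineage documentation.

```python
# Hypothetical column-level lineage: each entry maps a node to its
# immediate upstream source.
LINEAGE = {
    "AP Ageing KPI":              "DW_AP_INVOICE_F.AMOUNT_DUE",
    "DW_AP_INVOICE_F.AMOUNT_DUE": "FUSION.AP_INVOICES.AMOUNT_REMAINING",
}

def trace(node, lineage):
    """Follow the mapping chain until a node has no further upstream
    entry, returning the full path from KPI to source field."""
    path = [node]
    while node in lineage:
        node = lineage[node]
        path.append(node)
    return path

path = trace("AP Ageing KPI", LINEAGE)
print(path[-1])  # FUSION.AP_INVOICES.AMOUNT_REMAINING
```

Being able to replay this chain on demand is what turns "trust the dashboard" into "verify the dashboard" for audit conversations.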
You can also monitor pipeline activity in the FDI Console, which records execution timestamps, row counts, and error logs - providing a clear audit trail of data loads and transformations. Further, FDI includes validation metrics that reconcile data loaded into ADW against transactional data in Fusion. These can be scheduled or run on‑demand, with reports surfaced directly in OAC - making it easy to identify data drift or discrepancies and swiftly pinpoint areas for correction. ✅ Summary: Trust, Safety, and Compliance
8. Why This Architecture Matters for Organisations 🚀 Fusion Data Intelligence goes far beyond traditional BI. It sits at the heart of Oracle’s broader Data Intelligence Platform, delivering a unified, 360° view across all enterprise data - transactional, analytical, structured, and unstructured. 🌟 A Unified Data-Intelligence Ecosystem Unlike legacy stacks - OBIA, ODI, siloed data centres - FDI is built on Oracle’s next-generation Data Intelligence Platform. It blends data lakes, Autonomous Data Warehouse, Oracle Analytics Cloud, OCI AI services, and GoldenGate streaming into a seamless, managed ecosystem. This means organisations can now handle batch and real-time data, include external sources, and apply AI/ML - all within one secure environment. This is Oracle's stated vision: the Data Intelligence Platform has been announced but is not yet generally available. 🔄 Consistent Insights Across Pillars FDI’s architecture supports conformed dimensions and shared semantic models spanning finance, HR, SCM, and CX. This allows for unified KPIs and analytics, enabling stakeholders to ask and answer cross-domain questions like:
The result is enterprise-wide analytics based on a single source of truth. 💡 Full Extensibility with Governed Access As part of Oracle’s Data Intelligence Platform, FDI is fully extensible. Users can bring in external datasets, extend semantic models, build custom analytics, and consume OCI AI services - all within Oracle’s security framework. Governed self-service means broad analytical freedom without compromising data integrity. 🛠 Evergreen Platform, Zero Infrastructure Burden The platform is fully managed and evergreen. Oracle handles everything - from provisioning, patching, tuning, and upgrades to integrating the latest AI services. Teams can focus on driving value rather than wrestling with infrastructure. 🎯 Summary: Strategic Differentiators
As you’ve seen, Fusion Data Intelligence delivers a fully managed, cloud-native analytics ecosystem - bringing together Fusion SaaS, Oracle’s Autonomous Data Warehouse, and Analytics Cloud under one secure, AI-enhanced platform. It unifies data across domains, embeds intelligent insights and governance, and eliminates legacy complexity - truly delivering on Oracle’s vision of a Data Intelligence Platform. Now it’s your turn: take a moment to reflect on how FDI could accelerate insight‑driven transformation in your organisation.
The rise of Agentic AI is transforming the analytics landscape, but it comes with an often-overlooked challenge: database strain. Traditionally, operational databases are ringfenced to prevent unstructured, inefficient queries from affecting critical business functions. However, in a world where AI agents dynamically generate and execute SQL queries to retrieve real-time data, production databases are facing unprecedented pressure. Additionally, Retrieval-Augmented Generation (RAG), a rapidly emerging AI technique that enhances responses with real-time data, is further intensifying this issue by demanding continuous access to up-to-date information. RAG works by supplementing AI-generated responses with live or external knowledge sources, requiring frequent, real-time queries to ensure accuracy. This puts even more strain on traditional database infrastructures. In a previous blog post, I looked at how Agentic AI will improve the experience for users of the Oracle Analytics ecosystem. This blog explores the risks of this architectural shift, in which AI agents are at odds with traditional RDBMS architecture, why traditional solutions such as database cloning fall short, and how modern data architectures like data lakehouses and innovative storage solutions can help mitigate these challenges. We also examine the implications for the Oracle Analytics Platform, where these changes could impact both data accessibility and performance. The Problem: AI Agents, RAG & Uncontrolled Query Load A well-managed production database is typically shielded from unpredictable query loads. Database administrators ensure that only structured, optimised workloads access production systems to avoid performance degradation. But with Agentic AI and RAG, that fundamental principle is breaking down. Instead of a few human analysts running queries, organisations may now have dozens or even hundreds of AI agents autonomously executing SQL queries in real time.
These queries are often:
This creates significant challenges for traditional RDBMS architectures, which were not designed to handle the scale and unpredictability of AI-driven workloads. With Retrieval-Augmented Generation (RAG) in particular, AI models require frequent access to real-time data to enhance their outputs, placing additional stress on transactional databases. Since these databases were optimised for structured queries and controlled access, the introduction of AI-driven workloads risks causing slowdowns, performance degradation, and even system failures. For users of Oracle Analytics, this shift presents serious performance implications. If production databases are overwhelmed by AI-driven queries, query response times increase, dashboards lag, and real-time insights become unreliable. Additionally, Oracle Analytics’ AI Assistant, Contextual Insights, and Auto Insights features, which rely on efficient access to data sources, could suffer from delays or inaccuracies due to excessive load on transactional systems. To mitigate this, organisations must rethink their database strategies, ensuring that AI workloads are governed, optimised, and properly distributed across more scalable architectures. The Traditional Approach: Cloning Production Data One way that organisations have attempted to address this issue is by cloning production databases on a daily or weekly basis to offload AI-driven queries. However, this approach presents several major drawbacks:
For Oracle Analytics users, these challenges could lead to outdated insights, reduced trust in AI-generated recommendations, and a poor user experience due to lagging or inconsistent data. Given these drawbacks, it’s clear that cloning is not a viable long-term solution for handling the database demands of Agentic AI. A Shift in Data Architecture: Data Lakes & Lakehouses Instead of relying on traditional RDBMS architectures, organisations are increasingly adopting data lakes and lakehouses to support AI-driven analytics. These architectures offer several key advantages:
For users of Oracle Analytics, this shift could mean that existing reports and dashboards need to be refactored to work efficiently with a lakehouse structure, adding additional effort and complexity. Optimising Performance with Modern Storage Solutions Beyond adopting new architectural patterns, organisations can leverage modern storage solutions like Silk to mitigate the strain on production databases. Silk provides a virtualised, high-performance data layer that optimises storage performance and scalability without requiring a complete architectural overhaul. By using Silk or similar intelligent storage virtualisation and caching technologies, organisations can:
For organisations using Oracle Analytics, integrating such solutions could help sustain real-time data access while alleviating the performance burden on production databases. However, despite these advantages, storage virtualisation and caching solutions are not a panacea. Organisations must still ensure that their AI workloads are properly governed to prevent excessive resource consumption, and they need to assess whether virtualised storage aligns with their broader data architecture and security policies.
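To make the governance point above concrete, here is a minimal sketch of an access layer that routes agent-generated SQL to a read replica, enforces a per-agent query budget, and caches repeated results. It is purely illustrative: SQLite stands in for the production and replica databases, and names such as `route_query` and `MAX_QUERIES_PER_AGENT` are invented for this example, not a real Oracle or Silk API.

```python
import sqlite3

# Illustrative only: SQLite stands in for production/replica databases, and
# route_query / MAX_QUERIES_PER_AGENT are hypothetical names, not a real API.
MAX_QUERIES_PER_AGENT = 3

production = sqlite3.connect(":memory:")  # never touched by agents below
replica = sqlite3.connect(":memory:")     # offload target for AI workloads
for db in (production, replica):
    db.execute("CREATE TABLE sales (region TEXT, amount REAL)")
    db.execute("INSERT INTO sales VALUES ('UK', 100), ('DE', 200)")
    db.commit()

query_counts: dict[str, int] = {}   # per-agent budget tracking
result_cache: dict[str, list] = {}  # naive cache for repeated agent SQL

def route_query(agent_id: str, sql: str) -> list:
    """Serve agent SQL from cache or the replica, within a fixed budget."""
    if sql in result_cache:
        return result_cache[sql]  # cache hit costs no budget and no DB load
    used = query_counts.get(agent_id, 0)
    if used >= MAX_QUERIES_PER_AGENT:
        raise RuntimeError(f"agent {agent_id} exceeded its query budget")
    query_counts[agent_id] = used + 1
    rows = replica.execute(sql).fetchall()
    result_cache[sql] = rows
    return rows
```

The design choice worth noting is that the production connection is never exposed to agents at all: governance is enforced structurally at the routing layer, not by trusting each agent to behave.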
Conclusion: Preparing for the Future of AI-Driven Analytics Agentic AI and RAG are here to stay, and with them comes a fundamental shift in how data is accessed and managed. However, blindly allowing AI-driven queries to run against production databases is not a sustainable solution. To support the evolving demands of AI, organisations must modernise their data strategies by:
For Oracle Analytics users, this shift will require rethinking how data is stored, accessed, and processed to ensure that the platform continues to deliver timely insights without compromising performance. The key takeaway? Traditional database architectures were not designed for AI-driven workloads. To fully embrace the potential of Agentic AI and RAG, organisations must rethink their data foundations - or risk being left behind. How is your organisation adapting to the challenges of AI-driven analytics? Let’s continue the conversation in the comments!
As organisations strive to make faster, smarter decisions, analytics tools must evolve to offer more than static dashboards and manual data exploration. Enter Contextual Insights, a game-changing feature set to debut in Oracle Analytics Cloud (OAC) as part of the January 2025 update.
Thanks to the Oracle Analytics Product Management team, I was given early access to this feature and have had the opportunity to explore it hands-on. In this blog, I’ll share my insights, experience, and feedback to help you understand the transformative potential of Contextual Insights. What Are Contextual Insights? Contextual Insights are dynamically generated insights that appear based on the context of the data being analysed. Powered by Oracle’s advanced AI algorithms, these insights surface anomalies, trends, and patterns without requiring users to manually search for them. For instance, imagine you’re analysing sales data for a retail chain. Contextual Insights could highlight a sudden drop in sales for a specific region or a spike in returns for a particular product category - all without you needing to ask. Key Features of Contextual Insights 1. AI-Powered Recommendations Contextual Insights leverage AI to suggest deeper analysis opportunities, such as exploring correlations between variables or detecting unusual behaviour in your data. 2. Dynamic Visualisations The insights are not just textual; they include visual representations such as trend lines, scatter plots, or bar charts, making the findings easier to understand at a glance. 3. User-Centric Design These insights are tailored to the user’s role and the context of their query, ensuring that the most relevant information is surfaced. 4. Seamless Integration Contextual Insights work seamlessly with other OAC features like Auto Insights, Narratives, and the Natural Language Query (NLQ) interface.
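Oracle has not published the algorithms behind Contextual Insights, but a simple way to picture the kind of anomaly surfacing involved is a deviation check over a metric. The sketch below is purely illustrative (the function name, threshold, and data are invented for this example and are not Oracle's implementation): it flags any month whose sales sit unusually far from the mean.

```python
from statistics import mean, stdev

# Purely illustrative: a simple deviation check of the kind a feature like
# Contextual Insights might apply to a metric. NOT Oracle's implementation.
def find_anomalies(values: dict[str, float], threshold: float = 1.5) -> list[str]:
    """Return keys whose values lie more than `threshold` sample standard
    deviations from the mean of all values."""
    mu = mean(values.values())
    sigma = stdev(values.values())
    return [k for k, v in values.items() if abs(v - mu) > threshold * sigma]

# Hypothetical monthly sales for one region: June shows the sudden drop
# that a contextual insight would surface without being asked.
monthly_sales = {
    "Jan": 100.0, "Feb": 102.0, "Mar": 98.0, "Apr": 101.0,
    "May": 99.0, "Jun": 40.0,
}
```

Here `find_anomalies(monthly_sales)` flags June, mirroring the "sudden drop in sales" example above; the real feature layers this kind of detection with role and context awareness.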
How Contextual Insights Fit Into OAC
Oracle has designed Contextual Insights to seamlessly integrate into the Oracle Analytics Cloud (OAC) experience, enhancing usability while preserving the intuitive workflows users rely on. This integration ensures that insights appear naturally during the analytical process, enriching user interactions without introducing complexity. Key Examples of Integration:
• Enhanced Visualisation Analysis: When users view a visualisation, Contextual Insights proactively surfaces anomalies, outliers, or unexpected trends specific to the data currently in focus. For instance, a sales trend chart might highlight a sudden dip in revenue for a specific region or product line, prompting immediate investigation.
• Ad-Hoc Analysis Empowerment: In exploratory scenarios, where users are conducting ad-hoc analysis, Contextual Insights helps uncover patterns or correlations that might not have been apparent. For example, when analysing marketing campaign data, it might reveal that a spike in customer engagement correlates with specific demographic segments or external factors.
A Unified Workflow for Enhanced Decision-Making: This tight integration allows Contextual Insights to complement the natural flow of analysis, ensuring that users discover actionable insights without needing to disrupt their existing processes or switch contexts. By blending seamlessly with tools like dashboards, visualisations, and exploration interfaces, Contextual Insights empowers users to focus on making data-driven decisions rather than spending time searching for information.
Bridging Technical Gaps: For non-technical users, this integration is particularly valuable, as it eliminates the need for advanced data expertise to identify meaningful insights. Meanwhile, for advanced analysts, it acts as a catalyst, streamlining deeper exploration and enabling quicker hypothesis testing.
By embedding Contextual Insights deeply within OAC, Oracle delivers an analytics experience that is not only smarter but also more intuitive and inclusive, ensuring users at all levels can uncover hidden opportunities with ease.
Why Contextual Insights Matter
In a world where data-driven decisions are becoming the norm, the ability to uncover relevant, actionable insights at the right moment can be the difference between staying ahead of the curve and falling behind. 1. Democratisation of Analytics Contextual Insights empower users who may not have technical expertise in data analytics, widening the reach of OAC across organisations. 2. Enhanced Productivity By surfacing insights automatically, analysts save time previously spent on manual data exploration. 3. Proactive Decision-Making Contextual Insights shift the focus from reactive reporting to proactive planning by identifying trends and anomalies in real-time. How Contextual Insights Complements Auto Insights While Auto Insights in Oracle Analytics Cloud focuses on providing users with automatically generated, high-level summaries and narratives about their data,
Contextual Insights takes this a step further by tailoring those insights to the user’s specific context. Auto Insights excels at offering a broad overview, such as key performance indicators or summarised trends, whereas Contextual Insights dynamically surfaces patterns, anomalies, and trends based on the user’s immediate data interactions. Together, these features create a seamless experience where users can move from a high-level understanding to in-depth exploration, uncovering actionable insights with minimal effort. This synergy ensures that users at all skill levels can maximise the value of their data, moving from descriptive to diagnostic analytics effortlessly.
Real-World Use Cases
Contextual Insights unlock powerful opportunities across a variety of business sectors by surfacing patterns, trends, and anomalies that have the potential to drive more informed and proactive decision-making.
How to Get Started Once the January 2025 update is live, enabling and using Contextual Insights will be straightforward. 1. It is configured at a visualisation level. 2. Ensure that Contextual Insights is enabled for your visualisations. 3. Select the data item to analyse and choose the "Explain Selected" option from the context menu. For more detailed steps, refer to Oracle’s YouTube videos below explaining the feature and detailing the configuration steps to set up Contextual Insights.
Final Thoughts
Contextual Insights represent a major leap forward in empowering organisations to make faster, smarter, and more informed decisions. By integrating advanced AI-driven capabilities directly into Oracle Analytics Cloud, this feature enables users of all skill levels to uncover hidden opportunities and respond proactively to emerging trends. As analytics tools evolve, features like Contextual Insights showcase Oracle’s commitment to democratising analytics and fostering innovation. Whether you’re a data novice or a seasoned data scientist, Contextual Insights can transform how you explore and act on your data. Embrace this feature to unlock the full potential of your analytics workflows and drive meaningful outcomes in your organisation. If you’re ready to explore its possibilities or need support with OAC, reach out or leave a comment below!
Author
A bit about me. I am an Oracle ACE Pro, Oracle Cloud Infrastructure 2023 Enterprise Analytics Professional, Oracle Cloud Fusion Analytics Warehouse 2023 Certified Implementation Professional, Oracle Cloud Platform Enterprise Analytics 2022 Certified Professional, Oracle Cloud Platform Enterprise Analytics 2019 Certified Associate and a certified OBIEE 11g implementation specialist.