The rapid evolution of AI-driven analytics is changing how users interact with data. Instead of navigating dashboards or writing queries, users can now ask natural language questions and receive analytical insights instantly. Oracle Analytics AI Agents are a good example of this shift: they allow users to explore data conversationally while combining structured analytics with contextual knowledge. At first glance it may appear that traditional components of business intelligence architecture, such as the semantic model, are becoming less important in this new AI-driven world. In reality, the opposite is true. As organisations introduce AI agents into their analytics platforms, the semantic model becomes even more important, because it provides the structure and governance required to interpret enterprise data correctly.

Conversational Analytics in Oracle Analytics

Oracle Analytics AI Agents allow users to ask analytical questions directly against governed datasets. For example, the agent can analyse football club performance data and generate insights from natural language questions. This conversational interface makes analytics far more accessible, but it also raises an important architectural question: how does the AI agent understand what the data actually means?

The Semantic Model as the Foundation of Enterprise Analytics

Enterprise data is typically stored in structures optimised for storage and processing rather than business interpretation. Tables may contain technical column names, encoded values or highly normalised structures that make sense to engineers but not necessarily to business users. The semantic model solves this problem by defining the business meaning of the data. Within Oracle Analytics, the semantic model provides:
This structure allows the platform to interpret analytical questions consistently.

Governing Business Metrics

Another critical role of the semantic layer is defining the organisation’s key metrics. Metrics such as invoice amount, revenue, order value or customer counts often require precise definitions and calculations. These definitions are implemented directly within the semantic model. By centralising metric definitions, Oracle Analytics ensures that dashboards, reports and AI agents all rely on the same authoritative calculations. This prevents inconsistencies and ensures that analytical answers remain aligned with business definitions.

Knowledge Documents: Adding Context to AI Agents

While the semantic model defines the structure of enterprise data, Oracle Analytics AI Agents can also use knowledge documents to provide additional context. These documents may contain:
Administrators can now specify both document priority and document language. Document priority allows organisations to control which documents are treated as more authoritative when the AI agent retrieves knowledge. For example, curated internal documentation may be prioritised over supplementary material. The language setting allows organisations operating across multiple regions to maintain multilingual knowledge sources for a single AI agent.
This ensures that the agent retrieves the most relevant document based on the language of the user’s question.

Semantic Models and Knowledge Documents Working Together

The semantic model and knowledge documents play complementary roles in grounding AI-generated answers. The semantic model provides:
Together they form two layers of grounding:

Structured grounding: provided by the semantic model, ensuring that queries are interpreted correctly against governed datasets.

Contextual grounding: provided by knowledge documents, helping the AI agent interpret business concepts and policies.

This combination helps ensure that AI-generated insights remain accurate and aligned with organisational definitions.

Why This Matters for Enterprise AI

The introduction of AI agents does not eliminate the need for well-designed analytics architecture. If anything, it reinforces its importance. Conversational analytics may change the user interface, but the underlying principles of governed metrics, well-structured semantic models and curated knowledge remain essential. For architects and data leaders, the lesson is clear: successful enterprise AI is not just about models and prompts. It is about grounding those models in trusted, well-structured organisational knowledge.
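To make the two grounding layers concrete, here is a toy sketch of my own (all names and structures are hypothetical illustrations, not Oracle Analytics internals): a semantic model resolves a business term to a governed calculation, while knowledge documents contribute business context.

```python
# Toy illustration of the two grounding layers; not an Oracle API.

# Structured grounding: a semantic model maps business terms
# to governed columns and calculations.
SEMANTIC_MODEL = {
    "revenue": {"expression": "SUM(sales.net_amount)", "format": "currency"},
    "customer count": {"expression": "COUNT(DISTINCT sales.customer_id)", "format": "integer"},
}

# Contextual grounding: knowledge documents supply business context.
KNOWLEDGE_DOCS = [
    {"title": "Revenue policy", "text": "Revenue excludes intercompany transfers."},
    {"title": "Customer definition", "text": "A customer is an account with at least one paid order."},
]

def ground_question(question: str) -> dict:
    """Resolve a question against both grounding layers."""
    q = question.lower()
    metric = next((m for m in SEMANTIC_MODEL if m in q), None)
    context = [
        d["text"] for d in KNOWLEDGE_DOCS
        if any(word in d["text"].lower() for word in q.split())
    ]
    return {
        "metric_expression": SEMANTIC_MODEL[metric]["expression"] if metric else None,
        "context": context,
    }

grounded = ground_question("What was revenue last quarter?")
print(grounded["metric_expression"])  # SUM(sales.net_amount)
```

The point of the sketch is the division of labour: the metric expression comes only from the governed model, never from the documents, while the documents add interpretive context around it.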
When I first wrote about Oracle Analytics AI Agents a couple of months ago, one of the key capabilities I highlighted was the ability to upload documents that provide contextual knowledge to the Oracle Analytics AI Agent.
These documents allow the AI agent to interpret concepts and metrics correctly when answering analytical questions.
While working with the feature again recently, I noticed that Oracle has introduced a couple of additional configuration options in the March 2026 Oracle Analytics Cloud update when uploading knowledge documents.
There is more detail about these new configuration options in this Oracle Analytics video.
These include Priority and Language, which provide more control over how knowledge is retrieved and used by the AI agent.

Document Priority

When uploading a document, you can now assign a priority level. The available options are:
In practice this allows administrators to guide the agent towards more authoritative sources. For example, organisations may choose to assign higher priority to internal documentation or curated domain knowledge while leaving supplementary material at a lower priority. This is particularly useful when multiple documents are attached to an agent, in which case this setting can be used to give certain documents precedence.

Document Language

Another useful addition is the ability to specify the language of the knowledge document. Oracle Analytics supports multiple languages including English, Spanish, French, German, Japanese and several others. Setting the language metadata helps the agent retrieve the most appropriate document when answering questions in different languages. For organisations operating across international teams, this makes it easier to maintain multilingual knowledge sources for a single AI agent.
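To illustrate the idea, here is a minimal sketch of how priority and language metadata could influence which document is retrieved first. This is my own toy ranking, not Oracle's retrieval implementation, and the priority labels are hypothetical rather than the actual option names.

```python
# Illustrative only: a toy ranking that prefers documents matching the
# user's language and, within a language, higher-priority documents.
# Priority labels are hypothetical, not Oracle's actual option names.
PRIORITY_WEIGHT = {"high": 3, "medium": 2, "low": 1}

documents = [
    {"name": "Internal metric definitions", "priority": "high", "language": "en"},
    {"name": "Supplementary FAQ", "priority": "low", "language": "en"},
    {"name": "Definiciones internas", "priority": "high", "language": "es"},
]

def rank_documents(docs, user_language):
    """Sort candidates: matching language first, then higher priority."""
    return sorted(
        docs,
        key=lambda d: (d["language"] == user_language, PRIORITY_WEIGHT[d["priority"]]),
        reverse=True,
    )

best = rank_documents(documents, user_language="en")[0]
print(best["name"])  # Internal metric definitions
```

Even in this toy form, the behaviour matches the governance goal described above: curated, high-priority material in the user's language wins over supplementary sources.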
Why These Enhancements Matter
At first glance, the Priority and Language settings may appear to be minor configuration options. In reality, they represent an important step towards better governance of enterprise AI. As organisations begin to rely on AI agents to answer analytical questions, the quality and relevance of the underlying knowledge becomes critical. Without clear control over which documents should take precedence, an AI agent could retrieve information from less authoritative sources, potentially leading to inconsistent or misleading responses.

The introduction of document priority allows organisations to guide the retrieval process. Trusted internal documentation, curated knowledge bases, or officially governed metric definitions can be prioritised over supplementary materials. This helps ensure that responses generated by the AI agent remain aligned with the organisation’s approved definitions and analytical standards.

The language setting also plays an important role in global organisations. Many enterprises maintain documentation in multiple languages across different regions. By tagging documents with the correct language metadata, Oracle Analytics can ensure that the AI agent retrieves the most relevant knowledge for the user’s query.

Taken together, these enhancements move Oracle Analytics AI Agents closer to a governed enterprise AI model, where organisations can not only provide contextual knowledge, but also control how that knowledge is prioritised and interpreted. For architects and data leaders, this reinforces an important principle: successful AI is not just about models and prompts, but about governed, well-structured knowledge.

A common misconception about the Oracle AI Data Platform is that it represents a net new set of technologies. In reality, it is built deliberately on familiar OCI services such as Object Storage, Spark-based processing, Generative AI, Autonomous Databases, IAM and networking. That observation is important but often misunderstood.
Oracle AI Data Platform is not trying to introduce new infrastructure primitives. Instead, it takes proven OCI capabilities and layers clear architectural opinions on top of them. The value lies less in invention and more in acceleration, consistency and time to value. Those opinions are most clearly expressed through the Oracle AI Data Platform workbench, which acts as the unifying layer where these architectural choices become operational.

From flexibility to intention

Raw OCI is intentionally unopinionated. It offers enormous freedom, but also places the burden of design, integration and governance on the customer. Over time, most organisations converge on similar patterns, often after costly experimentation. Oracle AI Data Platform reflects those patterns back as defaults. It provides a paved path rather than a blank canvas.

What opinionated means in practice

Those opinions are explicit and deliberate.
These are not hard constraints, but the happy path is clear. That clarity is the essence of an opinionated platform.

Why co-locating data and AI matters

The co-location of data and AI is one of the most important opinions in Oracle AI Data Platform. Oracle is making a clear statement that models should move closer to the data, not the other way around. Feature engineering, prompt grounding, fine-tuning and inference all happen against shared, governed datasets rather than copied extracts. The impact is practical rather than theoretical.
This directly affects cost, operability and trust in AI-driven outcomes.

The workbench as the unifying layer

This is where the workbench becomes central. It is not just a notebook environment. The workbench also provides a focal point for cataloguing and governance. Data assets, transformations, analytical outputs and AI artefacts can be understood, discovered and governed in context, rather than existing as disconnected technical components. This reinforces the platform’s opinion that trust, lineage and discoverability are foundational requirements for AI, not optional extras.

It is the place where Oracle’s architectural opinions become operational. Notebooks, Spark jobs, SQL queries and Generative AI interactions all run in the same context, over the same data, governed by the same security model. Rather than stitching together multiple consoles and services, the workbench provides a coherent lifecycle from ingestion to analytics to AI.

A deliberate trade-off

Opinionated platforms always involve a trade-off. Some design freedom is exchanged for faster delivery, consistency and lower cognitive overhead. For most organisations, especially as AI moves into regulated and business-critical use cases, that trade-off is desirable. Oracle AI Data Platform does not remove flexibility. It removes the need to repeatedly reinvent the same architectural decisions.

Part of a wider strategy

These opinions align closely with how Oracle is positioning analytics and AI more broadly, including Oracle Analytics Cloud and Fusion AI Data Platform/Fusion Data Intelligence. Seen in this light, Oracle AI Data Platform is not about new technology. It is about institutionalising hard-won architectural lessons. And that is where its real value lies.

For as long as I can remember, Oracle Analytics users have been asking for one simple but powerful capability: a native Gantt chart.
Something to track timelines, visualise dependencies, and monitor progress - all within the same dashboards where KPIs and trends already live. With the November 2025 update, Oracle Analytics Cloud (OAC) finally delivers. It’s a long-requested feature that transforms how we visualise projects, portfolios, and operational workflows.

The Wait Is Over

Until now, anyone wanting to visualise schedules inside OAC had to get creative — using bar charts to simulate timelines, or embedding third-party components. It worked, but it was clunky. The new Gantt Chart visualisation makes this native and intuitive. It allows analysts and project teams to show project timelines, milestones, and progress bars directly inside their OAC workbooks — fully integrated with data security, filters, and visual interactions. This isn’t just a pretty new chart. It’s a meaningful step toward operational analytics, where OAC becomes a live window into how work is progressing, not just how metrics are trending.

What’s New in the November 2025 Update

The Gantt Chart visual introduces a new way to represent time-based activities. Here’s what it currently supports:

• Task timelines: Start and end dates rendered as horizontal bars.
• Progress tracking: Percentage complete shown visually within each bar.
• Milestones: Zero-duration tasks represented as markers.
• Grouping: Organise tasks by project, phase, or resource type.
• Baselines: Display baseline start and end dates alongside actuals for schedule comparison.
• Dependencies: Align related tasks sequentially using shared attributes.
• Tooltips: Show contextual details such as owner, status, priority, or duration.

For many teams - PMOs, delivery leads, or operations managers - this fills a long-standing visualisation gap in OAC. The new Gantt visualisation transforms Oracle Analytics Cloud into a capable project-tracking tool.
It bridges the gap between analytics and project management - enabling users to track, analyse, and present progress all in one platform.

Try It Yourself – Sample Data

To test the new Gantt, I’ve created a realistic dataset that you can import directly into OAC.
It contains three concurrent projects:
It also includes shared dependencies (e.g. a global change freeze) to demonstrate how Gantt timelines can overlap across projects. Each row in the dataset represents a task, with columns for start/end dates, duration, status, percentage complete, baseline start/end, and dependencies - all mapped for easy use in the Gantt visual.

How to Build the Gantt in Oracle Analytics Cloud
3. Map the fields as below.

Once configured, you’ll see your projects laid out across a timeline - with bars showing duration, coloured progress, and milestone markers for key events. The Gantt chart shown above gives you a timeline view of your project tasks. Each horizontal bar represents a task’s duration from start date to end date, with markers indicating baselines, milestones and percent complete. This makes it easy to see overlaps, dependencies and progress at a glance.
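If you want to generate a similar sample dataset yourself, a short script along these lines writes a CSV you could upload to OAC. The column names follow the dataset described above; the project and task values are made up for illustration.

```python
import csv

# Generate a small, made-up project task dataset with the columns
# described above: start/end dates, status, percent complete and baselines.
tasks = [
    # project, task, start, end, status, pct_complete, baseline_start, baseline_end
    ("ERP Upgrade", "Design", "2025-01-06", "2025-01-31", "Complete", 100, "2025-01-06", "2025-01-28"),
    ("ERP Upgrade", "Build", "2025-02-03", "2025-03-14", "In Progress", 60, "2025-02-01", "2025-03-07"),
    ("ERP Upgrade", "Go Live", "2025-03-31", "2025-03-31", "Not Started", 0, "2025-03-28", "2025-03-28"),
]

with open("gantt_sample.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Project", "Task", "Start Date", "End Date",
                     "Status", "Percent Complete", "Baseline Start", "Baseline End"])
    writer.writerows(tasks)

# Note: a zero-duration task (start date == end date), like "Go Live"
# above, is the kind of row that renders as a milestone marker.
```

Once uploaded as a dataset, the columns map directly onto the Gantt grammar fields (task, start, end, percent complete, baselines).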
Why This Matters

This update pushes OAC further into operational reporting territory. Instead of switching to tools like Smartsheet, Excel, or Project for schedule reviews, you can keep everything inside OAC — governed, secured, and shared via the same semantic model. For organisations already integrating delivery data (e.g. from Oracle Fusion PPM, Jira, or Primavera) into OAC, this unlocks a new layer of insight:
Community Demand

This feature has been one of the most upvoted requests on the Oracle Analytics Idea Lab. Many people in the community have asked for a proper Gantt visual for years, especially those working in delivery or programme management roles. It’s great to see Oracle Product Management not only listen, but execute — and deliver a native visual that feels integrated, performant, and flexible.

Final Thoughts

The Gantt Chart visualisation is a small feature with a big impact. It closes a long-standing gap in Oracle Analytics Cloud and moves the platform closer to a true operational analytics experience. Whether you’re tracking project sprints, release schedules, or transformation roadmaps, you can now visualise timelines, progress, and dependencies — all in one place, without leaving OAC.

At a recent Oracle Analytics Partner Meeting, one demo stood out to me (the others were great as well!) - the new AI Agent for Oracle Analytics Cloud (OAC). I’ve since spoken further with the product manager and been granted early access ahead of its LA (limited availability) in the November 2025 release, and I can already see the foundations of something significant taking shape. At first glance, the OAC AI Agent looks and feels similar to the Fusion AI Agent Studio - and that’s no coincidence. Oracle appears to be unifying its Redwood AI agent look and feel across platforms, enabling analytics, applications, and custom experiences to share a unified user experience. In OAC, this translates into an embedded conversational interface that sits directly within your analytics workspace. Ask a question, and the agent doesn’t just return a text summary - it understands your semantic model, data lineage, and context before generating a response.

From Chatty to Knowledgeable: The Librarian Analogy

To understand what makes this so important, it helps to think of the AI Agent as a librarian.
A large language model (LLM) on its own is like a well-spoken librarian with an excellent memory but no access to your organisation’s archive. Ask them a question, and they’ll respond confidently and eloquently, but they’re drawing only on general world knowledge and patterns they’ve learned before. The result often sounds convincing, yet it may lack the precision or evidence that a business decision demands. The OAC AI Agent, on the other hand, gives that librarian the keys to your private archive. When you ask a question, they don’t just rely on memory and their extensive real-world knowledge; they walk into your own library of governed data, reports, and documents, retrieve the most relevant material, and then craft a response grounded in fact. That’s the power of Retrieval-Augmented Generation (RAG) - it lets Oracle’s AI Agent combine the fluency of language models with the factual grounding of your enterprise knowledge.

How the OAC AI Agent Works

Creating an AI Agent in Oracle Analytics Cloud

To begin creating an AI Agent, navigate to the menu and select the Create AI Agent option. This initiates the process and brings you directly to the AI Agent configuration. Immediately upon entering the configuration screen, you are prompted to add a dataset that will serve as the foundation for the AI Agent. It is essential to ensure that this dataset has already been indexed and that appropriate synonyms for attributes have been configured. These preparatory steps are crucial for enabling the AI Agent to effectively leverage the dataset and provide meaningful, context-aware responses. You are then taken to the configuration screen.

Configuring and Supplementing the OAC AI Agent

Step 1: Entering Supplemental Instructions

Begin by providing supplemental information that offers the agent additional context regarding its specific use case. Additional prompt instructions will help the agent better interpret user questions in a functional domain.
This ensures the AI Agent is tailored to the unique requirements and environment it will operate within.

Step 2: Defining the First Message

The First Message serves as an introductory text displayed to users interacting with the agent. It describes the agent’s purpose and sets expectations for what the agent is designed to achieve.

Step 3: Saving the Agent

After all relevant information has been entered, proceed to save the agent. This action records the configuration and prepares the agent for further enhancement.

Step 4: Supplementing with Documents

Once the agent has been saved, you can enhance its capabilities by supplementing the previously entered contextual information with additional documents. Uploading these documents grounds the agent in your organisation’s custom enterprise knowledge, allowing it to provide more accurate and relevant responses.

OAC AI Agent: Technical Foundations

At its core, the OAC AI Agent leverages the vector search capabilities of the Oracle infrastructure which forms the backbone of OAC. This vector search enables the agent’s retrieval-augmented generation (RAG) functionality, allowing it to efficiently surface relevant information in response to user queries. The OAC AI Agent achieves this by integrating three essential components, each playing a critical role in transforming natural-language questions into trustworthy, contextual insights.

1. Intent Recognition (LLM Layer)

The large language model (LLM) layer is responsible for interpreting what the user is seeking. It analyses the natural-language query to determine the user’s intent and aligns this intent with relevant datasets, key performance indicators (KPIs), or dashboards available within OAC.

2. Retrieval Layer (RAG Engine)

Once the user’s intent has been established, the agent’s retrieval layer searches for pertinent content across a range of defined governed sources.
This process begins with OAC’s own semantic model and expands to include external knowledge repositories, for example custom knowledge files that have been uploaded to the system or supplemental information defined in the AI agent.

3. Response Rendering (OAC Context)

After retrieving the necessary data and knowledge, the information passes through Oracle’s Analytics Visualisation framework. The agent then generates a natural-language response that is firmly rooted in verified data, ensuring that every response respects OAC’s metadata, data lineage, and security protocols.

Key Features and Considerations

Dataset Preparation and Management
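Returning to the retrieval layer described above: the core RAG step of scoring knowledge chunks against a query can be sketched in a few lines. This is a toy example using bag-of-words vectors; real systems, including OAC's, use learned dense embeddings and a vector index rather than word counts.

```python
import math
from collections import Counter

# Toy RAG retrieval: score each knowledge chunk against the query using
# cosine similarity over bag-of-words vectors. Production systems use
# learned embeddings and a vector index instead.
def cosine(a: Counter, b: Counter) -> float:
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

chunks = [
    "Customer satisfaction declined due to longer support wait times.",
    "Quarterly revenue grew in the EMEA region.",
]

def retrieve(query: str, docs: list) -> str:
    """Return the chunk most similar to the query."""
    qv = Counter(query.lower().split())
    return max(docs, key=lambda d: cosine(qv, Counter(d.lower().split())))

top = retrieve("why did customer satisfaction decline", chunks)
print(top)
```

The retrieved chunk is then handed to the LLM layer as grounding context, which is what keeps the generated answer anchored to enterprise knowledge rather than the model's general memory.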
How the OAC AI Agent Delivers Value

The OAC AI Agent produces responses that are designed to be highly effective for business users. This is achieved through a combination of generative AI capabilities, robust grounding in enterprise knowledge, and adherence to organisational standards.
This unique blend of conversational fluency and factual accuracy distinguishes the OAC AI Agent from standalone chat-based AI tools, delivering responses that are both engaging and trustworthy for enterprise use.

Early Days, Big Potential
Let’s be clear — this feature is in its infancy. The current build focuses on natural-language exploration incorporating Retrieval-Augmented Generation (RAG) and narrative generation, with a roadmap that will expand its reasoning and automation capabilities over time. What’s exciting isn’t just the interface, but the architecture that’s emerging beneath it. For the first time, Oracle Analytics is embracing Retrieval-Augmented Generation. That means the AI Agent won’t rely solely on a large language model to generate responses. Instead, it will retrieve and ground its output in enterprise data and knowledge — both structured and unstructured.

In practical terms, this opens the door for analysts and business users to ask questions that blend internal data with documents, policies, reports, and contextual information stored across the organisation. Whether it’s sales performance data, a product specification PDF, or a customer-service transcript, the AI Agent will eventually be able to bring these sources together to deliver context-aware insights.

Bringing Unstructured Knowledge into the Analytics Conversation

Historically, analytics platforms have struggled to bridge the gap between structured data (tables, metrics, and KPIs) and unstructured information (documents, notes, images, or messages). With RAG, Oracle is moving to close that gap. This isn’t just about generating summaries — it’s about creating a richer, more informed analytical experience. Imagine asking: “What were the main factors behind last quarter’s decline in customer satisfaction?” Today, OAC might point you to a metric or dashboard. With RAG, the AI Agent could augment that response with context drawn from call-centre transcripts, customer feedback reports, or support documentation — all retrieved securely from enterprise knowledge stores. The result is a shift from data-driven insights to knowledge-driven understanding.
Governed Intelligence, Oracle Style

One of the key advantages here is governance. Unlike standalone chatbots, the OAC AI Agent inherits the same security, metadata, and lineage controls that underpin Oracle Analytics. Responses remain explainable, consistent, and aligned with the organisation’s governed data model — ensuring that insights stay reliable even as AI becomes more conversational. This approach also complements Oracle’s broader AI ecosystem. The same underlying framework powers Fusion Applications and APEX AI Agents. As these services evolve, we can expect deeper integration, shared prompt orchestration, and unified management of knowledge sources across the Oracle Cloud stack.

Looking Ahead

The OAC AI Agent represents a starting point, not a destination. It’s a glimpse into where analytics is heading — from dashboards and KPIs towards context-aware conversations grounded in enterprise knowledge. As I explore this feature further through early access, I’ll be focusing on:
For now, it’s early days — but the direction is clear. With the AI Agent, Oracle Analytics isn’t just adding generative AI to dashboards; it’s laying the foundation for a new class of governed, knowledge-aware analytics experiences. Stay tuned — I’ll share a deeper hands-on review once the November 2025 update goes live.

Artificial intelligence is no longer a side project. For enterprises, AI has become a strategic priority, transforming how organisations innovate, compete, and operate. Yet most businesses still struggle with fragmented data pipelines, disconnected tools, and governance challenges that slow down progress, with disparate, siloed data as the underlying root cause. While 78% of organisations planned to use AI in 2024 (Global AI Adoption Statistics: A Review from 2017 to 2025), the reality is that 68% of these organisations have data silos as their top concern (Data Strategy Trends in 2025: From Silos to Unified Enterprise Value - DATAVERSITY), and siloed data can cost companies up to 30% of their annual revenue (What Are Data Silos and What Problems Do They Cause? - TechTarget). The culprit? The average enterprise runs on nearly 900 applications, with only one-third integrated (What Are Data Silos & Why Is It a Problem? - Salesforce US), creating the very fragmentation that prevents AI success.

Think of enterprise data like a busy international airport. Passengers arrive from different places, each with different documentation requirements:
Without a well-designed terminal, air traffic control, and secure customs processes, it would be chaos. The new Oracle AI Data Platform (AIDP) is that airport terminal for AI: a single hub where all types of data arrive, are organised, governed, and routed to their destinations so that analytics tools and AI applications can “take flight” safely and efficiently. Oracle announced the AI Data Platform at Oracle AI World in Las Vegas on 14 October 2025, and it’s now generally available. Customers can access the live product site and documentation today, meaning you can onboard, configure the Master Catalog, and start building governed lakehouse-plus-AI pipelines on OCI straight away.

Why Oracle AI Data Platform Matters

At its core, AIDP helps enterprises do three things better:
The result? Faster time to value, improved governance, and the ability to scale AI beyond pilots into real enterprise impact.

A Hypothetical Use Case: From Data Warehouse to AI-Powered Insights

Consider a typical scenario:
Here’s how AIDP helps transform this setup:
In short, AIDP helps organisations move beyond descriptive dashboards to predictive and prescriptive intelligence, while leveraging the investments already made in ADW and OAC.

How Oracle AI Data Platform Supports the Full Data Workflow

One of AIDP’s key strengths is that it covers the entire lifecycle of enterprise data, much like how an airport manages passengers from arrival to departure.
By covering every stage of the workflow, AIDP ensures that UK (structured), EU (semi-structured), and international (unstructured) passengers all move smoothly through the airport, reaching their destinations as trusted, AI-driven insights.

What is the Medallion Architecture?

The Medallion Architecture is a layered data design pattern used to organise data in a data lake or lakehouse for clarity, quality, and reusability. It’s structured into three main layers: Bronze, where raw data is ingested “as is” from source systems; Silver, where data is cleaned, validated, and enriched for consistency and reliability; and Gold, where curated, business-ready data is optimised for analytics, reporting, and machine learning. This layered approach improves data quality at each stage while maintaining traceability from raw to refined insights. In AIDP, this spans Object Storage, open table formats (Delta/Iceberg/Hudi), and Autonomous Data Warehouse (ADW), all governed by the Master Catalog and RBAC.

Bronze — Land (raw, “as is”)

Purpose: Capture the truth of what arrived, without fixing it yet.
Silver — Refine (cleaned, standardised, enriched)

Purpose: Make data structurally sound, consistent and joinable.
Airport analogy: organised lounge — fewer people, rules applied, order emerging.

Gold — Serve (curated, business-ready)

Purpose: Publish trusted datasets for BI, ML and sharing.
Airport analogy: premium lounge — calm, curated, ready to board.

AIDP makes implementing this pattern simpler, with built-in orchestration and governance.

What Are Delta Lake, Iceberg, and Hudi?

If you’re new to these technologies, here’s a quick explainer: Delta Lake (originally from Databricks), Apache Iceberg (originally from Netflix) and Apache Hudi (originally from Uber) are all open table formats. They add database-like capabilities such as ACID transactions, schema evolution and time travel on top of files in object storage, so that multiple engines can read and write the same lakehouse tables reliably.
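As a toy illustration of the Bronze to Silver to Gold refinement described above (my own sketch; AIDP pipelines would typically run on Spark over open table formats rather than plain Python), here is a record set passing through the three layers:

```python
# Bronze: raw records captured "as is", including problems.
bronze = [
    {"order_id": "1001", "amount": "250.00", "country": "uk"},
    {"order_id": "1002", "amount": "bad-value", "country": "UK"},
    {"order_id": "1003", "amount": "90.50", "country": "de"},
]

# Silver: clean, standardise and validate; reject rows that fail rules.
def to_silver(rows):
    out = []
    for r in rows:
        try:
            out.append({
                "order_id": int(r["order_id"]),
                "amount": float(r["amount"]),
                "country": r["country"].upper(),
            })
        except ValueError:
            pass  # in a real pipeline, rejected rows go to a quarantine table
    return out

# Gold: curated, business-ready aggregate for BI and ML.
def to_gold(rows):
    totals = {}
    for r in rows:
        totals[r["country"]] = totals.get(r["country"], 0.0) + r["amount"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'UK': 250.0, 'DE': 90.5}
```

Note how traceability works in the pattern: the malformed Bronze row is never silently "fixed", it is excluded (or quarantined) at Silver, so the Gold aggregate remains explainable back to its raw inputs.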
Built on Open Source, Delivered as Managed

Enterprises want the flexibility of open source, without the overhead of managing it at scale. AIDP blends the best of both:
The Bigger Picture

With AIDP, Oracle isn’t just building another data platform — it’s constructing the air traffic control tower of enterprise AI. Think of your data as flights arriving from every corner of the globe: structured data landing from domestic routes, semi-structured touching down from across Europe, and unstructured streaming in from long-haul international journeys. AIDP coordinates the safe arrival, organisation, and departure of all of them, ensuring each passenger is where they need to be. By reducing unnecessary transfers, keeping to open flight paths, and providing a single terminal for AI development, Oracle makes sure your entire data estate operates like a well-run airport — efficient, secure, and ready to deliver value.

Ready to transform your data chaos into AI-powered insights? Explore Oracle AI Data Platform and see how it can serve as your enterprise's AI airport terminal.
In the previous post, we traced how Fusion Data Intelligence (FDI) evolved from OBIA. In this second instalment of our FDI-introductory series, you’ll explore the underlying technology and architecture that power FDI’s cloud-native analytics platform.

2. The FDI Architecture Ecosystem (The “Big Picture”)

At its core, Fusion Data Intelligence (FDI) is a fully managed, cloud-native analytics platform running on Oracle Cloud Infrastructure (OCI). It stitches together your Fusion Cloud Applications, Oracle-managed data pipelines, Autonomous Data Warehouse (ADW), and Oracle Analytics Cloud (OAC) into a seamless, scalable end-to-end analytics solution - one that Oracle deploys, operates, and continuously evolves for you (there is some configuration that administrators need to carry out).

First, Fusion Cloud SaaS applications - including the ERP, HCM, SCM and CX pillars - serve as the transactional data sources. Oracle provides prebuilt ingestion pipelines tailored to each functional pillar, handling everything from data extraction and change data capture (CDC) to transformation and consistent mapping into an analytics-ready format. These pipelines write data directly into an OCI-hosted Autonomous Data Warehouse, which transforms and loads the Fusion data into a unified star-schema data model covering multiple functional domains. The schema is:
Once data arrives in the Autonomous Data Warehouse (ADW), Oracle Analytics Cloud takes over for semantic modelling and visualisation. A prebuilt semantic layer wraps the raw star schema into business-friendly subject-area views - covering finance, human resources, supply chain and customer experience - complete with standardised key metrics and dashboards. Through OAC, FDI delivers not just dashboards but intelligent, action-driven analytics, featuring natural-language querying, ML-based forecasting and anomaly detection, to name just a few.

🔗 Summary Flow
This end-to-end ecosystem is fully managed by Oracle - covering provisioning, upgrades, performance tuning, and integration with Fusion App releases - offering a friction-free, scalable approach to enterprise analytics (although some configuration needs to be done by administrators).

3. Data Movement & Integration

FDI’s data movement layer is built around Oracle-managed, prebuilt pipelines that automate ELT and Change Data Capture (CDC) for Fusion Applications (ERP, HCM, SCM, CX). These pipelines are configured and controlled through the intuitive FDI Console, making it easy for administrators to activate, modify or schedule updates with minimal effort. You don’t need to build complex ETL processes - Oracle handles the heavy lifting, while you focus on business relevance and reporting needs. By default, data pipelines are incremental with zero downtime, keeping analytics up to date without interrupting service. You also have the flexibility to perform on-demand full reloads, useful for data corrections or model updates - all managed with just a few clicks in the Console. Crucially, the architecture supports extensibility in two key ways:
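Conceptually, the default incremental behaviour is a watermark-driven change capture. Purely as an illustration (FDI's managed pipelines handle this internally, and the data and column names below are invented), the pattern looks like this:

```python
from datetime import datetime

# Conceptual sketch of a watermark-based incremental extract, similar in
# spirit to the CDC pattern FDI's managed pipelines automate. All names
# here are illustrative; FDI does not expose this logic to you.

def incremental_extract(source_rows, last_watermark):
    """Return only rows changed since the last successful load,
    plus the new watermark to persist for the next run."""
    changed = [r for r in source_rows if r["last_updated"] > last_watermark]
    new_watermark = max((r["last_updated"] for r in changed), default=last_watermark)
    return changed, new_watermark

rows = [
    {"id": 1, "last_updated": datetime(2025, 1, 10)},
    {"id": 2, "last_updated": datetime(2025, 2, 1)},
    {"id": 3, "last_updated": datetime(2025, 2, 15)},
]

# Only rows 2 and 3 changed after the 31 January watermark,
# so only they are picked up; the watermark advances to 15 February.
delta, wm = incremental_extract(rows, datetime(2025, 1, 31))
```

On a subsequent run, only rows updated after `wm` would be extracted, which is what keeps the load incremental and the source system largely undisturbed.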
All pipelines and augmentations are managed through the FDI Console. As an administrator, you can configure initial parameters - such as extract start dates, currency preferences, and schedule frequency - directly in the console interface. Any subsequent edits to pipelines, functional areas, or augmentations are seamless, with Oracle handling deployment and execution behind the scenes.

✅ Summary: Core Benefits of FDI Pipelines
4. Lakehouse & Warehousing Foundation

At the heart of Fusion Data Intelligence lies a star-schema model deployed on Oracle’s Autonomous Data Warehouse (ADW) - a cloud-native, self-tuning database that underpins fast, enterprise-grade reporting and analytics. Here’s how it’s structured and why it matters:

⚙️ Prebuilt Star Schema in ADW

When FDI is provisioned, Oracle automatically creates a prebuilt star schema in ADW. This schema includes fact tables and a network of conformed dimensions - shared across multiple functional areas - that serve as the glue for cross-pillar analytics. Common dimensions include:
These shared dimensions enable users to analyse, for example, how procurement spend (SCM) impacts cash flow (finance), or how HR-driven workforce changes correlate with sales performance - cross-functional insights made possible by a common semantic backbone.

🏗️ Support for External Data & Custom Schemas

FDI doesn’t just ingest Fusion source data - it enables easy integration of external datasets into the same ADW environment. Whether it’s non-Oracle systems, legacy data, purchased data feeds, or even weather information, FDI supports loading external tables into custom schemas that can extend the star schema and semantic model. This extensibility is key to bridging out-of-the-box analytics with bespoke business insights - enhancing customer segmentation, supplying additional cost drivers for per-product profitability, or blending external KPIs directly alongside Fusion metrics.

🔍 Benefits of the Lakehouse Foundation
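The cross-pillar joins that conformed dimensions make possible can be sketched with a toy example: two fact tables from different pillars tied together through a shared supplier dimension. Everything below (tables, columns, figures) is invented for illustration and is not FDI's physical schema:

```python
import pandas as pd

# Toy illustration of conformed dimensions: one shared supplier dimension
# lets facts from different pillars (SCM spend, finance payments) be
# analysed together on a common key.

dim_supplier = pd.DataFrame({"supplier_key": [1, 2],
                             "supplier_name": ["Acme", "Globex"]})
fact_spend = pd.DataFrame({"supplier_key": [1, 1, 2],          # SCM pillar
                           "po_amount": [100, 250, 400]})
fact_payments = pd.DataFrame({"supplier_key": [1, 2],          # Finance pillar
                              "paid_amount": [300, 150]})

# Aggregate PO spend per supplier, then join both facts via the dimension.
spend = fact_spend.groupby("supplier_key", as_index=False)["po_amount"].sum()
combined = (dim_supplier.merge(spend, on="supplier_key")
                        .merge(fact_payments, on="supplier_key"))
combined["outstanding"] = combined["po_amount"] - combined["paid_amount"]
```

Because both facts resolve to the same `supplier_key`, a question spanning procurement and finance ("how much do we still owe each supplier?") becomes a simple join rather than a cross-system reconciliation exercise.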
Under the hood, FDI’s star schema in ADW provides a robust, extensible greenfield analytics foundation. Built on conformed dimensions and a scalable data warehouse, it enables seamless mash-ups of Fusion data with external sources, supporting rich, multi-domain analytics that truly span the enterprise.

5. Semantic Layer & Pre‑Built Metrics

FDI abstracts hundreds of physical tables into logical business subject areas - finance (GL profitability, AP ageing, AR revenue, Trial Balance), HCM (talent acquisition, workforce core), procurement (spend, POs), and CX (campaign ROI, opportunity pipeline) - all underpinned by conformed dimensions. It includes a KPI library with over 2,000 standard metrics, accessible via Oracle Analytics Cloud’s intuitive key-metric editor and drag‑and‑drop visualisations. In essence, this semantic layer creates a unified business vocabulary that simplifies reporting and ensures consistency across the enterprise.

🔐 Complementing Fusion-Defined Security

FDI leverages Fusion’s built-in role-based security model, so the semantic layer inherits data roles, duty roles, and row/object-level filters defined in Fusion Cloud Applications. Access control is enforced through the Oracle Identity and Access Management (IAM) Service and the FDI Console, ensuring that users only see data they’re authorised to view. This unified approach simplifies administration and compliance by avoiding double entry of security definitions.

🧩 Hiding Complexity Through Logical Abstraction

Rather than exposing raw tables, FDI offers a logical semantic layer that shields users from underlying complexity. Here’s what it achieves:
✅ Summary: User Experience & Governance Wins
6. Visualisation and Intelligent Dashboards
7. Governance, Security & Lineage

Fusion Data Intelligence isn’t just about delivering insights - it’s built on a robust foundation of security governance and data lineage that brings trust, safety, and compliance to the analytics lifecycle.

🔐 Security Inherited from Fusion & Managed via OCI IAM

FDI inherits its security framework directly from Fusion Cloud Applications. Role-based access, including the data roles and duty roles configured in Fusion, is seamlessly enforced within the FDI semantic layer and Autonomous Data Warehouse (ADW). This ensures that users can access only the data they are authorised to see - without duplicating access definitions in multiple systems. User and group management within FDI is handled through OCI’s Identity and Access Management (IAM) Service. You can sync your Fusion App users and roles into OCI IAM or manage them natively via OCI, and then assign access through system and job-specific groups tailored to FDI. This 1:1 mapping ensures governance is inherited and consistent across both transactional and analytics layers. Oracle also manages infrastructure-level security - covering upgrades, patching, encryption, IAM policy enforcement, key management, and auditing - helping to maintain compliance and relieve the operational burden on your team.

🧭 Data Lineage & Quality Built-In

Trusted analytics demand transparency - and FDI delivers that through built-in data lineage and validation mechanisms. The system tracks the flow of data from source tables in Fusion Apps, through ingestion pipelines, into curated star schemas, and finally into semantic-layer metrics and dashboards. The Fusion SCM Analytics documentation provides end‑to‑end lineage spreadsheets that detail column‑ and table-level mappings, making it easy to trace every KPI back to its source fields.
You can also monitor pipeline activity in the FDI Console, which records execution timestamps, row counts, and error logs - providing a clear audit trail of data loads and transformations. Further, FDI includes validation metrics that reconcile data loaded into ADW against transactional data in Fusion. These can be scheduled or run on‑demand, with reports surfaced directly in OAC - making it easy to identify data drift or discrepancies and swiftly pinpoint areas for correction.

✅ Summary: Trust, Safety, and Compliance
8. Why This Architecture Matters for Organisations 🚀

Fusion Data Intelligence goes far beyond traditional BI. It sits at the heart of Oracle’s broader Data Intelligence Platform, delivering a unified, 360° view across all enterprise data - transactional, analytical, structured, and unstructured.

🌟 A Unified Data-Intelligence Ecosystem

Unlike legacy stacks - OBIA, ODI, siloed data centres - FDI is built on Oracle’s next-generation Data Intelligence Platform. It blends data lakes, Autonomous Data Warehouse, Oracle Analytics Cloud, OCI AI services, and GoldenGate streaming into a seamless, managed ecosystem. This means organisations can now handle batch and real-time data, include external sources and apply AI/ML - all within one secure environment. (Note that this is Oracle's stated vision: the Data Intelligence Platform has been announced but is not yet generally available.)

🔄 Consistent Insights Across Pillars

FDI’s architecture supports conformed dimensions and shared semantic models spanning finance, HR, SCM, and CX. This allows for unified KPIs and analytics, enabling stakeholders to ask and answer cross-domain questions like:
The result is enterprise-wide analytics based on a single source of truth.

💡 Full Extensibility with Governed Access

As part of Oracle’s Data Intelligence Platform, FDI offers extensive extensibility. Users can bring in external datasets, extend semantic models, build custom analytics, and consume OCI AI services - all within Oracle’s security framework. Governed self-service means broad analytical freedom without compromising data integrity.

🛠 Evergreen Platform, Zero Infrastructure Burden

The platform is fully managed and evergreen. Oracle handles everything - from provisioning, patching, tuning, and upgrades to integrating the latest AI services. Teams can focus on driving value rather than wrestling with infrastructure.

🎯 Summary: Strategic Differentiators
As you’ve seen, Fusion Data Intelligence delivers a fully managed, cloud-native analytics ecosystem - bringing together Fusion SaaS, Oracle’s Autonomous Data Warehouse, and Analytics Cloud under one secure, AI-enhanced platform. It unifies data across domains, embeds intelligent insights and governance, and eliminates legacy complexity - truly delivering on Oracle’s vision of a Data Intelligence Platform. Now it’s your turn: take a moment to reflect on how FDI could accelerate insight‑driven transformation in your organisation.
Over the years, many of us working in the Oracle analytics space have helped customers implement Oracle Business Intelligence Applications (OBIA) - a powerful solution in its time, offering prebuilt analytics across ERP, HCM and more. But let’s be honest: it had its fair share of complexity, rigidity, and technical debt. If you ever spent hours managing DAC, tweaking ETL mappings, or retrofitting OBIA customisations after a patch, you’ll understand why Fusion Data Intelligence feels like Oracle finally got analytics right. Fast-forward to today and we’ve entered a new era with Oracle Fusion Data Intelligence (FDI) - a reimagined, cloud-native analytics platform designed from the ground up for the Fusion SaaS landscape. And if you’ve ever battled with OBIA’s extensibility, upgrade cycles or data latency, FDI is likely to feel like a breath of fresh air. This post is the first in a short series unpacking what FDI actually is, how it compares with its predecessors, and what it means for Fusion customers today.

Oracle's recent growth

Over the past 2–3 years, Oracle has consistently grown its cloud business, with total revenue rising from $40.5 billion in FY2022 to $57.4 billion in FY2025, driven largely by strong momentum in Fusion Cloud Applications, NetSuite, and OCI (Oracle Cloud Infrastructure). While Oracle doesn’t match the scale of hyperscalers like AWS or Microsoft Azure in infrastructure alone, its distinct advantage lies in its full-stack strategy - uniquely offering enterprise SaaS, infrastructure, and the database layer under one roof. This vertically integrated model means Oracle can optimise performance, security, and cost across its stack, especially for Fusion workloads. Competitors like SAP and Workday lead in applications but lack native cloud infrastructure; AWS and Azure dominate infrastructure but rely on third-party SaaS partners.
Oracle, by contrast, continues to blur the lines between application and platform, using technologies like Autonomous Database, OCI Gen2, and now Fusion Data Intelligence to deliver insights that are deeply embedded, secure, and performant - all within its own ecosystem. These figures aren’t just impressive - they’re a strong signal that Oracle’s SaaS portfolio is achieving scale and maturity, particularly in core enterprise functions like Finance, HR, and Operations. Fusion ERP alone has grown from $0.9B to $1.0B in quarterly revenue, underscoring widespread enterprise adoption.

From Adoption to Insight: The Next Frontier

As organisations continue investing in Oracle Fusion Cloud applications, the expectation isn’t just automation - it’s intelligence. Businesses aren’t content with simply moving transactional processes to the cloud; they want to understand the return on those investments, monitor performance in real time, and use their data to make faster, smarter decisions. This is where Fusion Data Intelligence (FDI) steps in. Just as adoption of Oracle’s Fusion SaaS pillars is accelerating, so too is the demand for embedded, governed, cross-functional insights that empower users in the flow of work. With SaaS platforms becoming the new systems of record, the analytics layer must evolve in lockstep - natively integrated, secure, and scalable. FDI is that evolution.

Why FDI Matters Now More Than Ever
FDI bridges this critical gap by turning raw operational data into actionable intelligence - all while aligning with the Fusion application security model, lifecycle, and extensibility standards.
Looking Back: OBIA Was Revolutionary — But the World Has Moved On

When it launched, Oracle Business Intelligence Applications (OBIA) was genuinely ahead of its time. Prebuilt subject areas, KPI dashboards, and ETL pipelines for ERP, HCM, SCM, and CRM systems allowed organisations to fast-track enterprise reporting without starting from scratch. OBIA gave business users actionable insights over operational systems, and it helped many enterprises move beyond siloed spreadsheets into a more governed BI model. But OBIA came with constraints that, over time, became significant limitations:
The Modern Alternative: Fusion Data Intelligence

With Fusion Data Intelligence (FDI), Oracle has reimagined what enterprise application analytics should look like in the cloud era.
From OBIA to OAX to FAW to FDI: An Analytics Evolution

FDI didn’t appear out of nowhere - it’s the result of five years of iterative development across multiple product identities. It began as Oracle Analytics for Applications (OAX), introduced around 2019 as a cloud-based successor to OBIA. OAX was designed to deliver prebuilt analytics for Oracle Fusion Cloud Applications, leveraging Oracle Autonomous Data Warehouse and Oracle Analytics Cloud. In 2020, OAX was rebranded as Fusion Analytics Warehouse (FAW), marking a shift toward a more unified, extensible platform. FAW introduced modular “pillars” aligned with business domains - ERP, HCM, SCM, and CX - each offering curated data models, semantic layers, and prebuilt KPIs. Over the next few years, Oracle expanded these pillars with hundreds of subject areas and embedded machine learning for predictive insights. In 2024, FAW was renamed Fusion Data Intelligence (FDI). This rebranding emphasised its broader mission: not just warehousing analytics, but enabling intelligent decision-making across the enterprise. FDI retained the core architecture - Autonomous Data Warehouse, Oracle Analytics Cloud, and managed pipelines - but added enhanced extensibility, data-sharing capabilities, and a more intuitive console for governance and customisation. In short, where OBIA was revolutionary for the on-prem era, FDI is purpose-built for the cloud-native enterprise. It meets today’s expectations for agility, integration, governance, and intelligence - without the baggage of yesterday’s architecture.

Looking Ahead
This post was just the beginning. Over the next few instalments, we’ll dive deeper into the nuts and bolts of Fusion Data Intelligence - from how it handles extensibility and embedded insights, to what it means for Fusion customers trying to move beyond dashboards and into decision intelligence. FDI represents more than just a new analytics tool - it’s a shift in how Oracle customers can extract value from their SaaS investments. If you’ve ever found yourself battling data silos, struggling with upgrades, or explaining to stakeholders why reporting still takes days, this series is for you. Stay tuned.
When we think about business data, we usually picture tidy tables and dashboards neatly populated with structured relational data. But in reality, much of an organisation’s most valuable information lives in unstructured formats—scanned invoices, PDFs, handwritten notes, and contracts. This data is often locked away in silos, disconnected from the wider analytical ecosystem.
Oracle Analytics’ AI Document Understanding feature changes that. It enables organisations to automatically extract structured data from documents stored in OCI Object Storage using pretrained AI models—all without needing a data science team. With this capability, you can enrich dashboards with data that would previously be too costly or complex to access. In this post, we’ll walk through:
What Is Oracle Analytics AI Document Understanding?
At its core, the AI Document Understanding capability in Oracle Analytics leverages AI models (deployed within Oracle Cloud Infrastructure) to parse and extract fields of interest from documents stored in OCI Object Storage. This is particularly powerful for automating workflows that currently depend on manual data entry or semi-structured file formats. It supports a range of document types and layouts, including:
IAM Policies
To enable Oracle Analytics to securely access documents stored in OCI Object Storage and to invoke AI services like Document Understanding, specific IAM policies must be in place. Without these policies, your OAC instance won’t have the necessary permissions to read documents or trigger AI model processing. In this section, we’ll walk through the exact tenancy- and compartment-level policies required, ensuring your setup is both functional and secure. You can find more information here.
The following IAM policies grant Oracle Analytics the necessary permissions to read from your Object Storage bucket and to invoke the AI Document Understanding service.
Compartment level IAM Policy
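The compartment-level statement typically follows the pattern below. This is a sketch of the commonly documented form, not a verbatim copy: substitute your own compartment name, and verify the exact verbs and resource families against Oracle's current documentation.

```
allow any-user to read object-family in compartment <your-compartment>
  where all { request.principal.type = 'analyticsinstance' }
```

The `request.principal.type` condition is what restricts the grant so that only Oracle Analytics instances, rather than arbitrary users, can read objects in the bucket.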
Notes
The next policy needs to be defined at the root compartment level.
Root level IAM Policy
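At the root (tenancy) level, the statement granting access to the Document Understanding AI service typically takes a form like the following. Again, this is an illustrative sketch of the documented pattern; check the exact wording in Oracle's documentation before applying it.

```
allow any-user to use ai-service-document-family in tenancy
  where all { request.principal.type = 'analyticsinstance' }
```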
These policies are necessary to enable Oracle Analytics to access the OCI AI Document Understanding model. Without these policies correctly set up, you will encounter errors when you attempt to run your data flow in Oracle Analytics.
With the IAM policies configured, you can now proceed with setting up the connection and registering the model within Oracle Analytics.
You do this by creating an Oracle Analytics connection to your Oracle Cloud Infrastructure tenancy, which gives you access to your OCI Object Storage bucket.
Register a pre-trained Document Key Value Extraction model with your Oracle Analytics instance, ensuring that the bucket created previously is selected.
This completes all prerequisites and the next step is to run the newly registered pre-trained model in Oracle Analytics by creating a data flow.
The next step is to create a dataset that is used as an input to the data flow. This dataset is a CSV file containing the OCI Object Storage URL(s) for the documents you have uploaded. The CSV file can either contain a row for each document you intend to process, with its individual URL, or a single row with the URL of the bucket itself, in which case every document within the bucket will be processed. Personally, the second option is a no-brainer for me. As mentioned earlier in this article, you need to derive the bucket URL by logging on to the OCI Console's bucket details page and copying the URL from your browser. You can see a sample below with two tabs: the first tab is what you would use for option one, where you list your documents with their corresponding URLs; the second tab has a single row, and this is what you would use to instruct the data flow to process all documents within the specified bucket.
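For the second option, the CSV can be as small as a header plus one row holding the bucket URL. A hypothetical example follows (the column name, namespace, and bucket name are placeholders; use the actual URL copied from your browser as described above):

```
url
https://cloud.oracle.com/object-storage/buckets/<namespace>/<bucket-name>/objects
```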
Follow the instructions here to create your data flow.
Using the Apply AI Model step, you make a call to the registered pretrained AI Document Understanding model. You then add a Save Data step in which you specify the output dataset. In my example below, I also have a few Transform Column steps which apply transformations to some of the columns.
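As an aside, the kind of clean-up a Transform Column step performs can be sketched in plain Python. The input formats below are hypothetical; the sketch simply shows the general idea of normalising an extracted amount string into a number:

```python
import re

# Illustrative version of a "Transform Column" style clean-up: extracted
# document fields usually arrive as strings and need normalising before
# analysis. The input formats are hypothetical examples.

def parse_amount(raw):
    """Convert strings like '£1,234.50' or '1 234,50 EUR' to a float."""
    cleaned = re.sub(r"[^0-9.,]", "", raw)   # keep digits and separators only
    # Treat a trailing ',dd' with no dot as a European decimal comma
    if re.search(r",\d{2}$", cleaned) and "." not in cleaned:
        cleaned = cleaned.replace(",", ".")
    else:
        cleaned = cleaned.replace(",", "")   # thousands separators
    return float(cleaned)

print(parse_amount("£1,234.50"))     # 1234.5
print(parse_amount("1 234,50 EUR"))  # 1234.5
```

In Oracle Analytics you would express the equivalent logic with the expression functions available in the data flow editor rather than Python; the point is only that raw extracted values rarely arrive analysis-ready.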
Once the data flow has been saved, it can be run to generate the output dataset. You can see a sample data visualisation workbook below based on the output dataset with some insights of the information derived from the invoices.
Tips and tricks for working with unstructured data in Oracle Analytics
Working with unstructured documents—especially at scale—introduces its own set of quirks. Here are some practical insights to help you get the most out of the AI Document Understanding feature in Oracle Analytics:

Use Document Batching Strategically

Oracle Analytics currently imposes a 10,000-row processing limit per run. If you’re working with high volumes:
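One simple way to respect that limit is to split your document manifest into batches and feed each batch to a separate data flow run. A sketch (the URLs are placeholders):

```python
# Sketch: split a list of document URLs into batches that respect the
# per-run row limit, so each batch can feed a separate data flow run.

BATCH_LIMIT = 10_000  # per-run row limit described above

def batch_documents(urls, limit=BATCH_LIMIT):
    """Yield successive batches of at most `limit` URLs."""
    for start in range(0, len(urls), limit):
        yield urls[start:start + limit]

urls = [f"https://example.com/doc_{i}.pdf" for i in range(25_000)]
batches = list(batch_documents(urls))
# 25,000 URLs split into batches of 10,000 + 10,000 + 5,000
```

Each batch could then be written out as its own input CSV and processed by a scheduled run of the same data flow.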
Reuse and Schedule Data Flows

Once you’ve built a data flow that works, save it and schedule it to run regularly:
Start Small, Then Scale

Try a proof-of-concept with 10–20 documents first:
Gotchas, Limits and Tips
1. Bucket URL Must Be Copied from the Browser

The most confusing part of this setup is finding the correct OCI Object Storage bucket URL. It’s not visible anywhere in the console UI—you must copy it from the bucket’s detail page URL in your browser.

2. 10,000 Document Row Limit

There’s a hard limit of 10,000 document rows per data flow run. If your use case involves large volumes of documents, you’ll need to split your data or automate batch runs accordingly. Note that the limit is lower when a custom model is used: in that scenario it is 2,000 documents.

3. Document Layouts Matter

The AI model is pre-trained for certain layouts (e.g. invoices, forms). Custom layouts may yield mixed results, and you may need to experiment with field mappings to improve outcomes.

4. Use Tags for Traceability

Tag your buckets and policies in OCI with labels like oac-ai-docs so they’re easier to audit and maintain.

Conclusion

Oracle Analytics’ AI Document Understanding feature bridges a crucial gap between unstructured documents and visual analytics. With a few setup steps—bucket creation, IAM policy configuration, model registration, and a simple data flow—you can surface hidden insights from documents that would otherwise sit untouched. It’s a powerful tool, but one with nuances—such as the hidden bucket URL and processing limits—that are worth planning for. Still, for anyone looking to extend their analytics to the edges of their data estate, this capability opens the door. Oracle Analytics now makes it possible to integrate scanned documents, invoices, and other unstructured data sources directly into your dashboards—unlocking insights that were previously out of reach.

Optimising Performance in Oracle Analytics Cloud: A Deep Dive into Extracted Data Access Mode
10/5/2025
The May 2025 update to Oracle Analytics Cloud (OAC) introduces a significant new feature designed to boost performance and reduce dependency on source systems: the Extracted data access mode. This new capability is especially valuable for enterprise users seeking to optimise dashboard responsiveness, reduce backend load, and deliver consistent performance across a variety of usage scenarios. In this expanded post, we’ll delve into what Extracted mode brings to the table, compare it with the existing Live and Cached modes, and offer guidance on how to get the most value from it.
Understanding Data Access Modes in Oracle Analytics Cloud
To fully appreciate the advantages of the new Extracted mode, it helps to revisit the existing data access modes in Oracle Analytics Cloud — namely Live and Cached. Each mode supports different use cases, with varying implications for data freshness, system performance, and architectural complexity.

Live Mode

In Live mode, Oracle Analytics executes every query directly against the source system in real time. Whether a user is exploring a dashboard, applying filters, or drilling into data, each action sends a query to the backend database. Advantages:
Cached Mode

Cached mode creates a temporary local copy of query results within OAC’s cache layer. This cache is generated on the fly when users first load a dashboard or perform a query, and reused in subsequent interactions where applicable. Advantages:
Introducing: Extracted Mode (New in May 2025)
The newly introduced Extracted mode provides a more structured and predictable alternative. It allows dataset creators to perform a full extract of data from a source system and store that extract directly within Oracle Analytics. Unlike Cached mode, this data snapshot is proactively managed and completely reusable. Key Benefits of Extracted Mode:
Comparison Table: Live vs Cached vs Extracted Mode
Cached vs Extracted Mode (Quick Reference):
Considerations:
Creating and Managing Extracted Datasets in OAC
Working with Extracted mode is a straightforward process within Oracle Analytics Cloud’s interface. Here’s a step-by-step guide:
Additional Tips:
Where Extracted Mode Shines: Key Use Cases The benefits of Extracted mode become most apparent in high-demand or constrained environments. Here are several real-world examples where this mode adds tangible value:
Best Practices for Extracted Mode To ensure you get the best results from Extracted mode, consider these best practices:
Final Thoughts
The introduction of Extracted mode in Oracle Analytics Cloud marks a significant step forward in how practitioners can balance data freshness, performance, and scalability. By providing a fully materialised, high-speed dataset layer within OAC, this new mode empowers teams to deliver faster, more consistent user experiences without overloading backend systems. It’s not a silver bullet — and it won’t replace Live mode where real-time data is needed — but for many scenarios, particularly those requiring speed and stability, Extracted mode is a smart and strategic choice. With Oracle continuing to invest in features that improve accessibility, manageability, and user experience, this latest enhancement underlines the platform’s commitment to evolving enterprise analytics.
Author

A bit about me. I am an Oracle ACE Pro, Oracle Cloud Infrastructure 2023 Enterprise Analytics Professional, Oracle Cloud Fusion Analytics Warehouse 2023 Certified Implementation Professional, Oracle Cloud Platform Enterprise Analytics 2022 Certified Professional, Oracle Cloud Platform Enterprise Analytics 2019 Certified Associate and a certified OBIEE 11g implementation specialist.