Oracle Analytics by Adrian Ward

25 Years Experience and Counting
Oracle Analytics Weekly Roundup — 06-Apr

Posted on April 6, 2026 By Adrian Ward

Below is this week’s Oracle Analytics & Data roundup for obiee.co.uk readers, focusing on OAC/OAS, Fusion Data Intelligence, Autonomous Data Warehouse / 23ai, and a practical SQL tip. Note: with no new product-specific announcements this week, this roundup focuses on stable, battle-tested patterns and usage tips rather than new features.

Oracle Analytics (OAC & OAS)

1) Hardening Your OAC/OAS Semantic Layer for Long-Term Maintainability

TL;DR: Treat your RPD / semantic model as code: standardise naming, isolate subject areas, and version-control everything to reduce regression risk during upgrades and migrations.

Even in quieter weeks with no major patch announcements, there’s plenty of value in revisiting how we structure and maintain our OAC and OAS semantic layers. Many production issues in Oracle Analytics are not caused by the platform itself, but by fragile modelling practices that don’t age well as requirements evolve.

Three practical patterns worth reinforcing:

  • Stable logical column names: Avoid renaming logical columns once reports are in use. Instead, add new columns when semantics change, and deprecate old ones gradually. This helps prevent broken workbooks and DV projects after RPD deployments.
  • Subject-area boundaries: Keep subject areas narrowly focused (e.g. “Sales – Orders”, “Sales – Pipeline”) rather than a single mega subject area. This improves query performance, security configuration, and user discoverability.
  • Semantic-layer version control: Store RPDs (OAS) or Model Administration Tool exports / BAR files (OAC) in Git or another VCS. Tag versions with the corresponding platform patch level so you can correlate model changes with platform changes when troubleshooting.

For OAC, also consider a clear promotion pipeline:

  1. Develop in a dedicated DEV instance with a separate IDCS / IAM domain where possible.
  2. Use snapshot exports (BAR files) for consistent migration of data models, DV projects, and security artefacts.
  3. Automate smoke tests (key dashboards, filters, prompts) in TEST before promoting to PROD.

These practices are not new, but they are the difference between stable, low-drama analytics environments and brittle ones that break with every change.

Source: no new Oracle Analytics-specific items this week; guidance is based on established OAC/OAS operational best practices.

Fusion Data Intelligence (FDI)

2) Using FDI as a “Source of Truth” Without Over-Customising

TL;DR: Start with FDI’s standard subject areas and KPIs; only extend the model when you have clear, recurring business questions that cannot be answered with the delivered content.

Fusion Data Intelligence continues to be positioned as the curated, governed analytics layer on top of Fusion SaaS applications. Even without new feature drops to discuss this week, it’s worth reiterating an adoption pattern that consistently works well in real projects:

  • Lead with delivered content: Before designing custom metrics or data flows, exhaust what’s already present in the FDI semantic model and KPIs. This reduces maintenance overhead and keeps you aligned with future updates.
  • Document “gaps” explicitly: When a business requirement can’t be met with delivered content, capture it as a specific gap (e.g. “Need order fulfilment SLA by carrier and region, last 90 days”). This makes it easier to justify and design extensions.
  • Minimise data movement: Where possible, consume FDI data via its curated views instead of replicating large volumes into external warehouses unless there is a clear performance or integration driver.

From a governance perspective, treat FDI as the “golden” operational reporting layer for Fusion data, and use downstream platforms (OAC, ADW, other warehouses) for cross-system analytics and advanced modelling where needed.

Source: no new FDI-specific items this week; summary reflects conservative, widely adopted usage patterns.

Autonomous Data Warehouse / 23ai

3) Practical Housekeeping for Autonomous Data Warehouse

TL;DR: Regularly review storage, inactive objects, and resource usage in ADW; small housekeeping tasks can delay costly scale-ups and keep query performance predictable.

In the absence of new ADW/23ai feature announcements this week, it’s a good time to revisit operational hygiene. Autonomous Data Warehouse is forgiving, but not magical: poor housekeeping still leads to higher costs and slower queries.

Three pragmatic tasks to schedule:

  • Identify unused tables and partitions: Use data dictionary views to find objects not queried in months. Consider archiving or dropping them to reclaim space and reduce clutter.
  • Review auto-scaling behaviour: Monitor how often your ADW instance scales up. If spikes correlate with specific ETL windows or ad-hoc workloads, you may be able to reschedule or optimise those jobs.
  • Check index and partition strategy: Even with ADW’s optimisations, appropriate partitioning and selective indexing can materially improve performance for large fact tables.
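As a sketch of the partitioning point, a large fact table can use interval partitioning so that monthly partitions are created automatically as data arrives (the table and column names below are illustrative, not from a real schema):

-- Illustrative only: interval-partitioned fact table with one
-- monthly partition created automatically per new month of data.
CREATE TABLE sales_fact (
  sale_id   NUMBER,
  sale_date DATE NOT NULL,
  amount    NUMBER
)
PARTITION BY RANGE (sale_date)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
(
  PARTITION p_before_2024 VALUES LESS THAN (DATE '2024-01-01')
);

Interval partitioning removes the ongoing maintenance burden of pre-creating range partitions, which is one less housekeeping task to schedule.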

A simple example to flag potentially “cold” tables. Note that LAST_ANALYZED records when optimiser statistics were last gathered, not when the table was last queried, so treat the output only as a list of candidates for further review (and adjust the filters conservatively for your environment):

SELECT owner,
       table_name,
       last_analyzed
  FROM dba_tab_statistics
 WHERE object_type = 'TABLE'
   AND owner NOT IN ('SYS', 'SYSTEM')
   AND (last_analyzed IS NULL
        OR last_analyzed < ADD_MONTHS(TRUNC(SYSDATE), -6))
 ORDER BY last_analyzed NULLS FIRST;

This doesn’t tell you everything about usage, but it’s a helpful starting point for spotting candidates for review, especially in sandboxes and older environments.
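For the storage-review task, the largest segments can likewise be listed straight from the data dictionary; this is a standard query against DBA_SEGMENTS (again, adjust the owner filter for your environment):

-- Top 20 segments by size, excluding Oracle-maintained schemas.
SELECT owner,
       segment_name,
       segment_type,
       ROUND(SUM(bytes) / 1024 / 1024 / 1024, 2) AS size_gb
  FROM dba_segments
 WHERE owner NOT IN ('SYS', 'SYSTEM')
 GROUP BY owner, segment_name, segment_type
 ORDER BY size_gb DESC
 FETCH FIRST 20 ROWS ONLY;

Running this monthly and comparing results over time makes storage growth visible long before it forces a scale-up.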

Source: no new ADW / 23ai-specific items this week; recommendations are based on standard ADW operational practices.

SQL Tip of the Week

4) Safely Using QUALIFY-Style Logic in Databases Without QUALIFY

TL;DR: If your database doesn’t support QUALIFY, you can emulate it cleanly with a subquery or inline view that applies window functions first, then filters.

Some modern SQL dialects (e.g. Snowflake, BigQuery) support a QUALIFY clause, which lets you filter on window functions without nesting queries. Oracle Database does not support QUALIFY, so you need a slightly different pattern.

Suppose you want the latest order per customer. In a dialect with QUALIFY, you might write:

-- Not valid in Oracle Database
SELECT customer_id,
       order_id,
       order_date,
       amount,
       ROW_NUMBER() OVER (
         PARTITION BY customer_id
         ORDER BY order_date DESC
       ) AS rn
  FROM orders
QUALIFY rn = 1;

In Oracle, you can achieve the same result with an inline view:

SELECT customer_id,
       order_id,
       order_date,
       amount
FROM (
    SELECT customer_id,
           order_id,
           order_date,
           amount,
           ROW_NUMBER() OVER (
             PARTITION BY customer_id
             ORDER BY order_date DESC
           ) AS rn
      FROM orders
) t
WHERE t.rn = 1;

This pattern is robust, optimiser-friendly, and works across OAC/OAS data models, ADW, and on-prem Oracle Database. It’s also a good habit when you’re writing SQL intended to be portable between multiple platforms.
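For completeness, when you only need one row per group, the same “latest order per customer” result can also be written with Oracle’s FIRST/LAST aggregate functions, avoiding the inline view entirely:

-- Latest order per customer using KEEP (DENSE_RANK LAST).
SELECT customer_id,
       MAX(order_id)   KEEP (DENSE_RANK LAST ORDER BY order_date) AS order_id,
       MAX(order_date) AS order_date,
       MAX(amount)     KEEP (DENSE_RANK LAST ORDER BY order_date) AS amount
  FROM orders
 GROUP BY customer_id;

One caveat: if a customer has two orders with the same order_date, ROW_NUMBER picks a single (arbitrary) row, whereas this form applies MAX across the tied rows column by column, so the two patterns can differ on ties.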

Source: no external SQL-specific links this week; the example is based on standard Oracle SQL behaviour.
