Note: No new items were provided this week, so this roundup focuses on practical reminders, evergreen techniques, and conservative summaries of existing capabilities rather than new feature announcements.
Oracle Analytics (OAC & OAS)
1. Hardening Data Access with Row-Level Security
TL;DR: Revisit your row-level security (RLS) in OAC/OAS: centralise filters in the RPD or semantic model, and test with real user identities rather than only with admin accounts.
Row-level security remains one of the most effective ways to ensure that users only see the data they are entitled to in Oracle Analytics Cloud (OAC) and Oracle Analytics Server (OAS). Instead of scattering filters across workbooks and analyses, you can define data filters at the model layer (RPD in OAS, semantic model in OAC) and tie them to application roles or groups. This keeps your logic consistent and auditable.
A common anti-pattern is testing security only as a power user or BI administrator. Because admins often bypass RLS, this can hide misconfigurations until a business user reports unexpected data. A better approach is to maintain a small set of “test users” that mirror real roles (e.g. “Sales_North”, “HR_ReadOnly”) and validate dashboards with those accounts whenever you change filters or role mappings.
Also consider documenting your RLS rules in a simple table (role → filter predicate) and storing that alongside your RPD or semantic model source control. This makes it easier to review changes during code reviews and avoids accidental privilege creep over time.
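A sketch of what such a documentation table might look like, with illustrative role names and predicates (the VALUEOF(NQ_SESSION.…) form is the usual way to reference a session variable in a model-layer filter; everything else here is an assumption, not a prescribed layout):

```
Role          Model-layer data filter (illustrative)
-----------   --------------------------------------------------------
Sales_North   "Sales"."Dim Region"."Region" = 'NORTH'
HR_ReadOnly   "HR"."Dim Dept"."Dept Id" = VALUEOF(NQ_SESSION."DEPT_ID")
BI_Admin      (no filter; confirm this bypass is intentional)
```

Keeping this table in the same repository as the RPD or semantic model means a reviewer can spot a new unfiltered role in the diff, rather than discovering it in production.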
Source: https://www.oracle.com/business-analytics/
2. Performance Tuning: Use Query Limits and Cache Wisely
TL;DR: Combine query limits with judicious use of BI Server cache to protect your backend databases and keep dashboards responsive.
Performance tuning in OAC/OAS is rarely about a single magic setting. Two levers that consistently help are query limits and caching. Query limits at the connection pool or subject area level can prevent runaway queries from consuming all database resources. For example, you can cap maximum rows returned, maximum query time, or both. This protects shared environments and encourages better-designed analyses.
BI Server cache, when enabled and managed carefully, can dramatically reduce repeated load on your data warehouse for common dashboards and prompts. The key is to align cache persistence with your data refresh cycles and to invalidate cache when ETL loads complete. Over-aggressive caching without proper invalidation can lead to users seeing stale data, while no caching at all can overload the database during peak reporting hours.
For mixed workloads, consider caching only for specific subject areas or dashboards that are read-mostly and refreshed on a predictable schedule, and keep ad-hoc exploration areas uncached.
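One conservative way to tie cache invalidation to your refresh cycle is to call the BI Server's cache-purge procedures as the final step of each ETL run, for example via the nqcmd utility. The database, schema, and table names below are illustrative assumptions, not a definitive setup:

```sql
-- purge_sales_cache.sql: run against the BI Server (e.g. with nqcmd)
-- once the nightly load completes. Purging only the reloaded physical
-- table keeps still-valid cache entries for other subject areas.
Call SAPurgeCacheByTable('DWH', '', 'SALES', 'FACT_REVENUE');

-- Or, when a load touches too many tables to track individually:
Call SAPurgeAllCache();
```

Scripting the purge alongside the load, rather than relying on time-based expiry alone, is what keeps "cache persistence aligned with refresh cycles" in practice.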
Source: https://docs.oracle.com/en/cloud/paas/analytics-cloud/index.html
Fusion Data Intelligence (FDI)
3. Treat FDI as a Productised Data Model, Not Just Another Source
TL;DR: When integrating Fusion Data Intelligence with OAC or other tools, lean on its curated, cross-functional data model instead of rebuilding the same joins and logic yourself.
Fusion Data Intelligence (FDI) provides a packaged, cross-domain data model on top of Fusion Applications data. Rather than connecting directly to raw operational tables and reverse-engineering relationships, FDI exposes curated entities and measures that already embed business logic such as calendar handling, status flags, and common aggregations.
When you bring FDI into OAC or other analytics platforms, treat it as a semantic layer in its own right. Start by mapping your analytics requirements to the entities and measures FDI already provides, and only extend the model where necessary. This approach reduces duplication of logic across tools and helps keep KPIs consistent between embedded Fusion analytics and external dashboards.
It’s also worth aligning your security model with FDI’s own role and data access patterns, so you don’t end up with conflicting entitlements between Fusion and downstream analytics environments.
Source: https://www.oracle.com/applications/fusion-data-intelligence/
Autonomous Data Warehouse / 23ai
4. Use Separate Service Workloads for ETL and Analytics
TL;DR: In Autonomous Data Warehouse (ADW), separate heavy-load ETL sessions from interactive analytics sessions using different services and resource management settings.
Autonomous Data Warehouse (part of the broader Oracle Database 23ai family) allows you to define multiple database services with different resource management profiles. A practical pattern is to route batch ETL or ELT jobs through one service (with higher parallelism and less strict response-time expectations) and interactive BI/analytics traffic through another (with tighter concurrency and response-time controls).
This separation helps avoid situations where a long-running transformation job degrades the experience for dashboard users. It also makes it easier to monitor and tune each workload independently using built-in performance views and tools such as SQL Monitor.
For teams running OAC or OAS on top of ADW, make sure your BI connection strings explicitly reference the intended service (for example, an “_high” or “_medium” analytics service) rather than the default, and coordinate with your DBAs on any changes to service-level resource plans.
Source: https://docs.oracle.com/en/cloud/paas/autonomous-database/index.html
5. Incremental Loads with Partition Exchange
TL;DR: For large fact tables in ADW/23ai, consider partition exchange loading to minimise downtime and maximise load performance.
When dealing with large, partitioned fact tables, a classic Oracle technique that still pays dividends in ADW/23ai is partition exchange loading. Instead of inserting or merging millions of rows directly into a live partition, you load data into a staging table with the same structure and then swap it into the target partition using an ALTER TABLE ... EXCHANGE PARTITION operation.
This approach keeps your main table available for queries while the load is in progress and reduces index maintenance overhead. Once the exchange completes, queries immediately see the new data with minimal locking and logging. It is especially useful for daily or hourly incremental loads where each batch corresponds neatly to a partition (for example, by load date or transaction date).
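A minimal sketch of the technique, assuming a fact table partitioned by day; the table, column, and partition names are illustrative:

```sql
-- 1. Stage the new batch in a table with the same shape as the fact.
CREATE TABLE sales_fact_stage AS
SELECT * FROM sales_fact WHERE 1 = 0;

INSERT /*+ APPEND */ INTO sales_fact_stage
SELECT * FROM ext_sales_feed
WHERE transaction_date = DATE '2024-06-01';

-- 2. Build matching local indexes on the staging table, then swap it
--    into the target partition as a fast dictionary-only operation.
ALTER TABLE sales_fact
  EXCHANGE PARTITION p_2024_06_01
  WITH TABLE sales_fact_stage
  INCLUDING INDEXES
  WITHOUT VALIDATION;
```

WITHOUT VALIDATION skips the row-by-row partition-bound check, which is safe only when the loading process guarantees every staged row belongs to the target partition; if that guarantee is weaker, leave validation on and accept the extra cost.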
Source: https://docs.oracle.com/en/database/oracle/oracle-database/23/
SQL Tip of the Week
6. Safely De-Duplicating with MATCH_RECOGNIZE
TL;DR: Use MATCH_RECOGNIZE to identify and keep only the “best” row per business key when dealing with near-duplicates, instead of relying solely on ROW_NUMBER() and ad-hoc ordering.
Oracle’s MATCH_RECOGNIZE clause can be a powerful alternative to window functions when you need to detect patterns or de-duplicate based on business rules. Suppose you receive multiple records per customer per day and want to keep only the latest successful status per day.
SELECT *
FROM customer_events
MATCH_RECOGNIZE (
  PARTITION BY customer_id, event_date
  ORDER BY event_time DESC
  MEASURES
    best.event_id   AS kept_event_id,
    best.status     AS kept_status,
    best.event_time AS kept_event_time
  ONE ROW PER MATCH
  -- Anchor at the start of each partition and skip rows reluctantly,
  -- so exactly one match (the latest SUCCESS) is returned per
  -- customer/day, rather than one match per SUCCESS row.
  PATTERN (^ earlier*? best)
  DEFINE
    best AS status = 'SUCCESS'
);
This pattern partitions events by customer and date, orders them from latest to earliest, and selects the first row that satisfies your “best” condition (in this case, a SUCCESS status). Compared with a simple ROW_NUMBER() approach, this makes the rule explicit and easier to extend (for example, preferring certain channels or sources) without rewriting complex ORDER BY expressions.
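For comparison, a sketch of the equivalent ROW_NUMBER() formulation, using the same illustrative table and columns:

```sql
SELECT customer_id, event_date, event_id, status, event_time
FROM (
  SELECT e.*,
         ROW_NUMBER() OVER (
           PARTITION BY customer_id, event_date
           ORDER BY event_time DESC
         ) AS rn
  FROM customer_events e
  WHERE status = 'SUCCESS'
)
WHERE rn = 1;
```

This works, but the "best row" rule is split between the WHERE clause and the window's ORDER BY; as the rule grows (preferred channels, fallback statuses), the MATCH_RECOGNIZE version keeps it in one declarative DEFINE clause.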
Source: https://docs.oracle.com/en/database/oracle/oracle-database/23/sqlrf/MATCH_RECOGNIZE.html