Note: No external items were provided this week, so this roundup focuses on practical guidance, recent themes in the community, and evergreen techniques rather than specific news announcements.
Oracle Analytics (OAC & OAS)
1. Hardening Data Security with Row-Level Filters
TL;DR: Use row-level security (RLS) in OAC/OAS to ensure users only see the data they are entitled to, especially when multiple business units share a common semantic model.
Row-level security remains one of the most impactful – and often underused – features in Oracle Analytics. Whether you are on OAC or OAS, centralising logic in the RPD / semantic model and using application roles to drive filters is still the cleanest way to avoid proliferating separate datasets and workbooks for each business unit.
A common pattern is to define a logical dimension for “Business Unit” or “Operating Company” and then apply a data filter on that logical table for a given application role. For example, users in the BI_CONSULTING_BU role only see rows where BUSINESS_UNIT = 'CONSULTING'. This keeps the model reusable while enforcing strict segregation of data.
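The filter itself is just a predicate attached to the application role in the semantic model. As an illustrative sketch (the subject area, logical table, and column names below are assumptions, not from a real model), the data filter for the BI_CONSULTING_BU role might look like:

```sql
-- Data filter attached to the BI_CONSULTING_BU application role
-- (logical table and column names are illustrative)
"Sales"."Dim - Business Unit"."Business Unit Code" = 'CONSULTING'
```

Because the filter lives in the semantic model, every workbook and analysis built on the subject area inherits it automatically, with no per-workbook effort.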
Key practices:
- Map identity provider (IdP) groups to application roles in OAC/OAS so security follows SSO groups.
- Prefer semantic model filters over workbook-level filters for anything security-related.
- Test with “Preview as Role” (or equivalent) to validate what each persona actually sees.
Useful reference on model-based security in Oracle Analytics:
https://docs.oracle.com/en/cloud/paas/analytics-cloud/acsom/security-data.html
2. Incremental Data Refresh Patterns for OAC Datasets
TL;DR: If you are still doing full reloads of large OAC datasets, consider incremental refresh patterns keyed on dates or surrogate keys to reduce load times and API pressure.
OAC datasets sourced from ADW, Fusion, or on-prem databases can grow quickly. Full reloads become slow and can stress both OAC and the source system. A more scalable pattern is to:
- Partition your fact data by a load key (e.g. transaction date, last update date, or a sequence ID).
- Use a database view that exposes only “new or changed” rows since the last successful load.
- Drive an incremental dataset refresh in OAC based on that view.
For example, maintain a control table in the database that stores the last load timestamp used by OAC. Your view then filters rows where LAST_UPDATE_DATE > LAST_LOAD_DATE. After a successful dataset refresh, update the control table. This keeps the logic in the database, where it is easier to debug and optimise.
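The control-table pattern above can be sketched in SQL as follows. All object names here are illustrative:

```sql
-- Hypothetical control table tracking the high-water mark per dataset
CREATE TABLE oac_load_control (
    dataset_name    VARCHAR2(100) PRIMARY KEY,
    last_load_date  TIMESTAMP
);

-- View exposing only new or changed rows since the last successful load
CREATE OR REPLACE VIEW sales_fact_delta AS
SELECT f.*
FROM   sales_fact f
JOIN   oac_load_control c
       ON c.dataset_name = 'SALES_FACT'
WHERE  f.last_update_date > c.last_load_date;

-- After OAC confirms a successful dataset refresh, advance the watermark
UPDATE oac_load_control
SET    last_load_date = SYSTIMESTAMP
WHERE  dataset_name = 'SALES_FACT';
```

The UPDATE should only run after the refresh is confirmed successful; if it runs unconditionally, a failed refresh would silently skip a window of changes.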
OAC dataset documentation:
https://docs.oracle.com/en/cloud/paas/analytics-cloud/acsds/working-datasets.html
Fusion Data Intelligence (FDI)
3. Treating FDI as a Productised Analytics Layer, Not Just “Another Source”
TL;DR: FDI is more than a convenient source for OAC; it is a curated semantic and metrics layer over Fusion data. Design your analytics around its subject areas and KPIs rather than rebuilding everything from raw tables.
As more customers roll out Fusion Data Intelligence, a recurring architectural question is whether to consume FDI directly from OAC or to bypass it and model everything from the underlying Fusion or ADW schemas. In most cases, the sweet spot is to treat FDI as the “first-class” analytics layer and only drop down to raw tables when you have a very specific gap.
Benefits of leaning into FDI:
- Predefined subject areas and measures aligned with Fusion business processes.
- Reduced need to re-implement complex business logic (e.g. multi-ledger accounting rules, HR security, or procurement statuses).
- Better upgrade path as Oracle evolves the FDI content with new KPIs and attributes.
A practical pattern is:
- Use FDI subject areas as the primary source for OAC semantic models and datasets.
- Extend with custom ADW or Fusion views only where FDI does not yet expose the required attributes or grain.
- Document which KPIs are “FDI standard” versus “customer-specific extensions” for long-term maintainability.
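Where a gap does exist, the extension can often be as small as a custom view sitting alongside the FDI content. The table and column names below are entirely hypothetical (they are not actual FDI object names) and only illustrate the shape of such an extension:

```sql
-- Illustrative only: add a customer-specific attribute next to curated content
CREATE OR REPLACE VIEW xx_supplier_ext AS
SELECT s.supplier_id,
       s.supplier_name,
       x.risk_rating          -- customer-specific extension attribute
FROM   dw_supplier_d s        -- hypothetical curated supplier dimension
LEFT JOIN xx_supplier_risk x  -- customer-maintained extension table
       ON x.supplier_id = s.supplier_id;
```

Keeping extensions in clearly prefixed views like this makes the "standard versus custom" boundary easy to audit later.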
FDI overview:
https://docs.oracle.com/en/cloud/saas/applications-common/23d/oafdi/index.html
Autonomous Data Warehouse / 23ai
4. Using ADW as the Hub for Hybrid Analytics Architectures
TL;DR: ADW (and the broader 23ai database platform) works best as the central query and transformation hub, with OAC and FDI consuming curated views and materialized views rather than raw tables.
Many Oracle Analytics deployments now combine multiple sources: Fusion SaaS, on-prem E-Business Suite, third-party SaaS, and flat files. ADW is well-suited to be the consolidation point for this data, especially when paired with OAC as the front-end.
Recommended patterns:
- Land raw data into staging schemas in ADW (via Data Transforms, GoldenGate, or external tools).
- Transform into conformed dimensions and facts using SQL views and materialized views, not just ad-hoc queries.
- Expose only curated schemas to OAC and FDI to keep semantic models clean and stable.
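As a sketch of the second step, a curated materialized view over a staging table might look like the following (schema, table, and column names are illustrative):

```sql
-- Curated aggregate exposed to OAC; refreshed on demand by the load pipeline
CREATE MATERIALIZED VIEW curated.daily_sales_mv
BUILD IMMEDIATE
REFRESH COMPLETE ON DEMAND
AS
SELECT TRUNC(s.transaction_date) AS sales_day,
       s.business_unit,
       SUM(s.amount)             AS total_amount
FROM   staging.sales_raw s
GROUP  BY TRUNC(s.transaction_date), s.business_unit;
```

Fast (incremental) refresh is also possible if materialized view logs are created on the base tables, at the cost of some restrictions on the query shape.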
With 23ai features such as improved JSON support and in-database machine learning, you can keep more of the logic close to the data. For example, you can score ML models in SQL and surface the predictions directly into OAC workbooks without exporting to separate services.
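For example, assuming a classification model named CHURN_MODEL has already been trained in the database (the model, table, and column names here are illustrative), the standard SQL scoring functions let a plain view serve predictions to OAC:

```sql
-- Score rows in place with an in-database model; OAC queries this view directly
CREATE OR REPLACE VIEW curated.customer_churn_scores AS
SELECT customer_id,
       PREDICTION(churn_model USING *)             AS churn_class,
       PREDICTION_PROBABILITY(churn_model USING *) AS churn_probability
FROM   curated.customer_features;
```

No data leaves the database: the model is applied row by row at query time, and the predictions appear in OAC like any other column.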
ADW documentation:
https://docs.oracle.com/en/cloud/paas/autonomous-database/index.html
SQL Tip of the Week
5. Safely De-duplicating Rows with MATCH_RECOGNIZE
TL;DR: Use MATCH_RECOGNIZE to identify and keep only the “best” row in a set of duplicates based on business rules, without resorting to complex self-joins.
When integrating multiple sources into ADW, you often end up with duplicate business keys (e.g. the same customer ID from two systems). A common requirement is to keep one “surviving” row based on a rule such as “latest update wins” or “prefer source A over source B”.
Here is a practical example using MATCH_RECOGNIZE to keep the latest version of each customer by LAST_UPDATE_DATE:
-- Sample structure:
-- CUSTOMER_ID, SOURCE_SYSTEM, LAST_UPDATE_DATE, NAME, EMAIL
SELECT
    customer_id,
    source_system,
    last_update_date,
    name,
    email
FROM customers
MATCH_RECOGNIZE (
    PARTITION BY customer_id
    ORDER BY last_update_date DESC
    MEASURES
        FIRST(source_system)    AS source_system,
        FIRST(last_update_date) AS last_update_date,
        FIRST(name)             AS name,
        FIRST(email)            AS email
    ONE ROW PER MATCH
    -- latest_row takes the newest row; dup_row* greedily consumes the rest of
    -- the partition, so each customer produces exactly one match
    PATTERN (latest_row dup_row*)
    DEFINE
        dup_row AS 1 = 1
);
What this does:
- PARTITION BY customer_id groups duplicates for the same customer.
- ORDER BY last_update_date DESC puts the most recent row first in each partition.
- FIRST(...) in MEASURES picks the values from that most recent row.
- ONE ROW PER MATCH ensures you get one surviving row per customer.
This pattern is more expressive and maintainable than nested analytic functions or self-joins, especially as your survivorship rules become more complex (e.g. “prefer SOURCE_SYSTEM = ‘FDI’ unless the other source is more than 7 days newer”).
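As a sketch of such a rule, the ordering key itself can encode the preference. The example below is illustrative: by shifting FDI rows forward by 7 days in the sort, an FDI row wins unless a competing row is more than 7 days newer.

```sql
-- Survivorship: prefer SOURCE_SYSTEM = 'FDI' unless another source
-- is more than 7 days newer (illustrative sketch)
SELECT
    customer_id,
    source_system,
    last_update_date,
    name,
    email
FROM customers
MATCH_RECOGNIZE (
    PARTITION BY customer_id
    ORDER BY CASE WHEN source_system = 'FDI'
                  THEN last_update_date + INTERVAL '7' DAY
                  ELSE last_update_date
             END DESC
    MEASURES
        FIRST(source_system)    AS source_system,
        FIRST(last_update_date) AS last_update_date,
        FIRST(name)             AS name,
        FIRST(email)            AS email
    ONE ROW PER MATCH
    PATTERN (best_row other_row*)
    DEFINE
        other_row AS 1 = 1
);
```

Pushing the business rule into the ORDER BY keeps the pattern itself trivial; more elaborate rules can instead go into the DEFINE clause as conditions on the pattern variables.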
Oracle SQL pattern matching reference:
https://docs.oracle.com/en/database/oracle/oracle-database/23/sqlrf/MATCH_RECOGNIZE.html