Note: No external items were provided this week, so this roundup focuses on recent themes, common questions, and practical tips we’re seeing in Oracle Analytics and data projects. Where relevant, links point to core product documentation and reference material rather than news announcements.
Oracle Analytics (OAC & OAS)
1. Hardening Data Access with Row-Level Security in OAC/OAS
TL;DR: Row-level security (RLS) in OAC/OAS is still the most effective way to keep a single semantic model while tailoring data visibility by user or role. This week we revisited best practices for implementing RLS using session variables and application roles.
Row-level security remains a recurring topic on Oracle Analytics projects, especially when customers consolidate multiple legacy reports into a single OAC or OAS semantic model. The goal is simple: finance should see all cost centres, but regional managers should only see their region, and external partners should see only their own data.
In OAC/OAS, RLS is typically implemented using:
- Session variables populated at login (e.g. REGION, COST_CENTER_LIST).
- Application roles mapped to identity provider groups (IDCS, OCI IAM, or on-prem LDAP).
- Data filters defined in the RPD (OAS) or in the semantic modeler (OAC) that reference those variables.
A common pattern is to maintain a security bridge table (e.g. USER_REGION_SECURITY) that maps user IDs or roles to allowed regions. At login, a row-wise initialization block populates a session variable such as NQ_SESSION.REGION_LIST with one value per allowed region, and the logical table source for your fact table is filtered with a predicate like:
REGION IN (VALUELISTOF(NQ_SESSION.REGION_LIST))
Note that VALUELISTOF (rather than VALUEOF) is needed to expand a row-wise variable into a comma-separated list of values.
For OAC’s web-based semantic modeler, the same idea applies, but you define filters in the modeler UI and reference session variables via the expression editor. The key is to keep security logic centralised and auditable, not scattered across individual workbooks or DV projects.
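As a minimal sketch of the bridge-table approach, assuming a table named USER_REGION_SECURITY with USER_ID and REGION columns (both names are illustrative), the row-wise initialization block query might look like this. It uses the :USER token, which OAC/OAS substitutes with the signed-in user's ID when the session is created:

```sql
-- Row-wise initialization block query (illustrative table and column names).
-- Each returned row contributes one value to the REGION_LIST session variable:
-- the first column is the variable name, the second is a value for it.
SELECT 'REGION_LIST', region
FROM   user_region_security
WHERE  UPPER(user_id) = UPPER(':USER')
```

Because the block is row-wise, a user mapped to three regions yields three rows, and the data filter predicate expands the resulting list at query time.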
Useful references:
- https://docs.oracle.com/en/cloud/paas/analytics-cloud/acsom/security-oracle-analytics.html
- https://docs.oracle.com/en/middleware/bi/analytics-server/administer-oas/security-oracle-business-intelligence.html
2. Managing Semantic Models Across Environments
TL;DR: Promoting semantic models between DEV, TEST, and PROD in OAC/OAS works best when you standardise connection names, externalise credentials, and script deployments where possible.
Another frequent challenge is keeping semantic models consistent across environments. On OAS, this is traditionally handled with RPD promotion via WLST or EM; on OAC, we now have the web-based semantic modeler and BAR (backup and restore) exports for content migration.
Some practical tips:
- Standardise connection names (e.g. DW_CONN rather than embedding environment-specific names). This allows you to change only the underlying connection details per environment.
- Use OCI Vault or wallet-based connections for database credentials in OAC, instead of hard-coding passwords in models.
- Script BAR exports/imports for repeatable deployments, especially when you have multiple subject areas and DV projects to promote.
- Version control RPDs (OAS) or JSON exports of semantic models where possible, so you can track changes over time.
References:
- https://docs.oracle.com/en/cloud/paas/analytics-cloud/acsom/backup-and-restore.html
- https://docs.oracle.com/en/middleware/bi/analytics-server/administer-oas/managing-repository.html
Fusion Data Intelligence (FDI)
3. Thinking Through Data Lineage from Fusion Apps into FDI
TL;DR: As more customers adopt Fusion Data Intelligence, questions about lineage from Fusion transactional tables to curated FDI subject areas are increasing. The safest approach is to lean on Oracle’s delivered documentation and data dictionaries rather than reverse-engineering pipelines.
FDI sits between Oracle Fusion Cloud Applications and your analytics layer, providing curated, analytics-ready data models. When implementing, teams often want to understand exactly how a Fusion source table (e.g. PER_ALL_PEOPLE_F in HCM) flows into a particular FDI dataset or metric.
While the underlying pipelines are managed by Oracle, you can still document lineage at a functional level:
- Start from the FDI subject area or dataset you are using (e.g. Workforce, Financials, Procurement).
- Use the FDI data dictionary and technical documentation to identify the corresponding Fusion business objects and tables.
- Capture this mapping in your own data catalogue or Confluence page for project stakeholders.
Because FDI is a managed service, avoid relying on any undocumented internal object names or behaviours; they can change between updates. Instead, treat FDI as a stable semantic layer and focus on how it aligns with your reporting requirements and governance processes.
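One lightweight way to keep that functional mapping queryable is a small reference table in your own schema. A sketch, where every name is a project convention rather than an FDI object:

```sql
-- Illustrative lineage catalogue table; all names are assumptions,
-- not delivered FDI or Fusion objects.
CREATE TABLE fdi_lineage_map (
  subject_area   VARCHAR2(100),   -- e.g. 'Workforce', 'Financials'
  fdi_dataset    VARCHAR2(200),   -- curated FDI dataset or metric name
  fusion_object  VARCHAR2(200),   -- e.g. 'PER_ALL_PEOPLE_F'
  notes          VARCHAR2(4000),  -- grain, documented transformations, owner
  last_reviewed  DATE             -- re-check after each FDI quarterly update
);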
Autonomous Data Warehouse / 23ai
4. Partitioning Strategy Check-Up for ADW
TL;DR: Many ADW environments start simple and grow quickly; revisiting partitioning strategy (especially on large fact tables) can yield immediate performance and maintenance benefits.
Autonomous Data Warehouse (ADW) encourages a “load and query” mindset, but as data volumes grow, classic data warehouse techniques still matter. Partitioning is one of the most impactful, particularly for time-based fact tables.
Key considerations:
- Partition by date for large fact tables (e.g. SALES_FACT partitioned by TRANSACTION_DATE), enabling partition pruning for time-sliced queries.
- Use interval partitioning so new time periods are created automatically without DDL changes.
- Align data lifecycle policies with partitions (e.g. drop or move old partitions to cheaper storage if applicable).
Example pattern:
CREATE TABLE sales_fact (
sale_id NUMBER,
transaction_dt DATE,
customer_id NUMBER,
amount NUMBER
)
PARTITION BY RANGE (transaction_dt)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
(
PARTITION p_start VALUES LESS THAN (DATE '2024-01-01')
);
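To pair the lifecycle bullet above with this table, old data can be removed one partition at a time. The DROP PARTITION FOR form is convenient with interval partitioning because you reference a date inside the partition rather than the system-generated partition name (confirm your retention requirements before dropping anything):

```sql
-- Drop the interval partition that contains January 2023 data.
-- UPDATE INDEXES keeps any global indexes usable without a manual rebuild.
ALTER TABLE sales_fact
  DROP PARTITION FOR (DATE '2023-01-15')
  UPDATE INDEXES;
```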
On 23ai-based services, the same principles apply, but you can also explore newer features as they become generally available. Always confirm support and syntax in the current ADW/23ai documentation before using advanced options.
References:
- https://docs.oracle.com/en/cloud/paas/autonomous-database/adbsa/partitioning.html
- https://docs.oracle.com/en/database/oracle/oracle-database/23/sqlrf/CREATE-TABLE.html
SQL Tip of the Week
5. Using MATCH_RECOGNIZE for Simple “First/Last Status” Problems
TL;DR: Oracle’s MATCH_RECOGNIZE is often associated with complex pattern matching, but it can elegantly solve everyday “first/last status” questions in a single pass.
Suppose you have an order status history table and you want, for each order, the first and last status and their timestamps. You can do this with analytic functions, but MATCH_RECOGNIZE gives you a concise, extensible pattern.
SELECT *
FROM orders_status
MATCH_RECOGNIZE (
PARTITION BY order_id
ORDER BY status_ts
MEASURES
FIRST(A.status) AS first_status,
FIRST(A.status_ts) AS first_status_ts,
LAST(A.status) AS last_status,
LAST(A.status_ts) AS last_status_ts
ONE ROW PER MATCH
PATTERN (A+)
DEFINE
A AS (1 = 1)
);
Explanation:
- PARTITION BY order_id groups events per order.
- PATTERN (A+) treats the entire sequence as a single match; the always-true condition (1 = 1) maps every row to variable A.
- FIRST and LAST in the MEASURES clause give you the initial and final status in one query.
This pattern is easy to extend to detect more complex behaviours (e.g. “shipped then returned within 7 days”) without resorting to multiple self-joins.
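As a sketch of that extension, assuming the same orders_status columns and status values 'SHIPPED' and 'RETURNED' (both assumptions, not from a real schema), a pattern that flags orders returned within 7 days of shipping might look like:

```sql
-- Orders that were shipped and then returned within 7 days.
-- Column names and status values are illustrative assumptions.
SELECT order_id, shipped_ts, returned_ts
FROM orders_status
MATCH_RECOGNIZE (
  PARTITION BY order_id
  ORDER BY status_ts
  MEASURES
    S.status_ts AS shipped_ts,
    R.status_ts AS returned_ts
  ONE ROW PER MATCH
  PATTERN (S X* R)
  DEFINE
    S AS status = 'SHIPPED',
    R AS status = 'RETURNED'
         AND status_ts <= S.status_ts + INTERVAL '7' DAY
);
```

Here X is deliberately left undefined, so it matches any intermediate events between shipment and return; only S and R carry conditions.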