OBIEE 11g Security Explained

I would like to keep this very short.

Log in to Console\Security Realms\myrealm\Users and Groups

User –

A user is the lowest level in the security structure.

Group –

A group can contain a list of users, and a group can itself sit under a parent group.

Example –

User1 and User2 are under GroupA, and GroupA is under GroupB.

User1 and User2 inherit all privileges from GroupA, and GroupA inherits privileges from GroupB.
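The inheritance above can be modeled as a walk up the group hierarchy, taking the union of privileges granted at each level (an illustrative sketch, not how OBIEE implements it; the privilege names are made up):

```python
# Illustrative model of privilege inheritance: a member inherits the
# privileges of every group above it in the hierarchy.
parent = {"User1": "GroupA", "User2": "GroupA", "GroupA": "GroupB"}
privileges = {"GroupA": {"view_reports"}, "GroupB": {"view_dashboards"}}

def effective_privileges(name):
    """Union of privileges granted at `name` and at every level above it."""
    granted = set(privileges.get(name, set()))
    while name in parent:
        name = parent[name]
        granted |= privileges.get(name, set())
    return granted

print(sorted(effective_privileges("User1")))  # ['view_dashboards', 'view_reports']
```

User1 ends up with the privileges of both GroupA and GroupB, which is exactly the inheritance the example describes.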

Log in to EM\Business Intelligence\coreapplication (right-click)\Security\Application Roles or Application Policies

Application Role –

An Application Role is a role that carries certain privileges, such as access to create or view reports\dashboards, and it grants those privileges to the users\groups\application roles under it.

This means that when we add a member (a user\group\application role) under an Application Role, all members below it receive its privileges.

Application Policy –

Application policies are named after the Application Roles to which they apply (hence an Application Role and its Application Policy have the same name).

Application policies are sets of Java permissions that are associated with an application role. The BIAuthor application policy, for example, allows the user to develop reports and data models with BI Publisher, access Essbase administration and calculation functions, and perform other report-authoring tasks. What application policy permission classes explicitly don’t cover are privileges such as access to the analysis editor, creating dashboards, or other areas of repository or Presentation Server functionality that are controlled by permissions set in the Oracle BI Administration tool or the Administration page in Presentation Services.

  • For applications written in Java, such as BI Publisher, Financial Reporting and Real-Time Decisions, you control their use by using application policies, whilst
  • For the C++ “legacy” components such as the BI Server and BI Presentation Server, you control them by their own in-built privileges and permissions

We rarely need to alter or create an application policy. When an Application Role needs access to BI Publisher, Essbase or Financial Reporting, we use the Create Like option to create an application policy based on the required access level.


Configuring Group Account Numbers in Oracle BI Applications 11.1.1.x

Introduction

The general concept discussed in this blog applies to all OBIA releases. The specific Configuration Manager screens are only available starting with 11.1.1.7.0.

When you implement Financial Analytics, one of the most important steps is to set up Group Account Numbers. Many issues are reported due to incorrect setup of Group Account Numbers.

What is Group Account Number?

Group Account Number logically groups GL natural accounts into reportable groups, so users can view financial reports at a higher level than that of a GL natural account.

Example: Assign natural account range 1210 – 1240 to Group Account Number “AR” (Accounts Receivable).

CHART OF ACCOUNTS ID | FROM ACCT | TO ACCT | GROUP_ACCT_NUM
101 | 1110 | 1110 | CASH
101 | 1210 | 1240 | AR
101 | 1280 | 1280 | OTHER CA
101 | 1340 | 1340 | PPAID EXP

Group Account Number is used to establish the relationship between GL natural accounts and the Financial Statement Item Code. The Financial Statement Item Code is mapped to base fact tables for the GL reconciliation process.

Example: Group Account Number, Financial Statement Item Code & Base Fact Table

GROUP_ACCT_NUM | FIN_STMT_ITEM_CODE | BASE FACT TABLE
ACC DEPCN | OTHERS | W_GL_OTHER_F
ACC LIAB | OTHERS | W_GL_OTHER_F
AP | AP | W_AP_XACT_F
AR | AR | W_AR_XACT_F
CASH | OTHERS | W_GL_OTHER_F
CMMN STOCK | OTHERS | W_GL_OTHER_F
COGS | COGS | W_GL_COGS_F
REVENUE | REVENUE | W_GL_REVN_F

How to configure Group Account Number(s) and Financial Statement Item Code(s)?

Group Account Numbers are defined/configured through a configurable csv file. For Oracle the file name is file_group_acct_codes_ora.csv; for PeopleSoft it is file_group_acct_codes_psft.csv; and for JDE it is file_group_acct_codes_jde.csv. Users specify which GL natural accounts are assigned to a particular group account within a chart of accounts, GL Business Unit or company for Oracle, PeopleSoft and JDE sources respectively. Ensure that the account ranges are continuous without any overlaps. If necessary you can assign multiple ranges and/or accounts to the same group account. Additionally, ensure that the accounts provided are all leaf accounts/ranges and not parent nodes/ranges.
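Because overlapping account ranges in these files are a common mistake, the range rule can be sanity-checked with a short script before the load runs. A minimal sketch (illustrative, not part of the product; the sample rows mirror the table above, but verify the column headers against your actual csv file):

```python
import csv
import io
from collections import defaultdict

# Sample content mirroring file_group_acct_codes_ora.csv (illustrative rows).
sample = """CHART OF ACCOUNTS ID,FROM ACCT,TO ACCT,GROUP_ACCT_NUM
101,1110,1110,CASH
101,1210,1240,AR
101,1280,1280,OTHER CA
101,1340,1340,PPAID EXP
"""

def find_overlaps(csv_text):
    """Return (chart_of_accounts, group1, group2) triples whose ranges overlap."""
    ranges = defaultdict(list)
    for row in csv.DictReader(io.StringIO(csv_text)):
        ranges[row["CHART OF ACCOUNTS ID"]].append(
            (int(row["FROM ACCT"]), int(row["TO ACCT"]), row["GROUP_ACCT_NUM"]))
    overlaps = []
    for coa, rs in ranges.items():
        rs.sort()
        for (f1, t1, g1), (f2, t2, g2) in zip(rs, rs[1:]):
            if f2 <= t1:  # next range starts before the previous one ends
                overlaps.append((coa, g1, g2))
    return overlaps

print(find_overlaps(sample))  # [] - the sample ranges are clean
```

Running this against the real csv before an ETL run catches the overlap errors described later in this post before they turn into reconciliation problems.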

By mapping GL accounts to group account numbers and then associating group accounts to a Financial Statement Item Code, an indirect association is also built between GL accounts and Financial Statement Item Codes. In BI Apps 11.1.1.7.1 or later, the association of group account numbers to Financial Statement Item Codes is performed in Oracle BI Applications Configuration Manager; in earlier releases such as 7.9.6.3 this configuration was done using another configurable file, file_grpact_fstmt.csv.

 

Note: Customers should not modify the mapping relationship between Group Account Number and Financial Statement Item Code for Group Account Numbers AP, AR, COGS and REVENUE.

It is not always necessary to assign all out-of-the-box group account numbers to account ranges. If customers are not planning to use the logical GL metrics that include Group Account Number in their definitions, they do not need to configure the majority of the Group Account Numbers. However, if they are still planning to implement the Payables, Receivables, Revenue, or COGS facts, they still need to configure the Group Account Numbers AP, AR, REVENUE, or COGS, because these are also used in the GL reconciliation process against those facts.

If the out-of-the-box group account numbers are not sufficient, the high-level steps to add additional group accounts in 11g are:

  • Define the new Group Account Number in Configuration Manager.
  • Assign the newly created Group Account Number to a financial statement item code (e.g. AP, AR, REVENUE, COGS, OTHERS).
  • Assign GL accounts to the new Group Account in the source-specific csv file.
  • Model the RPD to include a new group account measure and expose it in the presentation layer.

Why is it important?

Group Account Number configuration is important because it determines whether the right GL accounts are assigned to each group account number. In other words, it determines the accuracy of most Financial Analytics reports in which Group Account Number takes part. Group Account Numbers, in combination with Financial Statement Item Codes, are also leveraged in the GL reconciliation process to ensure that sub-ledger data reconciles with GL journal entries.

Group Account Number Usage in Financial Analytics

-GL reports

For “Accounts Receivable” in balance sheet reports, we calculate the total amount for GL natural accounts from 1210 to 1240 for ledgers with chart of accounts id 101.

Because of this association, and after the subsequent configuration to expose the group account number metric in the RPD, users can report on Group Account Number metrics. For example, in a Balance Sheet report users can report on “AR Amount” from “Fact – Fins – GL Balance” (a logical fact) corresponding to the “AR” group account.

-Reconciliation

Because of the indirect relationship built between GL accounts and Financial Statement Item Codes, it is now possible to perform GL reconciliation and ensure that sub-ledger data reconciles with GL journal entries. For example, after an invoice has been transferred to GL, a user might decide to adjust the invoice in GL. In that case it is important that the adjustment amount trickles down to the sub-ledger base fact along with the balance fact. To identify such sub-ledger transactions in GL, the reconciliation process uses Financial Statement Item Codes.

 

Common Issues due to Incorrect Group Account Number Setup

1. Sub ledger transactions are not posted.

Group account numbers are associated with financial statement item codes. The GL reconciliation process uses the financial statement item codes to identify the sub-ledger fact tables to be reconciled with GL journals. If the group account number assignment to a GL account is incorrect, sub-ledger facts remain “unposted”.

Since sub-ledger balance facts are based on posted transactions, reports such as AP balances or AR balances may return incorrect results.

For example: GL account 1210 is supposed to be associated with the “AR” (Accounts Receivable) group account but was mistakenly associated with “AP” (Accounts Payable). In that case, as part of the ETL process all the GL journal lines for account 1210 are reconciled against sub-ledger accounting records in the AP fact. But in reality these GL journal lines came from AR, not AP. Because no corresponding entries are found in “AP”, they remain “unposted”.

2. Unnecessary Manual Records are created in Sub ledger facts.

When a group account number is incorrectly assigned to an account, unnecessary records with transaction type “Manual” are created in sub ledger facts (e.g. W_AP_XACT_F, W_GL_REVN_F etc). Those records do not have sub ledger transaction details (e.g. supplier, customer, transaction number, and so on).

Following the same example above, because no corresponding entries are found in “AP”, the ETL process inserts “Manual” records into the AP fact, because it assumes these entries are manual journal entries created directly in the GL system.

3. GL reports return incorrect results.

When group account number assignments are not correct, GL metrics such as “AP” and “AR” from the example above will show incorrect amounts. Therefore, whenever you see GL metrics with incorrect amounts, the first thing to check is whether the group account number used by that metric has the correct GL account ranges assigned to it.

 


Executing your own JavaScript in OBIEE 10g

Step 1a: Add your script to Common.JS (this file is usually found in {OracleBI}\web\app\res\b_mozilla). Here is an example:

// My stuff
function obi_popup()
{
    alert("Hello World");
}


 Step 1b: If you are using OC4J sync the file with the one in {OracleBI}\oc4j_bi\j2ee\home\applications\analytics\analytics\res\b_mozilla

Step 2: Restart the Presentation Server and your web server.

Step 3: Add the execution code to your dashboard under Text –

 <input type="button" onclick="obi_popup()" value="Click Me!">


Don’t forget to check the “Contains HTML Markup” option!

Step 4: Save your dashboard page and test….



Different Types of Dimensions and Facts in Data Warehouse

Dimension –

A dimension table typically has two types of columns: a primary key that fact tables reference, and textual\descriptive attributes.

Fact –
A fact table typically has two types of columns: foreign keys to dimension tables and measures, which contain numeric facts. A fact table can contain fact data at the detail level or at an aggregated level.

 

Types of Dimensions –

Slowly Changing Dimensions:

Attributes of a dimension may change over time. Whether the history of changes to a particular attribute should be preserved in the data warehouse depends on the business requirement. Such an attribute is called a slowly changing attribute, and a dimension containing such an attribute is called a Slowly Changing Dimension.
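A common way to preserve that history is the Type 2 technique: expire the current row and insert a new one with its own surrogate key and effective dates. A minimal illustrative sketch (the table and column names are hypothetical):

```python
from datetime import date

# A tiny Type 2 slowly changing dimension: history is preserved by
# expiring the current row and inserting a new current row.
dim_customer = [
    {"cust_key": 1, "cust_id": "C100", "city": "Austin",
     "effective_from": date(2020, 1, 1), "effective_to": None, "current": True},
]

def scd2_update(rows, cust_id, new_city, change_date):
    """Expire the current row for cust_id and append a new current row."""
    current = next(r for r in rows if r["cust_id"] == cust_id and r["current"])
    current["effective_to"] = change_date
    current["current"] = False
    rows.append({"cust_key": max(r["cust_key"] for r in rows) + 1,
                 "cust_id": cust_id, "city": new_city,
                 "effective_from": change_date, "effective_to": None,
                 "current": True})

scd2_update(dim_customer, "C100", "Dallas", date(2023, 6, 1))
print([(r["city"], r["current"]) for r in dim_customer])
# [('Austin', False), ('Dallas', True)]
```

Facts loaded before the change date keep joining to the Austin row, so historical reports are unaffected by the move.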

 

Rapidly Changing Dimensions:

A dimension attribute that changes frequently is a Rapidly Changing Attribute. If you don’t need to track the changes, the Rapidly Changing Attribute is no problem, but if you do need to track the changes, using a standard Slowly Changing Dimension technique can result in a huge inflation of the size of the dimension. One solution is to move the attribute to its own dimension, with a separate foreign key in the fact table. This new dimension is called a Rapidly Changing Dimension.

 

Junk Dimensions:

A junk dimension is a single table with a combination of different and unrelated attributes to avoid having a large number of foreign keys in the fact table. Junk dimensions are often created to manage the foreign keys created by Rapidly Changing Dimensions.

 

Inferred Dimensions:

While loading fact records, a corresponding dimension record may not yet be ready. One solution is to generate a surrogate key with Null for all the other attributes. This should technically be called an inferred member, but is often called an inferred dimension.

 

Conformed Dimensions:

A Dimension that is used in multiple locations is called a conformed dimension. A conformed dimension may be used with multiple fact tables in a single database, or across multiple data marts or data warehouses.

 

Degenerate Dimensions:

A degenerate dimension is a dimension attribute that is stored as part of the fact table rather than in a separate dimension table. These are essentially dimension keys for which there are no other attributes. In a data warehouse, these are often used in drill-through queries to analyze the source of an aggregated number in a report; you can use these values to trace back to transactions in the OLTP system.

 

Role Playing Dimensions:

A role-playing dimension is one where the same dimension key — along with its associated attributes — can be joined to more than one foreign key in the fact table. For example, a fact table may include foreign keys for both Ship Date and Delivery Date. But the same date dimension attributes apply to each foreign key, so you can join the same dimension table to both foreign keys. Here the date dimension is taking multiple roles to map ship date as well as delivery date, and hence the name of Role Playing dimension.

 

Shrunken Dimensions:

A shrunken dimension is a subset of another dimension. For example, the Orders fact table may include a foreign key for Product, but the Target fact table may include a foreign key only for ProductCategory, which is in the Product table, but much less granular. Creating a smaller dimension table, with ProductCategory as its primary key, is one way of dealing with this situation of heterogeneous grain. If the Product dimension is snowflaked, there is probably already a separate table for ProductCategory, which can serve as the Shrunken Dimension.

 

Static Dimensions:

Static dimensions are not extracted from the original data source, but are created within the context of the data warehouse. A static dimension can be loaded manually — for example with Status codes — or it can be generated by a procedure, such as a Date or Time dimension.

 

Types of Facts –

 

Additive:

Additive facts are facts that can be summed up across all of the dimensions in the fact table. A sales amount is a good example of an additive fact.

Semi-Additive:

Semi-additive facts are facts that can be summed up for some of the dimensions in the fact table, but not for others.
E.g.: a daily balances fact can be summed up across the customer dimension but not across the time dimension.

Non-Additive:

Non-additive facts are facts that cannot be summed up for any of the dimensions present in the fact table.
E.g.: facts that contain percentages or calculated ratios.

 

Factless Fact Table:

 In the real world, it is possible to have a fact table that contains no measures or facts. These tables are called “Factless Fact tables”.

E.g.: a fact table that has only a product key and a date key is a factless fact. There are no measures in this table, but you can still count the number of products sold over a period of time.

Based on the above classifications, fact tables fall into two categories:

Cumulative:

This type of fact table describes what has happened over a period of time. For example, it may describe the total sales by product by store by day. The facts in this type of fact table are mostly additive. The first example presented here is a cumulative fact table.

Snapshot:

This type of fact table describes the state of things in a particular instance of time, and usually includes more semi-additive and non-additive facts. The second example presented here is a snapshot fact table.


Active Directory Security using Init Blocks & Variables in 10g Style – Oracle BI 11g

Before we go into BI EE 11g, let’s do a quick recap of how the BI EE 10g security model works. We’ll use Active Directory as the underlying LDAP where users are authenticated. In 10g, for authenticating and authorizing against LDAP, we use 2 init blocks:

1. Authentication Init Block: in this init block, the USER variable is defined as the variable target and a connection to the Active Directory LDAP server is defined, as shown below.

2. Authorization Init Block: BI EE 10g had an issue extracting LDAP groups directly, so for authorization a couple of alternative approaches are typically used:

a. Using DBMS_LDAP PL/SQL to extract the Active Directory groups dynamically through an init block. These groups have to be manually defined in the repository as well.

b. Using an external database table to hold the user-to-group mapping, with an init block used to extract the groups dynamically. Even in this case the groups have to be defined in the repository.

In both cases the important point to note is the dynamic assignment of group(s) to a user using row-wise initialization. An example below shows the user-to-group assignment table in 10g (and also the corresponding init blocks).

After doing this, if we now log into BI EE 10g as each of these users, we can look at the USER and GROUP session variables. Pierre Houdan has 2 groups assigned; Charla Malinsky has only one, as shown below.

This way, 10g offered good flexibility through dynamic USER to GROUP assignment. In BI EE 11g, since all the users and groups have moved to the Weblogic Default Provider, we cannot use dynamic init-block-based assignment for users and Application Roles residing within Weblogic. There are other methods for doing dynamic user-group assignment within the Weblogic Default Provider, but for now let’s consider a method we have used in the past, i.e. the 10g way of using init blocks.

We could replace the Default Provider with Active Directory within Weblogic, but even in that case the Weblogic user-group assignments are obtained directly from Active Directory, and the group/user to Application Role mapping has to be done manually; it cannot be made dynamic (from a database table) using init blocks. So we will follow the 10g method of authentication/authorization.

To begin with, we need to ensure that we do not have overlapping users/groups in both the Weblogic Provider and Active Directory. After ensuring this, we start by defining a server connection to Active Directory from within the BI EE 11g repository, as shown below.

We then define an initialization block to set the USER variable using the LDAP server defined above, as shown below.

To make it similar to what we have in 10g, let’s create a database table containing the user to group (Application Role in 11g) mapping.

Create the 3 groups (sfusers, sfmanagers & ManagementGroup) as Application Roles within BI EE 11g. The procedure for defining Application Roles is given here.

After defining the application roles, restart the BI Server so that the changes are picked up.

In our repository we have 2 subject areas, Sales – Fact Sales and Sales – Store Quality. Sales – Fact Sales can be used by users belonging to the sfusers & sfmanagers application roles, and Sales – Store Quality by users belonging to the ManagementGroup application role. So let’s define the security in the repository accordingly. Also, the Staff dimension in the Sales – Fact Sales subject area can be accessed only by users belonging to the sfusers group.

Before creating the initialization blocks for authorization, let’s try to understand how application roles in 11g work. Let’s define a user in Active Directory called ADUser. For now, this user does not belong to any application role, and no initialization blocks for authorization have been defined yet in the repository. If you go into the repository and look at the list of system session variables, there are 2 variables in 11g that determine authorization: ROLES and GROUP.

Remember, even though we have a variable called GROUP, this variable actually affects the ROLES variable; the ROLES variable cannot be updated directly through an initialization block. To understand further, let’s try to log in as ADUser.

As you see, the user is not able to log in. This is different from a user defined in the Weblogic Default Authenticator. For external AD users coming through an init block, the GROUP variable should have a value, and that value should match at least one of the Application Roles. So, to test this out, let’s add ADUser to the sfusers group in our authorization table, and let’s define a new authorization init block which sets the GROUP variable from this database table.

If we now log in as ADUser, we should be able to get in.

Though this user is able to log in, the user still does not have the privilege to create new reports. To fix that, let’s add ADUser to the BIAuthor role as well in the database table.

If we now log back in as ADUser, this user should have access to create reports.

To make it a bit more interesting, let’s make the following changes to the permissions in the presentation layer: provide the sfusers & sfmanagers groups access to create reports, and also access to create reports on their respective subject areas, as shown below.

If we now log in as Pierre.Houdan, we should be able to create reports from both of the available subject areas.

Also, if we go into the Sales – Fact Sales subject area, the Dim Staff dimension will not be visible to this user, as only the sfusers group has access to it.


If we log in as Charla.Malinsky, we should be able to create reports only from the Sales – Fact Sales subject area.

As you see, both the Presentation Services privileges and the RPD privileges are honored properly. Let’s now create a report to look at the USER and ROLES session variable values. The following is what we would get.

As you see, the values that we pushed into the GROUP variable (sfusers, sfmanagers, ManagementGroup etc.) are now available in the ROLES variable. Another important point: whenever we set up security using the above method in 11g, we will see warnings in the Admin Console as shown below. The reason is that Weblogic tries to authenticate these users first in its default authenticator before pushing them through the init blocks. So it’s important to be aware of this overhead.

Source : http://www.rittmanmead.com/2010/11/oracle-bi-11g-active-directory-security-using-init-blocks-variables-10g-style/


Start on Criteria tab when editing analysis – OBIEE 11g

In version 11.1.1.5 there was no direct option to go to the Criteria tab when clicking Edit on an analysis. This feature is now available in 11.1.1.6 in the My Account section.

If you want this feature, click My Account -> Analysis editor, select “Start on Criteria tab when editing analysis” and click OK.

The next time you click Edit on an analysis, the system will take you to the Criteria tab.


Changes required to export all rows to Excel

1) The following changes are made in the instanceconfig.xml file within the <View> tags:

<View>
<Pivot>
<MaxCells>1920000</MaxCells>
<MaxVisibleColumns>30</MaxVisibleColumns>
<MaxVisiblePages>1000</MaxVisiblePages>
<MaxVisibleRows>64000</MaxVisibleRows>
<MaxVisibleSections>25</MaxVisibleSections>
<DefaultRowsDisplayed>25</DefaultRowsDisplayed>
</Pivot>
<Table>
<MaxCells>1920000</MaxCells>
<MaxVisiblePages>1000</MaxVisiblePages>
<MaxVisibleRows>64000</MaxVisibleRows>
<MaxVisibleSections>25</MaxVisibleSections>
<DefaultRowsDisplayed>25</DefaultRowsDisplayed>
</Table>
<Charts>
<JavaHostReadLimitInKB>6144</JavaHostReadLimitInKB>
</Charts>
</View>
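As a sanity check on these limits: MaxCells appears to be sized as MaxVisibleRows × MaxVisibleColumns, so if you raise the row limit further, scale MaxCells to match (this is an observation about the numbers in this particular configuration, not a documented formula):

```python
# The pivot limits above: 64000 visible rows x 30 visible columns.
max_visible_rows = 64000
max_visible_columns = 30
max_cells = max_visible_rows * max_visible_columns
print(max_cells)  # 1920000 - matches the <MaxCells> value in the snippet
```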

2) Log in to EM.

Go to Business Intelligence -> coreapplication -> Capacity Management -> Performance
and change Maximum Number of Rows to Download to 64000.

Apply, activate the changes and restart all services.


Event Polling Table to Purge the cache in OBIEE

An event polling table is a way to notify the Oracle BI Server that one or more physical tables have been updated. Each row that is added to an event table describes a single update event. The cache system reads rows from, or polls, the event table, extracts the physical table information from the rows, and purges cache entries that reference those physical tables.
Generally we purge the cache in two ways:

  • In the Administration Tool: Manage – Cache
  • From the dashboard: Settings – Issue SQL – Call SAPurgeAllCache()

Instead of doing this manually at regular intervals, we can purge the cache with the help of an event table.
Procedure :
CREATE TABLE S_NQ_EPT
(
UPDATE_TYPE INTEGER DEFAULT 1 NOT NULL,
UPDATE_TIME DATE DEFAULT SYSDATE NOT NULL,
DB_NAME VARCHAR2(40),
CATALOG_NAME VARCHAR2(40),
SCHEMA_NAME VARCHAR2(40),
TABLE_NAME VARCHAR2(40) NOT NULL,
OTHER VARCHAR2(80) DEFAULT NULL
);
Once the table is created, we have to import it into the Physical layer.
Go to Tools – Utilities.

Click OracleBI Event Tables and Execute

We will find the list of tables. Select S_NQ_EPT as an event table and also set the polling frequency.

Once we have selected the event table, we can see it in the Physical layer as below.

Now create a sample report based on the Dept dimension to populate the cache.

Check the Cache Folder

Once this is done, we need to insert data into the S_NQ_EPT table.
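The full polling cycle can be illustrated with a small simulation (this uses SQLite purely for illustration; in reality the Oracle BI Server itself polls S_NQ_EPT and purges the matching cache entries, and the database/schema/table names here are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE S_NQ_EPT (
    UPDATE_TYPE INTEGER DEFAULT 1 NOT NULL,
    UPDATE_TIME TEXT NOT NULL,
    DB_NAME TEXT, CATALOG_NAME TEXT, SCHEMA_NAME TEXT,
    TABLE_NAME TEXT NOT NULL, OTHER TEXT)""")

# An ETL step inserts one row per physical table that was updated:
con.execute("INSERT INTO S_NQ_EPT (UPDATE_TIME, DB_NAME, SCHEMA_NAME, TABLE_NAME) "
            "VALUES (datetime('now'), 'ORCL', 'SCOTT', 'DEPT')")

# On each polling cycle the BI Server (simulated here) reads the rows,
# purges cache entries referencing those tables, then removes the rows.
stale = [r[0] for r in con.execute("SELECT TABLE_NAME FROM S_NQ_EPT")]
con.execute("DELETE FROM S_NQ_EPT")
print(stale)  # ['DEPT']
```

The key point the simulation shows: the event rows are consumed by the polling cycle, so after the purge the event table is empty again.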

After the polling frequency interval we specified, check the cache folder: it will have been purged.


How to change the RPD and Webcatalog in OBIEE 11G

In OBIEE 10g we change these directly in instanceconfig.xml and NQSConfig.INI, but in OBIEE 11g we have to do this in EM:
http://localhost:7001/em/
1. Login in to http://localhost:7001/em/
2. Expand the ‘Business Intelligence’ node on the left and click on coreapplication.

3. Click on the Deployment tab on the right, then go to the Repository tab.

4. Click ‘Lock and Edit Configuration’ to enable changes to the default settings.
5. In the ‘Upload BI Server Repository’ section, click the Browse button and locate your new RPD file.
6. Enter the RPD password.
7. Stay on the same configuration screen to update the web catalog selection.
8. Update the catalog path at the bottom of the same EM screen.


Reuse Static Content in BIPublisher

If we want to reuse particular content in BI Publisher, we can use a predefined syntax. For example, if we want to print some text on each page after the table:

It will then be printed on each page, like a header and footer.
