BI Server Restart

A BI Server restart might be required to get correct calculations after you change the Aggregation Rule for a column, for example from SUM to AVG (or make any other change to it).

I noticed the problem with Grand Total calculations – OBIEE just wouldn’t perform the correct aggregation for the column after an RPD change in online mode (with the Aggregation Rule set to Default).

After a services restart, OBIEE performed the grand-total aggregation correctly.
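
For reference, this is the kind of restart I mean – a minimal sketch assuming a default OBIEE 10g install (service names and paths vary by environment, so adjust for yours):

    On Windows (as Administrator):
        net stop "Oracle BI Server"
        net start "Oracle BI Server"

    On Linux/UNIX, from the OracleBI/setup directory:
        ./run-sa.sh stop
        ./run-sa.sh start

If Presentation Services seems to be holding on to stale metadata as well, bouncing it too (the "Oracle BI Presentation Server" service, or run-saw.sh) doesn’t hurt.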

Problem With Caching A Report in Answers Using A Script

The cache is purged, but no new cache entries are created. The customer is not able to cache reports whose columns use database functions.
When a logical query contains DB functions, no cache entry is created for it – the query simply doesn’t get cached, and no error is produced.
Cache entries are created normally for queries that don’t use any database functions.
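
For context, by caching “using a script” I mean the usual nqcmd seeding approach – purge the BI Server cache, then replay the reports’ logical SQL against the BI Server ODBC DSN. A minimal sketch below; the DSN, user and subject area names are placeholders, not the customer’s:

    Contents of seed_cache.sql (one logical query per report to pre-cache; the purge call is optional):

        Call SAPurgeAllCache();
        SELECT "Products"."Product Name", "Facts"."Revenue" FROM "Sales";

    Run it against the BI Server:

        nqcmd -d AnalyticsWeb -u Administrator -p <password> -s seed_cache.sql -o seed_cache.log

With this approach, the plain query above shows up in the Cache Manager, while any query that pushes a database function through EVALUATE silently produces no cache entry – exactly the symptom above.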

Cause

Queries containing the EVALUATE function do not qualify for caching, so this query is not getting cached. This is expected behavior.
EVALUATE is not cacheable because OBIEE has no reliable way to interpret the semantics of the function definition passed in as a string and determine whether, and under what conditions, its result can be reused. Take a simple example, Evaluate(‘Today()’ as Date): OBIEE has no way of knowing that this function returns a result that can be reused until the end of the day.
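
For illustration (the column names here are placeholders, not taken from the note), a formula like the following in an Answers column or logical query hands the function text straight through to the database, so the BI Server cannot tell how long the result stays valid and skips the cache:

    EVALUATE('TO_CHAR(%1, ''YYYY-MM'')' AS VARCHAR(7), "Time"."Calendar Date")

The same month label built from native BI Server functions – e.g. YEAR("Time"."Calendar Date") and MONTH("Time"."Calendar Date") – is cacheable, because the server understands their semantics.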

Dynamic repository variables are refreshed by data returned from queries, and the same rule applies to them. If the query that refreshes the variable uses a database function or EVALUATE, the whole query (report) will not be cached, because the value of the variable cannot be cached. If the refresh query does not use database functions, it is cached and the report should be cached as well.
However, when the value of a dynamic repository variable changes, all cache entries associated with a business model that reference the value of that variable are purged automatically.

If you see the query in the Cache Manager, it means it is cached. You can check whether that particular cache entry is actually being used by looking at the “Last Used” column after the user runs the report. If the cache is hit, the “Last Used” value should be updated and will differ from the “Created” value.

Solution
Using iBots is an alternative. If you schedule the reports, they should remain in the cache until the cache is purged. Keep in mind that reports using database functions and EVALUATE will not be cached this way either, and cache entries for reports that reference dynamic repository variables will be purged automatically when the variable changes.
For the iBot to seed the cache, its queries must not include database functions or EVALUATE.

Answers request causes BI server to crash

I found an interesting bug on Metalink. I wish there were more specifics as to how complex the report was. I just had a similar assertion error yesterday, which I tried to solve by increasing the stack size. However, it seems it’s possible to crash the BI Server with sufficiently long reports.
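
For reference, the stack size I was playing with is the one in NQSConfig.INI – a rough sketch of the relevant [SERVER] entries below, based on a 10g default config (your file may differ; 0 simply means “use the built-in default”):

    [ SERVER ]
    # 0 = use the default thread stack size (roughly 256 KB per thread)
    SERVER_THREAD_RANGE = 40-100;
    SERVER_THREAD_STACK_SIZE = 0;
    DB_GATEWAY_THREAD_RANGE = 40-200;
    DB_GATEWAY_THREAD_STACK_SIZE = 0;

Raising SERVER_THREAD_STACK_SIZE buys some headroom, but as the bug below shows, a sufficiently complex report can still blow through it.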

When a custom report is executed on Windows, the following error is received:

Error Codes: OPR4ONWY:U9IM8TAC:OI2DL65P
State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 46036] Internal Assertion: Condition FALSE, file .\NQThreads\SUGThread.cpp, line 515. (HY000)

On Linux, however, the OBI Server crashes, with this typical stack trace from the generated core file:

#0  0xb427f9b2 in samem_details_800::ThreadAllocator<0>::Allocate (this=0x86fcd00, Index=2) at threadallocator.cpp:443
443     {
(gdb) bt
#0  0xb427f9b2 in samem_details_800::ThreadAllocator<0>::Allocate (this=0x86fcd00, Index=2) at threadallocator.cpp:443
#1  0xb42784df in samem_details_800::ThreadAllocator<0>::allocate (this=0x86fcd00, nBytes=20, nIndex=2, pFile=0xb4a171dc "thirdpartysource/STLport-4.5/src/nqnodealloc.cpp", nLine=10) at threadallocator.cpp:966
#2  0xb4286098 in samem_details_800::Manager::Allocate (this=0xb429cac0, Bytes=20, pFile=0xb4a171dc "thirdpartysource/STLport-4.5/src/nqnodealloc.cpp", nLine=10) at manager.cpp:469
#3  0xb4283dae in samem_800_allocate_dbg (Bytes=20, pFile=0xb4a171dc "thirdpartysource/STLport-4.5/src/nqnodealloc.cpp", nLine=10) at memoryallocator.cpp:160
#4  0xb49fbfb8 in NQNodeAlloc::allocate (__n=20) at thirdpartysource/STLport-4.5/src/nqnodealloc.cpp:10
#5  0xb721a1f3 in _SASSTL::allocator<_SASSTL::_Rb_tree_node >::allocate (this=0x87f7bac, __n=1) at thirdparty/include/stlport/stl/_alloc.h:372
#6  0xb721a02e in _SASSTL::_STLP_alloc_proxy<_SASSTL::_Rb_tree_node*, _SASSTL::_Rb_tree_node, _SASSTL::allocator<_SASSTL::_Rb_tree_node > >::allocate (this=0x87f7bac, __n=1) at thirdparty/include/stlport/stl/_alloc.h:514
#7  0xb72198c6 in _SASSTL::_Rb_tree, _SASSTL::less, _SASSTL::allocator >::_M_create_node (this=0x87f7bac, __x=@0xb0e23264) at thirdparty/include/stlport/stl/_tree.h:243
#8  0xb673169d in _SASSTL::_Rb_tree, _SASSTL::less, _SASSTL::allocator >::_M_insert (this=0x87f7bac, __x_=0x0, __y_=0x8d895b8, __v=@0xb0e23264, __w_=0x0) at thirdparty/include/stlport/stl/_tree.c:366
#9  0xb6730c6d in _SASSTL::_Rb_tree, _SASSTL::less, _SASSTL::allocator >::insert_unique (this=0x87f7bac, __v=@0xb0e23264) at thirdparty/include/stlport/stl/_tree.c:412
#10 0xb6730499 in _SASSTL::set, _SASSTL::allocator >::insert (this=0x87f7bac, __x=@0xb0e23264) at thirdparty/include/stlport/stl/_set.h:137
#11 0xb672fe6b in RqNode::AddRqNodePtr (this=0x87f7b88, pRqNodePtr=0x8d999e8) at server/Query/Optimizer/Request/Src/SQORRqNode.cpp:432
#12 0xb672e34e in SmartRqNodePtr (this=0x8d999e8, rhs=@0x8d995a0) at server/Query/Optimizer/Request/Src/SQORRqNode.cpp:55
#13 0xb66a7d17 in RqDerivedColumnReference (this=0x8d999b8, rhs=@0x8d99570, bNewIDs=false, bDeepCopy=true) at server/Query/Optimizer/Request/Src/SQORRqExpr.cpp:1418
#14 0xb66a8898 in RqDerivedColumnReference::DeepCopy (this=0x8d99570, bNewIDs=false) at server/Query/Optimizer/Request/Src/SQORRqExpr.cpp:1525
#15 0xb672e748 in RqNode (this=0x8d977b0, rhs=@0x8d97368, bNewIDs=false, bDeepCopy=true) at server/Query/Optimizer/Request/Src/SQORRqNode.cpp:146
#16 0xb66a0d3c in RqExpr::RqExpr$base () at server/include/Query/Optimizer/Request/SQORRqNode.h:77
#17 0xb66d0f68 in RqExprCond::RqExprCond$base () at server/include/Query/Optimizer/Request/SQORRqList.h:33
#18 0xb66d6c58 in RqExprCondIsNull (this=0x8d977b0, rhs=@0x8d97368, bNewIDs=false, bDeepCopy=true) at server/Query/Optimizer/Request/Src/SQORRqExprCond.cpp:1299
#19 0xb66d6e88 in RqExprCondIsNull::DeepCopy (this=0x8d97368, bNewIDs=false) at server/Query/Optimizer/Request/Src/SQORRqExprCond.cpp:1342
#20 0xb672e748 in RqNode (this=0x8d8f878, rhs=@0x8d8f430, bNewIDs=false, bDeepCopy=true) at server/Query/Optimizer/Request/Src/SQORRqNode.cpp:146

Cause
It appears that, due to the complexity of the expressions in the Answers columns of the custom report, the Expression Builder makes many recursive calls, which keep growing the thread’s stack until it reaches its maximum size and an assertion error is thrown.

On Windows there is a check of the ‘LowStackCheck’ parameter which is not present on Linux, so on Linux the OBI Server crashes outright with a ‘sigsegv’ error.
Solution

Currently, there is no solution. The workaround is to redesign the report so that the expressions are less complicated (e.g. create measures that aggregate at the various combinations of dimension levels, which lets users avoid building complex formulas in Answers and also performs well).

It looks like a major code change is required to fix this type of behavior, and therefore we will not be able to fix it until at least our 11.x release.

New Week.

I’ve received only 18 responses to the survey so far, so I think I’ll wait until tomorrow to publish the results. I also realized that some of the questions weren’t worded clearly – hence the confusion.

One of the most notable articles this week is Oracle BI EE 10.1.3.4.1 – Multi Hierarchy reporting by Venkatakrishnan J, where he uses a very clever fragmentation tactic to control the hierarchy and drill-down order. Very out-of-the-box!

John Minkjan has posted several interesting OBIEE findings here.

Check out this newly minted federal IT spending dashboard – I think it’s a good starting effort, but there are many problems with its current implementation: the site doesn’t work correctly in Firefox, it uses too much Flash (I wonder if it’s Section 508 compliant), the UI is confusing, and it’s not very detailed. On the positive side, there are many ways to create customized feeds and export data to CSV. I’m actually wondering what new business opportunities would be created if the US Government continues to open up more and more federal data to the public.

Stay tuned.

Readership Survey

I’d appreciate it if you could take a minute of your time and answer a few questions; I’ll use that input to make this site more useful and interesting for you. I’m trying to get a feel for possible improvements and enhancements. Some of the things I’m considering adding: a star-rating system for posts, a bulletin board / discussion board, and recommendation services. P.S. At the end of the survey it’ll ask you for Name / E-mail – please ignore that and just click Submit Survey. I’ll publish the results once I get a meaningful sample.
