Assessing instance/Cube Efficiency

MarenC
Posts: 108
Joined: Sat Jun 08, 2019 9:55 am
OLAP Product: Planning Analytics
Version: Planning Analytics 2.0
Excel Version: Excel 2016

Assessing instance/Cube Efficiency

Post by MarenC » Thu Jul 16, 2020 11:47 am

Hi,

I am trying to get a handle on a TM1 instance with lots of rules.

Basically I want to ask: is the instance optimised? (I don't think it is.)
So I am running tests before getting into the specifics of the rules.

One test I want to understand is:

How many cells are populated by each measure type?

So I created a view at leaf level and I am doing a cellput to a cube which includes the measure dimension to count
the number of cells by measure.

So the data tab is simply

CellIncrementN( 1, 'CountCube', vMeasures, 'Count' );

The process is really 'sticky' and at this rate might take all day.

Does this indicate a potential issue with the cube or is it just a reflection of the complexity of the rules?

Any tips on how to assess a tm1 instance and testing techniques would be welcome too!

Maren

tomok
MVP
Posts: 2741
Joined: Tue Feb 16, 2010 2:39 pm
OLAP Product: TM1, Palo
Version: Beginning of time thru 10.2
Excel Version: 2003-2007-2010-2013
Location: Atlanta, GA
Contact:

Re: Assessing instance/Cube Efficiency

Post by tomok » Thu Jul 16, 2020 12:53 pm

MarenC wrote:
Thu Jul 16, 2020 11:47 am
So I created a view at leaf level and I am doing a cellput to a cube which includes the measure dimension to count
the number of cells by measure.

So the data tab is simply

CellIncrementN( 1, 'CountCube', vMeasures, 'Count' );

The process is really 'sticky' and at this rate might take all day.
Sounds like your view is not zero suppressed and you are processing each intersection in the cube.
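If the source view is built in the TI process itself, the suppression settings can also be forced in the prolog. A minimal sketch only; 'P&L' and vView are placeholder names for your cube and view, and since the aim here is to count rule-calculated cells, only zeros and consolidations are skipped:

Code: Select all

# force suppression on a TI-built source view ('P&L' and vView are placeholders)
ViewExtractSkipZeroesSet( 'P&L', vView, 1 );      # skip zero values
ViewExtractSkipCalcsSet( 'P&L', vView, 1 );       # skip consolidated cells
ViewExtractSkipRuleValuesSet( 'P&L', vView, 0 );  # keep rule-calculated cells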
Tom O'Kelley - Manager Finance Systems
American Tower
http://www.onlinecourtreservations.com/

burnstripe
Posts: 5
Joined: Wed May 06, 2020 2:58 pm
OLAP Product: Planning Analytics
Version: 2.0.9
Excel Version: 2016

Re: Assessing instance/Cube Efficiency

Post by burnstripe » Thu Jul 16, 2020 8:27 pm

I agree, it sounds like zero suppression is turned off, or there are an awful lot of populated cells/nested calculations. If I were evaluating the performance of the model, here are a few things I would do:

1) Optimise the dimension order. This can reduce the model's size without making sweeping changes; just be careful not to swap the last dimension if it contains a string element.
2) Enable the performance monitor and have a look at the }Stats cubes; they'll show how much memory is taken by stored values and by feeders.
3) If you have a larger number of stored values than expected, is the cube updated by users, for forecasting for example? It could be that a user has placed a value against a consolidated element and it has spread across all its children. If this has happened you'll have to clean up the data and place it against the correct element.
4) If you have a larger number of cells used by rules/feeders, check the feeder statements for any overfeeds.
5) Is there data in the cube that is no longer required? If so, remove or archive it.

Having a large number of rules isn't a problem in itself; it could be that the rules have just been separated out rather than combined. If you find you are going around the houses, I would take a step back and think about what the model should be doing.

Think about where the data is coming from (external source / user input) and weigh up whether it would be better handled with a process or a rule. For example, if the data is just historic, the information may be better in static form, with additional data flowed in incrementally. If it is constantly changing, for example in a rolling forecast where every month you bring in actuals for historic months and budgets for future months, then a rule may be better suited.

I hope this helps. If you are still struggling and the TI process is going nowhere, then perhaps try a rule-based approach to find the number of populated cells for each measure. There are numerous ways to do this; one I've used before, say I want to count the number of cells populated within the measure "Value", goes like this.

Create another measure, let's say Counter.
In the rule file create a rule that says

['Counter'] = N: ['Value'] / ['Value'];

and then a feeder that says

['Value'] => ['Counter'];

Then open a cube view and either select the top element from each dimension or, if there are multiple top-level elements, select them all and create a rollup. The rule above will generate a 1 against each populated cell within the Counter measure, so when you look at a consolidated level you'll see just how many cells are populated within the Value measure.
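One caveat with the divide trick: if the cube holds stored zeros, ['Value']/['Value'] divides by zero for those cells. TM1's \ operator performs division but returns zero instead of an error when the divisor is zero, so a slightly safer variant (same feeder) is:

Code: Select all

['Counter'] = N: ['Value'] \ ['Value'];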

MarenC
Posts: 108
Joined: Sat Jun 08, 2019 9:55 am
OLAP Product: Planning Analytics
Version: Planning Analytics 2.0
Excel Version: Excel 2016

Re: Assessing instance/Cube Efficiency

Post by MarenC » Fri Jul 17, 2020 8:32 am

Thanks for the replies.

Zero suppression was on; I created the data source in the TI itself and made sure the skip blanks option was ticked.

I did notice that elements were being double counted in some dimensions, and sorting the subsets by hierarchy fixed some of those.

The process is still a bit sticky but I did manage to get some output by Measure.

By incrementing a count of 1 on each pass, and adding up all the measures, I estimated there are about 45 million populated cells.
However, stats by cube says the number of stored calculated cells is 4.5 million and the number of populated numeric cells is 3 million,
so I don't know why the TI counts 45 million when stats by cube says differently.

According to stats by cube the number of fed numeric cells is 205 million.
This seems high compared to the number of populated cells; would others agree?

Maren

tomok
MVP
Posts: 2741
Joined: Tue Feb 16, 2010 2:39 pm
OLAP Product: TM1, Palo
Version: Beginning of time thru 10.2
Excel Version: 2003-2007-2010-2013
Location: Atlanta, GA
Contact:

Re: Assessing instance/Cube Efficiency

Post by tomok » Fri Jul 17, 2020 11:58 am

MarenC wrote:
Fri Jul 17, 2020 8:32 am
By incrementing a count of 1 on each pass, and adding up all the measures, I estimated there are about 45 million populated cells.
However, stats by cube says the number of stored calculated cells is 4.5 million and the number of populated numeric cells is 3 million,
so I don't know why the TI counts 45 million when stats by cube says differently.
Are you sure your source view doesn't have some consolidated elements in it?
MarenC wrote:
Fri Jul 17, 2020 8:32 am
According to stats by cube the number of fed numeric cells is 205 million.
This seems high compared to the number of populated cells; would others agree?
It does. It sounds like you have some serious overfeeding going on.
Tom O'Kelley - Manager Finance Systems
American Tower
http://www.onlinecourtreservations.com/

MarenC
Posts: 108
Joined: Sat Jun 08, 2019 9:55 am
OLAP Product: Planning Analytics
Version: Planning Analytics 2.0
Excel Version: Excel 2016

Re: Assessing instance/Cube Efficiency

Post by MarenC » Fri Jul 17, 2020 1:09 pm

Hi Tomok,

there are no consolidations in the view.

In my process to count the number of populated cells by Cost Area I have
the following code:

Prolog:

Code: Select all

x=1;
CubeClearData( 'zCountCube' );
Data:

Code: Select all

If( CellIsUpdateable( 'P&L', V1, V2, Version, V4, Dept, CostArea, Measure ) = 0 );
	CellIncrementN( 1, 'zCountCube', CostArea, 'Count' );
EndIf;

The result of this is a count of 33 million across all measures, which I am presuming is the total number of
rule-populated cells?

I have noticed the following feeder in the rule file:

Code: Select all

['Revenue', 'Value£'] => ['South', 'Rev %'];
South is a consolidated element with 200 children, so the above feeder is feeding a consolidation.
I think this may be one source of the overfeeding (there are 7 dims in the P&L cube).
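If 'Rev %' is only ever needed at consolidated level, one common fix is to feed a single leaf instead of the consolidation, because a consolidation is treated as fed as soon as any one of its children is fed. A sketch only; 'South_Leaf1' is a placeholder for any one child of South:

Code: Select all

# feed one child of South rather than the whole consolidation
['Revenue', 'Value£'] => ['South_Leaf1', 'Rev %'];

If 'Rev %' is also rule-calculated at leaf level under South, then the children genuinely need feeding, and the fix is instead to tighten the left-hand side of the feeder so that fewer cells feed.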

Maren

Wim Gielis
MVP
Posts: 2487
Joined: Mon Dec 29, 2008 6:26 pm
OLAP Product: TM1
Version: PAL 2.0.8
Excel Version: Office 365 - latest
Location: Brussels, Belgium
Contact:

Re: Assessing instance/Cube Efficiency

Post by Wim Gielis » Fri Jul 17, 2020 2:08 pm

MarenC wrote:
Fri Jul 17, 2020 1:09 pm
South is a consolidated element with 200 children. Therefore the above rule is feeding a consolidation.
I think this may be one source of the overfeeding (there are 7 dims in the P&L cube)

Maren
I betcha !
Best regards,

Wim Gielis

Excel Most Valuable Professional, 2011-2014
https://www.wimgielis.com ==> 112 TM1 articles and a lot of custom code
Newest blog article: Updating process line numbers with AutoHotKey

MarenC
Posts: 108
Joined: Sat Jun 08, 2019 9:55 am
OLAP Product: Planning Analytics
Version: Planning Analytics 2.0
Excel Version: Excel 2016

Re: Assessing instance/Cube Efficiency

Post by MarenC » Fri Jul 17, 2020 2:39 pm

I am glad you agree Wim!
