Rule Performance Tuning
- mattgoff
- MVP
- Posts: 516
- Joined: Fri May 16, 2008 1:37 pm
- OLAP Product: TM1
- Version: 10.2.2.6
- Excel Version: O365
- Location: Florida, USA
Rule Performance Tuning
Any best methods people have to share regarding rule (and feeder) tuning? Both rules of thumb (cost of specific functions) and diagnostic tools (instead of trial and error, my usual method) to locate bottlenecks are helpful.
Matt
Please read and follow the Request for Assistance Guidelines. It helps us answer your question and saves everyone a lot of time.
- Eric
- MVP
- Posts: 373
- Joined: Wed May 14, 2008 1:21 pm
- OLAP Product: TM1
- Version: 9.4
- Excel Version: 2003
- Location: Chicago, IL USA
Re: Rule Performance Tuning
I remember hearing that the following functions are data hogs, but I never confirmed it:
STR()
NMBR()
Regards,
Eric
Blog: http://tm1-tipz.blogspot.com
Articles: http://www.google.com/reader/shared/use ... /label/TM1
Production: 32 bit 9.0 SP2, Windows 2000 Advanced Server. Web: 32 bit 9.0 SP2, Windows 2000 Server. Excel 2003
- Steve Rowe
- Site Admin
- Posts: 2416
- Joined: Wed May 14, 2008 4:25 pm
- OLAP Product: TM1
- Version: TM1 v6,v7,v8,v9,v10,v11+PAW
- Excel Version: Nearly all of them
Re: Rule Performance Tuning
I think one area you can look at is whether you are referencing strings in rules. Usually, but not always, these are attributes.
It's important that strings you test against (and all attributes, even numeric ones, are really strings) should be input rather than rule-derived. Ruled strings are never cached in RAM, so if you test against a ruled string, the rule that calculates the string is evaluated every time.
Given high sparsity, you can end up doing more work calculating the test than the real work of the rule itself.
Say you have
['myElement'] = N: IF ( ATTRS('dim', !dim, 'Valid') @= 'True', a*b, 0);
If the above rule covers 1000 cells and only 10% of them are valid, you would be calculating the logic behind the Valid attribute 1000 times but a*b only 100 times. If the rule on the attribute is complex, you can be doing an awful lot of extra work.
If you really need your "attribute" to be fully dynamic / rule-driven, try to make sure it's a numeric in a real cube; that way the result is cached in RAM.
(I think!)
Cheers
HTH
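To make the evaluation-count arithmetic above concrete, here is a toy Python sketch. The numbers and the caching model are illustrative only, not the actual TM1 engine:

```python
# Toy model of the evaluation counts described above: a rule like
#   ['myElement'] = N: IF ( ATTRS('dim', !dim, 'Valid') @= 'True', a*b, 0);
# evaluated over 1000 cells where only 10% are 'Valid'. The ruled string
# is never cached, so the test runs for every cell queried.
cells = 1000
valid_ratio = 0.10

attr_evaluations = 0   # the (uncached) string test
work_evaluations = 0   # the "real" work, a*b

for cell in range(cells):
    attr_evaluations += 1                    # re-derived on every query
    if cell < cells * valid_ratio:           # 10% of cells pass the test
        work_evaluations += 1

print(attr_evaluations, work_evaluations)    # 1000 100
```

The 10:1 ratio between test evaluations and useful work is the overhead Steve is describing.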
Technical Director
www.infocat.co.uk
-
- Posts: 68
- Joined: Wed Feb 25, 2009 2:26 am
- OLAP Product: TM1, Cognos Express
- Version: 9.5 9.4 9.1 9.0 8.4
- Excel Version: 2007 2003
Re: Rule Performance Tuning
In environments with large sets of rules (and cubes), it's important to determine first whether the bottlenecks are caused by rule calcs or by feeders.
Speaking generically, I go down the path of benchmarking performance first, then ripping out all feeders and comparing performance against the benchmark. It's common for feeders to perform fine during an initial build, only to get bogged down later when huge tranches of cube data are populated.
-
- Site Admin
- Posts: 1454
- Joined: Wed May 28, 2008 9:09 am
Re: Rule Performance Tuning
Interesting....
Roger, if you take a built, decent-sized system and remove all the feeders, do you:
a. enable SKIPCHECK? Then you will see no numbers;
b. disable SKIPCHECK? Then any halfway solid view will take days to calculate.
If you find that feeders slow the system down as it fills up with data, you are overfeeding (and your system will be too big).
Well-written feeders should not slow a system down in use. Yes, they will take time on startup, but they are doing their job: directing the engine to where there is data of value.
-
- Posts: 68
- Joined: Wed Feb 25, 2009 2:26 am
- OLAP Product: TM1, Cognos Express
- Version: 9.5 9.4 9.1 9.0 8.4
- Excel Version: 2007 2003
Re: Rule Performance Tuning
'Well written feeders'..... across my travels, the concept of feeders is easily the most misunderstood part of TM1 (and that statement excludes conditional feeders, which complicate matters exponentially). Based on this, you can bet your bottom dollar there are poorly written feeders in most large models.
The most common performance 'mistake' I come across is users attempting to avoid writing feeders (while still using SKIPCHECK;) by creating rules across both N: and C: levels, i.e.
['measure1'] = ['measure2'] * ['measure3'];
- mattgoff
- MVP
- Posts: 516
- Joined: Fri May 16, 2008 1:37 pm
- OLAP Product: TM1
- Version: 10.2.2.6
- Excel Version: O365
- Location: Florida, USA
Re: Rule Performance Tuning
I'm working my way through my rules and feeders now. One improvement I've made is to be as explicit as possible, with an early IF in all complex rules to bypass more expensive operations, e.g. testing whether FTE > 0 before doing any other tests or DB/ATTRS calls.
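The "cheap guard first" idea can be sketched like this in Python (the names, costs, and cell counts are all illustrative, not real TM1 calls):

```python
# Toy sketch of the "cheap guard first" idea: an early test on FTE
# short-circuits the rule, so the expensive DB()/ATTRS()-style lookup
# only runs for cells that might matter.
lookups = {"expensive": 0}

def expensive_attr_lookup(cell):
    lookups["expensive"] += 1        # stands in for a DB()/ATTRS() call
    return cell % 7 == 0

def rule(cell, fte):
    if fte <= 0:                     # cheap guard: most cells stop here
        return 0
    if expensive_attr_lookup(cell):
        return fte * 100
    return 0

values = [rule(c, fte=(1 if c < 50 else 0)) for c in range(1000)]
print(lookups["expensive"])   # 50 -- only cells with FTE > 0 hit the lookup
```

Reversing the test order would invoke the expensive lookup for all 1000 cells instead of 50.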
On that note, since I've been adjusting my rules, I now have an issue where rows that appear to be zeroed out won't zero-suppress. I've tried manually clearing them, which initially causes them to clear, but if I edit my rules they return to the view. This makes me think it's feeder-related (and the opposite of most feeder/cube viewer problems). Any ideas on what might be going on?
Matt
Please read and follow the Request for Assistance Guidelines. It helps us answer your question and saves everyone a lot of time.
-
- MVP
- Posts: 3652
- Joined: Fri Mar 13, 2009 11:14 am
- OLAP Product: TableManager1
- Version: PA 2.0.x
- Excel Version: Office 365
- Location: Switzerland
Re: Rule Performance Tuning
If editing and saving a rule causes zero values to display in a zero suppressed grid then this indicates overfeeding, i.e. values that calculate to a value of zero (and that you therefore theoretically don't want to feed) are being fed. The challenge is to eliminate the overfeeding, depending on the type of model it isn't always possible to eliminate all of this type of overfeeding.
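A toy Python model of the symptom being described (this is an illustrative simplification of the thread's claim, not the TM1 API):

```python
# Toy model: zero suppression treats a fed cell as populated, so cells
# that are fed but whose rule calculates to 0 reappear whenever feeders
# are re-run (e.g. on every rule save).
fed = set()

def calc(cell):
    return 10 if cell in ("A", "B") else 0    # C and D calculate to zero

for cell in ("A", "B", "C", "D"):             # overfeeding: feed everything
    fed.add(cell)

# a zero-suppressed view that nevertheless shows every fed cell
visible = [c for c in ("A", "B", "C", "D") if c in fed]
print(visible)   # all four rows show, even though C and D are zero
```

Feeding only A and B (the cells that actually calculate to a value) would keep C and D out of the suppressed view.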
-
- Posts: 63
- Joined: Wed Sep 14, 2011 3:10 am
- OLAP Product: TM1
- Version: PA 2.0
- Excel Version: Office 365
Re: Rule Performance Tuning
Hi all,
Steve Rowe wrote:
Say you have
['myElement'] = N: IF ( ATTRS('dim', !dim, 'Valid') @= 'True', a*b, 0);
If the above rule covers 1000 cells and only 10% of them are valid, you would be calculating the logic behind the Valid attribute 1000 times and a*b only 100 times.
I know this topic was posted quite some time ago. However, we have recently been having performance issues pertaining to rules, so I'm now studying how to fine-tune them.
Regarding what Steve quoted above, I would like to know: if the rule is
['myElement'] = N: a*b;
instead, will it give better performance?
In another example,
['Sales'] = N: Qty*UnitPrice;
assuming I only need to read data for 2012, and I have a lot of other year elements (2001~2040), which of the rules
['Sales'] = N: IF (!Year @= '2012', Qty*UnitPrice, STET);
or
['Sales'] = N: Qty*UnitPrice;
takes more memory in computing?
Thank you.
Beek
- garry cook
- Community Contributor
- Posts: 209
- Joined: Thu May 22, 2008 7:45 am
- OLAP Product: TM1
- Version: Various
- Excel Version: Various
Re: Rule Performance Tuning
beek wrote:
If the rule is
['myElement'] = N: a*b;
instead, will it give better performance?
Yes, but it's not the same functionality. As per Steve's point, it no longer has to work out the attribute (which isn't stored in RAM), but by doing this you obviously lose the Valid logic check.
beek wrote:
Assuming I only need to read data for 2012... which of the rules takes more memory in computing?
['Sales','2012'] = N: Qty*UnitPrice;
would be more sensible, as it avoids potential precedence issues later. Your second example would calc for all years, so it wouldn't meet your requirement of only 2012 data being calc'd. The feeding is the more crucial part though: if you do the classic ['Qty'] => ['Sales']; feeder without narrowing it to a year, it will feed all years regardless of the calculated value (including zero), so you need to make sure it's ['Qty','2012'] => ['Sales']; to avoid feeding a stack of invalid years.
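The difference in fed-cell counts can be sketched with made-up dimension sizes (40 years, 500 SKUs — purely illustrative):

```python
# Toy comparison of how many cells the two feeders populate:
# an unscoped ['Qty'] => ['Sales']; feeder feeds every year, while
# ['Qty','2012'] => ['Sales']; feeds only the year you need.
years = [str(y) for y in range(2001, 2041)]   # 40 year elements
skus = range(500)                              # 500 SKUs holding Qty data

unscoped = {(y, s) for y in years for s in skus}   # feeds all years
scoped = {("2012", s) for s in skus}               # feeds 2012 only

print(len(unscoped), len(scoped))   # 20000 vs 500 fed cells
```

A 40x reduction in fed cells translates directly into less feeder work at startup and a smaller in-memory footprint.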
-
- Posts: 64
- Joined: Fri Jul 27, 2012 4:13 pm
- OLAP Product: TM1
- Version: 2010
- Excel Version: Excel 2010
Re: Rule Performance Tuning
I thought this would be over-feeding, but I don't have any successful experience with fixing over-feeding. At the end of the day, I need to feed every rule properly rather than over-feed.
-
- Posts: 63
- Joined: Wed Sep 14, 2011 3:10 am
- OLAP Product: TM1
- Version: PA 2.0
- Excel Version: Office 365
Re: Rule Performance Tuning
I will not be using
['Sales','2012'] = N: Qty*UnitPrice;
as this would be too static. It would more likely be
['Sales'] = N: IF (!Year @= DB('Current','Year','Value'), Qty*UnitPrice, STET);
In fact, I want to feed Sales no matter which year it is. Since most of the time users will only be checking the current year (e.g. 2012), I'm wondering: will limiting the calculation to only 2012 save resources?
My purpose now is to evaluate and find a more resource-efficient way of writing rules.
- Steve Rowe
- Site Admin
- Posts: 2416
- Joined: Wed May 14, 2008 4:25 pm
- OLAP Product: TM1
- Version: TM1 v6,v7,v8,v9,v10,v11+PAW
- Excel Version: Nearly all of them
Re: Rule Performance Tuning
Hi beek,
I'd only go to this extreme if you have a problem and you really need to stop non-2012 data being available.
Your resources only get used up when users ask for a result, so if you are going to feed all periods and, in general, users are not going to ask for non-2012 data, there is not much difference between
['Sales'] = N: Qty*UnitPrice;
and
['Sales'] = N: IF (!Year @= DB('Current','Year','Value'), Qty*UnitPrice, STET);
since in the first one the rule for non-2012 years won't be evaluated anyway, because users won't be looking.
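Steve's point about demand-driven evaluation can be sketched like this (an illustrative model, not the TM1 engine):

```python
# Toy model of demand-driven rule evaluation: a rule covering every year
# costs nothing for the years nobody queries, because the calculation
# only fires when a cell is actually requested.
evaluations = {"count": 0}

def sales(year):
    evaluations["count"] += 1     # the rule fires only on demand
    return 42                     # stand-in for Qty * UnitPrice

viewed = ["2012", "2012", "2012"]         # users only open 2012 views
results = [sales(y) for y in viewed]

print(evaluations["count"])   # 3 evaluations; unviewed years cost nothing
```

The guard condition in the second rule variant would therefore mostly be paying its own evaluation cost without saving any real work.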
Cheers,
Technical Director
www.infocat.co.uk
-
- Posts: 63
- Joined: Wed Sep 14, 2011 3:10 am
- OLAP Product: TM1
- Version: PA 2.0
- Excel Version: Office 365
Re: Rule Performance Tuning
Thanks Steve.
If resources are only consumed when users request a view, then what about loading of the cube?
My problem now is that the cube is growing at a tremendous rate. The biggest cube takes slightly more than an hour to load (as reflected in tm1server.log), so each restart of the TM1 server takes a long time.
Besides, the memory monitoring tool I'm using on the TM1 server shows that free memory, compared to previous months, is decreasing at quite an alarming rate. Hence I'm pressed to find answers/solutions to downsize the model as far as possible.
-
- MVP
- Posts: 2832
- Joined: Tue Feb 16, 2010 2:39 pm
- OLAP Product: TM1, Palo
- Version: Beginning of time thru 10.2
- Excel Version: 2003-2007-2010-2013
- Location: Atlanta, GA
- Contact:
Re: Rule Performance Tuning
This is the cube feeders being processed. What does your feeder statement look like? Are you feeding years that don't need to be fed?
-
- Posts: 63
- Joined: Wed Sep 14, 2011 3:10 am
- OLAP Product: TM1
- Version: PA 2.0
- Excel Version: Office 365
Re: Rule Performance Tuning
I have feeders like
['Qty'] =>['Sales'];
and also feeders like
['Inventory'] => DB ('Forecast',!Year,!Scenario, IF (DIMIX('Week',!Week)<DIMIX('Week',DB('Setting','Current Week','Value') & DIMIX('Week',!Week)>DIMIX('Week',DB('Setting','Start Week','Value'), ATTRS('Week',!Week,'Next')),!Country, !SKU, !Measure);
We have both actual and forecast data in the cube. For actualised weeks, data is imported from the ERP system; for forecast weeks, it is computed (carried over from the previous week's inventory).
For Inventory, I can't think of any way to downsize the rule/feeder, so I'm looking at the Sales computation. If I put in an IF to check the year, will this shorten the feeder time?
-
- Posts: 7
- Joined: Tue Jul 10, 2012 3:47 pm
- OLAP Product: Cognos TM1
- Version: 9.5.2,10.2.2
- Excel Version: 2016
Re: Rule Performance Tuning
I completely agree, and I have even seen calcs written at both N: and C: level, which really hampers performance. If feeders are written properly and all rules are written at N: level, then I feel the cube really works well.
Also, if you want to override a C: formula, I would suggest creating another element to accommodate that formula, instead of overwriting the consolidated element.
Bhushan Parulekar
Empty mind is Devil's Workshop, but my devils creative!!
- Steve Rowe
- Site Admin
- Posts: 2416
- Joined: Wed May 14, 2008 4:25 pm
- OLAP Product: TM1
- Version: TM1 v6,v7,v8,v9,v10,v11+PAW
- Excel Version: Nearly all of them
Re: Rule Performance Tuning
Hi,
I'd be looking hard at this one:
['Inventory'] => DB ('Forecast',!Year,!Scenario, IF (DIMIX('Week',!Week)<DIMIX('Week',DB('Setting','Current Week','Value') & DIMIX('Week',!Week)>DIMIX('Week',DB('Setting','Start Week','Value'), ATTRS('Week',!Week,'Next')),!Country, !SKU, !Measure);
First, did you copy the feeder from a compiled rule sheet? I'm not sure the above would compile: the last DIMIX has too many arguments, and there is no true/false value. It's possible the rule compiler has not picked this up, and this is where your performance hit is coming from.
When you are feeding forward through time, you need to take great care.
The first thing to watch out for is not to set the feeder up so that every week feeds every week in front of it, since this can lead to massive overfeeding from the left, which is what kills load times. I expect your DIMIX logic is an attempt to manage this.
The other thing is the Next attribute. I normally create one called 'Next Feeder Safe' that is not fully populated into the future, so that I can manage how far forward the feeder rolls.
Check you are not blowing the feeder stack; you will see a bunch of messages in the message log. This is sometimes unavoidable, but it is always an indication that you need to look hard at what you are doing.
Using DIMIX in rules is considered bad practice (by me at least) since it is very hard to control dimension order. I'd probably look at moving all the conditional feeder logic into a TI process that populates an attribute (or leaves it blank) saying which week to feed. Your feeder then becomes:
['Inventory'] => DB ('Forecast',!Year,!Scenario, ATTRS('Week',!Week,'Conditional Next'),!Country, !SKU, !Measure);
which should run much faster, as all your conditional logic has been pre-calculated.
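The precalculation idea can be sketched in Python (week names and the feed window are made up; the dictionary stands in for the 'Conditional Next' attribute a TI would populate):

```python
# Toy version of the pre-calculated conditional feeder: a batch step
# computes the 'Conditional Next' lookup once, so the per-cell feeder
# reduces to a cheap lookup instead of re-deriving the condition.
weeks = [f"W{n:02d}" for n in range(1, 53)]
start_week, current_week = 10, 20        # hypothetical forecast window

conditional_next = {}                     # the precomputed "attribute"
for i, w in enumerate(weeks):
    if start_week <= i + 1 < current_week:
        conditional_next[w] = weeks[i + 1]   # feed the following week
    else:
        conditional_next[w] = ""             # blank: feed nothing

# the feeder itself now does no conditional work at all
fed_targets = [conditional_next[w] for w in weeks if conditional_next[w]]
print(len(fed_targets))   # 10 weeks inside the window feed their successor
```

The conditional logic runs once per week when the TI rolls the weeks over, instead of once per fed cell.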
Just some thoughts....
HTH
Technical Director
www.infocat.co.uk
-
- Posts: 63
- Joined: Wed Sep 14, 2011 3:10 am
- OLAP Product: TM1
- Version: PA 2.0
- Excel Version: Office 365
Re: Rule Performance Tuning
.. just realised I missed out some closing brackets and the true/false value, hence making it difficult to comprehend. Anyway, it seems you figured out the logic behind it.
You are proposing to use a feeder like the below instead:
['Inventory'] => DB ('Forecast',!Year,!Scenario, ATTRS('Week',!Week,'Conditional Next'),!Country, !SKU, !Measure);
and use a TI to populate 'Conditional Next'. So for weeks before the start week, or weeks greater than the end week, 'Conditional Next' will not have any value?
For the rule, do I just keep it as it is? My current rule is as below:
['Inventory'] = N: IF (DB('SettingByWeek',!Year,!Scenario,!Week,'fActual') = 1
% DIMIX('Week',!Week) <= DIMIX ('Week' , DB('SettingByScenario',!Year,!Scenario,'N Week') )
% DIMIX('Week',!Week) > DIMIX ('Week' , DB('SettingByScenario',!Year,!Scenario,'Scenario Week End') ) , STET
, DB('Forecast',!Year,!Scenario , ATTRS('Week', !Week ,'Previous'),!Country,!SKU, !Measure)
+ ['Receiving'] -['Sales']-['Transfer']-['Blocked Stk'] ) ;
C: IF (LONG(ATTRS('Week','Week',!Week))=4 % LONG(!Week)=8
, DB('Forecast',!Year,!Scenario
, ELCOMP ( 'Week' ,!Week , ELCOMPN('Week', !Week ) )
,!Country,!SKU,!Measure)
, STET
) ;
Note: currently we actually have two setting cubes, one SettingByWeek and one SettingByScenario, but in the previous example I quoted I had summarised the feeders.
- Steve Rowe
- Site Admin
- Posts: 2416
- Joined: Wed May 14, 2008 4:25 pm
- OLAP Product: TM1
- Version: TM1 v6,v7,v8,v9,v10,v11+PAW
- Excel Version: Nearly all of them
Re: Rule Performance Tuning
Hi,
So I would probably put another two measures in your SettingByWeek cube.
One is a numeric measure that I flag with a 1 (say it's called 'Conditional Calc'): we want to test against a number in the rules rather than against a populated string value, since testing against a string is more expensive than testing against a number (legend has it...).
The other is a string measure holding the week I want to feed ('Conditional Next'), as already discussed.
Both would be populated by your logic in a TI process that runs when the weeks roll over.
DB('SettingByWeek',!Year,!Scenario,!Week,'fActual') = 1
% DIMIX('Week',!Week) <= DIMIX ('Week' , DB('SettingByScenario',!Year,!Scenario,'N Week') )
% DIMIX('Week',!Week) > DIMIX ('Week' , DB('SettingByScenario',!Year,!Scenario,'Scenario Week End') )
(Actually, I still wouldn't use DIMIX, as this is positional logic, which is dangerous. Either use a text function to derive the week number from the element name, or create and populate a week-number attribute and test against that.)
Next, create a consolidation that sums the + ['Receiving'] - ['Sales'] - ['Transfer'] - ['Blocked Stk'] part. Basically, never do a straight addition in a rule; it's much more efficient to use a consolidation, with negative weights for the subtractions. Call this 'Stock Movement' (if that's what it is...).
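The weighted consolidation amounts to this arithmetic (the values are made up for illustration):

```python
# Toy version of the 'Stock Movement' consolidation: element weights do
# the subtraction, replacing the in-rule arithmetic
# + ['Receiving'] - ['Sales'] - ['Transfer'] - ['Blocked Stk'].
weights = {"Receiving": 1, "Sales": -1, "Transfer": -1, "Blocked Stk": -1}
cell = {"Receiving": 120, "Sales": 80, "Transfer": 10, "Blocked Stk": 5}

stock_movement = sum(weights[m] * cell[m] for m in weights)
print(stock_movement)   # 25, i.e. 120 - 80 - 10 - 5
```

The engine's native consolidation machinery handles this summation far more efficiently than evaluating the same additions inside an N: rule.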
Your rule is then
['Inventory'] = N: IF (DB('SettingByWeek',!Year,!Scenario,!Week,'Conditional Calc') = 1, STET
, DB('Forecast',!Year,!Scenario , ATTRS('Week', !Week ,'Previous'),!Country,!SKU, !Measure)
+ ['Stock Movement']);
Feeder as previously stated.
I would also change your C: rule, since it contains positional logic too (the ELCOMP / ELCOMPN). Create and populate a string attribute that holds the last week for each C-level element.
C: IF (LONG(ATTRS('Week','Week',!Week))=4 % LONG(!Week)=8
, DB('Forecast',!Year,!Scenario
, Attrs ('Week' , !Week, 'LastWeek')
,!Country,!SKU,!Measure)
, STET
) ;
That's about all I can see.
Technical Director
www.infocat.co.uk