
Improve TI process time for cubes with heavy rules

Posted: Thu Oct 05, 2017 7:34 pm
by kenship
Hi, I'm developing a salary and benefits model. For the purposes of this post, I've simplified the model to the following:

Cubes:
- Salary Rate
- Benefits Rate
- Input
- Calculation
- Report
- Mapping

Rules:
- Input sends FTE and headcount data into Calculation
- Calculation calculates salary and each benefit based on Input, Salary Rate and Benefits Rate
- Input receives salary $ and benefits $ from Calculation

*One of the reasons for using rules is that an FTE change can take place in any month, and depending on whether the change is an internal transfer or a new hire, the benefits can be very different. There are a few more reasons why we can't use a standardized master salary and benefits table and have to rely on a rule-calculated cube instead.
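
To make the flow concrete, here is a minimal sketch of what the Calculation cube's rule file might look like. Every cube, dimension and measure name below is hypothetical rather than taken from the actual model:

    # Rule file for the 'Calculation' cube -- a simplified sketch.
    ['Salary $'] = N:
        DB('Input', !Scenario, !Employee, !Month, 'FTE')
      * DB('Salary Rate', !Scenario, !Employee, !Month, 'Rate');

    ['Benefits $'] = N:
        ['Salary $'] * DB('Benefits Rate', !Scenario, !Employee, !Month, 'Rate');

    FEEDERS;
    # Feed the results back to Input, which pulls them with its own DB() rule.
    ['Salary $'] => DB('Input', !Scenario, !Employee, !Month, 'Salary $');
    ['Benefits $'] => DB('Input', !Scenario, !Employee, !Month, 'Benefits $');
    # The feeder for ['Salary $'] itself (from 'FTE') would live in the Input
    # cube's rule file: ['FTE'] => DB('Calculation', !Scenario, !Employee, !Month, 'Salary $');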

Processes:
- Input to Mapping (every 10 min, beginning :00)
- Input to Report (every 10 min, beginning :05)

-----

Problem:
The two processes take a long time (3 min.) whenever there is a change in Input. If Input doesn't change, the processes take literally a few seconds; I suspect the view cache is speeding them up. However, during planning season I expect Input to change constantly during work hours for a few months, and the time the processes need is too long for our requirements. I'm looking for ways to reduce the complexity of the model and improve the time needed to run the processes.

Thanks for looking into this.

Kenneth

Re: Improve TI process time for cubes with heavy rules

Posted: Thu Oct 05, 2017 7:45 pm
by tomok
If you really need to run this every 10 minutes, have you considered just moving the data for changed employees (assuming you have an employee dimension)?
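
A minimal sketch of that idea in a TI prolog, assuming a subset of changed employees is maintained somewhere upstream (cube, view and subset names here are all hypothetical):

    # Prolog -- build a source view restricted to changed employees only.
    vCube = 'Calculation';
    vView = 'zChangedOnly';
    IF(ViewExists(vCube, vView) = 1);
      ViewDestroy(vCube, vView);
    ENDIF;
    ViewCreate(vCube, vView);
    # 'Changed Employees' is a hypothetical subset flagged by the input process.
    ViewSubsetAssign(vCube, vView, 'Employee', 'Changed Employees');
    ViewExtractSkipZeroesSet(vCube, vView, 1);
    ViewExtractSkipCalcsSet(vCube, vView, 1);      # leaf cells only
    ViewExtractSkipRuleValuesSet(vCube, vView, 0); # keep rule-calculated cells
    DataSourceType = 'VIEW';
    DataSourceNameForServer = vCube;
    DataSourceCubeview = vView;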

Re: Improve TI process time for cubes with heavy rules

Posted: Thu Oct 05, 2017 8:18 pm
by kenship
I can probably try this, thanks! But not much data would be excluded, because the model calculates the year-to-year changes, so basically every employee will see their salary and benefits amounts change every year.

Unless there's a way to make the cube smart enough to identify only the changed cells.

Re: Improve TI process time for cubes with heavy rules

Posted: Thu Oct 05, 2017 9:04 pm
by tomok
Sorry, you presented your problem in very simplified terms so I can only give you very simplified ideas.

Re: Improve TI process time for cubes with heavy rules

Posted: Fri Oct 06, 2017 7:29 am
by gtonkin
Similar models I have running at clients are instantaneous, and they also handle joiners, leavers, transfers, annual increases, performance increases (based on tenure), benefits based on role, grading, etc. Everything is rule-based, and a change - e.g. a termination date, a salary override, a change to a benefit driver - is lightning fast.
Admittedly we are only dealing with 3,000 employees, but further detail about your environment may get you more helpful answers.

Re: Improve TI process time for cubes with heavy rules

Posted: Fri Oct 06, 2017 8:15 am
by Steve Rowe
With a ten-minute refresh by design you're very close to wanting the system "always live" anyway, so why use a TI instead of rules?

Is the issue with the performance of the TI just the elapsed time, or the fact that it is locking? If it always takes 3 minutes to run and is non-locking, your data is still updated every 10 minutes, so no problem. If it is locking, this should be a solvable problem.
kenship wrote: Thu Oct 05, 2017 7:34 pm The two processes take a long time (3 min.) whenever there is a change in Input. If Input doesn't change, the processes take literally a few seconds; I suspect the view cache is speeding them up. However, during planning season I expect Input to change constantly during work hours for a few months, and the time the processes need is too long for our requirements. I'm looking for ways to reduce the complexity of the model and improve the time needed to run the processes.
You've probably worked this out, but since you've not explicitly stated it: the performance issue is with your rules, not the TI process. The input is destroying the pre-calculated cache of rule values that the TI depends on, so the rule engine has to recalculate before the TI executes.
Check you have MTQ set to something (PA ships with it set to 1, for example).
Check you are not over-feeding the right-hand side of the rule.
Check you are performing the calculation in as few steps as possible (i.e. if a = b * c * d * e, where b to e are all calculations of some form, don't keep b to e as separate measures).
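
A couple of sketches of those checks, with hypothetical names throughout. MTQ is set in tm1s.cfg:

    # tm1s.cfg -- enable multi-threaded queries.
    # ALL uses all available cores; an explicit integer caps the thread count.
    MTQ=ALL

The feeder and few-steps points, sketched in a rule file:

    # One step instead of four: inline the intermediate expressions...
    ['Total Cost'] = N: ['FTE'] * ['Rate'] * ['Months'] * (1 + ['Load %']);
    # ...rather than rule-calculating b, c, d, e as separate measures
    # (each needing its own feeder) and then multiplying them together.

    FEEDERS;
    # Feed exactly the cells the rule calculates...
    ['FTE'] => ['Total Cost'];
    # ...not a broad target such as a whole measures consolidation, which
    # marks far more cells as fed than the rules ever populate.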

Re: Improve TI process time for cubes with heavy rules

Posted: Fri Oct 06, 2017 12:27 pm
by kenship
tomok wrote: Thu Oct 05, 2017 9:04 pm Sorry, you presented your problem in very simplified terms so I can only give you very simplified ideas.
Fully understood. No need to say sorry at all.

Re: Improve TI process time for cubes with heavy rules

Posted: Fri Oct 06, 2017 12:36 pm
by kenship
gtonkin wrote: Fri Oct 06, 2017 7:29 am Similar models I have running at clients are instantaneous, and they also handle joiners, leavers, transfers, annual increases, performance increases (based on tenure), benefits based on role, grading, etc. Everything is rule-based, and a change - e.g. a termination date, a salary override, a change to a benefit driver - is lightning fast.
Admittedly we are only dealing with 3,000 employees, but further detail about your environment may get you more helpful answers.
Hi, thanks for the reply.

I think we are quite close. In our case we have 6,000+ employees, but after grouping we have just over 2,000 distinct line items.
Our complexity is that we need to calculate every change, and to further complicate the situation:
1. We add another dimension so that we can keep 3 sets of scenarios, and therefore 3 sets of calculations;
2. We run a multi-year budget, and once there's a budget restatement we recalculate the restatement for all years in the multi-year budget cycle.

This is why things are complicated.

Re: Improve TI process time for cubes with heavy rules

Posted: Fri Oct 06, 2017 12:50 pm
by kenship
Steve Rowe wrote: Fri Oct 06, 2017 8:15 am With a ten-minute refresh by design you're very close to wanting the system "always live" anyway, so why use a TI instead of rules?

Is the issue with the performance of the TI just the elapsed time, or the fact that it is locking? If it always takes 3 minutes to run and is non-locking, your data is still updated every 10 minutes, so no problem. If it is locking, this should be a solvable problem.
kenship wrote: Thu Oct 05, 2017 7:34 pm The two processes take a long time (3 min.) whenever there is a change in Input. If Input doesn't change, the processes take literally a few seconds; I suspect the view cache is speeding them up. However, during planning season I expect Input to change constantly during work hours for a few months, and the time the processes need is too long for our requirements. I'm looking for ways to reduce the complexity of the model and improve the time needed to run the processes.
You've probably worked this out, but since you've not explicitly stated it: the performance issue is with your rules, not the TI process. The input is destroying the pre-calculated cache of rule values that the TI depends on, so the rule engine has to recalculate before the TI executes.
Check you have MTQ set to something (PA ships with it set to 1, for example).
Check you are not over-feeding the right-hand side of the rule.
Check you are performing the calculation in as few steps as possible (i.e. if a = b * c * d * e, where b to e are all calculations of some form, don't keep b to e as separate measures).
Thanks Steve.

To answer:
1. I don't think locking is the issue at this time;
2. MTQ - unfortunately I don't have access to the server configuration and settings, but I'll look it up;
3. Over-feeding is certainly something I will look at;
4. Simplifying the calculation is also something I will look at.

Kenneth

Re: Improve TI process time for cubes with heavy rules

Posted: Mon Oct 09, 2017 5:37 am
by macsir
kenship wrote: Fri Oct 06, 2017 12:36 pm
Hi, thanks for the reply.

I think we are quite close. In our case we have 6,000+ employees, but after grouping we have just over 2,000 distinct line items.
Our complexity is that we need to calculate every change, and to further complicate the situation:
1. We add another dimension so that we can keep 3 sets of scenarios, and therefore 3 sets of calculations;
2. We run a multi-year budget, and once there's a budget restatement we recalculate the restatement for all years in the multi-year budget cycle.

This is why things are complicated.
There is always room for performance improvement. I think there are several things you need to consider, and most of them have been said by our gurus already.
1. TI process
Try to run TI processes in parallel if there is no locking, or even break processes into subprocesses using the TM1RunTI command (see the sketch after this list).
2. Rule
Over-feeding is definitely your main problem here. Break the entire calculation logic into small pieces and use static values where possible to feed the next step of the calculation. Always try to use TM1's natural consolidation rather than putting a sum or average in the rule.
3. Server
Use a more powerful CPU with more cores and a higher clock frequency.
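
For item 1, a sketch of side-by-side execution from a Windows batch file via the TM1RunTI utility; the admin host, server name, credentials, process name and parameter below are all made up:

    :: run_parallel.bat -- 'start' launches each TM1RunTI without waiting,
    :: so the two slices of the load run concurrently.
    start tm1runti -adminhost localhost -server MyTM1Server -user admin -pwd secret -process Load.Calc.Slice pRegion=East
    start tm1runti -adminhost localhost -server MyTM1Server -user admin -pwd secret -process Load.Calc.Slice pRegion=West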

Re: Improve TI process time for cubes with heavy rules

Posted: Mon Oct 09, 2017 8:59 am
by Drg
If your process generates a lot of output data and the receiving cube is not the final one in the calculation chain, you first need to find the bottleneck in your process:
the long calculation, or the long insert.
Based on the result of that analysis: I assume you have further logic downstream that calculates on this data, in which case launching TIs side by side will not give you a result.

After analyzing the process, I would move on to analyzing the result of the calculation: where the heaviest data is concentrated, and what opportunities are available to optimize the further calculations (it may be possible to add more offline processes to pre-aggregate the data).

Also look at the server configuration (e.g. the virtual machine).

Re: Improve TI process time for cubes with heavy rules

Posted: Mon Oct 09, 2017 9:42 pm
by kenship
Hi all,

Thanks for all the replies.

I gave some thought to the model and found a way to separate the calculation of salary and benefits into 2 streams:

The first stream will have salary and benefits pre-calculated for the full year, on a full-time basis, with the values hardcoded.
The second stream will deal with the rest.

Because the vast majority of the headcount will fall into the first stream, I believe a lot of resources will be freed up to handle the second stream, which should significantly improve processing time.
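
In case it helps anyone searching later: the "hardcoding" step could be a TI whose data source is a view on the rule-calculated cube, writing into a static twin cube. A minimal sketch of the Data tab (all cube and variable names hypothetical):

    # Data tab -- vValue, vScenario, etc. are the view's data source variables.
    # The source view must have 'skip rule values' switched OFF so that the
    # rule-calculated cells are actually extracted.
    # Once stored statically, the rule engine no longer recalculates these
    # cells for stream-one employees.
    CellPutN(vValue, 'Calculation Static', vScenario, vEmployee, vMonth, vMeasure);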

Kenneth