What can cause variability in recalculation times?
Posted: Thu Mar 29, 2018 2:15 pm
Hi All
I have been challenged to improve the performance of our budgeting model.
It is very slow: a full system recalc takes over 15 minutes on average.
I have also been required to set challenging SLAs for system performance, which I will be held to and have to report against.
As such, I have been working to bring down the model recalculation time.
I have built a testing process around ViewConstruct so I can record the time the main calculation cube takes to compute a view, and test/confirm the impact of any changes.
I have found that despite the source data remaining the same, and there being no other demands on the server, the model can vary substantially in the time that it takes to calculate this view.
For example, the quickest recalc is 8 minutes and the longest 28 minutes (not an outlier), with the average around 15 minutes. This forces me to run each test many times, e.g. 30, to get a defensible data set showing the true performance range.
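For reference, the shape of my timing harness is roughly the following (a minimal Python sketch; the `recalc` callable here is just a stand-in for whatever actually triggers the ViewConstruct on the server, which is an assumption, not my real code):

```python
import statistics
import time


def benchmark(recalc, runs=30):
    """Time the recalc callable over repeated runs and summarise the spread."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        recalc()  # stand-in: replace with the call that rebuilds the test view
        timings.append(time.perf_counter() - start)
    return {
        "min": min(timings),
        "max": max(timings),
        "mean": statistics.mean(timings),
        "stdev": statistics.stdev(timings) if runs > 1 else 0.0,
    }


if __name__ == "__main__":
    # Hypothetical stand-in workload so the sketch runs on its own.
    stats = benchmark(lambda: time.sleep(0.01), runs=5)
    print(f"min={stats['min']:.3f}s  max={stats['max']:.3f}s  "
          f"mean={stats['mean']:.3f}s  stdev={stats['stdev']:.3f}s")
```

The key point is capturing min/max/mean/stdev rather than a single run, since a lone measurement can land anywhere in the 8–28 minute range.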
I watch Task Manager during this process and can see that sometimes (on the quick refreshes) it sits at 100% CPU utilisation, yet other times it fluctuates wildly between 100% and 10%.
While I put this testing regime together: what could be the cause of this variability in calculation time?
I read somewhere that TM1 decides on the fly the path it's going to take to calculate an output. Is there some way to help direct it, e.g. with more specific feeders?