Improving Performance in Intra-Cube TI Processes

Process run times are often ignored as long as they stay within acceptable limits, which are usually constraints set by the business but can sometimes be measured by user patience. Frequently, they are not addressed until they exceed those limits. Tackling process performance, especially in large models, is no simple task, and there are numerous issues and associated solutions to consider. This article focuses on one issue that can cause drastic performance degradation when moving data within a cube.

There are many situations where a process is used to move data within a cube, such as copying versions, performing calculations, or allocating data. In certain situations, this type of process can perform much worse than using an alternative method. When all the following are true, a process moving data within a cube can experience poor performance:

  • The cube has rules
  • The process uses CellGetN to retrieve values from the cube
  • The process uses CellPutN/CellIncrementN to write values back into the cube
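
Concretely, the pattern looks like the following Data-tab sketch; the `Sales` cube, its dimensions, and the variable names are hypothetical:

```
# Data tab: copy Actual to Budget within the same cube.
# vYear, vMonth, vAccount come from the process's source view on 'Sales'.
nValue = CellGetN('Sales', vYear, vMonth, vAccount, 'Actual');
CellPutN(nValue, 'Sales', vYear, vMonth, vAccount, 'Budget');
```

With no rules on `Sales`, a process like this runs quickly; once any rule exists on the cube, the same process can slow dramatically.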

When all of the above are present, a process can take drastically longer. For instance, an intra-cube process that completes in 10 seconds when the cube has no rules may take over 3 minutes once any rule is added, even if the rule has no direct impact on the data involved in the process. The problem grows with larger datasets and can push run times beyond acceptable limits. Note that while this combination is known to cause the issue, it may not be the only situation in which this degradation occurs.

The known fix is to remove any one of the conditions contributing to the degradation: remove the cube's rules, eliminate the need to CellGetN from the cube within the process, or replace the CellPutN/CellIncrementN calls by exporting the data to a file and importing it with a separate process.

If possible, the cube rules can be removed. The developer would need to determine whether those rules can, or should, be replaced with processes, and weigh the run times of those new processes. In many situations it would not make sense to remove all the rules from the cube, or there is no simple alternative, so another route may be needed.

If the process contains CellGetN functions, it likely needs supporting data to perform its calculations. Often these lookups cannot be avoided, because the retrieved values are needed to derive the values that will eventually be written back. However, if the same data is available in another cube, the process can read it from that cube instead, circumventing the issue.
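
As a sketch, assuming a hypothetical lookup cube `SalesDrivers` that holds the same driver values as the rule-bearing `Sales` cube, the CellGetN can be redirected so it no longer reads from the cube being written to:

```
# Data tab: read the driver from the lookup cube, not from 'Sales'.
# nValue is the value variable from the source view (hypothetical names).
nDriver = CellGetN('SalesDrivers', vYear, vAccount, 'DriverRate');
CellPutN(nValue * nDriver, 'Sales', vYear, vMonth, vAccount, 'Budget');
```

Only the CellGetN target changes; the write still lands in `Sales`, but the problematic combination of conditions is broken.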

The last, and possibly easiest, solution is to remove the CellPutN/CellIncrementN functions from the process and export the data to a file instead, then create a second process to import that file. The export can be done with an AsciiOutput or TextOutput call, or an ExecuteCommand that appends text to a file. If the file contains everything needed to perform the CellPutN/CellIncrementN into the cube, the import process is trivial.
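
A minimal sketch of the export/import pair, again with hypothetical cube, file, and variable names:

```
# Export process, Data tab: write each cell to a delimited file
# instead of calling CellPutN on the rule-bearing cube.
AsciiOutput('budget_load.csv', vYear, vMonth, vAccount, 'Budget', NumberToString(nValue));
```

```
# Import process, Data tab: with budget_load.csv as the data source and
# variables vYear, vMonth, vAccount, vVersion, vValue defined, the load
# is a single statement.
CellPutN(StringToNumber(vValue), 'Sales', vYear, vMonth, vAccount, vVersion);
```

Because the import process performs no CellGetN against the cube, it avoids the degradation entirely.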

By addressing performance bottlenecks such as this, developers can significantly improve the efficiency of intra-cube processes, ensuring smoother operations within the model.