Cubes, even small ones, use quite a bit of RAM. 1 GB for an average-sized cube is not exceptional. One of our larger cubes, with lots of dimension and value fields, easily exceeds 2 GB, and that’s already after applying pre-filters to minimize the data set. This RAM usage is fine while the cube is actually being used, but I’ve noticed the application never releases the RAM after the cube has been closed. I checked to make sure that memory optimization for the table is switched off, but that doesn’t seem to affect cubes. I’m a bit worried we will hit the limit of our terminal servers despite the 48 GB of RAM per machine, since we plan to add more cubes to our application in the near future.

Is there a way to force the application to release the RAM after a cube is closed? Perhaps even to release it when a cube has been open in the background for a while? Even when there is plenty of RAM left, it doesn’t make sense not to release it, especially since reopening the same cube only seems to take even more RAM.

(…) but I’ve noticed the application never releases the RAM after the cube has been closed.

(…) Even when there is plenty of RAM left, it doesn’t make sense not to release it

This is actually by design. Even though it may seem counterintuitive, this behaviour is what’s best for the performance of the application, both at the moment of closing the first cube and at the moment of opening the second one.

The Windows and Web GUIs are both .NET applications, and this is simply how garbage collection works in .NET. It is not a design choice made by us; it is part of Microsoft’s .NET runtime. The fact that there is plenty of RAM available is actually a very good reason not to release the memory the process is holding: garbage collection is very expensive, and so is re-obtaining memory blocks that the application can use again later.
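
To make this a bit more concrete, here is a small stand-alone C# sketch (illustrative only; the Allocate helper and the sizes are made up for the demo, this is not code from the product). It shows that the working set stays high after a large allocation becomes garbage, and only drops after an explicit full collection, which is exactly the expensive step the runtime postpones:

```csharp
using System;
using System.Diagnostics;

class GcRetentionDemo
{
    static long WorkingSetMb() =>
        Process.GetCurrentProcess().WorkingSet64 / (1024 * 1024);

    static void Main()
    {
        Console.WriteLine($"Baseline:           {WorkingSetMb()} MB");

        Allocate(); // all references go out of scope here

        // The arrays are garbage now, but the runtime typically keeps the
        // memory reserved for the next allocation instead of returning it
        // to the OS right away; Task Manager still shows it as in use.
        Console.WriteLine($"After close, no GC: {WorkingSetMb()} MB");

        // Handing memory back requires a full collection, which is exactly
        // the expensive operation the runtime avoids doing eagerly.
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();
        Console.WriteLine($"After forced GC:    {WorkingSetMb()} MB");
    }

    // Stands in for loading a cube: roughly 800 MB of arrays that become
    // garbage as soon as the method returns.
    static void Allocate()
    {
        var blocks = new byte[100][];
        for (int i = 0; i < blocks.Length; i++)
        {
            blocks[i] = new byte[8 * 1024 * 1024];
            for (int j = 0; j < blocks[i].Length; j += 4096)
                blocks[i][j] = 1; // touch every page so it is really committed
        }
    }
}
```

Running a full, blocking collection like this on a multi-gigabyte heap can freeze the application for a noticeable moment, which is why the runtime puts it off for as long as RAM is plentiful.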

Explaining this in more detail would get quite technical, but you can read more about what triggers garbage collection here, and more about why this behaviour is not a problem here.

I hope this explains a few things.


I do have a rudimentary understanding of caching methods and the advantage of utilizing RAM versus not using it at all. But shouldn’t that be handled by the OS instead of in the application layer? I don’t want to start a lengthy discussion about a topic that mostly eludes me, though. What made me worry is my observation that allocated RAM does not seem to be reused. In fact, I built a small cube yesterday with very little test data in it, and after quite a few model refreshes the application’s RAM usage quickly rose from only a couple of hundred MB to over 2 GB. Closing and reopening the same cube several times without refreshing the model (similar to what a typical user would do) pushed RAM usage up even further. Then I kept it running for an hour and nothing changed.

I do understand that pre-allocating or retaining a block of RAM can be very good for performance (something Windows doesn’t do enough for my liking, but that’s another topic), but I don’t really grasp what good sitting on RAM does when it doesn’t seem to be reused at all.
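
For what it’s worth, one way to check whether the retained RAM really goes unused is to compare the live managed heap with the working set that Task Manager reports; the gap between the two is memory the runtime is holding for reuse rather than memory that is lost. Here is a rough C# sketch of that idea (the LoadFakeCube helper and the 500 MB figure are made up for illustration):

```csharp
using System;
using System.Diagnostics;

class HeapVsWorkingSet
{
    static void Report(string label)
    {
        // Memory held by live managed objects (forces a full collection first).
        long liveMb = GC.GetTotalMemory(forceFullCollection: true) / (1024 * 1024);
        // What Task Manager shows: every page the process holds in physical
        // RAM, including free space the runtime keeps around for reuse.
        long wsMb = Process.GetCurrentProcess().WorkingSet64 / (1024 * 1024);
        Console.WriteLine($"{label}: live heap {liveMb} MB, working set {wsMb} MB");
    }

    // Stands in for opening a cube: allocate the requested number of MB
    // and touch every page so it is really committed.
    static byte[][] LoadFakeCube(int megabytes)
    {
        var blocks = new byte[megabytes][];
        for (int i = 0; i < megabytes; i++)
        {
            blocks[i] = new byte[1024 * 1024];
            for (int j = 0; j < blocks[i].Length; j += 4096)
                blocks[i][j] = 1;
        }
        return blocks;
    }

    static void Main()
    {
        Report("start        ");

        var cube = LoadFakeCube(500);   // "open" a cube
        Report("cube open    ");

        cube = null;                    // "close" it again
        Report("cube closed  ");

        // If the runtime is recycling the retained memory, the working set
        // after reopening should stay near the first peak instead of doubling.
        cube = LoadFakeCube(500);
        Report("cube reopened");
        GC.KeepAlive(cube);
    }
}
```

The exact numbers will vary with the .NET version and the GC mode (workstation vs. server), but if the runtime is reusing the retained blocks, the working set after reopening should stay close to the first peak rather than doubling.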

Thank you for clarifying that this is Microsoft’s design choice and beyond our control here. The linked article says I shouldn’t worry, so I won’t. Let’s just wait and hope for the best. :)

