Out of Memory when Analyzing Snapshot data

Posted: Mon Aug 10, 2015 9:20 pm
by wortho

I have a large snapshot (762 MB) taken from a 64-bit server process using NmpCore. When I load the snapshot into MemProfiler and try to inspect some of the instances, MemProfiler just keeps consuming memory until the machine runs out of available memory (16 GB).
Is there any way to reduce the memory used by MemProfiler?



Re: Out of Memory when Analyzing Snapshot data

Posted: Tue Aug 11, 2015 4:30 pm
by Andreas Suurkuusk
Some of the algorithms used by .NET Memory Profiler can consume quite a lot of memory, especially when the snapshot includes many instances or when the instance graph is very complex.

The profiler performs many calculations in the background, and if multiple memory-intensive calculations are performed at the same time, this will of course increase the memory pressure. We are looking into ways of detecting high memory usage and complex instance graphs, and of avoiding multiple concurrent calculations if the memory pressure becomes too high. We may also cancel some calculations (with a message) if we detect that a calculation will take too long to finish or will use too much memory.

If it is possible for you to send us the session file that causes the high memory usage, we could take a look at it. Maybe there is some problem in the profiler that causes the high memory usage, or at least the file can help us optimize the memory usage in the profiler.

If you can send us the session file, please contact us for information about how to send it.

Re: Out of Memory when Analyzing Snapshot data

Posted: Fri Aug 14, 2015 7:18 am
by wortho
Thanks. I will send you the file.


Re: Out of Memory when Analyzing Snapshot data

Posted: Tue Aug 18, 2015 2:20 pm
by Andreas Suurkuusk
Thanks for the session files. We have been investigating the snapshots a bit now and found a few problems in the profiler.

The snapshots include a lot of instances and also contain very long linked lists. For instance, you have TreeHandler instances that are rooted through a linked list that is more than 3 million entries long.
[Image: long linked list]
A linked list is the worst type of structure for the profiler to handle. The root paths for a node (and its data) in a linked list can become very deep, and that causes a problem for many of the algorithms in the profiler, e.g. the instance graph, the shortest root paths under Type details, and the held bytes calculations.
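To see why a long linked list is a worst case, consider this small sketch (not MemProfiler's actual algorithm, just an illustration of the scaling): in a singly linked list, the only root path to a node runs through every node before it, so enumerating one root path per node visits a quadratically growing number of nodes as the list gets longer.

```python
# Hypothetical sketch: root-path cost in a linked list of n nodes.
# The node at position i (0-based) has a root path of depth i + 1,
# so walking one root path per node touches 1 + 2 + ... + n nodes.

def root_path_length(index: int) -> int:
    """Depth of the root path for the node at `index` in a linked list."""
    return index + 1

def total_path_work(n: int) -> int:
    """Total nodes visited when walking one root path per node."""
    return sum(root_path_length(i) for i in range(n))

print(total_path_work(1000))  # 500500, already ~n^2/2 for a short list
```

For the 3-million-entry list from this thread, the same formula gives roughly 4.5 trillion node visits, which is why deep lists hit both the time and the memory of path-based calculations so hard.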

However, while working on this issue, we have made significant improvements to the memory usage and performance of the instance graph (and of the root paths in the instance details view). These improvements will be included in the next maintenance release, and will hopefully make it possible for you to work with your snapshots.

To minimize the memory usage by the instance graph, I recommend that you do not include related instances (Held, Reachable, Referrer) or non-shortest paths.

We will continue to look into the problems related to very large snapshots, and we will include further improvements in a future version.

Re: Out of Memory when Analyzing Snapshot data

Posted: Tue Aug 18, 2015 8:45 pm
by wortho
Thanks for the reply. I will try the profiler again without the related instances.
It is a standard product (Microsoft Dynamics NAV) so I can't do anything about the internal data structures.
I am trying to analyze a memory leak in a customer solution which we suspect is due to a native memory leak in a third party component.

Regards David

Re: Out of Memory when Analyzing Snapshot data

Posted: Tue Aug 18, 2015 9:00 pm
by Andreas Suurkuusk
I understand that you cannot change the data structures in the program. It is our intention that the profiler should be able to present useful information in all situations, but as you have noticed, we do not always succeed.

Even if you disable the related instances presentation, you will probably not be able to work with these session files until we release the next maintenance version. The root paths presentation under Instance details makes an unnecessary copy of the merged nodes for each root path, which in your case can consume over 30 GB of memory. With the fixes, we should hopefully be able to reduce the memory consumption to a maximum of 8-10 GB. That is still a lot, but we will try to reduce it further in a future version.
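A rough back-of-envelope sketch shows how per-path copying reaches that scale. The 3-million-entry path depth comes from this thread; the per-node record size and the number of materialized paths below are assumptions chosen purely for illustration:

```python
# Back-of-envelope sketch with assumed numbers: copying the merged
# nodes separately for every root path multiplies the memory cost.
path_depth = 3_000_000   # entries in the linked list (from the thread)
bytes_per_node = 100     # assumed size of one copied node record
paths_copied = 100       # assumed number of root paths materialized

# One private copy per path:
total_bytes = path_depth * bytes_per_node * paths_copied
print(total_bytes / 1e9)   # 30.0 (GB) when each path gets its own copy

# Sharing a single copy of the merged nodes across paths instead:
shared_bytes = path_depth * bytes_per_node
print(shared_bytes / 1e9)  # 0.3 (GB)
```

The point is only the multiplication: once path depth is in the millions, any per-path copy of the path's nodes explodes, while sharing the node data across paths keeps the cost proportional to the depth alone.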

I will reply to this post as soon as the next maintenance version is released.