Back to my case study: after many hours of tweaking the app's memory management configuration and the profiler's settings, I finally managed a successful profiling session at the brink of the 2GB limit. I had to use Low-Impact Profiling together with Delayed Instance Cleanup and, more importantly, I had to prevent any call stack information from being collected by setting Call Stack Depth to 0 and choosing "Only Functions with Source" (as I had no sources). Any deviation from these profiler settings results in an OutOfMemoryException at the end of a 10-minute test run. With many such runs going bad at the last moment, I thought I would report this and ask about the planned improvements.
According to the native memory page, the committed memory shows 500MB of profiler data and the physical memory shows 700MB of profiler data (after subtracting the app's memory consumption from what is shown in the parentheses).
When run without the profiler, the app's memory usage peaks at 1.5GB of VM size -- so the overall profiler memory overhead seems to be about 430MB, since perfmon reports a VM size of about 1.93GB when memory consumption peaks during the test workflow.
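For clarity, here is the overhead arithmetic from the figures quoted above (a rough sketch; the numbers are the perfmon readings mentioned in this thread, expressed in MB):

```python
# Figures quoted in this thread, in MB (perfmon-reported VM size).
vm_with_profiler = 1930     # ~1.93GB peak VM size with the profiler attached
vm_without_profiler = 1500  # ~1.5GB peak VM size without the profiler

overhead = vm_with_profiler - vm_without_profiler
print(f"Estimated profiler overhead: {overhead} MB")
# Estimated profiler overhead: 430 MB
```

Note that this lumps together all profiler-side allocations inside the process; the 500MB committed / 700MB physical figures from the native memory page are a different breakdown of roughly the same cost.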
So I'm basically looking forward to some good news in terms of reduced profiler overhead in the upcoming releases, which I gather are just around the corner -- or am I asking for too much?
Using this new option you should hopefully be able to profile your application without an out-of-memory error. However, even though the memory overhead will be drastically reduced, there will still be some overhead. Profiling an application that uses close to 2GB of memory (as a 32-bit process, I assume?) can still be problematic: the application itself is almost running out of memory, and any additional overhead might cause an out-of-memory error. If possible, it would be good if you could investigate the memory problems before the memory usage starts to reach its limit.
A preview of the next version of the profiler should hopefully be available before the end of August.
SciTech Software AB
Just to note that our 32-bit app uses 1.3GB to 1.5GB max in the scenarios I tested. It is with the profiler overhead that it goes all the way up to 2GB.
I may be jumping the gun on this point, and if so, please ignore it: regarding the new profiling levels, to know exactly what will be captured at each level, should I be choosing the Custom option? I'm sure the accompanying documentation will cover them in detail, but for a developer in a hurry, it may not be clear on first use exactly what gets left out when going for minimal impact...
Low: No instance specific call stacks, limited instance tracking, limited disposable instances tracking, and real-time data collection.
Very low: No call stacks, instances not tracked between GCs, no disposable instances tracking, and no real-time data collection.
Limited instance tracking allows instances to be identified as new or old, but no call stack or age information will be presented for specific instances. Instance identifiers will be reassigned for each snapshot.
When you select the custom option you will be given the chance to select the instance tracking level (None, Limited, Full), and whether call stacks, the dispose tracker, and the heap utilization tracker should be enabled. Hopefully the options will be sufficiently self-explanatory in the user interface. Additional information will be included in the documentation.
In your case, it is the full instance tracking that causes the main memory overhead (the current version only has full instance tracking), and this will be drastically reduced if you select limited or no instance tracking.
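To summarize the levels described in this thread, here is a small sketch mapping each level to the settings it implies (the key and value names are my own, not the profiler's actual API or configuration format):

```python
# Hypothetical summary of the profiling levels described in this thread.
# Names are illustrative only, not the profiler's real API.
LEVELS = {
    "very_low": {
        "call_stacks": False,           # no call stacks at all
        "instance_tracking": "none",    # instances not tracked between GCs
        "dispose_tracking": False,
        "real_time_data": False,
    },
    "low": {
        "call_stacks": False,           # no instance-specific call stacks
        "instance_tracking": "limited", # new/old only; ids reassigned per snapshot
        "dispose_tracking": True,       # limited disposable instances tracking
        "real_time_data": True,
    },
}

def describe(level):
    s = LEVELS[level]
    return f"{level}: tracking={s['instance_tracking']}, call stacks={s['call_stacks']}"

print(describe("very_low"))
# very_low: tracking=none, call stacks=False
```

The point of the table is the trade-off the support reply describes: full instance tracking (the only mode in the current version) carries the main memory overhead, and dropping to limited or no tracking is what drastically reduces it.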
SciTech Software AB