We have been suffering from irregular performance on our production web site. Our ISP has told us that this is because our ASP.NET (v1.1) application has a memory/resource leak. Quite possibly, as we have never checked or analyzed the application for leaks during development.
To try to identify the problem, and to prove or disprove our ISP's claim, we have purchased .NET Memory Profiler. Our (simplistic!) approach has been as follows:
1) Open the web site's URL in the Memory Profiler.
2) Wait for the default/home page to load and take a snapshot.
3) Either refresh the home page and/or browse around the web site and then take another snapshot.
4) Open the 'Types' tab and set the filter to 'With new or removed instances' and set the view to show 'Dispose Info'.
5) Sort by the Live Instances -> Delta column and review which types have a positive delta.
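For what it's worth, the Delta column in step 5 is essentially a diff of per-type live-instance counts between the two snapshots. A rough Python sketch of the idea (hypothetical helper names, not the profiler's actual implementation):

```python
import gc
from collections import Counter

def snapshot():
    """Count live (gc-tracked) objects by type name, like the profiler's 'Types' view."""
    gc.collect()  # force a collection so only reachable objects are counted
    return Counter(type(o).__name__ for o in gc.get_objects())

def delta(before, after):
    """Return the types whose live-instance count changed between snapshots."""
    return {name: after[name] - before[name]
            for name in set(before) | set(after)
            if after[name] != before[name]}

class Leaky:
    """Stand-in for a type that is never released."""

before = snapshot()
leaked = [Leaky() for _ in range(5)]  # simulate 5 instances that stay alive
after = snapshot()

d = delta(before, after)
print(d["Leaky"])  # expect 5: the leaked instances survived collection
```

A persistently positive delta across repeated snapshot pairs, rather than a single positive value, is what actually suggests a leak.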
Actually we 'only' have 4 entries:
System.Web.Caching.CacheDependency Delta = 5
System.Web.DirMonCompletion Delta = 2
System.Web.DirectoryMonitor Delta = 2
System.Web.Bitmap Delta = 2
The question is whether this approach is valid for what we are trying to achieve, i.e. a well-managed web application, or whether there is a better approach, or perhaps something additional we should be testing?
Any help/guidance would be much appreciated.
The numbers you included are not very alarming, but if the number of instances keeps increasing you might have a problem. Some things you could look for are cached items that keep increasing, session and view states that do not get collected, etc. Since timeouts are involved with, for instance, session state, you might need to run the ASP.NET application for a longer period, preferably using several sessions.
SciTech Software AB
If you're looking for memory leaks, it is probably a good idea to stop hitting the site and wait for the session states to be flushed before collecting the snapshot. For example, you can use ACT to create a set of sessions, request several pages, and then pause to allow the session state to be flushed. Before running the ACT script you can collect a heap snapshot, and after running the script (and waiting for session states to be flushed) a new snapshot should be collected. To avoid waiting too long before collecting the second snapshot, you can lower the session state timeout value. Now you should be able to compare the snapshots and investigate the types with new instances. Hopefully there will be less data to investigate using this approach, making it easier to identify any potential memory leaks.
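For example, the session state timeout can be lowered in web.config; a sketch assuming the default in-process session state (the timeout value is in minutes, and ASP.NET 1.1 defaults to 20):

```xml
<configuration>
  <system.web>
    <!-- Lower the timeout so abandoned sessions are flushed quickly during testing -->
    <sessionState mode="InProc" timeout="2" />
  </system.web>
</configuration>
```

Remember to restore the original value before deploying, since a short timeout will log real users out aggressively.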
SciTech Software AB