In my past life I used to program in dBase and FoxPro. I am the sort of programmer that is kind of lazy: I like looking for samples of code and then working my code around them, or using the sample to get an idea of how to do stuff.
Typically I use MSDN and a number of blogs as my main sources. That was until I found the Windows PowerShell Cookbook.
Lee Holmes has created a book that is just what I want and need. It gives me a description of things I would want to do with PowerShell, followed by working sample code. It is a godsend.
Today's example: how can I get perfmon counters from another machine using PowerShell? That would be on page 270. [Hint: New-Object System.Diagnostics.PerformanceCounter]
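As a sketch of that hint, the .NET `System.Diagnostics.PerformanceCounter` constructor takes a category, counter name, instance name, and machine name, so remote collection looks something like this (the server name below is a placeholder, and the account running it needs rights to read performance data on the remote box):

```powershell
# Read a performance counter from a remote machine via the .NET
# PerformanceCounter class. "SERVER01" is a placeholder machine name;
# the Memory category has no instances, hence the empty instance string.
$counter = New-Object System.Diagnostics.PerformanceCounter(
    "Memory", "Available MBytes", "", "SERVER01")

# Sample the current value. For rate-based counters (e.g. % Processor Time)
# you would need two NextValue() calls separated by a short sleep.
$counter.NextValue()
```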
Every day is a school day, and this makes for some light reading on a Monday morning. Time to start up perfmon on my Exchange Mailbox Cluster.
… So, in simplistic terms, the page file is used by Windows to hold temporary data which is swapped in and out of physical memory in order to provide a larger virtual memory set.
… However, now consider a system-managed page file on a 64-bit server with 32GB of RAM. The page file size could range from 32GB to 96GB! This is why understanding the performance of your server is so important. Although there are general recommendations about page file sizing based on the amount of physical RAM in a system, those recommendations are not universally valid. If you think about it, the more memory you have, the less likely you are to need to page data out.
The page file needs of an individual system will vary based on the role of the server, its load, and so on. There are performance counters that you can use to monitor committed memory usage system-wide or on a per-page-file basis; however, there is no way to determine how much of a process's private committed memory is resident in RAM and how much has been paged out to the paging files.
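For a quick look at those system-wide counters, `Get-Counter` (available in PowerShell 2.0 and later) will pull them in one call. This is a sketch; the counter paths below assume an English-language OS:

```powershell
# One-shot sample of commit charge and page file usage counters.
# Counter paths are localized, so these assume an English-language OS.
Get-Counter -Counter @(
    '\Memory\Committed Bytes',
    '\Memory\Commit Limit',
    '\Memory\% Committed Bytes In Use',
    '\Paging File(_Total)\% Usage',
    '\Paging File(_Total)\% Usage Peak'
)
```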
… So with this information in mind, what's the best way to determine the correct page file size? The first thing is to gather a baseline. Set up a page file that is statically sized to 1.5x the amount of RAM. Then monitor the server using Performance Monitor over a period of time. Ensure that the peak usage times of the server are monitored, as this is when the server will be under the most load (for example, month-end / year-end processing, etc.). Using the information from the counters above, and also examining the Peak Commit Charge number in Windows Task Manager (shown below), will give you an idea of how much page file space would be needed if the system had to page out all private committed virtual memory.
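One way to capture that baseline, sketched below, is to let `Get-Counter` sample on a schedule and hand the results to `Export-Counter` (PowerShell 2.0+) for later review in Performance Monitor. The interval, sample count, and output path here are placeholders you would tune to your own monitoring window:

```powershell
# Baseline sketch: sample page file usage and commit charge once a minute
# for 24 hours (1440 samples), then save as a .blg log that can be opened
# in Performance Monitor. Adjust interval, duration, and path as needed.
$counters = '\Paging File(_Total)\% Usage', '\Memory\Committed Bytes'

Get-Counter -Counter $counters -SampleInterval 60 -MaxSamples 1440 |
    Export-Counter -Path 'C:\PerfLogs\pagefile-baseline.blg' -FileFormat BLG
```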