
5 Proactive Tuning

My Top Techniques to help keep you proactive.


1- Create partitions for your paging and temp files to increase performance and safety

Your system is constantly creating temporary files; most are named with a .tmp or .dmp extension. The best plan of action is to redirect all temporary and junk files to a “temp” directory on the “pagefile” partition (this is what I call this partition; however, you may choose any name that makes sense to you). You do this by selecting “Environment Variables” from the “System Properties” dialog box. Note: there are two areas you do this in, the user box and the system box. Once you have redirected all the obvious files to the temp area, reset the temporary folder location in your Internet web browser to the same place. You will also want to reduce the disk space assigned to temporary Internet files to 10-20 MB. I have never needed more than that since I started browsing in 1995. These files are self-purging, though it is a good idea to delete them once in a while. All of this is trash collecting, and having one area to put all this junk in is smart. While doing the cleanup, run the “Disk Cleanup” program from the Accessories > System Tools menu and select the drive you set as your “temp” file area.

The reason I call this partition the “pagefile” partition is because that is where pagefile.sys (the virtual memory file) lives. Since pagefile.sys is itself a type of temp file, I put the “temp” folder in the same partition. Set up the pagefile so you leave a 50-100 MB file on the boot partition, then create another custom pagefile on the pagefile partition; this one should be equal to your installed memory plus some headroom. You will now have a well-balanced system with less chance of tmp and junk files screwing up your boot or data partitions. This corrects the problems caused by having everything in one partition, and you will have fewer problems when you clean up and defragment your partitions. I have used this type of setup since the early '90s with great success. You will also be able to create and restore images much more easily, since you have a partition to put them in.
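As a rough sketch of the split described above (the exact sizes and the 512 MB headroom figure are my own illustrative assumptions, not hard rules):

```python
def pagefile_plan(ram_mb, headroom_mb=512):
    """Split the pagefile per the scheme above: a small file stays on
    the boot partition, and the main file (installed RAM plus some
    headroom) goes on the dedicated "pagefile" partition.  Sizes in MB."""
    boot_file_mb = 100                      # leave 50-100 MB on the boot partition
    main_file_mb = ram_mb + headroom_mb     # RAM plus headroom on the pagefile partition
    return {"boot": boot_file_mb, "pagefile_partition": main_file_mb}

print(pagefile_plan(8192))  # an 8 GB system
```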


2-Keep a copy of your image on a local drive partition and copy the entire image to an external drive or network location (if you are not familiar with imaging start here http://www.proactiveuser.com/?page_id=158)

The above two screens show my boot drive images (in gold highlight). I store the original image on the physical drive inside the computer and a copy on an external SATA drive; this gives me several recovery options. If I am on the road I can recover from the local image on the internal drive. If something is wrong with the internal drive I can recover from the external drive. Two copies are always better than one, especially if they are on different drives. Always image and make a copy.


3-Test the performance of your drives and distribute the workload across them all.

Learn to use and read the Performance Monitor. There is a wealth of information available within a few hours of experimenting. Above are two different performance graphs that give you an indication of what happens when you copy large files (3 GB) and small files (3 MB) between two computers (the 3 GB file is 1000 times larger than the 3 MB file). The first counter issue that may confuse you is: how can the top graph be over 100%? If you look closely at the “Instance” you will see that the physical disk under test has two partitions, I: and F:; this means that while the large file was being copied to F: there was some activity on I:. There is also a decay in the copying, almost equal to the initial transfer, caused by the fact that the physical disk and CPU subsystems have to UNLOAD the cache. If you look at the lower graph there is virtually no activity with small-file copying. The moral of these two graphs: when copying large files, be prepared to WAIT to use files on the same drive. Experiment and learn.
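If you want a number rather than a graph, a minimal sketch of the same experiment is to time a copy yourself and compute throughput (the file sizes here are examples; a real test should use your own large and small files):

```python
import os
import shutil
import tempfile
import time

def copy_throughput(size_bytes):
    """Write a throwaway file of the given size, copy it, and
    return the measured copy speed in MB/s."""
    src = tempfile.NamedTemporaryFile(delete=False)
    src.write(os.urandom(size_bytes))
    src.close()
    dst = src.name + ".copy"
    start = time.perf_counter()
    shutil.copyfile(src.name, dst)
    elapsed = time.perf_counter() - start
    os.remove(src.name)
    os.remove(dst)
    return size_bytes / (1024 * 1024) / elapsed

print(f"3 MB copy: {copy_throughput(3 * 1024 * 1024):.1f} MB/s")
```

Run it with a small and then a large size and you will see the same cache-unload effect described above: the big copy's sustained rate is far below the small copy's burst rate.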


4- Heat, the silent killer of systems.

One of the technical parameters that is not covered in the Performance Monitor is HEAT monitoring. There are several tools available; the current one I use is “SpeedFan” from http://www.almico.com/sfdownload.php. This is a very easy to use tool for temperature monitoring and also includes voltage, fan speeds and SMART disk info. In the images above I have run SpeedFan on my Intel i7 portable with 8 cores as I typed this page. I also have notebook coolers from ZALMAN http://www.zalman.com installed on all three of my notebooks, which I disconnected during part of the article and turned back on during the last part to test cooling. The ZALMAN units drop the average temperature by 8-10 degrees C (a difference of about 14-18 degrees F), an enormous amount. I always completely test any freeware or shareware that I recommend on this site.
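A quick note on the conversion above: a temperature *difference* converts with the 9/5 scale factor only, with no +32 offset (the offset only applies to absolute readings). A one-liner makes the point:

```python
def delta_c_to_f(delta_c):
    """Convert a temperature DIFFERENCE from Celsius to Fahrenheit.
    Unlike an absolute reading, a delta uses only the 9/5 factor."""
    return delta_c * 9 / 5

# The 8-10 C cooling drop from the notebook coolers:
print(delta_c_to_f(8), delta_c_to_f(10))  # 14.4 18.0
```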


5-Email and Faxing belong at a hosting service 

EMAIL (and FAX) are probably the number ONE headache in most small businesses. EMAIL problems can be solved with ONE word: OUTSOURCE. There are thousands of companies specializing in hosted Exchange, POP, IMAP and FAX, and yet I have people call me all the time asking how to set up an in-house email server. This is lunacy at best. I think to myself, are you CRAZY? The security issue is always brought up. I answer: do you realize how many legal and government firms use hosted email? GET WITH THE PROGRAM, PASS THE PROBLEM OUTSIDE. Hosted email and fax services for one YEAR cost less than a 30-minute service call to your favorite tech.


6-Measure your system's overall subsystem performance to help you determine when to make hardware changes by using the “Resource Monitor”. Note: The Performance Monitor in item 3 above has all the same features as the Resource Monitor; however, it requires you to set up the entire testing environment, while the Resource Monitor is already set up for you. Use the Resource Monitor for a quick overall view.

The above image tells me that spending money on a new CPU, faster disks, faster network components or memory would really be a waste based on my current needs. You can quickly figure this out by looking at the little “green and blue graph” icons in the middle of the overview image. How? Load all the applications you normally use and work as you would for an hour. Then look at the data; let me explain.

  • CPU    My CPU, an Intel i7, is barely using 10% of its capacity at a frequency of around 92% of its clock rating
  • DISK   My disk subsystem is coasting along at 172 KB/sec (0.168 MB/sec, or about 1/1000th of its average capacity); it peaked at 5%
  • NETWORK  My network is also in a near-unused state; it shows 10 Kb peaks on a network capable of a hundred times that
  • RULE  “If it is not using 50% plus of some subsystem (continually), leave it alone until your next sell-and-replace time”

All of the above shows a lack of system use, and that is while I am running many of my normal applications. So I will save my money, as most of you should too. Set a sell-off and replacement time that your budget is comfortable with, and use your system to its full potential. I rotate computers every 1.5 to 2 years.
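The RULE above boils down to a simple check. This is a sketch under my own naming (the subsystem names and readings are illustrative; feed it your own sustained utilization figures from the Resource Monitor):

```python
def should_upgrade(utilization, threshold=0.5):
    """Apply the 50% rule: flag a subsystem for replacement only when
    its sustained utilization exceeds the threshold.
    `utilization` maps subsystem name -> sustained load fraction (0.0-1.0)."""
    return [name for name, load in utilization.items() if load > threshold]

# Readings like the ones listed above:
readings = {"cpu": 0.10, "disk": 0.05, "network": 0.01}
print(should_upgrade(readings))  # [] -> nothing worth replacing yet
```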

IF you are an intense game player the above concepts may not apply.

7-Do you need RAID?

As a lifetime hardware tester I could not wait until I had the chance to build my first RAID. It was 1988 and I was selling Silicon Graphics workstations. Setting up a RAID in UNIX was a rather daunting task, since it was done via a command line by typing in strings of characters. As the years moved on, RAID was made simpler, so almost anyone who could read could do it. The problem is most people do not know what a RAID is or why they need one.

  • Do you need a RAID?    Most likely not. Today's systems are so fast that drive performance is not much of an issue. If you use good imaging and backup techniques as I described in lessons 3 and 4, your data is safe.
  • So when do you need a RAID?   Let me review several possibilities.  a) You have a system whose disk activity is always over 50% when running various applications (use the performance counter and isolate which they are).  b) You deal with disk-intensive applications such as Photoshop, video and audio editing, FEA or other disk-bound analysis programs.  c) You want better data and backup storage performance.
  • What does RAID do for you?   Be aware of something before you invest in a RAID: RAIDs DO NOT protect you from corrupt files, only from failed hardware. So if you use a RAID 1 or 5 for data protection, the best you will get is the condition of the file when it was transferred. IE: JUNK IN, JUNK OUT.  (Note: various high-end industrial RAIDs do overcome this problem by checking before they write; however, they are very expensive systems and not for small users.)
  • Never use software RAID. Software RAID has only one thing going for it: NO COST. Since it is part of the operating system, you may have problems moving your RAID from one system to another. Worse yet, if you have a failure or corruption, kiss your data off (trust me, this does happen). Software RAIDs are very unreliable; stay away!
  • Hardware RAID is my preferred choice.  My choices are type 0, 1 or 5. Let me explain the application for each. Type 0 RAID is called striping. Two drives are installed to act as one. You get twice the performance with no hardware failure protection. This is a good choice for users who run programs like “Photoshop” which use a lot of temporary file space (you redirect your temp files to the RAID). If your RAID fails, all you lose are junk files, which can be recreated.  Type 1 RAID is mirroring, which uses two drives that are exact copies of each other. The problem with this is that if something goes wrong in software, you have two drives with exact copies (of bad files).  Type 5 RAID is my personal favorite, since one drive can fail in the RAID (made up of a minimum of 3 drives) and you can still recover. This is good for storing all your data, and it gives you very good performance.  Keep in mind that five drives in a RAID 5, each with 10k hours of life, effectively have 2k hours as a set and should be replaced accordingly for data safety. (Note: RAID 6 is relatively new and allows two drives to fail.)
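The trade-offs between the levels above can be summarized in a small sketch (this assumes equal-size drives and standard single/double parity; the 1 TB default is just an example):

```python
def raid_summary(level, drives, drive_tb=1.0):
    """Return (usable capacity in TB, drive failures survivable) for
    the RAID levels discussed above, assuming equal-size drives."""
    if level == 0:                       # striping: speed, no protection
        return drives * drive_tb, 0
    if level == 1:                       # mirroring: exact copies
        return drive_tb, drives - 1
    if level == 5:                       # one drive's worth of parity
        if drives < 3:
            raise ValueError("RAID 5 needs at least 3 drives")
        return (drives - 1) * drive_tb, 1
    if level == 6:                       # two drives' worth of parity
        if drives < 4:
            raise ValueError("RAID 6 needs at least 4 drives")
        return (drives - 2) * drive_tb, 2
    raise ValueError("unsupported level")

print(raid_summary(5, 5))  # (4.0, 1): 4 TB usable, survives one failure
```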

RAID performance issues.  The top image is a RAID 5 using five 2.5″ SATA drives. The bottom image is a standalone USB2 SATA 3.5″ drive. The performance of the RAID 5 is more than five times that of the USB2 drive, and it provides you with hardware protection. Stay away from USB2-connected drives if possible, since they barely run at one third of network speed. This means slow transfer rates. Another consideration is that 2.5″ drives last longer, use less power (so heat will be lower) and perform better than 3.5″ drives. Most industrial-grade servers use 2.5″ drives for this reason (as well as space savings). They do cost more. Remember to burn your RAID in for a week, copying files back and forth with a simple script (drives normally fail from hardware issues quickly, so a burn-in is a very good idea before committing real data).

When you select a raid look at these features:

  • Aluminum case for cooling in rack mount or desktop design
  • 2.5″ drives  for small size, power savings and longer life
  • Multiple interfaces such as eSATA, FireWire 400 and 800 and USB2 (for convenience)
  • If you prefer multiple drives inside your system look at mounts by CremaxUSA (Icydock), and controllers by 3ware (setting this up will require some hardware expertise)


8-The “de-frag to death” symptom. Defragmenting programs can be a blessing and a curse.  Most of the new MS operating systems like Vista and Windows 7 have built-in defragmenters that reorganize your drive for better performance. The problem is (if you look at the settings below) it says you can improve system performance by defragmenting. Most users read this as “the more I do this, the faster my drives will be.” Wrong.  What it does not address is the fact that constant defragmentation burns out hard drives faster and creates a lot of extra heat. Moving files back and forth is just like writing and reading them.

  • Limit the amount of defragmentation you do (once a month is more than enough)
  • Clean all the temp and unwanted files off your drive before you defrag, using “cleanmgr” and manual deleting
  • Only defrag the boot (operating system) and data volumes

You can get to the defragmentation program by right-clicking on a drive, selecting Properties and then the Tools tab, as shown above.


9-The wireless network channel 6 problem.

Wireless networks have made our computer use far more flexible in the past 10 years. We have seen performance jump from barely usable to almost wire speed. This has come at a price: congestion on the home or small network. One of the most common problems users will see is multiple routers on the same channel (top image). This will play havoc with your network even when the signal strength is way down at -80. If you have a wireless network, make sure you have at least one wired connection to verify connectivity, then get a simple graphical signal strength program and install it on your system. Above is a screen capture from “inSSIDer” from http://www.metageek.net/products/inssider, a free analyzer. If your wireless connection starts going in and out or crashes a lot, run a chart first and see if other routers are on the same channel as your router. If they are, change your channel BEFORE you call someone for help. Almost all wireless routers are shipped set to channel 6, which can be a real problem if you live in an apartment or townhome. In the above screen you can see one of my routers is now on channel 3 and the other on channel 9, leaving channel 6 open for the crowd (if you look at the top image you will see I first moved to ch 5; however, I had better results going to 3 and 9). Trust me, if you share the channel with your neighbors you will have all types of strange occurrences. In one case I could not get my Macs to stay on the network, and in another my managed switches started rebooting.
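The channel-picking logic can be sketched in a few lines. This is my own illustration, based on the fact that 2.4 GHz channels within about 4 of each other overlap (which is why 1, 6 and 11 are the usual non-overlapping candidates); feed it the channels your analyzer shows for your neighbors:

```python
def least_congested(neighbor_channels, candidates=(1, 6, 11)):
    """Pick the 2.4 GHz channel with the least neighbor overlap.
    Channels within 4 of each other overlap, so score each candidate
    by how many neighboring routers land inside that window."""
    def score(ch):
        return sum(1 for n in neighbor_channels if abs(n - ch) <= 4)
    return min(candidates, key=score)

# Neighbors crowding the default channel 6:
print(least_congested([6, 6, 5, 6]))  # 11
```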


10-Do not “auto anything” on your computer

There is a myth among computer users that auto-anything is good and needs to be turned on so you do not miss something. This is intensely bad judgment. Why?

  • Many updates and patches that are distributed cause more problems than the original software had. This is not because of poor programming but simply because there are too many (almost infinite) variables that cannot be taken into consideration.
  • When patching or updating, WAIT for a few months, preferably until the next service pack; this will include all fixes in one install (usually).
  • Auto updating sends info to and from the software provider and may lead to security issues or other problems
  • Auto updating drags system performance down
  • The only exception I make to this rule is my antivirus provider, since their fixes protect you from new external internet-related threats
  • Use common sense when setting up your antivirus, such as not scanning every file that is accessed, only internet-related ones

This may seem sacrilegious to some of you who think “if it is there I should use it”; however, all the IT departments I have worked with realize the importance of testing before using, as opposed to being a test user. I have one system running the current operating system that businesses and individuals may be considering, and I run it in “auto everything mode” just to test for stability before making recommendations.


11-Go Green and save if you use a lot of gear like me.

As an IT consultant I tend to go through a lot of hardware. In 1997 I had two racks full of servers and peripheral items. By 2009 my count was down to one rack with six servers, six 1400 watt UPS units, as well as audio, video, routing and switching. I decided to see if I could use three high-performance portables (4-core and 8-core HP HDX units) to replace my dual power supply, watt-sucking AMD and Intel gear. I bought a tester from “Kill A Watt” (model P3), set up a spreadsheet, and measured every device over a period of at least 4 hours, logging the usage. I then took the plunge and bought new systems with ZALMAN notebook coolers and new monitors, plus one managed 1Gb switch, and kept three 1400 watt rack-mount UPS units as well as my Netgear routers and MOTU 8-channel audio. I set the new system up in parallel first and measured the power use. I might note I do not use power saving mode, since two of the new portable units are servers hosting seven VMware and Hyper-V virtual servers. My drive array was replaced with 4 standalone Seagate 1TB units, all with eSATA, FW400 and USB2 ports; I use eSATA where possible. The portables all have 8GB RAM and 1TB of internal storage. Well, after summarizing the two sets of data I estimated I could save $960 a year in electricity! That was twelve months ago, and the actual savings was $990 for the year (this summer was hotter and the winter colder). Not bad for a few weeks of work on and off. The switch to the new hardware cost nothing after selling the old gear. This is one reason I always tell clients: dump your hardware on eBay within 18 months, or keep it until it is junk.
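The arithmetic behind an estimate like this is simple. The watt figures and electricity rate below are my own illustrative assumptions (not the article's measured spreadsheet data), but they show how a roughly 1 kW reduction in always-on gear lands in the same ballpark:

```python
def annual_savings(old_watts, new_watts, rate_per_kwh=0.11, hours=24 * 365):
    """Annual electricity savings from replacing always-on gear.
    Watt figures and the $/kWh rate are example assumptions."""
    saved_kwh = (old_watts - new_watts) * hours / 1000
    return saved_kwh * rate_per_kwh

# e.g. cutting a 1200 W rack down to ~200 W of portables:
print(f"${annual_savings(1200, 200):.0f} per year")  # $964 per year
```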


12 – Test your backups!


Most people assume that if they have a backup or two they can feel safe. WRONG. Following a good data security plan like the one I have outlined in section 4 is great; however, you still need to TEST YOUR BACKUPS (and images).

  • Do a restore occasionally to a different location and randomly test the files for accuracy and lack of corruption
  • Recover your operating system image to a second removable boot drive if you can. In some cases you can test an image by using a “Boot Sequence Manager”. This allows you to load your image into the boot manager and select it to boot from, to make sure it will work when needed.
  • This may all seem rather mundane for users with good backup practices (like myself); however, after you try to read a backed-up file that is corrupt, the reality of doing this will shine through.
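The spot-check in the first bullet can be automated with checksums. A minimal sketch (the directory layout and file names are illustrative; point it at your originals and your restored copies):

```python
import hashlib
import os

def sha256(path):
    """Checksum a file in chunks so large backups don't exhaust RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(original_dir, restored_dir, names):
    """Return the files whose restored copy does not match the original."""
    return [n for n in names
            if sha256(os.path.join(original_dir, n))
            != sha256(os.path.join(restored_dir, n))]
```

An empty list means every sampled file restored intact; anything else names the corrupt copies before you need them in an emergency.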

13 – Reboot your system!


Most users leave their systems on or in hibernation without shutting down and restarting for days, and then they wonder why they are having slow or erratic performance. Even the servers used in industry need to be flushed once in a while. I reboot my primary workstation daily and my file storage server weekly. You can do this very easily in any Windows operating system by issuing the following command from a simple script, or schedule it in the “Task Scheduler”. This is my reboot.cmd (command script). Just copy the following into a plain text file and rename it reboot.cmd. You can then schedule the script or run it by clicking on it.


@echo off
rem Restart immediately (without /t the shutdown waits 30 seconds)
shutdown /r /t 0


copyright © 2010 alienconcepts inc