Video/game settings and you.
Greetings T2 players!
It is I, your Tribes Rabbi, here with some gems of the settings kind to increase fps or image quality. Or both. Ahem. As with most everything, fps is a balance of tradeoffs: nice image quality vs pure frames per second. If all you want is fps, head straight to the color depth control of your vid card, set it to 16bit, and read no further. If you want to see how good t2 (and most any other game) can look with little fps impact, read on.
To begin with, let us discuss overall video card settings as found in any vid card driver control panel. There are two main controls that we are interested in right now: one is called Anisotropic filtering, the other is Antialiasing. Aniso sharpens images, especially those in the distance, while anti does the opposite: it smudges images a bit to get rid of the jaggies, the jagged stair-step lines that come from aliasing of pixels. We need them both for best image quality, but they do cost some fps.
See examples of aniso and anti here:
http://www.extremetech.com/article2/0,2845,2136956,00.asp
Now the vid cards of today, and some slightly older ones, can handle maximum anisotropic filtering with very little if any performance impact. On such cards 16x anisotropic filtering is usually available and will make the game (most any game, not just t2) look great while barely affecting fps.
The real fps killer is antialiasing, more specifically the grievous memory reads and writes and the gpu calculations done on pixels when anti is cranked way up. Anything more than 2x antialiasing will drop fps a good deal, and by good deal I mean a noticeable amount. At the resolutions most lcds run at (and we should all be running at the native resolution of our lcds), the resolution itself does a lot towards getting rid of jaggies. But high res is also an fps killer all by itself, as the vid card must draw that many more pixels and then perform aniso and anti on all of them. Since a high res (greater than 1024x768, for example) naturally reduces jaggies on its own, all we need is 2x antialiasing to clean up the most horrid of the rest.
Keep in mind that one must force these options via a profile or other method in the driver control panel.
Comments
When playing t2, ati cards should have temporal and/or adaptive antialiasing and Catalyst A.I. disabled, as these seem to cause trouble in t2 when enabled. Your mileage may vary, but I have had a more stable t2 game without these options. I suggest simply running 2x anti and 16x aniso on ati cards, though you may be able to enable the threaded optimisation.
As to nvid cards, I run 2x anti and 16x aniso and set the Antialiasing-Transparency control to Supersampling, which should be much nicer than Multisampling. Oddly, I have also noticed lag when multisampling was enabled rather than supersampling, which is strange since supersampling is much more gpu and memory intensive than multisampling. What these settings control is how textures with transparent parts look, such as leaves on trees, grasses, and so on. From what I gather, the "alpha test" the game performs to decide what is transparent and what is not on partially transparent textures is laggy when set to multisample rather than the higher quality supersample, so in this case it's a win, but not a Charlie Sheen win.
SLI
If you have an sli system, meaning two or more vid cards installed (and I am talking nvid here as I have no experience with ati cards running crossfire), you can play t2 with sli enabled. You must enable sli in your control panel, and then go down and set the sli mode: either split frame rendering or alternate frame rendering. Split frame seems to offer more immediate response to mouse and kb input than alternate frame, which is odd since split frame causes a lot more communication between the vid cards, meaning more added latency, than alternate frame mode. Pick one mode and see how it plays for you.
Disable vertex lighting; this is a ue maker. Set pretty much everything full right. The reason full right is good is because it forces the game to load all textures into memory up front rather than nickel-and-diming these textures on call, which causes video card memory traffic that is best avoided, as that traffic is also called lag. Set to 32bit color. Disable Interior Textured Fog; this may be an eater of fps with no real benefit. Disable the precipitation if you find it annoying, and yes it does add to the cost in fps. Set the decals (bullet holes, footprints, etc) to your liking. Enable the texture compression setting and set it to fastest - if you notice people walking through you or you walking through people, disable texture compression. Also, the terrain texture cache size line in clientprefs.cs may be set to 4096 rather than the 240 or so that it normally is; see the example below.
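For reference, that terrain cache line looks something like the following. The pref name here is my best guess from memory, so treat it as illustrative and search your own clientprefs.cs for the matching entry:
$pref::Terrain::textureCacheSize = 4096; // stock value is around 240; the pref name may differ in your file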
Some settings to try in clientprefs.cs:
(back up your original cs first)
// what these appear to do, going by the pref names:
$pref::OpenGL::maxHardwareLights = "8"; // allow up to 8 hardware lights
$pref::sceneLighting::cacheLighting = 4;
$pref::sceneLighting::cacheSize = 98304; // a large scene-lighting cache
$pref::sceneLighting::purgeMethod = "lastModified"; // purge the stalest cache entries first
$pref::sceneLighting::terrainGenerateLevel = 4; // terrain lighting generation detail
http://nvidia.custhelp.com/cgi-bin/nvidia.cfg/php/enduser/std_adp.php?p_faqid=2624
"Question
In the launch drivers for GeForce GTX 400 series GPUs, there was a bug in the Transparency Antialiasing implementation that enabled full-screen supersampling. Is there any way to still get full-screen supersampling in Release 256?
Answer
Yes. Release 256 drivers do fix the implementation of Transparency Antialiasing (TRAA) and now offer up to 25% performance improvements with TRAA enabled. However, since some of our enthusiast gamers really liked the full-screen supersampling, we have created a tool for users that allows them to enable 2x, 4x, and 8x full-screen supersampling."
If you have an nvid card that is an 8800 or newer core, you can use this tool from nvid to enable full screen SSAA (Super Sample Anti Aliasing) in all games. This makes t2 look better than I have ever seen it. With 2x antialiasing and 16x anisotropic filtering enabled in the driver control panel, just fire up the SSAA tool, set it to 2x SSAA, and enjoy. You must also have an nvid driver of the 25xxx series or newer. Keep in mind that this full screen SSAA places a relatively huge load on your vid card, but if your card has the power to pull it off with playable fps you will be rewarded with outstanding image quality. It really makes t2 look good. Sorry ati cards, this is nvid only. But you had nice image quality all along, so don't feel bad!
Nicely done.
First off, use a driver that was written when the card was still new - advice that applies to just about any vid card. For these older ati cards this means the cat 5.11 or thereabouts. This driver will work fine in t2, has an option to enable threaded optimisation, and should play t2 without stuttering. If the cat 5.11 series doesn't work for your os, there's not much I can do for you. As to the later ati cards, use a new(er) driver; they should play fine if you follow the above hints on ati cards.
Also, here's a tip from Capt. Kinzi: cook some derms for me!
Copy the above code and paste it into a .txt file, save it, rename it to fix.cs, and pop it into the gamedata/base/scripts/autoexec folder. Actually, you can name it anything not already taken, as long as it ends in .cs. If you had microstutter in game before due to running t2 on a multicore system, this "fix" should eliminate or at least reduce the stuttering. With my system being as highly strung as it is, I find the game is smoother with it set to 1 rather than 0; most will find 0 (zero, not O) works best. In the pc world, 1 means true, on, or enabled, and 0 means false, off, or disabled, at least as far as booleans go. Try it both ways and see what this boolean can do for you!
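If you are wondering what such an autoexec script even looks like, here is a minimal sketch of the general shape. The pref name below is made up purely for illustration - use the actual snippet quoted above, not this one:
// fix.cs - dropped into gamedata/base/scripts/autoexec, it runs at game start
// hypothetical pref name for illustration only
$pref::MulticoreStutterFix = 0; // boolean: 1 = on, 0 = off; try both and compare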
If you are displeased by screen tearing in game, there is a method to reduce that tearing, and also to reduce heat, noise, and power consumption in your game system.
An example of screen tearing:
http://en.wikipedia.org/wiki/Screen_tearing
What can be done to eliminate screen tearing is to synch each rendered frame of the game with the refresh rate of the monitor, as described in the above wiki link.
There actually is a control in t2 to enable this, but most operating systems or vid card drivers override the in-game control, so we must force vsynch from a driver control panel, or even employ a 3rd party app.
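For the curious, the in-game control lives in clientprefs.cs as a line along these lines; the exact pref name is an assumption on my part and may differ in your build:
$pref::Video::disableVerticalSync = 0; // 0 = vsynch requested by the game; often overridden by the driver anyway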
Here is an example of a 3rd party refresh rate locker:
http://www.pagehosting.co.uk/rl/
To use it one must rtfm. In short, you set the lock to the same frequency as your monitor's fastest suitable refresh rate. If you have a driver control panel or operating system control that can do this you are better off, as it will (or should) then be automatic in operation rather than you having to start the app each time you wish to play. So now we have cleaned up the tearing, but we have mucked the performance a bit, haven't we? I mean, going from 400fps with tearing down to the refresh rate of our monitor (60 to 120Hz in most cases) without tearing, we feel disarmed and helpless against those who are running at a higher fps.
One way to empower ourselves against "high fps barstewards" is to go into the driver control panel and enable triple buffering. Here is why triple buffering is a joy:
http://www.anandtech.com/show/2794
If you enable vsynch, you should enable triple buffering. You will get a much smoother and more responsive game, and the added buffer ensures the card can instantly drop an old frame whose gamestate has gone stale and present a fresh one. Well, really your vid card does it for you. Without it, vsynch plus plain double buffering means a frame that misses the refresh window (16.7ms at 60Hz) has to wait for the next one, which can cut your effective fps in half; the third buffer lets the card keep drawing instead of idling.
The reduced heat, noise, and power use that I mentioned earlier stems from locking the frame rate to vsynch. Your vid card is running nowhere near full throttle; it is loafing along at your refresh rate. This makes the card run much cooler than if it were pushing 400fps for 20 minutes. Less work in the vid card means less heat, and also less power usage. Your vid card should last much longer. If you have voltage regulator whine in games from your vid card running at wide open throttle, this whine should be reduced or even eliminated if you run vsynch.
Another benefit I find from the combination of vsynch and triple buffering is consistency of play. When you are always at (insert your refresh rate here) your game is much more predictable than with the fps all over the place as it is with vsynch disabled.
What the above code does is "Increases FPS by skipping predetermined frameskip settings".
Like most scripts, this script is support.vl2 aware, so you should have support.vl2 if you do not already. After placing the script (first copy/save it as fs.cs or some other desired name) into your autoexec folder, fire up the game, find the script's option, and set it to 2 for some testing. Play a bit and see if you like it. It is pretty much a full time interpolate script in effect. I find that a setting of 3 is way too much, 2 is just fine and gives a decent apparent boost, and 1 does little to nothing. If you run into ues with this script, and I have, either change settings or disable/delete it and go on your merry way.
With vsynch on, this script will show fps higher than your refresh rate. This is normal and vsynch is still working.
Texture filtering is a control found in most vid card driver control panels. This control is basically a cheat control - cheat in the sense that it modifies how accurately the vid card renders a scene: either the scene is rendered precisely as presented and according to all settings such as aniso and antialias filtering, or the vid card is allowed to skip some filtering routines for the sake of speed or a smaller memory footprint.
The settings typically range from High Performance to High Quality. High Quality pretty much makes the vid card render every game frame precisely as the game calls for it to be drawn, whereas High Performance allows the vid card to skip a lot of detail that would make the game look better but cost a bit in fps. All you have to remember is High Performance means fast and ugly, High Quality means slower but prettier. Usually much prettier.
When one installs a vid card driver, the driver usually sets everything to a default of Quality texture filtering, and while this is not the worst, it is not the best. This filtering level helps vid card makers get higher scores in benchmarks, but it costs image quality in games. And we all know that T2, while looking great in 2001, is quite ugly compared to most pc games of today. This means we should give T2 all the help it can get.
Here's a writeup on the issue with examples:
http://www.firingsquad.com/hardware/ati_nvidia_image_quality_showdown_august06/
Now, if you never noticed the effect of texture shimmer before, I am sorry to have made you aware; same goes for the need for anisotropic filtering and antialiasing as described above. But I find shimmer distracting, as it actually moves as you do and so catches the eye. The best way to minimize texture shimmer is to run the control at High Quality. There will still be some shimmer in odd places at High Quality, but much less than at settings such as High Performance. Try the various texture filter settings to see what is given up for fps performance.
Example output:
Log Date: 07/04/2011 Log Time: 16:59:57
Server Name: Goon Haven
Server IP: 67.222.138.111:28000
Server Map Name: SoylentGreen
Server Map Type: classic - Capture the Flag
Lowest Ping: 48/0% Lowest FPS: 94
Highest Ping: 167/6% Highest FPS: 102
Average Ping: 77/0% Average FPS: 97
Total Time: 01:26:31 Map Time: 00:18:03
If you have the ability to control certain network settings such as interrupt request moderation, it may be beneficial to game play to do so. Each network event triggers an interrupt to the cpu. The cpu must then stop doing whatever it is doing and see to that net irq. Then it can continue to do whatever it was doing before. Or in the case of smp systems, one core can handle the net irq while the other goes about its business.
The int moderation control moderates the total interrupts seen by the cpu. If the IM control is set to adaptive, the network driver and os tcp/ip stack track the interrupts, and when they reach a level distracting to the cpu, the network driver slows the interrupts and bundles them into a single package with one interrupt. When this happens in a game it causes lag. This lag is due to the fact that the network driver has retained net packets for a set time and then released them in a single, delayed interrupt to the cpu for processing. By the time your game gets these delayed packets they may be far too old to mean anything, as the action on screen has already changed. This interrupt moderation is a boon to overworked webservers and other servers, but not to a game client or a game server.
Immediate attention to each and every packet is what game clients and game servers need above all else, at least for an online fps such as Tribes. In role playing or turn based strategy games this interrupt moderation may not be noticeable; in an fps it will be.
Many Intel based network cards, both add-in and onboard, can moderate net interrupts and allow the user to change this setting. In my case I right-click the networking icon on the desktop, scroll down to properties, right-click the net adaptor, properties, and click the configure button. Click advanced, performance options, properties, interrupt moderation rate, and set it to your liking. Off should give immediate attention to each packet as it arrives. Adaptive should do close to the same unless there is a lot of activity coming from your network adaptor, as adaptive only impedes and bundles packets when they get too numerous for the cpu as determined by the adaptive algorithm. Hit ok, then close everything; your system may want to restart before the settings take effect.
Some nvidia motherboards can also bundle packets with their onboard net cards. In the case of nvid netcards you have two choices: one is cpu, the other is network throughput. In this case set it to network throughput for the lowest packet latency.
Test your game with the various moderation settings and see if it helps or hurts your gaming.
Timeslices, or quanta for windows weenies, are the time in milliseconds that a cpu is tasked to run a thread. A thread is the code that actually gets executed by the cpu. There are normally two choices in timeslices or quanta available to a user: adjust for best performance of programs, or adjust for best performance of background services. Now, it may seem logical to spend most cpu time on programs running in the foreground rather than on what is being done in the background, which means network access, disk access, etc - stuff hidden from the user. But like everything, there is a tradeoff. When set to programs, timeslices are devoted to the foreground app as much as possible and only interrupted by important events such as network activity. What this control varies is the time in milliseconds an app may hold the attention of the cpu.
But what happens if we set it to background services rather than foreground programs? An interesting thing. All activities get the same equal timeslice, but that timeslice is made much longer in duration than when set for programs on the desktop. Some gamers may find it a benefit to increase the duration of their app timeslices, as that greatly increases the total time the cpu can spend on the game thread. See, with this control set to programs, programs get priority, but a smaller timeslice is given overall. When set to background, all threads get the same increased duration timeslice. Give this control a try and see if it plays better for you.
I find this control in xp and server2003 (the os's I use) by right-clicking the my computer icon, properties, advanced, performance, advanced, then clicking background services. You should not have to reboot for this setting to take effect. In other os's I have no clue, but if it is an M$ os you should be able to find the control and give it a shot. You can do the same sort of adjustment in a Linux os by renicing your app.
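If you prefer the registry, the same switch lives, as I recall, in the Win32PrioritySeparation value; double-check on your own system before editing, but the Programs radio button should write 2 while Background services writes 0x18. You can inspect the current value with:
reg query "HKLM\SYSTEM\CurrentControlSet\Control\PriorityControl" /v Win32PrioritySeparation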
This is about a control that governs how the os (most any M$ os) caches data in memory. If you have plenty of memory, you can enable this control and see a benefit in some apps. To get to this control, once again in xp and server2003 (the os's I use), right-click the my computer icon, properties, advanced, performance, advanced, memory usage, and set it to system cache. This setting allows the os to cache a lot of data that it would otherwise page to disk to clear memory for use by anything else.
The reason why we want things in memory is because of this:
Memory access is almost instantaneous, like sending an email. Disk access, where all data goes when it is paged to disk to clear memory, is like sending the same message written on paper, placed into an envelope, and dropped in the mail: the postman picks it up, sends it into the mail system, and it takes two months on a freighter to Nibi Nibi Island where your great uncle lives. That is how a cpu sees the difference between getting data from memory and getting data from disk. So as you can see, we want as much of the data we are going to need in memory rather than on disk. If you don't have more than say 512mb of system ram, do not change this control. It can make a difference in gaming and overall system performance.
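Under the hood, that gui control flips (if memory serves, so verify on your own box) the LargeSystemCache value, where 1 means system cache and 0 means programs:
reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management" /v LargeSystemCache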
You say you don't have a policy about disks? Well, you do now.
In talking about disk policy, we are once again talking about caching in memory, and as we learned above, data from disk is infinitely slower than the same data found in ram. So we cheat with our disks. We cheat by telling the os that yes, the data the os requested to be written to disk was indeed written, while what we didn't tell the os is that we are holding that data in the disk's cache memory until a proper time comes along to actually write it onto the hard drive. This speeds things up greatly, as most disks are far slower at writes than reads. Well, how do we get this miracle cure?
Once again we right click on my computer, properties, hardware, device manager, disk drives, policies. See? Told ya you had a disk policy.
Now if you see a button marked "enable write caching on disk" check it. If you see another box marked "enable advanced performance" check it too. These allow the disk to lie to the os like I mentioned above, speeding things along.
WARNING! PELIGRO! DANGER WILL ROBINSON! ACHTUNG!
If you enable these caching mechanisms and you have a sys crash or power failure that drops the system, pretty much everything that was in cache (any and all data that was still in some form of memory when the failure occurred) is lost unless it was copied to disk before the crash. Poof. Gone. Adios.
These settings are dangerous if you crash your system or have a power failure. You can lose your os install. Your boot loader. All sorts of bad things(tm) can happen. But you do get a nice boost in performance while it lasts.
Antialiasing - FXAA
FXAA is a fast shader-based post-processing technique that can be applied to any program, including those which do not support other forms of hardware-based antialiasing. FXAA can be used in conjunction with other antialiasing settings to improve overall image quality. Note that enabling this setting globally may affect all programs rendered on the GPU, including video players and the Windows desktop.
Turn FXAA on to improve image quality with a lesser performance impact than other antialiasing settings.
Turn FXAA off if you notice artifacts or dithering around the edges of objects, particularly around text.
http://forums.nvidia.com/index.php?showtopic=201821&view=findpost&p=1248230
To enable the fxaa option in the control panel:
Using the Registry Editor, navigate to HKEY_LOCAL_MACHINE\SOFTWARE\NVIDIA Corporation\Global\NVTweak, create a DWORD value called EnableSRS1442 and set it to 1.
Then FXAA will be available in the Nvidia Control Panel.
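If you'd rather not click through regedit, saving the following as a .reg file and double-clicking it does the same thing:
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\NVIDIA Corporation\Global\NVTweak]
"EnableSRS1442"=dword:00000001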
You must have an nvid card and a 275xx series driver to experiment with fxaa. I have tried it in Tribes and Tribes2 and it did not like either game, forcing all text and menus to the side of the screen among other odd behaviours. If you get this to work with your system, let me know what os you got it working under please.
http://downloads.guru3d.com/RefreshLock-download-354.html
OR
http://downloads.guru3d.com/ATI-Tray-Tools-download-733.html
Use Rivatuner for both nvid and ati cards:
http://downloads.guru3d.com/RivaTuner-v2.24c-download-163.html
As always, read the directions.
Refreshlock forces vsynch in every system I have tried so that may be the one you will have to use.
Let me know if you get sorted.
from 'clientprefs.cs':
"$pref::Video::resolution = "1152 864 16";"
Are you running in 16bit for a reason? If so, try the game and vsynch in 32bit anyway. At your resolution the 32bit performance will be closer to that of 16bit than if you were running at, say, 1900x1600. The 9700 I had would do around 300+fps in 32bit colour in t2 in open areas - not a bad card at all, and your card is very similar to the 9700.
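In clientprefs.cs terms, switching to 32bit just means changing the last number on the resolution line quoted above:
$pref::Video::resolution = "1152 864 32"; // same resolution, 32bit colour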
I will say that ati cards almost make t2 look as good in 16bit as most other cards do in 32bit, so I can understand why one would be willing to use 16bit. Also, 16bit is usually around twice as fast as 32bit on almost any card, but on most non-ati cards it's so ugly it makes the game not worth playing. Matrox cards also seem to make 16bit look nice, but they usually have low fps even in 16bit.
Services, also known as daemons in the *nix world, are background servers that perform tasks on your system. Some of them are needed to be able to boot, some are superfluous and unneeded by most. Some are always running, taking cpu time and memory, yet you never use them. So, what if we could stop these services from doing that? The fewer services the system needs to periodically run, the more cpu time can be devoted to your game. That is the goal, the raison d'etre of system tuning, the kaizen of performance tweaking.
Well, you can tweak that system and here is how:
http://www.microsoft.com/download/en/details.aspx?displaylang=en&=tm&id=2155
Please note that the above is straight from M$, how thoughtful of them. There are other tweaking guides out there to peruse, as the M$ guide is likely to be conservative in a bad way.
http://support.gateway.com/s/software/MICROSOF/vista/7515418/7515418su584.shtml
The above is a guide written by my favorite oem brand.
http://www.tweakhound.com/vista/tweakguide/page_8.htm
This one is more in-depth.
Now, there are services you can disable that will keep your system from booting, so don't disable those. I am not at fault if you fooch your system. But if you do fooch it, there is recourse in the f8 "last known good" boot option.
You might be saying to yourself, what is this netstack he speaks of? I know no netstack. Besides, what does this stack thing have to do with our beloved game? I am here to reveal your netstack to your wondering eyes and to expose and expound upon its features, how you can control and adjust those features, and why you should. That said, the benefit is hard to notice, but in a game where milliseconds (or fractions of milliseconds) can make a huge difference, you want the absolute lowest latency you can wring out of your game system. The hard and fast rule is the fewer protocols in the stack, the smaller and faster that stack. Simple, right?
Your netstack is a block of important code that resides in memory, having been loaded there by the os (and partially by the NIC driver) at boot. This stack is the "TCP chimney" that all incoming and outgoing packets must filter through on their way to and from your system. Packets are identified first in the netstack, where they are welcomed, dusted off, given a snack, and introduced to your cpu, memory, or app. Really, the stack checks what kind of packets they are, checks them for errors to a degree, and strips the data from the addressing and overhead, then the data is sent to whatever app it is addressed to. This stack comprises all the networking protocols that have been installed and are available to the NIC according to the os, along with checksumming and other housekeeping duties such as firewalling and so on. They are "stacked" one upon the other like building blocks of a sort, hence the name. Durr.
These networking protocols are languages, for lack of a better term: languages such as TCPIPv4, TCPIPv6, NetBIOS, and service protocols such as simple file sharing and so on and so forth. Most of us can get along with a greatly reduced protocol list. I get along just fine with TCPIPv4 alone; I don't need TCPIPv6 or NetBIOS or any file sharing protocols, so they get removed from the netstack at install. If you do need these other protos, make sure you do not delete or deactivate them, or your networking will be interrupted. And I am not to be blamed, as you were duly warned.
In XP and Server2003 you can load and unload the various protocols at install, which I suggest. Oh, they remain available for the most part, as they sit in some cab file in the windows folder somewhere, but removing them from the stack ensures they never take up memory or see cpu time. In Vista and 7 you can remove them at install if the os disk has the networking drivers built in; if not, you can remove them when you install the driver after the os install. Under a Vista install you find these here:
http://windows.microsoft.com/en-US/windows-vista/Add-or-remove-a-network-protocol-service-or-client
Like I said, the lag reduction isn't night-and-day, earthshakingly obvious, but it is the right thing to do, along with reducing unneeded services and all the other tuning paradigms described previously.
Leases are messy. If you are old enough to know or have had a lease, you know they can be messy. When your system boots up, one of the things it does, if you use DHCP and DNS to obtain an ip address from your network, is request an ip address and DNS server info from the DHCP server online, and it does this by "leasing" the ip. Keep in mind the part about leases being messy, for your ip address is leased from whatever device grants it to you, and it has an expiration date of hours, days, weeks, etc. This is not bad, as it saves work and simplifies administration. Imagine having to set an individual ip by hand on a few hundred systems. And what about DNS services? Gotta have those to communicate on teh innernets unless you know exactly what ip address you want to connect to. DHCP/DNS info is obtained upon request, and when the lease is up that ip may be issued to another system. As you might have noticed, though, the same system on a lan tends to get the same ip addy all the time from the DHCP server, as the DHCP server recognizes the unique mac address of each requesting system. Well, DHCP takes care of all that xxx.xxx.xxx.xxx and DNS stuff for us, but at a small price. Now, if all you have is one system connected directly to a cable, dsl, or fibre modem, then unless your isp issues you a static ip you will have to continue using DHCP and DNS. DHCP is also needed for wireless devices, where it makes life much easier.
Why would one want to stop using a perfectly good DHCP and DNS? Well, if your power company is as good as mine and you get several brownouts or even power drops per month, this often glitches networking devices such as routers and switches, modems and all else, not to mention your poor system. The more devices you have in your lan (firewall, router, modem, switch, print server, etc) the more glitches you'll encounter. You can stop the effects of a lot of these issues with a UPS, an Uninterruptible Power Supply that feeds ac to each networked device. The UPS kicks in when it senses low or no voltage from your power company; at least that is how they are advertised. And this UPS thingy is the real cure for the glitch issue, other than moving to a place that has stable power.
Another aspect of DHCP and DNS is the length of time it takes your system and the DHCP server to negotiate an IP and DNS servers. You can eliminate this delay by setting your ip manually on your system. I usually just look at the interface properties at the time, write down the ip address and the dns servers, and then apply them as static figures in the NIC properties sheet. That way you have an ip much sooner, usually, and another benefit is you can then disable the DHCP client service on your system, as well as the DNS client. Once again, these services are in your control panel with all the other services. Going to a static ip and dns setup won't reduce lag other than shortening the time after bootup before you can use the net, but it certainly is geeky.
Here is how to go about it:
http://blog.mclaughlinsoftware.com/2009/11/26/windows-7-static-ip/
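On XP/2003 the same thing can be done from a command prompt with netsh; the connection name and addresses below are placeholders, so substitute your own:
netsh interface ip set address "Local Area Connection" static 192.168.1.50 255.255.255.0 192.168.1.1 1
netsh interface ip set dns "Local Area Connection" static 192.168.1.1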
All these tuning paradigms amount to efforts at minutiae, the infinitesimal, but we all know how much difference a millisecond can make.