I wrote 'Nuke Redmond'

From: allisonp <allisonp_at_world.std.com>
Date: Sat May 6 09:28:15 2000

>I don't believe you have had a four-year uptime with NT unless your NT box
>is not connected to a network. At least admit you reboot the machine from
>time to time. The best uptime I have had on an NT machine is NT4 Terminal
>Server, which stayed up for six months without a reboot. I feel NT is a
>qualified server OS, but my particular circumstance is an NT machine with


Well, I have to live with three NT 3.51 servers, and reboots do occur, but
they are from power failures that outlast the UPS or from the time the fan
on the CPU got noisy and needed replacing. Within its limits it's OK.

>little network activity. On other, larger NT server networks I work on, I
>have to reboot the server as often as once per week to maintain decent
>availability. I think it has to do with hash tables NT creates to deal


One thing you have to watch for is "memory leaks" from things like ODBC
drivers and the like that don't work right. We did have one driver that
would take the server to its knees about every three days if we let it.
The fix for that (a kluge, in my opinion) was to stop that process every
night and restart it.
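Just to sketch that kluge, here's a minimal Python stand-in for the
nightly job. The service name is made up; substitute whatever service
hosts the bad driver, and schedule the script overnight (on NT, the AT
command handles the scheduling).

    import subprocess
    import sys

    # Hypothetical name for the service hosting the leaky driver.
    SERVICE = "LeakyODBCGateway"

    def bounce(service):
        # "net stop" / "net start" are the standard NT service controls.
        # Bouncing the process nightly releases whatever memory the
        # driver leaked during the day.
        for verb in ("stop", "start"):
            rc = subprocess.call(["net", verb, service])
            if rc != 0:
                return rc
        return 0

    if __name__ == "__main__":
        sys.exit(bounce(SERVICE))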

>with high-demand network activity. My guess is NT does not deallocate the
>RAM allocated to these processes, which eventually degrades network
>performance to the degree that the admin must 'deallocate' this RAM by
>manually rebooting the server at intervals defined by the level of use the
>server receives.


A driver with a memory leak is the problem, not NT itself.
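To make "leak" concrete, here's a toy sketch in Python (obviously not NT
driver code) of the pattern: memory gets grabbed per request, the
reference is kept forever, and the process footprint only grows until
somebody restarts it.

    _buffers = []  # module-level list, lives as long as the process does

    def handle_request(payload):
        scratch = bytearray(64 * 1024)   # per-request working memory
        scratch[:len(payload)] = payload
        _buffers.append(scratch)         # the bug: reference kept forever
        # ... real work would happen here ...
        # A correct driver drops the reference once the request is done,
        # letting the allocator reclaim the 64 KB.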

>I could imagine a 9x system being stable despite its poor foundation if
>that foundation were made of diamond. This would not be an esthetically
>pleasing operating system, but it could be made stable if its core were
>very strong. My main objection to 9x is that application installs replace
>core components with non-tested ones (at least in that given
>configuration). In other words, each Windows machine is unique in its
>core configuration. This is a dangerous design approach and invites
>nearly infinite opportunities for incompatibilities and general
>instability.


Yep, lots of poor apps tend to really muck up systems as they load old
.DLLs and do other bad things. There are a million SPs for fixing core
stuff that gets trashed when you install something that ships copies of
old DLLs.

The above behaviour is not limited to MS OSes, but it is more common there
due to their widespread use.
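The check a polite installer ought to make before stomping on a core DLL
is simple; here's a sketch. Note that get_file_version() is a stand-in: a
real installer would read the DLL's Win32 VERSIONINFO resource.

    import shutil
    from pathlib import Path

    def get_file_version(path):
        # Stand-in only: a real installer parses the VERSIONINFO resource
        # and returns a (major, minor, build, revision) tuple.
        raise NotImplementedError

    def install_dll(bundled, system):
        bundled, system = Path(bundled), Path(system)
        if system.exists() and \
                get_file_version(system) >= get_file_version(bundled):
            return  # never replace a newer core DLL with an older copy
        shutil.copy2(bundled, system)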

>I don't blame you. Linux is not ready for Prime Time on the desktop. It
>lacks the level of user-pretties and network configurability I would look


It's got some things I like, but I could not use it at work on the desktop,
as the average user there would not fare well (some have difficulty with
Win9x, and that is not the OS's fault!).

>for in a desktop environment. My personal choice would be OS/2 for just
>about everything desktop related. I wish IBM had the guts to market it.


It's still not adequate in my book. Any OS where the common user is the
Unix equivalent of superuser (root), whether by default or for lack of
protections, is a problem. The classic case is the other day when a user
decided to copy a folder to the desktop... except the folder was
C:\WINDOWS. It did a lot of damage to that system.

As a sysadmin I'd rather see something like VMS, where the user has a
sandbox they can trash and slash in, but the rest of the box is off limits.
Right now the common OSes that can do that (more or less) are Unix and the
clones based on the Unix model, WinNT, and VMS. I'm sure there are others,
but Win9x, MacOS, DOS, and OS/2 Warp do not meet this criterion.
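On a Unix-model box you can watch that sandbox being enforced. Run this
little Python demo as an ordinary (non-root) user (the file names are
made up) and the kernel does the rest:

    import os

    def try_write(path):
        try:
            with open(path, "w") as f:
                f.write("scribble\n")
            print(path, "-> writable (inside the user's sandbox)")
        except OSError:
            print(path, "-> denied (off limits)")

    try_write(os.path.expanduser("~/scratch.txt"))  # the user's own area
    try_write("/etc/scratch.txt")                   # system area, root-owned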

Allison