Windows 2000: holy grail or fool's crusade?
Beta 3, the most recent version of the biggest Windows OS ever, offers lots of long-awaited features and new capabilities sure to thrill adventurous early adopters
By Rawn Shah
For the would-be adventure hero, the most exciting new trial-by-fire is the all-important Beta 3 version of Windows 2000. Released at the tail end of April, Beta 3 is simultaneously described by Microsoft as "ready for evaluation" (according to the marketing brochures) and "not yet complete" (according to Microsoft VP Jim Allchin). Several Windows hardware vendors have already started offering workstations and server products configured with Windows 2000 Beta 3, and evaluation versions of Beta 3 are commercially available at $60 a copy under the Corporate Preview Program.

If you are one of those ready to rush out and grab a copy of the new OS, you may wish to hold off until you read about what's in store for early adopters. It is not as simple as just installing the software on a machine -- you should use your first Windows 2000 installation as a model for the later migration of your entire network. And you will have to migrate to an all-Windows 2000 network eventually; it's the only way to take full advantage of everything the new OS has to offer. If you intend to do it properly, you must take a number of considerations into account before planning a move to Windows 2000.

This article is a not-so-brief primer on the new features and functionality of Windows 2000. It's a thorough rundown of the improvements to the OS, so doubtless only some of this information will be relevant to your needs. But even if you're not eagerly awaiting every new feature and capability, there's plenty to look forward to in this Windows overhaul.

Looks aren't everything
Currently, Microsoft's plan is to put four versions of Windows 2000 on the market:
Windows 2000 Professional, Windows 2000 Server, Windows 2000 Advanced Server, and Windows 2000 Datacenter Server. The first three will be released around the same time, with the Datacenter version coming three months later.

The desktop itself isn't too different from that of Windows 9x or NT 4.0. However, looks aren't everything. Small cosmetic changes are visible, such as the integration of Internet Explorer 5.0 into the desktop shell, fewer icons on the taskbar, and a generally sparser look. (See Figure 1.) But the real differences are all under the skin. We're talking new hardware support, a better filesystem, greater stability, new network protocol support, a central directory service, properly implemented clustering services and load balancing, and much more. Table 1 offers an overview of just a small portion of the new or enhanced services available in each version of Windows 2000.
Table 1. Features of the Windows 2000 operating system family

Hardware
Windows 2000 Advanced Server and Datacenter Server take advantage of a feature of the Pentium chip family, the Physical Address Extension (PAE) bits, to increase the amount of physical memory the system can support. The previous limit of 4 GB on Intel-based servers was a detriment to supporting large database applications on NT. Standard 32-bit addressing reaches only 2^32 bytes (4 GB); the extra address bits PAE provides extend this to 2^36 bytes, so Intel servers can now hold up to 64 GB of physical RAM. On Alpha systems, the hardware memory model already supports significantly more memory, but NT 4.0 had limited it to 28 GB. With Windows 2000, the OS memory management system has been enhanced to support 64 GB of RAM on Alpha as well. There are two catches, though. The first is that hardware vendors have not yet released Intel servers with this capability, although they will likely make announcements in conjunction with the final release. The other catch is that this feature will not be supported in Windows 2000 Professional or the low-end server version of the OS.

Windows 2000 is also the first Windows server OS to take advantage of the Intelligent I/O (I2O) system available on most Intel servers today. I2O was developed by a group of vendors to reduce the processor-intensive work created by data transfers between I/O devices on peripheral buses. Thus, disk-controller-to-disk-controller or disk-controller-to-printer data transfers can be performed without consuming many active cycles on the main processor, in turn freeing it up for the intensive processing. I2O achieves this through a separate co-processor based primarily on the Intel i960 CPU core. To date, NetWare is the only released system that actually makes use of I2O, even though the specification has been around since 1997.

Driver model and power management
Plug and play, also known as dynamically loadable device drivers, has been a much sought-after item for NT administrators. Because devices can be attached or removed at any point during system operation, this feature also ties in to the power management of those devices. Efficient power management is an important part of desktop systems but is absolutely crucial on portable systems. By slowing or shutting down some devices and components when they are not in use, the system cuts down on power consumption. On desktop systems, this is an important component of wake-on-activity services. Because most desktops can sit idle for hours at a time, you can save considerably on power bills if the system goes into sleep mode, shutting down most components except those that monitor input activity. This feature is implemented in wake-on-LAN-capable systems and is a facet of the Desktop Management Interface.

Device drivers can now be signed and certified by the Windows Hardware Quality Labs at Microsoft. Certification doesn't just mean that Microsoft has verified that the driver is a well-tested production model appropriate for your use. It also indicates that Microsoft will include the device in its list of supported hardware and can provide some support for OS problems with it. Unsigned drivers may still work with the system, but you don't have the assurance that Microsoft has tested them; the hardware vendor, however, may already have done intensive testing of its own.
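A program sees the larger memory through new 64-bit-aware calls. Here is a minimal C sketch (our illustration, not Microsoft's sample code) that reports physical RAM through GlobalMemoryStatusEx, which debuts in Windows 2000; the older GlobalMemoryStatus caps its report at 4 GB:

    /* Minimal sketch: query physical memory with the 64-bit-aware
     * GlobalMemoryStatusEx, new in Windows 2000. */
    #define _WIN32_WINNT 0x0500  /* target Windows 2000 APIs */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        MEMORYSTATUSEX ms;
        ms.dwLength = sizeof(ms);            /* required before the call */
        if (GlobalMemoryStatusEx(&ms))
            printf("physical RAM: %llu MB\n",
                   (unsigned long long)(ms.ullTotalPhys >> 20));
        return 0;
    }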
Soon to follow is a driver verifier tool that allows administrators to test the OS's interface to drivers, isolate each driver so it can allocate memory only from certain pools, and verify the parameters of the I/O requests a driver makes through the kernel. This fairly low-level exposure of the inner workings of drivers will probably be used only by the most experienced administrators or systems programmers.

Disk and filesystems
Table 2. Comparison of the filesystems usable under Windows 2000

The improved version of NTFS builds encryption support into the filesystem. All data stored on an Encrypting File System (EFS) volume is encrypted, and any read or write access first goes through the new CryptoAPI component of the operating system, which checks both user permissions and user authenticity keys. The new version also includes the ability to mount, unmount, resize, repartition, and format drives on the fly using Disk Administrator.
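To applications, EFS is nearly transparent, and encryption can even be requested programmatically. A minimal C sketch (the path is hypothetical; assumes an NTFS volume and a link against advapi32.lib) that places one file under EFS:

    /* Minimal sketch: mark a file encrypted via the Win32 EncryptFile
     * call that fronts EFS. The path below is a hypothetical example. */
    #define _WIN32_WINNT 0x0500
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        const wchar_t *path = L"C:\\data\\payroll.xls";  /* hypothetical */

        if (EncryptFileW(path))
            wprintf(L"%ls is now encrypted under the caller's EFS keys.\n",
                    path);
        else
            wprintf(L"EncryptFile failed, error %lu\n", GetLastError());
        return 0;
    }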
NTFS finally includes per-user disk quotas as part of the filesystem and user policy system. At last, you can place strict limits on how much disk space each user may consume on a single volume. Here Microsoft is playing catch-up with the rest of the industry: NetWare, for example, can enforce user quotas down to individual directories rather than whole disk volumes -- an ability with which NTFS still can't compete.

Also supported are sparse files, which defer allocating space within a large file until a given portion is actually written to disk. Thus, a 100-GB file might occupy only 30 GB of actual disk space until more of its content is saved. This feature is useful mostly for random-write files, in which data is nonsequential but still needs to be kept in a specific order; databases often use such large random-write files to hold their tables. (A short sketch of how a program requests this appears below.)

The Distributed File System (DFS) available with Windows 2000 servers allows you to combine drives, or directories within drives, into a single, larger virtual filesystem. This is very similar to the Network File System (NFS)'s ability to graft several independent volumes into a single large directory tree. DFS keeps the data on individual servers spread throughout the network, but caches the portions not local to a given server. It also manages a distributed file-locking mechanism that allows users anywhere on the network to access a file while maintaining data consistency during write operations.
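The sparse-file sketch promised above: a minimal C program (hypothetical path) that marks a file sparse, seeks 1 GB in, and writes a single byte, after which only the touched region occupies disk blocks:

    /* Minimal sketch: create a sparse file so the unwritten middle of a
     * huge file consumes no disk space. The path is hypothetical. */
    #define _WIN32_WINNT 0x0500
    #include <windows.h>
    #include <winioctl.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE h = CreateFileW(L"C:\\data\\bigtable.dat", GENERIC_WRITE, 0,
                               NULL, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL,
                               NULL);
        DWORD bytes;
        LARGE_INTEGER off;

        if (h == INVALID_HANDLE_VALUE)
            return 1;
        /* Tell NTFS to treat the file as sparse. */
        DeviceIoControl(h, FSCTL_SET_SPARSE, NULL, 0, NULL, 0, &bytes, NULL);
        /* Seek 1 GB in and write one byte; only that region is allocated. */
        off.QuadPart = 1LL << 30;
        SetFilePointerEx(h, off, NULL, FILE_BEGIN);
        WriteFile(h, "x", 1, &bytes, NULL);
        CloseHandle(h);
        return 0;
    }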
Directory services
The Active Directory (AD) database maintains information in a hierarchical tree structure, representing every application data object within its nodes. For example, Windows 2000 replaces the flat domain system of NT 4.0 with a hierarchical tree based upon Internet domains. Each domain server in Windows 2000 maintains its own tree of users and groups, which can then be combined with those of other domain servers to create an entire forest of domain trees. Each domain is still handled by its own server, but it is now possible to access any object in the entire forest. Gathering multiple domain trees into a forest mirrors the multiple-master domain model that NT 4.0 offered, albeit one achieved there only with great difficulty.

Microsoft smartly made AD backward compatible with clients in an NT 4.0 domain system. To non-Windows 2000 clients and NT 4.0 domain controllers, the AD servers look just like other NT 4.0 domain controllers, as they all support the same APIs and services. An NT 4.0 domain controller can be upgraded to a Windows 2000 Server system without disrupting the existing network configuration.

Replication of AD information can be done either for a whole domain tree or in subtrees known as naming contexts. Each context may reside on a separate server, and contexts can be combined into a single domain. Multiple servers can hold replicas of the naming contexts as needed between sites. This minimizes the amount of information that has to be replicated between the sites, thereby achieving cost savings through lower bandwidth usage.

Windows 2000 implements Dynamic DNS (DDNS), the latest version of the Domain Name System for IP-based hosts. DDNS allows names and addresses to be mapped to each other dynamically, rather than through the traditional static tables that had to be reloaded each time a host's name changed. DDNS is a vital component of an environment that uses the Dynamic Host Configuration Protocol (DHCP) to assign IP address information to other machines.

Under the old system, DHCP clients could be assigned randomly selected addresses from a pool at boot time. Thus, a client's host name and IP address could change at different points in a session, a trait that makes plain DHCP unsuitable for servers. It could also cause many firewalls and Internet hosts to reject access from clients whose host names and IP addresses did not match. DDNS works very much like the current DNS system, except that the server can now accept requests from DHCP servers to modify address records and can update its tables dynamically.

AD and DDNS together obviate the need for Windows Internet Name Service (WINS). This service was created to support a common distribution system for NetBIOS name services. WINS is a direct analogue to DNS, except that it serves up the NetBIOS names and addresses of machines instead of IP host names. Although NetBIOS can use TCP as its delivery protocol, it implements its own naming system on top of the delivery protocol. With the refocus on using IP host names rather than NetBIOS names for system services, there is no real need for WINS -- assuming, of course, a pure Windows 2000 environment. Windows 2000 servers will likely still need WINS to support Windows 98 and older machines.
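Names kept current by DDNS resolve through the ordinary lookup path. A minimal C sketch using the Win32 DNS API (windns.h, link against dnsapi.lib); host.example.com is a placeholder name:

    /* Minimal sketch: resolve an A record with the Win32 DNS API. */
    #include <windows.h>
    #include <windns.h>
    #include <stdio.h>

    int main(void)
    {
        PDNS_RECORD rec = NULL;
        DNS_STATUS status = DnsQuery_A("host.example.com", DNS_TYPE_A,
                                       DNS_QUERY_STANDARD, NULL, &rec, NULL);
        if (status == 0 && rec != NULL) {
            BYTE *b = (BYTE *)&rec->Data.A.IpAddress; /* network order */
            printf("resolved to %u.%u.%u.%u\n", b[0], b[1], b[2], b[3]);
            DnsRecordListFree(rec, DnsFreeRecordList);
        } else {
            printf("lookup failed, status %ld\n", (long)status);
        }
        return 0;
    }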
Networking
The Routing and Remote Access Service (RRAS) can run both static and dynamic routing protocols on each network to which the server is connected, including RIP version 2, OSPF, and IGMP (for multicast group management). The server can also act as a DHCP relay agent, passing DHCP client requests on to other networks. If you use dial-up modems to connect, you can set up dial-on-demand for your network. Similarly, you can set up a call-back from your ISP if you have incoming data from the outside world but are not yet connected.

Windows 2000 also supports new protocol features that improve security and quality of service, and adds support for new hardware technologies like Asynchronous Transfer Mode (ATM).

The IP Security protocol (IPSec) defines network packet-level data encryption rules and services between any machines that support it. Essentially, IPSec encrypts all data on a per-packet basis, just as it is being transferred over the network. It reduces or eliminates the need for higher-level security, unless that is also desired. Unfortunately, IPSec can be hard on the system, because it needs to run encryption and decryption algorithms for each packet. On a busy network server, this additional overhead can significantly reduce performance as traffic increases. IPSec does, however, provide the best level of security and forms the core of many virtual private networks (VPNs). With IPSec installed on each machine, you can directly include remote computers in your domain with less worry that someone is going to peek at your data.

Microsoft already has other methods of creating VPNs: the Point-to-Point Tunneling Protocol (PPTP) and the Layer 2 Tunneling Protocol (L2TP). These two protocols work at the data-link layer (Ethernet or PPP WAN connections), below the network layer (IP or IPX). This approach allows any number of non-IP protocols to communicate through the VPN channel but, unfortunately, is limited to Windows OSs. IPSec, by contrast, has been implemented on most Unix platforms, and there are even hardware accelerators that can offload its encryption/decryption processing.

Windows 2000 supports four different forms of quality of service (QS) systems. (See Table 3.) A QS system guarantees that computers on a network will be able to communicate with each other at a certain security, management, route, or speed level. QS implies that some communications on the network will run at a higher priority than others. On best-effort network protocols like TCP/IP and IPX, QS opens the door to a whole new generation of services that perform exactly as required. Until now, such networks have made the best effort to deliver packets but have fallen short of delivering them on time, through specific routes, or in proper sequence.
Table 3. Supported QS protocols

The QS services in Windows 2000 include support for IP Precedence, the IEEE 802.1p protocol, the Resource Reservation Setup Protocol (RSVP), and the Subnet Bandwidth Manager (SBM).

IP Precedence uses three long-ignored bits that exist in every IP packet header. These precedence bits indicate which packets should be preferred for transfer over others.

The IEEE 802.1p protocol defines a three-bit priority level within the IEEE Ethernet frame. It functions in much the same way as IP Precedence, only at the data-link layer rather than the IP network layer. IEEE 802.1p works within the context of an IEEE 802.1Q virtual LAN, using other frames to distribute information on computer membership within the virtual LAN group.

RSVP is an Internet standard that defines QS levels for each device along the network path, including routers, switches, and hubs. Basically, it is a patch method that implements guaranteed service on unguaranteed, best-effort networks like the current IP system. Because QS cannot work if any member of the network path does not support it, RSVP attempts to find network paths that can support the minimum and required levels of QS requested by the user's application.

SBM takes a different route, assuming that the computers under its umbrella do not support QS sessions. SBM assigns one server to monitor the network performance of all the other machines and to manage the data connections to the best of its knowledge. For example, it takes a 10-Mbps Ethernet connection assigned to a machine and determines how much bandwidth is left once applications start connecting to it; it can then apportion more of its service as the next application requests it. RSVP and SBM both require smart agents that actively monitor how much network traffic is in use or available, and how that traffic is being delivered.

Windows 2000 is the first major platform to support all of these QS systems as part of the OS. Unfortunately, that means you'll have to wait for most other platforms to catch up before these services become widely used.
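For a feel of the lowest-rung option, here is a minimal Winsock sketch (link against ws2_32.lib) that requests an IP Precedence level by writing the TOS byte on a socket; note that Windows may ignore a user-set TOS unless system policy allows it:

    /* Minimal sketch: request IP Precedence by setting the TOS byte;
     * the precedence value occupies the top three bits. */
    #include <winsock2.h>
    #include <ws2tcpip.h>
    #include <stdio.h>

    int main(void)
    {
        WSADATA wsa;
        SOCKET s;
        int tos = 5 << 5;   /* precedence 5 in bits 7..5 of the TOS byte */

        WSAStartup(MAKEWORD(2, 2), &wsa);
        s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
        if (setsockopt(s, IPPROTO_IP, IP_TOS,
                       (const char *)&tos, sizeof(tos)))
            printf("setsockopt(IP_TOS) failed: %d\n", WSAGetLastError());
        closesocket(s);
        WSACleanup();
        return 0;
    }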
Security
Other security improvements discussed in this article include the IP Security protocol, the Encrypting File System (EFS), group policies, and object-level security in the domain structure (in Active Directory).

Management
The taskpad system can interface with the Microsoft Management Console (MMC) snap-ins through Web pages to provide a simpler interface that presents only the necessary information. Administrators can limit what appears on the Web page, thus allowing operators to perform only their own tasks without getting total access to the management system. All data snapshots or listviews from snap-ins can also be exported to plain text files for analysis by non-MMC applications.

Microsoft has implemented a common management interface based upon the Distributed (formerly Desktop) Management Task Force's Web-Based Enterprise Management (WBEM) standard, which creates a common interface to devices, management tools, and system components with which any WBEM-compliant management tool can interact. Microsoft, Dell, Cisco, Compaq, Intel, Novell, SCO, HP, IBM, and Sun are all members of the DMTF, but to date only Microsoft and Cisco have released WBEM-based products. Windows 2000 still supports SNMP-based management, but only through SNMP agents and a basic SNMP management tool.

Microsoft has also finally caught on and added a task scheduling system to the OS, similar to the Unix cron facility, for running jobs at preset times.

A new scripting interface known as the Windows Scripting Host (WSH) provides a common object-based method of accessing system services through several Microsoft programming and scripting languages, such as Visual Basic, VBScript, J++, and JScript. Think of WSH as a direct method for executing scripts on the system, the same way Active Server Pages do on Internet Information Server.

System stability
To avoid file mismatches, essential system files can no longer be replaced arbitrarily. All vital system files are cataloged and verified for corruption or mismatches after a boot. A similar system prevents dynamically linked library (DLL) mismatches in applications: multiple versions of DLLs with the same name can now coexist, and the system determines which DLL is needed and selects the appropriate match. Also, new service packs from Microsoft can now be slipstreamed into the system, so you no longer have to reinstall a service pack when an application modifies or installs new versions of system files.
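Programs can ask the file protection system about specific files. A minimal C sketch using SfcIsFileProtected (sfc.h, link against sfc.lib); the C:\WINNT path assumes a default Windows 2000 installation:

    /* Minimal sketch: ask Windows File Protection whether a file is on
     * its protected list. */
    #include <windows.h>
    #include <sfc.h>
    #include <stdio.h>

    int main(void)
    {
        const wchar_t *file = L"C:\\WINNT\\system32\\kernel32.dll";

        if (SfcIsFileProtected(NULL, file))
            wprintf(L"%ls is under system file protection.\n", file);
        else
            wprintf(L"%ls is not on the protected list.\n", file);
        return 0;
    }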
Clustering, load balancing, and distributed services
A separate network load balancing system based upon the Windows Load Balancing Service (WLBS) allows multiple servers to share application services. The system links Web servers, FTP servers, and Windows Terminal Server, and also allows connections through Microsoft proxy servers. With load balancing, you can support up to 32 nodes within the balancing cluster. The service requires the first machine to receive incoming requests, which it then redirects to the next available server for actual processing, rewriting IP addresses where appropriate. In truth, WLBS is a software implementation of a Layer 4 switch that manages TCP and UDP connections between multiple servers. It is not dependent on the application -- it changes only network information -- and can be set up with any type of IP-based network application. It provides high availability for the applications handled by the service.
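The redirection idea is easy to picture. The toy round-robin selector below is purely illustrative; it is not WLBS's actual algorithm, which also accounts for server availability:

    /* Toy illustration of round-robin connection dispatch, the general
     * idea behind Layer 4 load balancing; addresses are made up. */
    #include <stdio.h>

    static const char *servers[] = { "10.0.0.1", "10.0.0.2", "10.0.0.3" };
    static int next;  /* index of the server taking the next connection */

    static const char *pick_server(void)
    {
        const char *s = servers[next];
        next = (next + 1) % (int)(sizeof(servers) / sizeof(servers[0]));
        return s;
    }

    int main(void)
    {
        int i;
        for (i = 0; i < 5; i++)
            printf("connection %d -> %s\n", i, pick_server());
        return 0;
    }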
Windows Terminal Server (WTS) is now directly integrated into the system, rather than shipping as a separate version of NT. You still need to install WTS specifically, because it makes changes to the kernel environment in order to support multiuser sessions. The system is otherwise identical to NT 4.0 Terminal Server Edition, so there should be nothing surprising here if you have seen that product before. It's highly recommended that you install WTS even if you don't expect your users to need it; at the least, it allows administrators to access and manage the entire system remotely.

Remote management
IntelliMirror is a new creation that allows a sysadmin to specify the applications, data files, and preferences that each user on the network may need. When a user moves from one machine to the next, the applications and data are replicated and installed from the server onto that machine. Hence, your data moves with you to wherever you work. The files are stored on the server at all times, with only a local copy transmitted to wherever the user sits; after the user logs out, the resources are resynchronized with the server's copy. It appears that Microsoft is taking a page from the network computing book and making it real: software and data that move with you wherever you go. Of course, the catch is that the terminal you move to has to be an IntelliMirror client (currently only a Windows 2000 system).

These two features combined make the life of the administrator significantly easier. Workstation, user, and data deployment is a time-consuming task on any large network. By allowing administrators and users to access machines remotely to perform these low-level and privileged tasks, Windows 2000 reduces the time, energy, and planning usually required for large deployment projects. Microsoft has even stated that IntelliMirror directly competes with Terminal Server, and it may even prove to be the better performer, since it does not place all processing on the server side.

Groups and policies
With AD, local groups exist within single systems and global groups within a single domain. In addition, a new group type, the universal group, allows user groups to span multiple domain trees. This gives users access to the resources of other domains without the group having to be duplicated on each and every domain.

The policy system has also changed since NT 4.0. What used to be just per-system or per-domain policies can now be applied to a computer, an organizational group, a site, a domain, or even multiple domains. The concept remains the same: it still boils down to a common set of rules and rights governing what a user can and cannot do on the system.

Should I be an early adopter?
Some of these services are specific to Windows. Microsoft's motto, "embrace and extend," is to blame here: the company has once again taken many standard protocols and created extensions that work exclusively in the Windows environment. Sure, you can still interface Windows 2000 systems with other Windows or non-Windows platforms through these extensions, but you won't get their full benefit.

To be fair, Windows 2000 is not just about selling more OSs; some of its improvements are much needed and very handy. For example, LDAP does not define a security model for each object in the directory, but rather leaves this to the platform. Active Directory integrates the access control list structure and security model of Windows NT into the directory on a per-object level, giving a tight and focused level of security. This may not be a bad idea in the long run, since these new features can be very useful.

Microsoft is exploring quite a bit of new territory with this OS, from support for QS and the Internet Printing Protocol to LDAP directory services and ATM. Now that these protocols and systems have been included in a major OS platform, we will likely see them grow in popularity and usage. It is high time this happened: now we can throw out some of the sloppiness that exists in network services today. Many of the network protocols in use now were designed with little foresight, or for needs and uses that are simply no longer relevant to today's business environment. Others did not take into account the scale and utility that the Internet now demands. For example, Windows NetBIOS printing and the Unix LPD protocol are both insecure and somewhat inflexible in the types of devices, document formats, and services they can support. Another example is the current Internet Protocol itself, which has no real built-in security or guaranteed services at all. On the home front, the NT domain model becomes hopelessly inefficient when scaled beyond a few hundred computers. It is about time that most systems migrated over to avail themselves of the new services and features that Windows 2000 offers.

Getting W2K-ready
There is a steep learning curve for most of Windows 2000's new services. Planning a deployment project will require you to consider not only your NT servers but all your other IP-based network servers. For example, you really do need to run the Dynamic DNS server in Windows 2000 to keep Active Directory happy. That means considering the introduction of not only another DNS server but also a DHCP server and perhaps a WINS server.

Although the system is more stable than earlier beta versions, there have been many concerns regarding application compatibility and rampant bugs. As common sense dictates, test it only in a safe environment -- that is, on a separate server and workstation LAN. Consider duplicating your existing servers (hardware, data, and all) and attempting an upgrade on the duplicate. And although the beta version contains drivers for thousands of devices that have been signed by Microsoft, consider keeping track of your devices and driver files for comparison.
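One low-tech way to do that tracking is simply to inventory the driver files before and after the upgrade. A minimal C sketch (the C:\WINNT path assumes a default install) that lists the .sys files in the drivers directory for later comparison:

    /* Minimal sketch: inventory driver files so you can diff the list
     * before and after an upgrade. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        WIN32_FIND_DATAA fd;
        HANDLE h = FindFirstFileA("C:\\WINNT\\system32\\drivers\\*.sys",
                                  &fd);
        if (h == INVALID_HANDLE_VALUE)
            return 1;
        do {
            printf("%s %lu bytes\n", fd.cFileName, fd.nFileSizeLow);
        } while (FindNextFileA(h, &fd));
        FindClose(h);
        return 0;
    }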
Deploying Windows 2000 now is a journey for an adventurous soul like our courageous hero Dr. Jones. Along the way, there will be pitfalls, nests of snakes, angry villagers, and supernatural forces at play as you implement Windows 2000 across your network. But with a little bit of luck and a whole lot of planning, you may just be able to escape with the golden treasure: a reliable network operating system.