---------------------------------------------

NOTE: This article is an archived copy for portfolio purposes only, and may refer to obsolete products or technologies. Old articles are not maintained for continued relevance and accuracy.
November 1, 1996

DCE: Unifying Your Network Fabric

Mainframes. Midrange systems. Unix boxes and LAN servers. PCs, more PCs, Macintoshes and oh-wait-don't-forget-about-those-other-PCs. No wonder you're going crazy. Managing any one of these systems can be expensive and cumbersome, but even that pales in comparison with the cost and effort involved in integrating them all. Just as you begin to think of TCP/IP as the panacea, offering at least a minimal set of unified, vendor-independent network services, you discover that it falls short of providing most of what you'd expect from a contemporary network operating system. It falls way short.

Running TCP/IP services on your systems gives you lots of things—such as Internet standards (and, more importantly, vendor-supported products) for functions such as e-mail, printing and file transfer—but among the pieces missing are some of the most important ones. For example, there are no standards for user and group management, advanced security or a directory service that ties these together. Without these components, you'll never be able to migrate your organization's network computing environment to a truly open one.

You can use proprietary technologies to get these services, but you'll pay the price in terms of limited cross-platform availability and support. For example, you may be able to use the Network Information Service (NIS) and/or NIS+ on some of your Unix systems, but you won't get them on your big iron. Likewise, you can use Novell Directory Services (NDS) to provide these services on your NetWare systems, but that won't help with your other platforms. The result is that you still have to manage individual accounts and security controls on each platform.

These high-level services are part of what the Distributed Computing Environment (DCE) was designed to provide. The base suite of DCE services consists of a directory and related security controls; combined, they offer a consistent interface to any system DCE runs on, and the list is long. In addition, a solid set of development application programming interfaces (APIs)—another standard part of the DCE distribution—provides a platform-independent network development interface that permits rapid development and deployment of network-centric applications, regardless of the underlying OS, protocols or network APIs. Together, these services and APIs constitute a general-purpose networking platform that serves as a somewhat homogeneous interface to your heterogeneous network.

The key word in that sentence is "somewhat." Although a variety of DCE implementations exists, there are some significant gaps in the roster of supported platforms. For example, you can get DCE clients and servers for almost all large-scale systems on the market, as well as for the most popular client platforms. However, there are absolutely no DCE products that run on NetWare or on most second-tier Unix variants. Nor are there many off-the-shelf applications that take advantage of DCE, making it difficult to assemble a full top-to-bottom implementation.

Regardless of these limitations, for platform-independent network-centric services and applications, DCE outshines any other technology on the market. By deploying DCE services on your different systems—and by using DCE for your cross-platform applications—you can dramatically increase the overall functionality and security of your network, while simultaneously reducing your system management costs.

In the Beginning, There Was the Standalone System

As most of you will remember, the LAN wasn't always there. But that didn't stop users from trying to link disparate systems by building cross-platform bridges. Vendors saw this need for network services and responded by trying to lock in their accounts to proprietary networks (along with their proprietary host systems), making the situation worse. Now, each system not only used incompatible security, file system and other critical services, it also used incompatible network protocols.

If you had an order-entry system that ran on a Unix server, a manufacturing control system that ran on a Hewlett-Packard Co. HP-3000, an inventory system that ran on Digital Equipment Corp. VAX and a consolidated billing system that ran on an IBM Corp. mainframe, you had no hope of tying everything into a functional, cross-platform environment. Your best bet—and a solution still implemented at many companies—was to use batch transfers to move data files among systems in hope of automating the process enough to work at least marginally well.

Then, in an act of rare cooperation, a handful of larger system vendors got together to establish network services that would run on any platform or protocol, thereby providing an avenue for the development of cross-platform network-centric applications. Rather than force users to consolidate their heterogeneous systems or network services on a single vendor's proprietary technologies, they gave users vendor-independent tools for integrating their disparate systems.

In 1988, these same vendors launched the Open Software Foundation (OSF), whose task was to develop vendor-independent technologies such as DCE, OSF/1 (a standard Unix) and standard interface systems (Motif, for one). Though many efforts did not succeed, DCE generally has been accepted by OSF's members and outside organizations alike.

OSF projects must go through a series of technology-feedback cycles. Member organizations (of which there are now more than 200) contribute technology to OSF, which then turns the work into distributable code. The members license the resulting source code from OSF, and then integrate it into their respective operating systems and networking products. The result is a highly uniform code base that accommodates high levels of interoperability. Since vendors use the same source code in their products, there is negligible incompatibility among the various implementations, and even less finger-pointing should something go awry.

There are other benefits as well. For example, users do not have to wonder whether a specific system uses 7-bit or 8-bit encoding, or if the CPU is little-endian or big-endian. Because the network services offer a uniform set of APIs, applications communicate directly with the network services, which in turn communicate with the underlying OS. The underlying details thus become irrelevant, and users can focus on DCE's higher-level services.

Product Availability

Ultimately, DCE's success will hinge on OS vendors' willingness to integrate DCE services into their operating systems, both to provide a mapping between the Remote Procedure Call (RPC) interfaces and the underlying protocols and to allow the higher-level services to run on their specific hardware. Some may choose to tie the underlying user account database directly to the DCE security services, while others may maintain separate databases.

Gradient Technologies' Unix versions of DCE, for example, integrate seamlessly into the system login mechanisms, but its PC-DCE/32 for Windows NT does not—an obvious problem for NT security vendors that cannot get the necessary APIs from Microsoft Corp. If you're forced to maintain separate account systems, there's little value in using the DCE model, as it only increases your workload.

However, if you have DCE services implemented on more than two or three platforms, the payback curve begins to bend in your favor, since you only have to add the user to the DCE system once. As you move toward more DCE-enabled services, you depend less on the native underlying services and more on those provided by DCE, meaning minimal direct account management on the actual end systems.

Almost every major multiplatform vendor offers DCE across its entire OS line—a good thing, since most of the platforms are incompatible. For example, IBM offers DCE products for MVS, OS/400, AIX, OS/2 and even Windows. Since it supports such a variety of platforms, IBM is a natural beneficiary of DCE's functionality. Without it, IBM wouldn't be able to interconnect its range of systems, or at least not to the extent possible with DCE. Others, such as HP and Digital, also offer DCE services across their product lines, thus helping customers integrate a variety of systems, while also enabling the vendors to integrate their own products. Digital, having pinned a portion of its product strategy on NT, supports DCE on NT in order to integrate it with its VMS and Unix platforms.

The vendors that provide the weakest support for DCE are those that do not have incompatible platform lines of their own. For example, Microsoft's Windows clients and servers all incorporate the same basic LAN Manager technology. Without a pressing need for a cross-platform set of self-integrating services, Microsoft has not been a notable producer of DCE technology. Similarly, single-OS vendors such as Novell and Sun Microsystems don't support DCE directly at all. Sun relies on third parties, such as Transarc Corp., to provide DCE services to its customers. There are no NetWare server-based DCE solutions whatsoever, from Novell or from third parties. This is perhaps the single greatest limitation of DCE. If the most popular network operating system on the planet doesn't have any DCE security or directory services, it becomes very difficult to create a single, homogeneous interface to the entire network.

NetWare issues aside, many non-OS vendors make their living selling DCE on a variety of systems. Gradient sells DCE for an assortment of platforms, including Apple Computer Macintosh and Windows clients, as well as NT, Sequent Computer Systems, NCR Corp., Unisys Corp. and UnixWare servers. Other companies, such as Open Horizons, offer middleware products that bring DCE functionality to non-DCE applications, adding components such as single sign-on and directory integration.

How DCE Works

DCE consists of two key pieces: the network services that provide for tasks such as unified account and security controls, and the APIs that enable external applications to communicate with these services. In addition, DCE provides a directory capable of acting as a global naming service for the resources on your network. It also offers cross-platform account management services, which bring the Holy Grail of single sign-on to any and all DCE-enabled systems on your network.

There are five core components of DCE—security, directory, time synchronization, RPCs and POSIX threads. Each contains several subcomponents and technologies, either supplied by OSF's member organizations or based on standards already defined and managed by other standards bodies.

The five key components are tightly integrated and rely heavily on each other to provide consistency and base functionality. The directory service, for instance, depends on the time service to guarantee consistency among the various directory database elements, while the security mechanisms rely on the time service to keep time-sensitive operations, such as credential lifetimes, accurate. In fact, these three services represent the core functionality of the DCE services environment that end users interact with most.

The RPCs and POSIX threads are there principally for application developers. Developers may use RPCs as network APIs to avoid inconsistent implementations of network protocol APIs. This way, a developer can write networking services directly to the RPC interface, without worrying whether the underlying protocol is IP, IPX or SNA. Also, because the specification includes a standard thread model, developers can incorporate consistent thread and process management routines within the network portion of their applications, regardless of the thread model of the underlying OS.
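
Here's a minimal sketch of the kind of portable worker-thread code this makes possible. It's written against the final POSIX threads interface rather than the earlier draft that DCE actually ships (a few signatures differ slightly, such as thread attributes being passed by value), and the request-handling routine is invented for illustration:

    /* Minimal sketch: spawn a thread to service a request while the
     * main thread keeps working. Written to the final POSIX 1003.1c
     * interface; DCE 1.x ships an earlier draft whose signatures
     * differ slightly, but the model is the same on every platform. */
    #include <pthread.h>
    #include <stdio.h>

    static void *handle_request(void *arg)
    {
        int request_id = *(int *)arg;   /* hypothetical request identifier */
        printf("servicing request %d\n", request_id);
        return NULL;
    }

    int main(void)
    {
        pthread_t worker;
        int request_id = 42;

        if (pthread_create(&worker, NULL, handle_request, &request_id) != 0) {
            fprintf(stderr, "pthread_create failed\n");
            return 1;
        }
        pthread_join(worker, NULL);
        return 0;
    }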

There are no specifications for protocols in the DCE model, because DCE is protocol-independent. DCE-based applications and services communicate directly with the local RPC services, which are bound to the network protocols on the local system.
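
In practice, that means a client asks the RPC runtime for a binding handle instead of opening a socket or an SPX connection itself. Here's a hedged sketch; the host name and endpoint are invented, and only the protocol-sequence string would change if the transport did:

    /* Hedged sketch: obtain an RPC binding handle without touching any
     * protocol API directly. The server name and endpoint are invented. */
    #include <dce/rpc.h>

    int get_binding(rpc_binding_handle_t *binding)
    {
        unsigned_char_t *string_binding;
        unsigned32       status, free_status;

        /* "ncacn_ip_tcp" selects connection-oriented TCP/IP; a datagram
         * transport would be "ncadg_ip_udp". Implementations that carry
         * other protocols expose additional protocol-sequence strings,
         * but the calls below stay the same. */
        rpc_string_binding_compose(NULL,
                                   (unsigned_char_t *)"ncacn_ip_tcp",
                                   (unsigned_char_t *)"orders.example.com",
                                   (unsigned_char_t *)"5555",
                                   NULL,
                                   &string_binding,
                                   &status);
        if (status != rpc_s_ok)
            return -1;

        /* Convert the string form into a handle the stubs can use. */
        rpc_binding_from_string_binding(string_binding, binding, &status);
        rpc_string_free(&string_binding, &free_status);
        return (status == rpc_s_ok) ? 0 : -1;
    }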

A simple multiuser chat application written for DCE can run over any protocol in use by DCE on the local system; all a developer has to do is write the network portion of the application so it talks to the RPC interfaces. That portion of the code would then be completely usable on any other DCE-enabled platform, simply by recompiling it. DCE would not necessarily address the user or logic portions of the application, but it would provide a consistent programming interface for the network portion of the application and consistent services to control how the application could be accessed and by whom.
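
Here's a rough sketch of what that network portion might look like for the chat example, using a hypothetical interface definition (the interface name, operation and UUID are all invented; the DCE IDL compiler would generate the actual stubs):

    /* The hypothetical chat interface, as it might appear in chat.idl:
     *
     *   [ uuid(0045a5e2-9abc-11cf-8c27-08002b301a1b), version(1.0) ]
     *   interface chat
     *   {
     *       void chat_send([in] handle_t h, [in, string] char *message);
     *   }
     *
     * Running this through the IDL compiler produces client and server
     * stubs in C; the client simply calls the stub. */
    #include <dce/rpc.h>

    /* Simplified form of the prototype the IDL compiler would emit. */
    void chat_send(rpc_binding_handle_t h, unsigned char *message);

    void say_hello(rpc_binding_handle_t binding)
    {
        /* The stub marshals the arguments and hands the call to the RPC
         * runtime; which protocol carries it is determined by the
         * binding, not by this code. */
        chat_send(binding, (unsigned char *)"hello from a DCE client");
    }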

Compare this to other protocol-specific services, which vary dramatically from system to system. For example, TCP/IP stacks can be implemented in a hundred different ways on as many operating systems, requiring the network-specific code to be ported to and optimized for every target system, even though the underlying protocol itself is "standard." Also, where BSD-based Unix variants use sockets for IP programming, SVR4-based variants use streams. Meanwhile, mainframe and midrange systems have their own TCP/IP APIs that programmers also have to learn. You could spend a lifetime just porting the network-specific portion of the simplistic chat application among these different systems.

By providing a consistent set of high-level services and development interfaces, DCE enables programmers to develop network-centric applications rapidly, and allows administrators to deploy them across systems equally rapidly.

In-Depth DCE

The basic unit of a DCE installation is called a "cell," which is similar in concept to a domain. A cell is a single entity that is controlled by one administrator. There can be multiple cells within an organization, or you can get by with one if your company is small or flexible enough.

At the very least, a DCE cell must have one directory server and one security server. These functions can be combined on a single system or spread across several, as appropriate to your installation. Cells can be interrelated to permit users in one cell to access services in another by using simple trust models that join cells at a boundary point. This lets organizations build horizontal relationships across departmental lines or vertical relationships in keeping with a formal corporate hierarchy.

DCE clients query these servers for information about users, applications and other resources available on their network. Because the servers also request services from other servers, each server is also a client. A client can be a full-function system with a complete copy of the directory running locally, or it can be a lightweight agent that queries the cell servers for the information it needs.

The DCE directory server provides a consistent naming service across the network, allowing users to locate and access network resources without having to know where the resources actually are located. If a device changes its network address, the change is stored in the directory, so users can continue to access the shared resource by name.
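
Here's a hedged sketch of the server side of that arrangement: the server registers its interface, takes whatever protocol bindings the local runtime offers and exports them to the cell directory under a name. The interface handle and entry name are invented for illustration; if the server later moves to another host, it simply re-exports its bindings and clients keep using the same name:

    /* Hedged sketch of a server advertising itself by name in the CDS. */
    #include <dce/rpc.h>

    /* Interface handle the IDL compiler would generate for the
     * hypothetical chat interface shown earlier. */
    extern rpc_if_handle_t chat_v1_0_s_ifspec;

    void start_server(void)
    {
        rpc_binding_vector_t *bindings;
        unsigned32            status;

        /* Register the interface with the RPC runtime. */
        rpc_server_register_if(chat_v1_0_s_ifspec, NULL, NULL, &status);

        /* Listen on every protocol sequence this host supports. */
        rpc_server_use_all_protseqs(rpc_c_protseq_max_reqs_default, &status);
        rpc_server_inq_bindings(&bindings, &status);

        /* Register dynamic endpoints with the local endpoint mapper. */
        rpc_ep_register(chat_v1_0_s_ifspec, bindings, NULL,
                        (unsigned_char_t *)"chat server", &status);

        /* Advertise the bindings in the cell directory under a name;
         * clients import by this name, never by address. */
        rpc_ns_binding_export(rpc_c_ns_syntax_default,
                              (unsigned_char_t *)"/.:/subsys/chat/server",
                              chat_v1_0_s_ifspec, bindings, NULL, &status);

        rpc_server_listen(rpc_c_listen_max_calls_default, &status);
    }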

Again, since DCE itself is protocol-independent, these resources could be using TCP/IP, IPX, SNA or just about any other protocol. When a DCE client needs to communicate with a DCE server, it uses whatever protocol is in place. If, for example, an application needs to communicate using TCP/IP, it passes the request to the RPC engine, which passes it on to the local TCP/IP stack. The TCP/IP stack issues a Domain Name Service (DNS) lookup on the destination system's name and then establishes an IP connection between the client and the server. If the systems were using IPX instead, the local IPX stack would issue a Service Advertising Protocol (SAP) lookup to locate the destination system's IPX address. All of these functions are invisible (and irrelevant) to the client system.

The DCE directory service comprises a cell directory service (CDS), a global directory service (GDS) and a global directory agent (GDA) that acts as a query agent for locating resources in other cells.

The CDS is the local cell directory. It provides information about resources in the local cell. CDS is a distributed, replicated database service capable of running on any number of systems simultaneously, including the clients themselves.

GDS provides connection information about other cells, both within your organization and outside it. It includes a global name space that connects the local DCE cells into one worldwide hierarchy, and can utilize either the Internet's DNS service or the global X.500 directory.

The GDA acts as a go-between for CDS and GDS. If a client needs to locate a resource running on a remote system, the GDA will issue a request to the GDS, which will locate the device. Alternatively, the CDS can be seeded with external entries that appear local to the users.
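
From a programmer's point of view, a resource in another cell is imported exactly the same way as a local one; only the name changes, from the cell-relative /.:/ prefix to the global /.../ form, and the GDA does the rest. Here's a hedged sketch, with the cell and entry names invented for illustration:

    /* Hedged sketch: import a binding for a service in another cell. */
    #include <dce/rpc.h>

    /* Client-side interface handle the IDL compiler would generate. */
    extern rpc_if_handle_t chat_v1_0_c_ifspec;

    int import_remote_binding(rpc_binding_handle_t *binding)
    {
        rpc_ns_handle_t import_context;
        unsigned32      status, done_status;

        /* "/.:/..." would name an entry in the local cell; the "/.../"
         * form names another cell, which the GDA resolves through GDS. */
        rpc_ns_binding_import_begin(rpc_c_ns_syntax_default,
            (unsigned_char_t *)"/.../mfg.example.com/subsys/chat/server",
            chat_v1_0_c_ifspec, NULL, &import_context, &status);
        if (status != rpc_s_ok)
            return -1;

        /* Take the first compatible binding the directory hands back. */
        rpc_ns_binding_import_next(import_context, binding, &status);
        rpc_ns_binding_import_done(&import_context, &done_status);
        return (status == rpc_s_ok) ? 0 : -1;
    }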

For security, the cell security server performs account management, authentication and authorization. Account management is handled by the user registration service, which manages users and groups and provides login services to the cell. Kerberos-based authentication services allow for the secure exchange of credentials without passwords being sent over the wire. Authorization services result from the combination of a privilege server and access control lists (ACLs). All access requests pass through the authorization service.
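
On the client side, tying into these services is largely a matter of annotating a binding handle; once that's done, every call made on the binding is authenticated and subject to the server's ACL checks. Here's a hedged sketch; the server principal name is invented, and the protection level shown is one of several DCE defines:

    /* Hedged sketch: request authenticated, integrity-protected RPC. */
    #include <dce/rpc.h>

    void secure_binding(rpc_binding_handle_t binding)
    {
        unsigned32 status;

        /* Use the DCE (Kerberos-based) secret-key authentication service
         * and DCE authorization, which carries the caller's credentials
         * to the server's ACL checks. The principal name is invented. */
        rpc_binding_set_auth_info(binding,
                                  (unsigned_char_t *)"chat_server_principal",
                                  rpc_c_protect_level_pkt_integ,
                                  rpc_c_authn_dce_secret,
                                  NULL,   /* default: calling user's
                                             login context */
                                  rpc_c_authz_dce,
                                  &status);
    }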

Directory and security services rely on the time service. Although many systems have relegated time services to check-list functionality, DCE addresses the issues surrounding synchronization and validation head-on by making time a critical component of the network. The DCE time service allows applications to determine event sequencing, duration and scheduling accurately.
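
Here's a rough sketch of the DTS application interface, which carries an inaccuracy bound along with every timestamp. The function and enum names follow the DTS API as we recall it; treat the exact signatures as assumptions and check your vendor's dce/utc.h before relying on them:

    /* Rough sketch of DTS timestamping and ordering; names assumed. */
    #include <dce/utc.h>

    /* Record the current DTS time (value plus inaccuracy) for an event. */
    int stamp_event(utc_t *stamp)
    {
        return utc_gettime(stamp);          /* 0 on success */
    }

    /* Did event A definitely happen before event B? */
    int happened_before(utc_t *a, utc_t *b)
    {
        enum utc_cmptype relation;

        /* Interval comparison honors each timestamp's inaccuracy; if
         * the intervals overlap, DTS reports the ordering as
         * indeterminate rather than guessing. */
        if (utc_cmpintervaltime(&relation, a, b) != 0)
            return -1;
        return (relation == utc_lessThan) ? 1 : 0;
    }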

One tentative design goal for the 1.2 release of DCE was the inclusion of "federated directories" that would let DCE clients communicate with non-DCE directories through the X/Open XFN standard (a continuation of work that originated at Sun). This effort was aborted, however, prompting OSF to consider adding the Lightweight Directory Access Protocol (LDAP) to DCE sometime in the future. Although this would not address cross-platform security issues (LDAP is a lookup tool, not an authenticator), it would at least provide better integration than nothing at all.

Deployment Issues and Futures

As an alternative to proprietary networking technologies, DCE works not only because it squarely addresses the problems that might be encountered with various network services, but also because it utilizes existing technologies to do so. Rather than reinventing any wheels, DCE uses them all and makes the inexpensive source-code solutions available to any and all comers.

This has made DCE successful, but not on the scale vendors would like. Though DCE engines are available for a variety of systems, there aren't very many DCE-aware applications and services—there are no DCE facilities for cross-platform mail or cross-platform printing, for example. In fact, the only widely available cross-platform application is the Distributed File System (DFS).

However, DFS is an excellent example of the kinds of applications that can be developed and deployed using DCE. It is an outgrowth of Transarc's AFS file-sharing technology, which was submitted to OSF, rewritten and licensed back to member organizations in source code form. DFS is a client/server application—just as an automated billing system could be—that works on an assortment of clients and servers with little porting effort, since it uses DCE's RPC interfaces. The only coding required is the platform-specific user interface and integration work. DFS also takes advantage of DCE's GDS, providing access to shared file systems around the world via the Internet and the global X.500 directories. All benefits derive from DCE, not AFS.

Unfortunately, this is the only widely deployed public application that uses DCE. Almost all other major DCE-based applications are in-house systems developed by larger institutions. As a prototypical example, you could use DCE to facilitate the automatic exchange of data between an order-entry application running on Unix and an inventory management system running on VMS, while simultaneously updating a consolidated billing system running on an MVS-based mainframe.

Even these in-house applications, however, frequently are tied to products that were not developed in-house. For example, many vertical applications are tied to large-scale databases from vendors such as Oracle Corp. and Sybase. Both vendors offer DCE-aware versions of their products, but they are practically alone in this regard.

Of course, vendors are quick to promise the advent of products and technologies that will address the lack of off-the-shelf solutions and the absence of complete support by the various OS vendors—but, like Novell, they're slow to act on those promises. Not having those products today, however, is a major obstacle for anyone attempting to roll out DCE services across an entire enterprise network.

Another sticking point for the deployment of DCE services has been the notoriously difficult administration functions. Far from easy to use, the DCE administrative tools are among the most convoluted and ill-conceived we've ever seen. It certainly isn't easy to create or manage objects in either the security or directory servers.

Several vendors offer graphical configuration tools that ease these concerns, however. Chisholm Technologies' graphical DCE Cell Manager system runs on Solaris, while IBM's Directory and Security Server for OS/2 Warp is a graphical cell management product that runs on OS/2. These products can communicate with the cell management systems using RPCs, so they are interoperable with DCE directory and security servers from other vendors.

Still, the dearth of DCE products is disheartening. Without support for NetWare, complete end-to-end integration across a network is highly improbable, if not impossible, for the majority of today's network installations. Likewise, the absence of direct support for DCE in off-the-shelf networking applications penalizes the early adopters of this technology.

But when you consider what DCE can provide in a heterogeneous, multiplatform network, it merits investigation as a potential unifying technology. At worst, you'll be able to integrate many of your legacy applications and databases, eliminating some of the batch processing work that adds to the inflexible nature of offline computing. At best, you'll be able to implement a single, scalable, cross-platform user account and access control system that brings single-sign-on services to your entire user base.

Either way, you'll get more out of DCE than you will out of TCP/IP alone. We're not suggesting that you forestall or forgo deployment of IP services on your network; quite the contrary, we encourage you to deploy them. Just don't set your expectations too high.

TCP/IP is valuable in terms of providing a consistent set of platform-independent protocols and basic services, but it doesn't offer the kind of functionality you'd get from existing proprietary solutions. TCP/IP's great allure is that it lets you connect your systems together—you just won't be able to manage them or their users, services and applications.

That's where DCE steps in. Deploy it in conjunction with TCP/IP, and you get the best of both simultaneously: system connectivity, plus the ability to integrate security, services and applications. Not a bad deal at all.

-- 30 --
Copyright © 2010-2017 Eric A. Hall.
Portions copyright © 1996 CMP Media, Inc. Used with permission.
---------------------------------------------