How the NFS Service Works
The following sections describe some of the complex functions of the
NFS software.
Version 2 and Version 3 Negotiation
NFS servers might have to support clients that are not using the NFS version
3 software. So, part of the initiation procedure includes negotiation of the
protocol level. If both the client and the server can support version 3, that
version is used. If either the client or the server can only support version
2, that version is selected.
You can override the values that are determined by the negotiation by
using the -vers option with the mount command.
See the mount_nfs(1M)
man page. Under most circumstances, you should not have to specify the version
level, as the best level is selected by default.
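For example, to force a version 2 mount, the version can be passed as a
suboption to the mount command. The following is a sketch only; the server
name bee and the paths are placeholders:

# mount -o vers=2 bee:/export/share /mnt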
UDP and TCP Negotiation
During initiation, the transport protocol is also negotiated. By default,
the first connection-oriented transport that is supported on both the client
and the server is selected. If this selection does not succeed, the first
available connectionless transport protocol is used. The transport protocols
that are supported on a system are listed in /etc/netconfig.
TCP is the connection-oriented transport protocol that is supported by the
release. UDP is the connectionless transport protocol.
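The relevant entries in /etc/netconfig resemble the following. The fields
are the network ID, semantics, flags, protocol family, protocol name,
device, and name-to-address libraries; the exact contents vary by release.

tcp        tpi_cots_ord  v     inet     tcp     /dev/tcp        -
udp        tpi_clts      v     inet     udp     /dev/udp        -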
When both the NFS protocol version and the transport protocol are determined
by negotiation, the NFS protocol version is given precedence over the transport
protocol. The NFS version 3 protocol that uses UDP is given higher precedence
than the NFS version 2 protocol that uses TCP. You can manually select
both the NFS protocol version and the transport protocol with the mount command. See the mount_nfs(1M) man page. Under most conditions, allow
the negotiation to select the best options.
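For example, to select NFS version 3 over TCP explicitly, both values can
be given as mount suboptions. This is a sketch only; the server name and
paths are placeholders:

# mount -o vers=3,proto=tcp bee:/export/share /mnt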
File Transfer Size Negotiation
The file transfer size establishes the size of the buffers that are
used when transferring data between the client and the server. In general,
larger transfer sizes are better. The NFS version 3 protocol has an unlimited
transfer size. However, starting with the Solaris 2.6 release, the software
bids a default buffer size of 32 Kbytes. The client can bid a smaller transfer
size at mount time if needed, but under most conditions this bid is not necessary.
The transfer size is not negotiated with systems that use the NFS version
2 protocol. Under this condition, the maximum transfer size is set to 8 Kbytes.
You can use the -rsize and -wsize options
to set the transfer size manually with the mount command.
You might need to reduce the transfer size for some PC clients. Also, you
can increase the transfer size if the NFS server is configured to use larger
transfer sizes.
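For example, to bid 32-Kbyte read and write buffers explicitly, the sizes
are given in bytes as mount suboptions. This is a sketch only; the server
name and paths are placeholders:

# mount -o rsize=32768,wsize=32768 bee:/export/share /mnt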
How File Systems Are Mounted
When a client needs to mount a file system from a server, the client
must obtain a file handle from the server. The file handle must correspond
to the file system. This process requires that several transactions occur
between the client and the server. In this example, the client is attempting
to mount /home/terry from the server. A snoop trace for this transaction follows.
client -> server PORTMAP C GETPORT prog=100005 (MOUNT) vers=3 proto=UDP
server -> client PORTMAP R GETPORT port=33492
client -> server MOUNT3 C Null
server -> client MOUNT3 R Null
client -> server MOUNT3 C Mount /export/home9/terry
server -> client MOUNT3 R Mount OK FH=9000 Auth=unix
client -> server PORTMAP C GETPORT prog=100003 (NFS) vers=3 proto=TCP
server -> client PORTMAP R GETPORT port=2049
client -> server NFS C NULL3
server -> client NFS R NULL3
client -> server NFS C FSINFO3 FH=9000
server -> client NFS R FSINFO3 OK
client -> server NFS C GETATTR3 FH=9000
server -> client NFS R GETATTR3 OK
In this trace, the client first requests the mount port number from
the portmap service on the NFS server. After the client receives the mount
port number (33492), that number is used to ping the service
on the server. After the client has determined that a service is running on
that port number, the client then makes a mount request. When the server responds
to this request, the server includes the file handle for the file system (9000) being mounted. The client then sends a request for the NFS
port number. When the client receives the number from the server, the client
pings the NFS service (nfsd). Also, the client requests
NFS information about the file system by using the file handle.
In the following trace, the client is mounting the file system with
the public option.
client -> server NFS C LOOKUP3 FH=0000 /export/home9/terry
server -> client NFS R LOOKUP3 OK FH=9000
client -> server NFS C FSINFO3 FH=9000
server -> client NFS R FSINFO3 OK
client -> server NFS C GETATTR3 FH=9000
server -> client NFS R GETATTR3 OK
By using the default public file handle (which is 0000),
all of the transactions to obtain information from the portmap service and
to determine the NFS port number are skipped.
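A mount that skips the MOUNT protocol in this way can be requested with the
public option. This is a sketch only; the server name is a placeholder:

# mount -o public bee:/export/home9/terry /home/terry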
Effects of the -public Option and NFS URLs When Mounting
Using the -public option can create conditions that
cause a mount to fail. Adding an NFS URL can also confuse the situation. The
following list describes the specifics of how a file system is mounted when
you use these options.
- Public option with NFS URL – Forces the use of the public file handle. The mount fails if the public file handle is not supported.
- Public option with regular path – Forces the use of the public file handle. The mount fails if the public file handle is not supported.
- NFS URL only – Use the public file handle if this file handle is enabled on the NFS server. If the mount fails when using the public file handle, then try the mount with the MOUNT protocol.
- Regular path only – Do not use the public file handle. The MOUNT protocol is used.
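For example, the following commands illustrate the first and last cases in
the list. These are sketches only; the server name and paths are placeholders:

# mount -o public nfs://bee/export/home9/terry /mnt
# mount bee:/export/home9/terry /mnt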
Client-Side Failover
By using client-side failover, an NFS client can switch
to another server if the server that supports a replicated file system becomes
unavailable. The file system can become unavailable under one of the following
circumstances.
- If the server that the file system is connected to crashes
- If the server is overloaded
- If a network fault occurs
The failover, under these conditions, is normally transparent to the
user. Thus, the failover can occur at any time without disrupting the processes
that are running on the client.
Failover requires that the file system be mounted read-only. The file
systems must be identical for the failover to occur successfully. See What Is a Replicated File System? for a description of what makes a file system identical.
A static file system or a file system that is not changed often is the best
candidate for failover.
You cannot use file systems that are mounted by using CacheFS with failover.
Extra information is stored for each CacheFS file system. This information
cannot be updated during failover, so only one of these two features can be
used when mounting a file system.
The number of replicas that need to be established for every file system
depends on many factors. Ideally, you should have a minimum of two servers.
Each server should support multiple subnets. This setup is better than having
a unique server on each subnet. The process requires that each listed server
be checked. Therefore, if more servers are listed, each mount is slower.
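For example, a replicated, read-only mount lists all of the servers that
hold the replica. This is a sketch only; the server names and path are
placeholders:

# mount -r -o ro bee,wasp,stinger:/export/share/local /usr/local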
Failover Terminology
To fully comprehend the process, you need to understand
two terms.
- failover – The process of selecting a server from a list of servers that support a replicated file system. Normally, the next server in the sorted list is used, unless it fails to respond.
- remap – To make use of a new server. Through normal use, the clients store the path name for each active file on the remote file system. During the remap, these path names are evaluated to locate the files on the new server.
What Is a Replicated File System?
For the purposes of failover, a file system
can be called a replica when each file is the same size
and has the same vnode type as the original file system. Permissions, creation
dates, and other file attributes are not considered. If the file size or vnode
types are different, the remap fails and the process hangs until the old server
becomes available.
You can maintain a replicated file system by using rdist, cpio, or another file transfer mechanism. Because updating the replicated
file systems causes inconsistency, follow these suggestions for best results:
- Rename the old version of the file before installing a new version of the file.
- Run the updates at night when client usage is low.
- Keep the updates small.
- Minimize the number of copies.
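For instance, a minimal rdist control file (Distfile) that pushes one
exported file system to two replica servers might resemble the following
sketch; the host names and path are placeholders:

HOSTS = ( wasp stinger )
FILES = ( /export/share/local )
${FILES} -> ${HOSTS}
        install ;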
Failover and NFS Locking
Some software packages
require read locks on files. To prevent these packages from breaking, read
locks on read-only file systems are allowed but are visible to the client
side only. The locks persist through a remap because the server does not “know”
about the locks. Because the files should not change, you do not need to lock
the file on the server side.
Large Files
Starting with the Solaris 2.6 release, files that are over 2 Gbytes are
supported. By default, UFS file systems are mounted with the -largefiles option to support the new capability. Previous releases cannot handle
files of this size. See How to Disable Large Files on an NFS Server for instructions.
If the server’s file system is mounted with the -largefiles
option, a Solaris 2.6 NFS client can access large files without the need for
changes. However, not all 2.6 commands can handle these large files. See largefile(5)
for a list of the commands that can handle the large files. Clients that cannot
support the NFS version 3 protocol with the large file extensions cannot access
any large files. Although clients that run the Solaris 2.5 release can use
the NFS version 3 protocol, large file support was not included in that release.
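To disable large file support on a server's UFS file system, the file
system can be remounted with the nolargefiles option after any existing
large files have been removed. This is a sketch only; the device and mount
point are placeholders:

# umount /export/home9
# mount -F ufs -o nolargefiles /dev/dsk/c0t3d0s7 /export/home9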
How NFS Server Logging Works
NFS server logging provides records of NFS reads and writes, as well
as operations that modify the file system. This data can be used to track
access to information. In addition, the records can provide a quantitative
way to measure interest in the information.
When a file system with logging enabled is accessed, the kernel writes
raw data into a buffer file. This data includes the following:
- A timestamp
- The client IP address
- The UID of the requester
- The file handle of the file or directory object that is being accessed
- The type of operation that occurred
The nfslogd daemon converts this raw data into ASCII
records that are stored in log files. During the conversion, the IP addresses
are mapped to host names and the UIDs are mapped to logins if the name
service that is enabled can find matches. The file handles are also converted
into path names. To accomplish the conversion, the daemon tracks the file
handles and stores information in a separate file handle-to-path table. That
way, the path does not have to be re-identified each time a file handle is
accessed. Because no changes to the mappings are made in the file handle-to-path
table if nfslogd is turned off, you must keep the daemon
running.
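Logging is enabled per file system with the log option to the share
command. The optional tag (global is the default) selects a configuration
entry in /etc/nfs/nfslog.conf. This is a sketch only; the path is a
placeholder:

# share -F nfs -o ro,log=global /export/ftp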
How the WebNFS Service Works
The WebNFS service makes files in a directory available
to clients by using a public file handle. A file handle is an address,
generated by the kernel, that identifies a file for NFS clients. The public file handle has a predefined value, so the server does not
need to generate a file handle for the client. The ability to use this predefined
file handle reduces network traffic by eliminating the MOUNT
protocol. This ability should also accelerate processes for the clients.
By default, the public file handle on an NFS server is established on
the root file system. This default provides WebNFS access to any clients that
already have mount privileges on the server. You can change the public file
handle to point to any file system by using the share command.
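For example, to move the public file handle to a specific shared file
system, the public option is given to the share command. This is a sketch
only; the path is a placeholder:

# share -F nfs -o ro,public /export/ftp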
When the client has the file handle for the file system, a LOOKUP is run to determine the file handle for the file to be accessed.
The NFS protocol allows the evaluation of only one path name component at
a time. Each additional level of directory hierarchy requires another LOOKUP. A WebNFS server can evaluate an entire path name with a
single multicomponent lookup transaction when the LOOKUP
is relative to the public file handle. Multicomponent lookup enables the WebNFS
server to deliver the file handle to the desired file without exchanging the
file handles for each directory level in the path name.
In addition, an NFS client can initiate concurrent downloads over a
single TCP connection. This connection provides quick access without the additional
load on the server that is caused by setting up multiple connections. Although
web browser applications support concurrent downloading of multiple files,
each file has its own connection. By using one connection, the WebNFS software
reduces the overhead on the server.
If the final component in the path name is a symbolic link to another
file system, the client can access the file if the client already has access
through normal NFS activities.
Normally, an NFS URL is evaluated relative to the public file handle.
The evaluation can be changed to be relative to the server’s root file system
by adding an additional slash to the beginning of the path. In this example,
these two NFS URLs are equivalent if the public file handle has been established
on the /export/ftp file system.
nfs://server/junk
nfs://server//export/ftp/junk
How WebNFS Security Negotiation Works
The Solaris 8 release includes a new protocol so a WebNFS client can
negotiate a selected security mechanism with a WebNFS server. The new protocol
uses security negotiation multicomponent lookup, which is an extension to
the multicomponent lookup that was used in earlier versions of the WebNFS
protocol.
The WebNFS client initiates the process by making a regular multicomponent
lookup request by using the public file handle. Because the client has no
knowledge of how the path is protected by the server, the default security
mechanism is used. If the default security mechanism is not sufficient, the
server replies with an AUTH_TOOWEAK error. This reply indicates
that the default mechanism is too weak, and that the client needs to use a
stronger mechanism.
When the client receives the AUTH_TOOWEAK error,
the client sends a request to the server to determine which security mechanisms
are required. If the request succeeds, the server responds with an array of
security mechanisms that are required for the specified path. Depending on
the size of the array of security mechanisms, the client might have to make
more requests to obtain the complete array. If the server does not support
WebNFS security negotiation, the request fails.
After a successful request, the WebNFS client selects the first security
mechanism from the array that the client supports. The client then issues
a regular multicomponent lookup request by using the selected security mechanism
to acquire the file handle. All subsequent NFS requests are made by using
the selected security mechanism and the file handle.
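A client can also bypass the negotiation by naming a security mechanism
directly as a mount suboption. This is a sketch only; the server name and
path are placeholders:

# mount -o sec=dh nfs://bee/export/ftp /mnt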
WebNFS Limitations With Web Browser Use
Several functions that a web site using HTTP can provide are not
supported by the WebNFS software. These differences stem from the fact that
the NFS server only sends the file, so any special processing must be done
on the client. If you need to have one web site configured for both WebNFS
and HTTP access, consider the following issues:
- NFS browsing does not run CGI scripts. So, a file system with an active web site that uses many CGI scripts might not be appropriate for NFS browsing.
- The browser might start different viewers in order to handle files in different file formats. Accessing these files through an NFS URL starts an external viewer if the file type can be determined by the file name. The browser should recognize any file name extension for a standard MIME type when an NFS URL is used. The WebNFS software does not check inside the file to determine the file type, so the only way to determine a file type is by the file name extension.
- NFS browsing cannot utilize server-side image maps (clickable images). However, NFS browsing can utilize client-side image maps because the URLs are defined with the location. No additional response is required from the document server.
Secure NFS System
The NFS environment is a powerful and convenient way to share file
systems on a network of different computer architectures and operating systems.
However, the same features that make sharing file systems through NFS
convenient also pose some security problems. Historically, most NFS implementations
have used UNIX (or AUTH_SYS) authentication, but stronger authentication methods
such as AUTH_DH have also been available. When using UNIX authentication,
an NFS server authenticates a file request by authenticating the computer
that makes the request, but not the user. Therefore, a client user can run su and impersonate the owner of a file. If DH authentication is
used, the NFS server authenticates the user, making this sort of impersonation
much harder.
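On the server side, DH authentication is required for a shared file system
with the sec=dh option. This is a sketch only; the path is a placeholder:

# share -F nfs -o sec=dh,rw /export/home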
With root access and knowledge of network programming, anyone can introduce
arbitrary data into the network and extract any data from the network. The
most dangerous attacks are those attacks that involve the introduction of
data. An example is the impersonation of a user by generating the right packets
or by recording “conversations” and replaying them later. These
attacks affect data integrity. Attacks that involve passive eavesdropping—merely
listening to network traffic without impersonating anybody—are not as
dangerous, as data integrity is not compromised. Users can protect the privacy
of sensitive information by encrypting data that is sent over the network.
A common approach to network security problems is to leave the solution
to each application. A better approach is to implement a standard authentication
system at a level that covers all applications.
The Solaris operating environment includes an authentication system
at the level of remote procedure call (RPC)—the mechanism on which NFS
operation is built. This system, known as Secure RPC, greatly improves the
security of network environments and provides additional security to services
such as the NFS system. When the NFS system uses the facilities that are provided
by Secure RPC, it is known as a Secure NFS system.
Secure RPC
Secure RPC is fundamental to the Secure NFS system. The goal of Secure
RPC is to build a system that is at least as secure as a time-sharing system.
In a time-sharing system all users share a single computer. A time-sharing
system authenticates a user through a login password. With Data Encryption
Standard (DES) authentication, the same authentication process is completed.
Users can log in on any remote computer just as users can log in on a local
terminal. The users’ login passwords are their passports to network security.
In a time-sharing environment, the system administrator has an ethical obligation
not to change a password to impersonate someone. In Secure RPC, the network
administrator is trusted not to alter entries in a database that stores public keys.
You need to be familiar with two terms to understand an RPC authentication
system: credentials and verifiers. Using ID badges as an example, the credential
is what identifies a person: a name, address, birthday, and so on. The verifier
is the photo that is attached to the badge. You can be sure the badge has
not been stolen by checking the photo on the badge against the person who
is carrying the badge. In RPC, the client process sends both a credential
and a verifier to the server with each RPC request. The server sends back
only a verifier because the client already “knows” the server’s
credentials.
RPC’s authentication is open-ended, which means that a variety of authentication
systems can be plugged into it, such as UNIX, DH, and KERB.
When UNIX authentication is used by a network service, the credentials
contain the client’s host name, UID, GID, and group-access list. However,
the verifier contains nothing. Because no verifier exists, a superuser could
falsify appropriate credentials by using commands such as su.
Another problem with UNIX authentication is that UNIX authentication assumes
all computers on a network are UNIX computers. UNIX authentication breaks
down when applied to other operating systems in a heterogeneous network.
To overcome the problems of UNIX authentication, Secure RPC uses DH
authentication.
DH Authentication
DH authentication uses the Data Encryption Standard (DES) and Diffie-Hellman
public-key cryptography to authenticate both users and computers in the network.
DES is a standard encryption mechanism. Diffie-Hellman public-key cryptography
is a cipher system that involves two keys: one public and one secret. The
public keys and secret keys are stored in the namespace. NIS stores the keys
in the public-key map. These maps contain the public key and secret key for
all potential users. See the System Administration Guide: Naming and Directory Services (DNS, NIS, and
LDAP) for more information on how to set up the maps.
The security of DH authentication is based on a sender’s ability to
encrypt the current time, which the receiver can then decrypt and check against
its own clock. The timestamp is encrypted with DES. The requirements for this
scheme to work are as follows:
- The two agents must agree on the current time.
- The sender and receiver must be using the same encryption key.
If a network runs a time-synchronization program, the time on the client
and the server is synchronized automatically. If a time-synchronization program
is not available, timestamps can be computed by using the server’s time instead
of the network time. The client asks the server for the time before starting
the RPC session, then computes the time difference between its own clock and
the server’s. This difference is used to offset the client’s clock when computing
timestamps. If the client and server clocks get out of synchronization to
the point where the server begins to reject the client’s requests, the DH
authentication system on the client resynchronizes with the server.
The client and server arrive at the same encryption key by generating
a random conversation key, also known as the session key, and by using public-key cryptography to deduce a common key. The common key is a key that only the client and server
are capable of deducing. The conversation key is used to encrypt and decrypt
the client’s timestamp. The common key is used to encrypt and decrypt the
conversation key.
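The key derivation can be summarized as follows, assuming standard
Diffie-Hellman notation with a public base g and modulus p (the actual
parameters used by Secure RPC are fixed by the implementation). Each side
combines its own secret key with the other side's public key, and both
arrive at the same value because exponentiation commutes:

common key = (g^secretA mod p)^secretB mod p
           = (g^secretB mod p)^secretA mod p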
KERB Authentication
Kerberos is an authentication system that was developed at MIT. Encryption
in Kerberos is based on DES. Kerberos support is no longer supplied as part
of Secure RPC, but a server-side and client-side implementation is included
with the Solaris 9 release. See “Introduction to SEAM” in System Administration Guide: Security Services for more
information about the Solaris 9 implementation of Kerberos Authentication.
Using Secure RPC With NFS
Be aware of the following points if you plan to use Secure RPC:
- If a server crashes when no one is around (after a power failure, for example), all the secret keys that are stored on the system are deleted. Now no process can access secure network services or mount an NFS file system. The important processes during a reboot are usually run as root. Therefore, these processes would work if root’s secret key were stored away, but nobody is available to type the password that decrypts it. keylogin -r allows root to store the clear secret key in /etc/.rootkey, which keyserv reads. See the example after this list.
- Some systems boot in single-user mode, with a root login shell on the console and no password prompt. Physical security is imperative in such cases.
- Diskless computer booting is not totally secure. Somebody could impersonate the boot server and boot a devious kernel that, for example, makes a record of your secret key on a remote computer. The Secure NFS system provides protection only after the kernel and the key server are running. Otherwise, no way exists to authenticate the replies that are given by the boot server. This limitation could be a serious problem, but the limitation requires a sophisticated attack, using kernel source code. Also, the crime would leave evidence. If you polled the network for boot servers, you would discover the devious boot server’s location.
- Most setuid programs are owned by root. If the secret key for root is stored in /etc/.rootkey, these programs behave as they always have. If a setuid program is owned by a user, however, the setuid program might not always work. For example, suppose that a setuid program is owned by dave and dave has not logged into the computer since it booted. The program would not be able to access secure network services.
- If you log in to a remote computer (using login, rlogin, or telnet) and use keylogin to gain access, you give access to your account. The reason is that your secret key is passed to that computer’s key server, which then stores your secret key. This process is only a concern if you do not trust the remote computer. If you have doubts, however, do not log in to a remote computer if the remote computer requires a password. Instead, use the NFS environment to mount file systems that are shared by the remote computer. As an alternative, you can use keylogout to delete the secret key from the key server.
- If a home directory is shared with the -o sec=dh option, remote logins can be a problem. If the /etc/hosts.equiv or ~/.rhosts files are not set to prompt for a password, the login succeeds. However, the users cannot access their home directories because no authentication has occurred locally. If the user is prompted for a password, the user has access to his or her home directory if the password matches the network password.
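As noted in the first item of this list, root can store its decrypted
secret key in /etc/.rootkey so that secure services survive an unattended
reboot. Run the following as root; the command prompts for the Secure RPC
password:

# keylogin -r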