Thursday, February 12, 2009
active directory
An active directory is a directory structure used on Microsoft Windows based computers and servers to store information and data about networks and domains. It is primarily used to store and look up information about network resources; it was originally created in 1996 and first used with Windows 2000.
An active directory (sometimes referred to as an AD) performs a variety of functions: it provides information on objects, helps organize these objects for easy retrieval and access, allows access by end users and administrators, and allows the administrator to set up security for the directory.
An active directory can be described as a hierarchical structure, and this structure is usually broken up into three main categories: resources, which might include hardware such as printers; services for end users, such as web email servers; and objects, which are the main functions of the domain and network.
It is interesting to note the framework for the objects. Remember that an object can be a piece of hardware such as a printer, an end user, or security settings set by the administrator. These objects can hold other objects within their file structure. All objects have an ID, usually an object name (folder name). In addition to being able to hold other objects, every object has its own attributes, which allow it to be characterized by the information it contains. Most IT professionals call these settings or characterizations schemas.
The type of schema created for a folder ultimately determines how these objects are used. For instance, some objects with certain schemas cannot be deleted; they can only be deactivated. Other types of objects with certain attributes can be deleted entirely. For instance, a user object can be deleted, but the administrator object cannot.
When learning about active directories, it is important to know the framework within which objects can be viewed. An active directory can be viewed at any one of three levels; these levels are called forests, trees, and domains. The highest structure is called the forest because from it you can see all objects included within the active directory.
Within the forest structure are trees; these structures usually hold one or more domains. Going further down the structure of an active directory are single domains. To put forests, trees, and domains into perspective, consider the following example.
A large organization has many dozens of users and processes. The forest might be the entire network of end users and specific computers at a set location. Within this forest are trees that hold information on specific objects such as domain controllers, program data, system, and so on. Within these objects are even more objects, which can then be controlled and categorized.
DIRECTORY SERVICE:
Active Directory is a full-featured directory service. But what is a directory service? A directory service is actually a combination of two things – a directory, and services that make the directory useful. Put simply, a directory is a store of information, similar to other directories, such as a telephone book. A directory can store a variety of useful information relating to users, groups, computers, printers, shared folders, and so forth – we call these objects. A directory also stores information about objects, or properties of objects – we call these attributes. For example, attributes stored in a directory for a particular user object would be the user’s manager, phone numbers, address information, logon name, password, the groups they are a part of, and more.
To make a directory useful, we have services interact with the directory. For example, we can use the directory as a store of information against which users are authenticated, or as the place we query to find information about an object. For example, I could query a directory to show me all the color printers in the Frankfurt office, the phone number of Bob in the Delhi office, or a list of all the user accounts whose first name starts with the letter ‘G’. In Windows 2000, Active Directory is responsible for creating and organizing not only these smaller objects, but also larger objects – like domains, organizational units, and sites. In order to fully comprehend what Active Directory is all about, we need to take an initial look at a number of concepts. A deeper discussion on Active Directory will be covered once we get to the AD Implementation and Administration portion of the series.
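As a quick illustration of querying a directory, here is a hypothetical ldapsearch command (ldapsearch is a standard LDAP client tool; the server name, bind account, and base DN are invented for this example and are not from the text). It asks the directory for all user accounts whose first name starts with ‘G’ and returns their names and phone numbers:
# ldapsearch -x -H ldap://dc1.example.com -D "admin@example.com" -W -b "dc=example,dc=com" "(&(objectClass=user)(givenName=G*))" cn telephoneNumber
Here -b sets the search base, the filter selects user objects whose givenName begins with G, and the trailing arguments limit which attributes are returned.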
HIERARCHY OF AD (OBJECT VIEW)
The structure of the Active Directory is a hierarchy, and before installing and implementing the Active Directory, you must have a firm understanding of the structure as well as the components that make up the Active Directory. You will use this hierarchy design to build the Active Directory infrastructure for your organization, so it is important that you have a firm grasp of their meaning and place in the hierarchy before you begin planning. The following sections explore the components in the hierarchy structure.
Object
An Active Directory object represents a physical object of some kind on the network. Common Active Directory objects are users, groups, printers, shared folders, applications, databases, contacts, and so forth. Each of these objects represents something "tangible." Each object is defined by a set of "attributes." An attribute is a quality that helps define the actual object. For example, a user object could have attributes of a username, actual name, and email address. Attributes for each kind of object are defined in the Active Directory. The attributes define the object itself and allow users to search for the particular object.
Organizational Unit
An organizational unit (OU) is like a file folder in a filing cabinet. The OU is designed to hold objects (or even other OUs). It contains attributes like an object, but has no functionality on its own. As with a file folder, its purpose is to hold other objects. As the name implies, an OU helps you "organize" your directory structure. For example, you could have an accounting OU that contains other OUs, such as Accounting Group A and Accounting Group B, and inside those OUs can reside objects that belong there, such as users, groups, computers, and printers. OUs also serve as security and administrative boundaries and can be used to replace domains in networks with multiple Windows NT domains.
Domain
By definition, a domain is a logical grouping of users and computers. A domain typically resides in a localized geographic location, but this is not always the case. In reality, a domain is more than a logical grouping — it is actually a security boundary in a Windows 2000 or NT network. You can think of a network with multiple domains as being like a residential neighborhood. All of the homes make up the neighborhood, but each home is a security boundary that holds certain objects inside and keeps others out. Each domain can have its own security policies and can establish trust relationships with other domains. The Active Directory is made up of one or more domains. Domains contain a schema, which is a set of object class definitions. The schema determines how objects are defined within the Active Directory. The schema itself resides within the Active Directory and can be dynamically changed.
Tree
The hierarchy structure of the domain, organizational units, and objects is called a tree. The objects within the tree are referred to as endpoints, while the OUs in the tree structure are nodes. In terms of a physical tree, you can think of the branches as OUs or containers and the leaves as objects — an object is the natural endpoint of the node within the tree.
Domain Trees
A domain tree exists when several domains are linked by trust relationships and share a common schema, configuration, and global catalog. Trust relationships in Windows 2000 are based on the Kerberos security protocol. Kerberos trusts are transitive. In other words, if domain 1 trusts domain 2 and domain 2 trusts domain 3, then domain 1 trusts domain 3. A domain tree also shares a contiguous namespace. A contiguous namespace follows the same DNS naming hierarchy within the domain tree. For example, if the root domain is smithfin.com and domain A and domain B exist in a domain tree, the contiguous namespace for the two would be domaina.smithfin.com and domainb.smithfin.com. If domain A resides in smithfindal.com and domain B resides in the smithfin.com root, then the two would not share a contiguous namespace.
Forest
A forest is one or more trees that do not share a contiguous namespace. The trees in the forest do share a common schema, configuration, and global catalog, but they do not share a contiguous namespace. All trees in the forest trust each other through Kerberos transitive trusts. In actuality, the forest does not have a distinct name; rather, the trees are viewed as a hierarchy of trust relationships, and the tree at the top of the hierarchy is normally used to refer to the forest. For example, corp.com, production.corp.com, and mgmt.corp.com form a forest with corp.com serving as the forest root.
Site
A site is not actually considered a part of the Active Directory hierarchy, but is configured in the Active Directory for replication purposes. A site is defined as a geographical location in a network containing Active Directory servers with a well-connected TCP/IP subnet. Well-connected means that the network connection to other subnets in the network is highly reliable and fast. Administrators use the Active Directory to configure replication between sites. Users do not have to be aware of site configuration. As far as the Active Directory is concerned, users only see domains.
TRUST
The server uses trusts to determine whether access is allowed or not.
Active Directory uses two types of trust:
· Transitive: Two objects are able to access each other's domains and trees; that is, a user in one domain or tree is allowed access to the other.
· Non-transitive (one-way): One object can access the trees and domains of the other, but the other domain does not allow access to the domains and trees of the first, e.g. admin --> user.
GOALS
The two primary goals are:
· User: Users should be able to access resources throughout the domain using a single login.
· Administrator: Administrators should be able to centrally manage both users and resources.
DESIGN GOALS OF THE ACTIVE DIRECTORY
The Active Directory's design goals are simple, yet very powerful, allowing Active Directory to provide the desired functionality in virtually any computing environment. The following list describes the major features and goals of the Active Directory technology.
Scalable — The Active Directory is highly scalable, which means it can function in small networking environments or global corporations. The Active Directory supports multiple stores, which are wide groupings of objects, and can hold more than one million objects per store.
Extensible — The Active Directory is "extensible," which means it can be customized to meet the needs of an organization.
Secure — The Active Directory is integrated with Windows 2000 security, allowing administrators to control access to objects.
Seamless — The Active Directory is seamlessly integrated with the local network and the intranet/Internet.
Open Standards — The Active Directory is based on open communication standards, which allow integration and communication with other directory services, such as Novell's NDS.
Backwards Compatible — Although Windows 2000 operating systems make the most use of the Active Directory, the Active Directory is backwards compatible for earlier versions of Windows operating systems. This feature allows implementation of the Active Directory to be taken one step at a time.
samba server in linux
Samba allows you to share files with Windows PCs on your network, as well as access Windows file and print servers, making your Linux box fit in better with Windows-centric organizations.
In Windows file and printer sharing, SMB is sometimes referred to as CIFS (Common Internet File System), which is an Internet standard network file system definition based on SMB, or NetBIOS, which was the original SMB communication protocol.
Samba is a software package that enables you to share file systems and printers on a network with computers that use the Server Message Block (SMB) protocol. This package is distributed with most Linux flavors but can be obtained from www.samba.org if you do not find it on your distribution. SMB is the protocol that is delivered with Windows operating systems for sharing files and printers. Although you can’t always count on NFS being installed on Windows clients (unless you install it yourself), SMB is always available (with a bit of setup).
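As a quick illustration, the smbclient tool that comes with Samba can browse and use Windows shares from the Linux side. The host name, share name, and user below are invented for the example; the first command lists the shares a server offers, and the second opens an FTP-like session to one of them:
# smbclient -L //winserver -U myuser
# smbclient //winserver/docs -U myuser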
The Samba software package contains a variety of daemon processes, administrative tools, user tools, and configuration files. To do basic Samba configuration, start with the Samba Server Configuration window, which provides a graphical interface for configuring the server and setting directories to share.
Most of the Samba configuration you do ends up in the /etc/samba/smb.conf file. If you need to access features that are not available through the Samba Server Configuration window, you can edit /etc/samba/smb.conf by hand or use SWAT, a Web-based interface, to configure Samba. Daemon processes consist of smbd (the SMB daemon) and nmbd (the NetBIOS name server). The smbd daemon makes the file-sharing and printing services you add to your Linux system available to Windows client computers.
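To give a feel for what ends up in /etc/samba/smb.conf, here is a minimal sketch of a configuration with one share. The workgroup name, share name, and path are only example values, not anything Samba requires:
[global]
   workgroup = MYGROUP
   server string = Samba Server
   security = user
[docs]
   comment = Shared documents
   path = /srv/docs
   read only = no
   browseable = yes
After editing the file, the testparm command described later in this section can be used to check it for syntax errors.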
SAMBA PACKAGE SUPPORT
The Samba package supports the following client computers:
Windows 9x
Windows NT
Windows ME
Windows 2000
Windows XP
Windows for Workgroups
MS Client 3.0 for DOS
OS/2
Dave for Macintosh computers
Mac OS X
Samba for Linux
Mac OS X Server ships with Samba, so you can use a Macintosh system as a server. You can then have Macintosh, Windows, or Linux client computers. In addition, Mac OS X ships with both client and server software for Samba.
As for administrative tools for Samba, you have several shell commands at your disposal: testparm and testprns, with which you can check your configuration files; smbstatus, which tells you what computers are currently connected to your shared resources; and the nmblookup command, with which you can query computers.
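For example, these tools might be used roughly as follows (the IP address is a placeholder): testparm checks /etc/samba/smb.conf for syntax errors, smbstatus lists current connections and locked files, and nmblookup -A asks a host for the NetBIOS names it has registered:
# testparm
# smbstatus
# nmblookup -A 10.0.0.5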
Samba uses the NetBIOS service to share resources with SMB clients, but the underlying network must be configured for TCP/IP. Although other SMB hosts can use TCP/IP, NetBEUI, and IPX/SPX to transport data, Samba for Linux supports only TCP/IP. Messages are carried between host computers with TCP/IP and are then handled by NetBIOS.
Getting and Installing Samba
You can get Samba software in different ways, depending on your Linux distribution. Here are a few examples:
Debian
To use Samba in Debian, you must install the samba and smbclient packages using apt-get. Then start the Samba service by running the appropriate scripts from the /etc/init.d directory, as follows:
# apt-get install samba samba-common smbclient swat
# /etc/init.d/samba start
# /etc/init.d/smb-client start
Gentoo
With Gentoo, you need to have configured net-fs support into the kernel to use Samba server features. Installing the samba package (emerge samba) should get the required packages. To start the service, run rc-update and start the service immediately:
# emerge samba
# rc-update add samba default
# /etc/init.d/samba start
Fedora Core and other Red Hat Linux systems
You need to install the samba, samba-client, samba-common, and optionally, the system-config-samba and samba-swat packages to use Samba in Fedora. You can then start Samba using the service and chkconfig commands as follows:
# service smb start
# chkconfig smb on
6) SWAT
The commands and configuration files are the same on most Linux systems using Samba. The Samba project itself comes with a Web-based interface for administering Samba called Samba Web Administration Tool (SWAT). For someone setting up Samba for the first time, SWAT is a good way to get it up and running.
Configuring Samba with SWAT
In addition to offering an extensive interface to Samba options, SWAT also comes with an excellent help facility. And if you need to administer Samba from another computer, SWAT can be configured to be remotely accessible and secured by requiring an administrative login and password.
Before you can use SWAT, you must do some configuration. The first thing you must do is turn on the SWAT service, which is done differently in different Linux distributions.
Here’s how to set up SWAT in Fedora Core and other Red Hat Linux systems:
1. Turn on the SWAT service by typing the following, as root user, from a Terminal window:
# chkconfig swat on
2. Pick up the change to the service by restarting the xinetd startup script as follows:
# service xinetd restart
Linux distributions such as Debian, Slackware, and Gentoo turn on the SWAT service from the inetd superserver daemon. After SWAT is installed, you simply remove the comment character from in front of the swat line in the /etc/inetd.conf file (as root user, using any text editor) and restart the daemon. Here’s an example of what the swat line looks like in Debian:
swat stream tcp nowait.400 root /usr/sbin/tcpd /usr/sbin/swat
With the SWAT service ready to be activated, restart the inetd daemon so it rereads the inetd.conf file. To do that in Debian, type the following as root user:
# /etc/init.d/inetd restart
The init.d script and xinetd services are the two ways that SWAT services are generally started in Linux. So if you are using a Linux distribution other than Fedora or Debian, look in the /etc/inetd.conf file or /etc/xinetd.d directory (which is used automatically in Fedora) for the location of your SWAT service.
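For reference, the /etc/xinetd.d/swat file used on Fedora-style systems is an ordinary xinetd service entry. A rough sketch is shown below; exact paths and defaults can differ between distributions, the disable = no line is what actually turns SWAT on, and the only_from line restricts access to the local machine (it can be relaxed if you want remote administration):
service swat
{
        port            = 901
        socket_type     = stream
        wait            = no
        user            = root
        server          = /usr/sbin/swat
        only_from       = 127.0.0.1
        disable         = no
}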
When you have finished this procedure, a daemon process will be listening on your network interfaces for requests to connect to your SWAT service. You can now use the SWAT program to configure Samba.
Starting with SWAT
You can run the SWAT program by typing the following URL in your local browser: http://localhost:901/. Enter the root username and password when the browser prompts you, and the SWAT window appears.
mounting in linux
Automatically Mounting:
After a server exports a directory over the network using NFS, a client computer connects that directory to its own file system using the mount command. That’s the same command used to mount file systems from local hard disks, CDs, and floppies, but with slightly different options.
mount can automatically mount NFS directories added to the /etc/fstab file, just as it does with local disks. NFS directories can also be added to the /etc/fstab file in such a way that they are not automatically mounted (so you can mount them manually when you choose). With a noauto option, an NFS directory listed in /etc/fstab is inactive until the mount command is used, after the system is up and running, to mount the file system.
To set up an NFS file system to mount automatically each time you start your Linux system, you need to add an entry for that NFS file system to the /etc/fstab file. That file contains information about all different kinds of mounted (and available to be mounted) file systems for your system. Here’s the format for adding an NFS file system to your local system:
host:directory mountpoint nfs options 0 0
The first item (host:directory) identifies the NFS server computer and shared directory. mountpoint is the local mount point on which the NFS directory is mounted. It’s followed by the file system type (nfs). Any options related to the mount appear next in a comma-separated list. (The last two zeros configure the system to not dump the contents of the file system and not to run fsck on the file system.) The following are examples of NFS entries in /etc/fstab:
maple:/tmp /mnt/maple nfs rsize=8192,wsize=8192 0 0
oak:/apps /oak/apps nfs noauto,ro 0 0
In the first example, the remote directory /tmp from the computer named maple (maple:/tmp) is mounted on the local directory /mnt/maple (the local directory must already exist). The file system type is nfs, and the read (rsize) and write (wsize) buffer sizes are set at 8192 to speed data transfer associated with this connection. In the second example, the remote directory is /apps on the computer named oak. It is set up as an NFS file system (nfs) that can be mounted on the /oak/apps directory locally. This file system is not mounted automatically (noauto), however, and can be mounted only as read-only (ro) using the mount command after the system is already running.
Manually Mounting an NFS File System:
If you know that the directory from a computer on your network has been exported (that is, made available for mounting), you can mount that directory manually using the mount command. This is a good way to make sure that it is available and working before you set it up to mount permanently. Here is an example of mounting the /tmp directory from a computer named maple on your local computer:
# mkdir /mnt/maple
# mount maple:/tmp /mnt/maple
The first command (mkdir) creates the mount point directory (/mnt is a common place to put temporarily mounted disks and NFS file systems). The mount command identifies the remote computer and shared file system separated by a colon (maple:/tmp), and the local mount point directory (/mnt/maple) follows.
ENSURE MOUNTING
To ensure that the mount occurred, type mount. This command lists all mounted disks and NFS file systems. Here is an example of the mount command and its output (with file systems not pertinent to this discussion edited out):
# mount
/dev/hda3 on / type ext3 (rw)
..
..
..
maple:/tmp on /mnt/maple type nfs (rw,addr=10.0.0.11)
The output from the mount command shows the mounted disk partitions, special file systems, and NFS file systems. The first output line shows the hard disk (/dev/hda3), mounted on the root file system (/), with read/write permission (rw), with a file system type of ext3 (the standard Linux file system type). The just-mounted NFS file system is the /tmp directory from maple (maple:/tmp). It is mounted on /mnt/maple and its mount type is nfs. The file system was mounted read/write (rw), and the IP address of maple is 10.0.0.11 (addr=10.0.0.11).
This is a simple example of using mount with NFS. The mount is temporary and is not remounted when you reboot your computer. You can also add options for NFS mounts (an example follows the list below):
OPTIONS
-a: Mount all file systems in /etc/fstab (except those indicated as noauto).
-f: This goes through the motions of (fakes) mounting the file systems on the command line (or in /etc/fstab). Used with the -v option, -f is useful for seeing what mount would do before it actually does it.
-r: Mounts the file system as read-only.
-w: Mounts the file system as read/write. (For this to work, the shared file system must have been exported with read/write permission.)
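For example, assuming the maple:/tmp entry shown earlier is in /etc/fstab, the options could be used like this. The first command mounts every NFS entry in /etc/fstab that is not marked noauto (the -t nfs part, which restricts mount to NFS entries, is an extra option not covered in the list above), the second fakes the mount and shows what it would do, and the third mounts the directory read-only:
# mount -a -t nfs
# mount -f -v maple:/tmp /mnt/maple
# mount -r maple:/tmp /mnt/maple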
Sharing NFS File Systems
To share an NFS file system from your Linux system, you need to export it from the server system. Exporting is done in Linux by adding entries into the /etc/exports file. Each entry identifies a directory in your local file system that you want to share with other computers. The entry also identifies the other computers that can share the resource (or opens it to all computers) and includes other options that reflect permissions associated with the directory.
Remember that when you share a directory, you are sharing all files and subdirectories below that directory as well (by default). So, you need to be sure that you want to share everything in that directory structure.
Configuring the /etc/exports File
To make a directory from your Linux system available to other systems, you need to export that directory. Exporting is done on a permanent basis by adding information about an exported directory to the /etc/exports file. The format of the /etc/exports file is
Directory Host(Options) # Comments
where Directory is the name of the directory that you want to share, and Host indicates the host computer to which the sharing of this directory is restricted. Options can include a variety of options to define the security measures attached to the shared directory for the host. (You can repeat Host/Option pairs.) Comments are any optional comments you want to add (following the # sign).
As root user, you can use any text editor to configure /etc/exports to modify shared directory entries or add new ones. Here’s an example of an /etc/exports file:
/cal *.linuxtoys.net(rw) # Company events
/pub (ro,insecure,all_squash) # Public dir
/home maple(rw,squash_uids=0-99) spruce(rw,squash_uids=0-99)
The /cal entry represents a directory that contains information about events related to the company. It is made accessible to everyone with accounts to any computers in the company’s domain (*.linuxtoys.net). Users can write files to the directory as well as read them (indicated by the rw option). The comment (# Company events) simply serves to remind you of what the directory contains.
The /pub entry represents a public directory. It allows any computer and user to read files from the directory (indicated by the ro option) but not to write files. The insecure option enables any computer, even one that doesn’t use a secure NFS port, to access the directory. The all_squash option causes all users (UIDs) and groups (GIDs) to be mapped to the nfsnobody user, giving them minimal permission to files and directories.
The /home entry enables a set of users to have the same /home directory on different computers. Say, for example, that you are sharing /home from a computer named oak. The computers named maple and spruce could each mount that directory on their own /home directories. If you gave all users the same username/UIDs on all machines, you could have the same /home/user directory available for each user, regardless of which computer they are logged into. The squash_uids=0-99 option is used to exclude any administrative login from another computer from changing any files in the shared directory.
These are just examples; you can share any directories that you choose, including the entire file system (/). Of course, there are security implications of sharing the whole file system or sensitive parts of it (such as /etc). Security options that you can add to your /etc/exports file are described throughout the sections that follow.
Hostnames in /etc/exports
You can indicate in the /etc/exports file which host computers can have access to your shared directory in the following ways (a combined example follows at the end of this list):
Individual host
You can enter one or more TCP/IP hostnames or IP Addresses. If the host is in your local domain, you can simply indicate the hostname. Otherwise, you can use the full host.domain format. These are valid ways of indicating individual host computers:
maple
maple.handsonhistory.com
10.0.0.11
IP network
To allow access to all hosts from a particular network address, indicate a network
number and its netmask, separated by a slash (/). These are valid ways of indicating network numbers:
10.0.0.0/255.0.0.0
172.16.0.0/255.255.0.0
192.168.18.0/255.255.255.0
TCP/IP domain
You can include all or some host computers from a particular domain level. Here are some valid uses of the asterisk and question mark wild cards:
*.handsonhistory.com
*craft.handsonhistory.com
???.handsonhistory.com
The first example matches all hosts in the handsonhistory.com domain. The second example matches
woodcraft, basketcraft, or any other hostnames ending in craft in the handsonhistory.com domain. The final example matches any three-letter hostnames in the domain.
Note Using an asterisk doesn’t match subdomains. For example, *.handsonhistory.com would not cause the
hostname mallard.duck.handsonhistory.com to be included in the access list.
NIS groups
You can allow access to hosts contained in an NIS group. To indicate an NIS group, precede the group name with an at (@) sign (for example, @group).
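Putting these host forms together, a hypothetical set of /etc/exports entries (the directory names, hosts, and NIS group are invented for illustration) might look like the following:
/projects maple(rw) 10.0.0.11(rw)
/docs 172.16.0.0/255.255.0.0(ro)
/shared *.handsonhistory.com(ro)
/nisdata @engineers(rw)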
Link and access options in /etc/exports
You don’t have to just give away your files and directories when you export a directory with NFS. In the options part of each entry in /etc/exports, you can add options that allow or limit access based on user ID, subdirectory, and read/write permission. These options, which are passed to NFS, are as follows (a sample entry appears after the list):
ro
Only allow the client to mount this exported file system read-only. The default is to mount the file system read/write.
rw
Explicitly ask that a shared directory be shared with read/write permissions. (If the client chooses, it can still mount the directory read-only.)
noaccess
All files and directories below the given directory are not accessible. This is how you would exclude selected subdirectories of a shared directory from being shared. The directory will still be visible to a client that mounts the file system containing it, but the client will not be able to view its contents.
link_relative
If absolute symbolic links are included in the shared file system (that is, ones that identify a full path), the full path is converted to a relative path. To do this, each part of the path is converted to two dots and a slash (../) to reach the root of the file system.
link_absolute
Don’t change any of the symbolic links (default).
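Assuming an NFS server that supports these options (noaccess and link_relative come from the older user-space Linux NFS server and may not be available in newer nfs-utils releases), a pair of hypothetical entries using them might look like this, sharing a directory while hiding one of its subdirectories:
/var/docs maple(rw,link_relative)
/var/docs/private maple(noaccess)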
User mapping options in /etc/exports
Besides options that define how permissions are handled generally, you can also use options to set the permissions that specific users have to NFS shared file systems.
One method that simplifies this process is to have each user with multiple user accounts have the same user
name and UID on each machine. This makes it easier to map users so that they have the same permission on a
mounted file system as they do on files stored on their local hard disk. If that method is not convenient, user
IDs can be mapped in many other ways. Here are some methods of setting user permissions and the
/etc/exports option that you use for each method:
root user
Normally, the client’s root user is mapped into the anonymous user ID. This prevents the root user from a client computer from being able to change all files and directories in the shared file system. If you want the client’s root user to have root permission on the server, use the no_root_squash option.
There may be other administrative users, in addition to root, that you want to squash. I recommend squashing UIDs 0-99 as follows: squash_uids=0-99.
Anonymous user/group
By using anonymous user ID and group ID, you essentially create a user/group whose permissions will not allow access to files that belong to any users on the server (unless those users open permission to everyone). However, files created by the anonymous user/group will be available to anyone assigned as the anonymous user/group. To set all remote users to the anonymous user/group, use the all_squash option.
The anonymous user assigned by NFS is typically the "nobody" user name, with a UID and GID of -2 (because -2 cannot be assigned to a file, UIDs and GIDs of 65534 are assigned when the "nobody" user owns a file). This prevents the ID from running into a valid user or group ID. Using anonuid or anongid, you can change the anonymous user or group, respectively. For example, anonuid=175 sets all anonymous users to UID 175 and anongid=300 sets the GID to 300.
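As an example (the directory and ID values are invented for illustration), the following /etc/exports entry squashes every remote user on the share and maps them all to a specific local UID and GID:
/pub *(ro,all_squash,anonuid=175,anongid=300)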
User mapping
If the same users have login accounts for a set of computers (and they have the same IDs), NFS, by default, will map those IDs. This means that if the user named mike (UID 110) on maple has an account on pine (mike, UID 110), from either computer he could use his own remotely mounted files from the other computer.
If a client user that is not set up on the server creates a file on the mounted NFS directory, the file is assigned to the remote client’s UID and GID. (An ls -l on the server would show the UID of the owner.) You can identify a file that contains user mappings using the map_static option.
The exports man page describes the map_static option, which should let you create a file that contains new ID mappings. These mappings should let you remap client IDs into different IDs on the server.
Exporting the Shared File Systems
After you have added entries to your /etc/exports file, run the exportfs command to have those directories exported (made available to other computers on the network). Reboot your computer or restart the NFS service, and the exportfs command runs automatically to export your directories.
If you want to export them immediately, run exportfs from the command line (as root). It’s a good idea to run the exportfs command after you change the exports file. If any errors are in the file, exportfs identifies them for you.
Here’s an example of the exportfs command:
# /usr/sbin/exportfs -a -v
exporting maple:/pub
exporting spruce:/pub
exporting maple:/home
exporting spruce:/home
exporting *:/mnt/win
The -a option indicates that all directories listed in /etc/exports should be exported.
The -v option says to print verbose output. In this example, the /pub and /home directories from the local server are immediately available for mounting by those client computers that are named (maple and spruce). The /mnt/win directory is available to all client computers.
Running the exportfs command temporarily makes your exported NFS directories available. To have your NFS directories available on an ongoing basis (that is, every time your system reboots), you need to set your nfs startup scripts to run at boot time.
file server in linux
A file system is usually a structure of files and directories that exists on a single device (such as a hard disk partition or CD-ROM). A Linux file system refers to the entire directory structure (which may include file systems from several disks or NFS resources), beginning from root (/) on a single computer. A shared directory in NFS may represent all or part of a computer’s file system, which can be attached (from the shared directory down the directory tree) to another computer’s file system.
1.2) CENTRALIZED SHARING:
Most networked computers are on the network in the first place so that users can share information. Some users need to collectively edit documents for a project, share access to spreadsheets and forms used in the daily operation of a company, or perform any number of similar file-sharing activities. It also can be efficient for groups of people on a computer network to share common applications and directories of information needed to do their jobs. By far the best way to accomplish the centralized sharing of data is through a file server.
A centralized file server can be backed up, preserving all stored data in one fell swoop. It can focus on the tasks of getting files to end users, rather than running user applications that can use client resources. And a centralized file server can be used to control access to information — security settings can dictate who can access what.
Linux systems include support for each of the most common file server protocols in use today. Among the most common file server types in use today are the Network File System (NFS), which has always been the file-sharing protocol of choice for Linux and other UNIX systems, and Samba (SMB protocol), which is often used by networks with many Windows and OS/2 computers.
2) NFS FILE SERVER
Instead of representing storage devices as drive letters (A, B, C, and so on), as they are in Microsoft operating systems, Linux systems connect file systems from multiple hard disks, floppy disks, CD-ROMs, and other local devices invisibly to form a single Linux file system. The Network File System (NFS) facility enables you to extend your Linux file system in the same way, to connect file systems on other computers to your local directory structure.
An NFS file server provides an easy way to share large amounts of data among the users and computers in an organization.
2.1) STEPS TO SET UP NFS
An administrator of a Linux system that is configured to share its file systems using NFS has to perform the following tasks to set up NFS:
1. Set up the network. If a LAN or other network link is already connecting the computers on which you want to use NFS, you already have the network you need.
2. Choose what to share on the server. Decide which file systems on your Linux NFS server to make available to other computers. You can choose any point in the file system and make all files and directories below that point accessible to other computers.
3. Set up security on the server. You can use several different security features to suit the level of security with which you are comfortable. Mount-level security lets you restrict the computers that can mount a resource and, for those allowed to mount it, lets you specify whether it can be mounted read/write or read-only. With user-level security, you map users from the client systems to users on the NFS server so that they can rely on standard Linux read/write/execute permissions, file ownership, and group permissions to access and protect files. Linux systems that support Security Enhanced Linux (SELinux), such as Fedora and Red Hat Enterprise Linux, offer another means of offering or restricting shared NFS files and directories.
4. Mount the file system on the client. Each client computer that is allowed access to the server’s NFS shared file system can mount it anywhere the client chooses. For example, you may mount a file system from a computer called maple on the /mnt/maple directory in your local file system. After it is mounted, you can view the contents of that directory by typing ls /mnt/maple. Then you can use the cd command below the /mnt/maple mount point to see the files and directories it contains.
2.3) Getting started with NFS:
While nearly every Linux system supports NFS client and server features, NFS is not always installed by default. You’ll need different packages for different Linux systems to install NFS. Here are some examples:
· Fedora Core and other Red Hat Linux systems:
You need to install the nfs-utils package to use Fedora as an NFS server. There is also a graphical NFS Configuration tool that requires you to install the system-config-nfs package. NFS client features are in the base operating system. To turn on the nfs service, type the following:
# service nfs start
# chkconfig nfs on
· Debian:
To act as an NFS client, the nfs-common and portmap packages are required;
for an NFS server, the nfs-kernel-server package must be added. The following apt-get command line (if you are connected to the Internet) installs them all. Then, after you add an exported file system to the /etc/exports file (as described later), you can start the nfs-common and nfs-kernel-server scripts, as shown here:
# apt-get install nfs-common portmap nfs-kernel-server
# /etc/init.d/nfs-kernel-server start
# /etc/init.d/nfs-common start
· Gentoo
With Gentoo, NFS file system and NFS server support must be configured into the kernel to use NFS server features. Installing the nfs-utils package (emerge nfs-utils) should get the required packages. To start the service, run rc-update and start the service immediately:
# emerge nfs-utils
# rc-update add portmap default
# rc-update add nfs default
# /etc/init.d/nfs start
The commands (mount, exportfs, and so on) and files (/etc/exports, /etc/fstab, and so on) for actually configuring NFS are the same on every Linux system I’ve encountered.
Enhanced Data rates for GSM Evolution
EDGE offers on average up to three times the data throughput of GPRS — yielding a level of service that can support widespread data adoption, allowing new service offerings to evolve and making mobile multimedia services more affordable to more subscribers. For example, the EDGE technology enables an operator to handle the mass adoption of services like MMS and Push to Talk while also enhancing the end user’s experience of these as well as other services like Web Browsing, FTP and Video/Audio Streaming. With a global footprint of over 550 GSM networks in more than 180 countries, EDGE is in 111 networks and is expected to continue to be widely adopted given its cost-effective evolution properties.
The network EDGE plays a critical role in addressing these challenges because it is the network EDGE that creates the services customers will value. Enhanced Data rates for GSM Evolution (EDGE) was designed to deliver the capacity and the performance necessary to offer high-speed data services that will hold their own value against other 3G standards. EDGE shares spectrum and resources with GSM and GPRS, solving the spectrum availability dilemma for 3G services, and allowing for a highly flexible implementation, minimizing network impact and costs.
1. EDGE Overview
The edge of the network is the boundary between the network infrastructure and the data center servers. A data request moves from the client, over the Internet using the networking infrastructure, and then must travel across the gray area of the edge of the network before being passed on to a Tier 1 Web server. It is here, at the edge, where all the preparation for moving data into the server room happens, with these functions focusing on traffic processing rather than actual data processing. The reason this is considered a gray area is because these functions can be performed either by or with Tier 1 systems or by networking equipment such as routers and switches.
Enhanced Data rates for Global Evolution (EDGE) is a third-generation (3G) wireless technology that’s capable of high-speed data. EDGE occasionally is called “E-GPRS” because it’s an enhancement of the General Packet Radio Service (GPRS) network. EDGE can’t be deployed by itself; it must be added to an existing GPRS network. So, for example, an operator could offer GSM/GPRS/EDGE but not GSM/EDGE.
Cingular Wireless launched the world’s first commercial EDGE network in June 2003 in Indianapolis, Indiana. In September, CSL deployed EDGE in Hong Kong. Both operators introduced their services with a single handset model – the Nokia 6200 and Nokia 6220 respectively – although they say more models will be available sometime in the near future. The EDGE device that’s most likely to hit the market next is the Sony Ericsson GC82 PC card, although its release date has been pushed back at least once. Cingular’s launch is noteworthy, if only because EDGE has been promised and then postponed so many times. For example, in 1998, Ericsson forecast EDGE deployments by 2000.
Three years later, AT&T Wireless and Nokia forecast commercial launches by 2002. By being late out of the gate, EDGE may have missed its window of opportunity in the following key respects. First, EDGE has to catch up with other 3G technologies such as CDMA2000 and W-CDMA, which have been commercially deployed for more than three years.
1.1 GPRS & EDGE: -
Nortel’s EDGE solution integrates almost seamlessly into current GPRS networks. From a Core Network perspective, the same GPRS SGSN and GGSN are used. As more users and services become available, Nortel GGSN can be leveraged to implement new IP-based service offerings including VPNs, personal content portals and content-based billing. All of the nodes in the packet core network follow an aggressive capacity growth curve that is possible through simple software updates. For our Access portfolio, all currently shipped hardware is EDGE-ready with the only upgrade required for older base stations being a new radio transceiver and power amplifier. The resulting EDGE coverage footprint can be better than the original GSM RF plan.
2.1.1 Minimizing Costs, Maximizing Spectrum: -
Nortel has earned a reputation for improving spectral efficiency and minimizing cost in our GSM/GPRS core and access portfolios. Nortel’s EDGE solution was designed such that the power amplifier and radio modules support voice, GPRS and EDGE simultaneously, reducing the need for separate radios and RF spectrum for GPRS/EDGE and voice. Implemented with such flexibility, Nortel’s EDGE solution can be deployed on a cell site configured with only one radio per sector. Cost savings aside, Nortel’s EDGE solution will be differentiated in the marketplace by the spectrally efficient characteristics of the access solution. Nortel continues to demonstrate innovation and leadership in spectral efficiency with major technological breakthroughs in system capacity and end-user quality of service (QoS).
2.1.2 Speeding Time To Revenue: -
With most network elements requiring only a software upgrade to support EDGE, the time interval between the business decision to deploy EDGE and the implementation of EDGE on the network can be quite short. In addition, Nortel has devised other features for driving EDGE services revenue right from the start. The most important of these is the bandwidth management for classes of users, which Nortel refers to as the PCUSN QoS Management feature. The ability to differentiate users and services also provides flexibility for the operator to articulate different service packages and address a greater number of market segments. High-end business users with high-bandwidth requirements can be provisioned a Gold or Silver attribute, for example, while flat rate mass market (text) subscribers can use a Bronze subscription. At the busy hour, the Gold and Silver segments can be set to leave, perhaps, 10 percent of relative bandwidth to the Bronze subscription. This
implementation will ensure a good level of customer satisfaction and loyalty from business users. Overall this could dramatically increase the revenues being generated during busy hours depending on the operator’s specific service offerings and user profiles.
In a near-future software release from Nortel, R99 terminals supporting Packet Flow Context as defined in the 3GPP Standards release 99 will have the additional benefit of guaranteed throughput for conversational and streaming services. This additional service differentiation will ensure the availability of bandwidth for services like PTT, delivering customer satisfaction even under very high traffic loading conditions.
2.1.3 Operational excellence: -
The introduction of EDGE in the network translates into more bandwidth delivered to the base station, which requires more backhaul transmission. Furthermore, there is a benefit in being able to closely monitor and fine-tune EDGE radio parameters to maximize EDGE performance for the end users. Nortel’s EDGE solutions come with a set of features that
addresses these operational needs and network transmission costs through a robust, intuitive and easy-to-optimize EDGE software solution and two backhaul transport efficiency features that complement Nortel’s unique strengths in the backbone. Nortel’s EDGE software provides intuitive parameter options for simple but highly effective EDGE performance
optimization, giving the operator a competitive edge.
Backhaul considerations are integrated into the Nortel EDGE solution portfolio. To support EDGE with a standard TDM interface, EDGE’s new modulation and coding scheme would require up to eight additional DS0s per radio. To mitigate this impact, Nortel is introducing an asynchronous interface on the backhaul that can reduce the need for additional DS0s by dynamically sharing a smaller pool of resources, reducing the additional bandwidth
required at the cell site by up to 50 percent. The dynamic AGPRS interface between the BSC and the PCUSN also provides some pooling of resources with a similar 30 percent gain on the number of T1s required.
3.2 Optimal Resource Use With N1: -
As part of the shift from application delivery to network service delivery, data management becomes considerably more complex. Companies need to be able to consolidate computing resources, easily manage them from a centralized view, and optimize resource utilization to maximize their return on investment. Sun solutions first disaggregate resources by delivering component solutions that are tuned for specific functions. At the lowest end, blade servers will provide tightly integrated edge and Tier 1 functionality, enabling intelligent blades to achieve optimal processing for a specific function, such as SSL encryption or load balancing. Sun plans to then reaggregate all of a company’s optimized computing resources with N1— Sun’s vision, architecture, and products for making entire data centers appear as one system. Instead of requiring system management for each server, N1 will provide a single virtual view of a company’s complete computing infrastructure. This will enable a roomful of servers, blades, storage devices, applications, and networking components to appear as a single entity. As a result, system administrators will be able to manage all of these infrastructure components from a single, central view instead of sending out a team of engineers to reconfigure compute resources as workloads change. This virtual view will also support automated configuration, enabling a manager to state a parameter that will then be implemented intelligently across all affected tiers, converting centralized policy into distributed local policy. N1 data center management promises revolutionary benefits, including the following:
· Increases business agility by supporting dynamic reallocation of resources as processing demands and business needs change
· Eliminates the need for individual systems to maintain excess capacity for peak processing demands by allowing excess capacity to be shared
· Boosts server utilization from industry norms of 15 to 30 percent up to 80 percent or higher
· Significantly reduces resource management complexity and the need for manual intervention
· Simplifies deployment of new services
· Protects technology investments by integrating existing equipment
· Increases availability by leveraging the N1 pool of resources to reassign services
· Provides a Web-based single point of control that delivers anywhere, anytime administration
Ultimately, these benefits can result in significantly lower operations costs by eliminating manual management tasks and simplifying resource allocation. These cost savings are critical, as today’s companies spend more than 70 percent of their information technology budget on managing data center complexity. By integrating edge functions into intelligent blade servers that support centralized N1 management, Sun will help to optimize resource use, simplify installation and administration, and deliver exceptional price/performance both in Tier 1 and at the edge of the network. For more information on Sun’s N1 vision, see the Additional References section below.
Tuesday, January 6, 2009
BIT TORRENT
CERTIFICATE
This is to certify that Mr. Suraj Kadam has successfully completed his seminar on Bit Torrent in partial fulfillment of the third year degree course in Information Technology in the academic year 2006 – 2007.
Date :
Prof. Mr. Narendra Pathak (Head of the Department), V.I.I.T, Pune.
Prof. Mrs. Rathi (Seminar Guide), V.I.I.T, Pune.
Prof. Dr. A. S. Tavildar (Principal), V.I.I.T, Pune.
Bract’s VIIT Pune – 48.
Department Of Information Technology
Sr. No. – 2/3/4,
Kondhawa Bdruk.