
Page 1: A Project Report on Linux Server Administration

A PROJECT REPORT ON LINUX SERVER ADMINISTRATION

Splunk is also added

Avinash Kumar

10/11/2016


ABSTRACT

Linux Server Administration ensures that servers keep working properly and can provide their services to clients. There is a relationship between server and client: the purpose of the server is to fulfil requests made by the client. When a server has many clients to handle, it needs to be administered by qualified, authorized personnel. For example, suppose a server receives 30,000 hits per minute, each requesting a different type of service; the server then has to keep track of the requests and fulfil all of them in time, without errors or breakdowns. Or the growing number of hits may bring a server down, in which case qualified personnel must diagnose the fault and bring the downed server back online. Linux Server Administration is therefore concerned with the management and deployment of Linux servers.

Keywords: Linux Server Administration, Server system


TABLE OF CONTENTS

CHAPTER    TITLE

           ABSTRACT
           CONTENTS
           LIST OF TABLES
           LIST OF FIGURES

1          INTRODUCTION
1.1        NEED OF SERVERS
1.2        A CLIENT-SERVER RELATIONSHIP
1.3        COMPONENTS OF A SERVER

2          SOFTWARE AND HARDWARE REQUIREMENTS
2.1        SOFTWARE REQUIREMENT
2.1.1      INSTALLING A LINUX SYSTEM
2.1.2      CONFIGURING THE SYSTEM & INSTALLING ADDITIONAL PACKAGES
2.2        HARDWARE REQUIREMENT

3          WEB SERVER DESCRIPTION
3.1        HTTPD
3.2        FTP
3.3        NFS
3.4        NIS
3.5        NTP
3.6        SAMBA
3.7        SSH
3.8        TELNET
3.9        MAIL SERVER
3.10       DHCP
3.11       DNS

4          PROJECT DESCRIPTION

5          SPLUNK


6          OUTPUT

           CONCLUSION
           REFERENCES


LIST OF FIGURES

FIGURE      TITLE

Figure 1.1  A CLIENT-SERVER RELATIONSHIP
Figure 1.2  A LOOK OF A SERVER
Figure 2.1  INSTALLING REDHAT ENTERPRISE LINUX 7.2
Figure 2.2  SOFTWARE & HARDWARE REQUIREMENTS
Figure 3.1  THE APACHE WEB SERVER
Figure 3.2  THE ACTIVE & PASSIVE FTP WEB SERVER
Figure 3.3  THE NFS WEB SERVER
Figure 3.4  THE NTP WEB SERVER
Figure 3.5  THE SAMBA SERVER
Figure 3.6  THE SSH SERVER
Figure 3.7  THE TELNET SERVER
Figure 3.8  THE MAIL SERVER
Figure 3.9  THE DHCP SERVER
Figure 3.10 THE DNS SERVER


Chapter-1

INTRODUCTION

1 Introduction:

In a technical sense, a server is an instance of a computer program that

accepts and responds to requests made by another program, known as a

client. Less formally, any device that runs server software could be

considered a server as well. Servers are used to manage network resources.

For example, a user may set up a server to control access to a network,

send/receive e-mail, manage print jobs, or host a website.

Some servers are committed to a specific task, often referred to as dedicated.

As a result, there are a number of dedicated server categories, like print

servers, file servers, network servers, and database servers. However, many

servers today are shared servers which can take on the responsibility of e-

mail, DNS, FTP, and even multiple websites in the case of a web server.

Because they are commonly used to deliver services that are required

constantly, most servers are never turned off. Consequently, when servers

fail, they can cause the network users and company many problems. To

alleviate these issues, servers are commonly high-end computers set up to be

fault tolerant.

1.1 NEED OF SERVERS:


The internet is an ocean of data, used in every nook and cranny of the world. There are millions of websites containing text, audio, video, images and so on, and internet users access this content from all over the world. Every website is stored on someone's storage device, and most people cannot keep their own devices online continuously. So we need a device that can stay online for long periods without interruption; hence the need for servers. A server is a place where we can keep our data (websites, images, video, audio etc.) in one place, with 24x7 access for all our users. Servers have the following further advantages:

All-time access for all users.

The hardware and software are upgraded over time; the owner of a website does not have to worry about this technical front.

All information is in one place.

No technical expertise in server matters is needed, because these tasks are done by the server personnel.

Data processing is fast.

Access records can be tracked.

1.2 A CLIENT-SERVER RELATIONSHIP:

The client–server model is a distributed application structure that partitions

tasks or workloads between the providers of a resource or service, called

servers, and service requesters, called clients. Often clients and servers

communicate over a computer network on separate hardware, but both client


and server may reside in the same system. A server host runs one or more

server programs which share their resources with clients. A client does not

share any of its resources, but requests a server's content or service function.

Clients therefore initiate communication sessions with servers which await

incoming requests. Examples of computer applications that use the client–

server model are Email, network printing, and the World Wide Web.

The Client-server characteristic describes the relationship of cooperating

programs in an application. The server component provides a function or

service to one or many clients, which initiate requests for such services.

Servers are classified by the services they provide. For instance, a web

server serves web pages and a file server serves computer files. A shared

resource may be any of the server computer's software and electronic

components, from programs and data to processors and storage devices. The

sharing of resources of a server constitutes a service.

Whether a computer is a client, a server, or both, is determined by the nature

of the application that requires the service functions. For example, a single

computer can run web server and file server software at the same time to

serve different data to clients making different kinds of requests. Client

software can also communicate with server software within the same

computer. Communication between servers, such as to synchronize data, is

sometimes called inter-server or server-to-server communication.
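The request-response division of roles described above can be seen on a single machine with a quick sketch: here python3's built-in http.server plays the server role and a one-line urllib call plays the client. The use of python3, the port number 8765, and the file contents are illustrative assumptions, not part of this report's setup:

```shell
# Server side: publish a file and await requests on port 8765.
workdir=$(mktemp -d)
echo "hello from the server" > "$workdir/index.html"
python3 -m http.server 8765 --directory "$workdir" >/dev/null 2>&1 &
server_pid=$!
sleep 1   # give the server a moment to start listening

# Client side: initiate the session and request the server's content.
reply=$(python3 -c 'import urllib.request; print(urllib.request.urlopen("http://localhost:8765/").read().decode().strip())')
echo "$reply"

kill "$server_pid"
```

The client initiates the connection and the server only waits for and answers requests, which is exactly the split of responsibilities the client-server model describes.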


Fig. 1.1 A Client-Server relationship

1.3 COMPONENTS OF A SERVER:

The hardware components that a typical server computer comprises are

similar to the components used in less expensive client computers. However,

server computers are usually built from higher-grade components than client

computers. The following paragraphs describe the typical components of a

server computer.

Motherboard

The motherboard is the computer's main electronic circuit board to which all

the other components of your computer are connected. More than any other

component, the motherboard is the computer. All other components attach to

the motherboard.


The major components on the motherboard include the processor (or CPU),

supporting circuitry called the chipset, memory, expansion slots, a standard

IDE hard drive controller, and input/output (I/O) ports for devices such as

keyboards, mice, and printers. Some motherboards also include additional

built-in features such as a graphics adapter, SCSI disk controller, or a

network interface.

Processor

The processor, or CPU, is the brain of the computer. Although the processor

isn't the only component that affects overall system performance, it is the

one that most people think of first when deciding what type of server to

purchase. At the time of this writing, Intel had four processor models

designed for use in server computers:

Itanium 2: 1.60GHz clock speed; 1–2 processor cores

Xeon: 1.83–2.33GHz clock speed; 1–4 processor cores

Pentium D: 2.66-3.6GHz clock speed; 2 processor cores

Pentium 4: 2.4-3.6GHz clock speed; 1 processor core

Each motherboard is designed to support a particular type of processor.

CPUs come in two basic mounting styles: slot or socket. However, you can

choose from several types of slots and sockets, so you have to make sure

that the motherboard supports the specific slot or socket style used by the

CPU. Some server motherboards have two or more slots or sockets to hold

two or more CPUs.


NOTE: The term clock speed refers to how fast the basic clock that drives

the processor's operation ticks. In theory, the faster the clock speed, the

faster the processor. However, clock speed alone is reliable only for

comparing processors within the same family. In fact, the Itanium processors

are faster than Xeon processors at the same clock speed. The same holds true

for Xeon processors compared with Pentium D processors. That's because

the newer processor models contain more advanced circuitry than the older

models, so they can accomplish more work with each tick of the clock.

The number of processor cores also has a dramatic effect on performance.

Each processor core acts as if it's a separate processor. Most server

computers use dual-core (two processor cores) or quad-core (four cores)

chips.

Memory

Don't scrimp on memory. People rarely complain about servers having too

much memory. Many different types of memory are available, so you have

to pick the right type of memory to match the memory supported by your

motherboard. The total memory capacity of the server depends on the

motherboard. Most new servers can support at least 12GB of memory, and

some can handle up to 32GB.

Hard drives

Most desktop computers use inexpensive hard drives called IDE drives

(sometimes also called ATA). These drives are adequate for individual users,

but because performance is more important for servers, another type of drive


known as SCSI is usually used instead. For the best performance, use the

SCSI drives along with a high-performance SCSI controller card.

Recently, a new type of inexpensive drive called SATA has been appearing

in desktop computers. SATA drives are also being used more and more in

server computers due to their reliability and performance.

Network connection

The network connection is one of the most important parts of any server.

Many servers have network adapters built into the motherboard. If your

server isn't equipped as such, you'll need to add a separate network adapter

card.

Video

Fancy graphics aren't that important for a server computer. You can equip

your servers with inexpensive generic video cards and monitors without

affecting network performance. (This is one of the few areas where it's

acceptable to cut costs on a server.)

Power supply

Because a server usually has more devices than a typical desktop computer,

it requires a larger power supply (300 watts is typical). If the server houses a

large number of hard drives, it may require an even larger power supply.


Fig. 1.2 A look of a Server


Chapter-2

Software and Hardware Requirement

2.1 SOFTWARE REQUIREMENT:

To use your local computer to develop your server, you must install a Linux system. Windows can also be used to create and deploy servers, but carrying out these tasks in Windows is more difficult, so a Linux system is recommended. RedHat Enterprise Linux 7.2 is one of the best Linux operating systems that can be used.

2.1.1 INSTALLING REDHAT ENTERPRISE LINUX 7.2:

Installing a Linux system is an easy and fast task. One more reason to use a Linux system is that it is free.


Fig 2.1 Installing Redhat Enterprise Linux 7.2

2.1.2 CONFIGURING THE SYSTEM:

Once the Linux system (RHEL 7.2) is installed, log in as root. Now we have to configure it by installing some additional packages and upgrading the system packages.

Open the terminal and type the following command to install the available updates:

[root@localhost Desktop] # yum update


2.2 HARDWARE REQUIREMENT:

The minimum requirement is a Pentium 4, AMD or Celeron processor; anything above this configuration will work very well with Linux. So processors such as the Core 2 Duo, Dual Core, dual-core i3, i5 and i7, AMD Duron, AMD Sempron, AMD Turion, AMD Opteron, AMD Phenom and Celeron III are recommended.

A minimum of 512 MB of RAM is required; more RAM is recommended.

Fig 2.2: Software & Hardware Requirements


3. WEB SERVER DESCRIPTION:

3.1. HTTPD:

INTRODUCTION:

The Hypertext Transfer Protocol (HTTP) is an application protocol for

distributed, collaborative, hypermedia information systems. HTTP is the

foundation of data communication for the World Wide Web.

Hypertext is structured text that uses logical links (hyperlinks) between

nodes containing text. HTTP is the protocol to exchange or transfer

hypertext.

Development of HTTP was initiated by Tim Berners-Lee at CERN in 1989.

Standards development of HTTP was coordinated by the Internet

Engineering Task Force (IETF) and the World Wide Web Consortium

(W3C), culminating in the publication of a series of Requests for Comments

(RFCs). The first definition of HTTP/1.1, the version of HTTP in common

use, occurred in RFC 2068 in 1997, although this was obsoleted by RFC

2616 in 1999.

A later version, the successor HTTP/2, was standardized in 2015, and is now

supported by major web servers.

HTTP functions as a request–response protocol in the client–server

computing model. A web browser, for example, may be the client and an

application running on a computer hosting a web site may be the server. The

client submits an HTTP request message to the server. The server, which


provides resources such as HTML files and other content, or performs other

functions on behalf of the client, returns a response message to the client.

The response contains completion status information about the request and

may also contain requested content in its message body.

A web browser is an example of a user agent (UA). Other types of user

agent include the indexing software used by search providers (web

crawlers), voice browsers, mobile apps, and other software that accesses,

consumes, or displays web content.

HTTP is designed to permit intermediate network elements to improve or

enable communications between clients and servers. High-traffic websites

often benefit from web cache servers that deliver content on behalf of

upstream servers to improve response time. Web browsers cache previously

accessed web resources and reuse them when possible to reduce network

traffic. HTTP proxy servers at private network boundaries can facilitate

communication for clients without a globally routable address, by relaying

messages with external servers.

HTTP is an application layer protocol designed within the framework of the

Internet Protocol Suite. Its definition presumes an underlying and reliable

transport layer protocol, and Transmission Control Protocol (TCP) is

commonly used. However, HTTP can be adapted to use unreliable protocols

such as the User Datagram Protocol (UDP), for example in HTTPU and

Simple Service Discovery Protocol (SSDP).

HTTP resources are identified and located on the network by uniform

resource locators (URLs), using the uniform resource identifier (URI)


schemes http and https. URIs and hyperlinks in Hypertext Markup Language

(HTML) documents form inter-linked hypertext documents.

HTTP/1.1 is a revision of the original HTTP (HTTP/1.0). In HTTP/1.0 a

separate connection to the same server is made for every resource request.

HTTP/1.1 can reuse a connection multiple times to download images,

scripts, stylesheets etc. after the page has been delivered. HTTP/1.1

communications therefore experience less latency as the establishment of

TCP connections presents considerable overhead.

Fig. 3.1 The Apache Web Server


INSTALLATION:

NOTE: Installing any web server package on RHEL 7.2, or any other Linux, takes only three steps:

Step 1: Install the required software.

Step 2: Configure the software.

Step 3: Start the service (daemon).

Step 1: Install the httpd package:

Open the terminal. Then write the following command to install the

httpd package.

[root@localhost Desktop] # yum install httpd

Once the httpd package is installed properly, go to the next step.

Step 2: Configure the software:

Configuring the software means changing its internal settings, such as the default port number, the default location to look up webpages, and the default type of webpage to accept. If any of these settings need to be changed, open the configuration file with the following command:

[root@localhost Desktop] # vim /etc/httpd/conf/httpd.conf
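The defaults in /etc/httpd/conf/httpd.conf already work for a basic server; the directives most often adjusted are sketched below (the values shown are the usual RHEL defaults, given here only as an illustration):

```
Listen 80                      # default port the server accepts requests on
DocumentRoot "/var/www/html"   # default location to look up webpages
DirectoryIndex index.html      # default page returned for a directory request
ServerAdmin root@localhost     # contact address shown in error pages
```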


Step 3: Starting the service:

Now start the service, i.e. the daemon, by typing the following command:

[root@localhost Desktop] # systemctl start httpd

The Apache Web Server (httpd) service is started.

NOTE: When there is communication over the network, firewalls come into play. A firewall prevents unauthorized connections over a network, and SELinux can likewise block a service's traffic. To stop them from interfering while testing on RHEL 7.2, put SELinux into permissive mode and flush the firewall rules:

[root@localhost Desktop] # setenforce 0

[root@localhost Desktop] # iptables -F

This must be done on every server that is going to be created.


3.2 FTP:

INTRODUCTION:

File Transfer Protocol (FTP) is a standard Internet protocol for transmitting

files between computers on the Internet over TCP/IP connections.

FTP is a client-server protocol that relies on two communications channels

between client and server: a command channel for controlling the

conversation and a data channel for transmitting file content. Clients initiate

conversations with servers by requesting to download a file. Using FTP, a

client can upload, download, delete, rename, move and copy files on a

server. A user typically needs to log on to the FTP server, although some

servers make some or all of their content available without login, also known

as anonymous FTP.

FTP sessions work in passive or active modes. In active mode, after a client

initiates a session via a command channel request, the server initiates a data

connection back to the client and begins transferring data. In passive mode,

the server instead uses the command channel to send the client the

information it needs to open a data channel. Because passive mode has the

client initiating all connections, it works well across firewalls and Network

Address Translation (NAT) gateways.

FTP was originally defined in 1971, prior to the definition of TCP and IP,

and has been redefined many times -- e.g., to use TCP/IP (RFC 765 and

RFC 959), and then Internet Protocol Version 6 (IPv6), (RFC 2428). Also,

because it was defined without much concern for security, it has been


extended many times to improve security: for example, versions that encrypt

via a TLS connection (FTPS) or that work with Secure File Transfer

Protocol (SFTP), also known as SSH File Transfer Protocol.

Users can work with FTP via a simple command line interface (for example,

from a console or terminal window in Microsoft Windows, Apple OS X or

Linux) or with a dedicated graphical user interface (GUI). Web browsers can

also serve as FTP clients.

Although a lot of file transfer is now handled using HTTP, FTP is still commonly used to transfer files "behind the scenes" for other applications -- e.g., hidden behind the user interfaces of banking services, of website builders such as Wix or SquareSpace, or of other services. It is also used, via web browsers, to download new applications.


Fig. 3.2 The Active & Passive FTP Web Server


INSTALLATION:

Step 1: Install the vsftpd package:

Open the terminal. Then write the following command to install the vsftpd

package.

[root@localhost Desktop] # yum install vsftpd

Once the vsftpd package is installed properly, go to the next step.

Step 2: Configure the software:

Configuring the software means changing its internal settings, such as the default port number and the default paths it serves. If any of these settings need to be changed, open the configuration file with the following command:

[root@localhost Desktop] # vim /etc/vsftpd/vsftpd.conf
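A few of the settings commonly toggled in /etc/vsftpd/vsftpd.conf are sketched below (these are illustrative values, not settings taken from this project):

```
anonymous_enable=NO    # NO requires a real login; YES allows anonymous FTP
local_enable=YES       # let local Unix accounts log in
write_enable=YES       # permit upload, delete and rename, not just download
```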

Step 3: Starting the service:

Now start the service, i.e. the daemon, by typing the following command:

[root@localhost Desktop] # systemctl start vsftpd

The FTP server (vsftpd) service is started.


3.3. NFS:

INTRODUCTION:

The Network File System (NFS) is a client/server application that lets a

computer user view and optionally store and update files on a remote

computer as though they were on the user's own computer. The

NFS protocol is one of several distributed file system standards for network-

attached storage (NAS).

NFS allows the user or system administrator to mount (designate as

accessible) all or a portion of a file system on a server. The portion of the

file system that is mounted can be accessed by clients with whatever

privileges are assigned to each file (read-only or read-write). NFS uses

Remote Procedure Calls (RPC) to route requests between clients and servers.

NFS was originally developed by Sun Microsystems in the 1980's and is

now managed by the Internet Engineering Task Force (IETF). NFSv4.1

(RFC-5661) was ratified in January 2010 to improve scalability by adding

support for parallel access across distributed servers. Network File System

versions 2 and 3 allow the User Datagram Protocol (UDP) running over

an IP network to provide stateless network connections between clients and

servers, but NFSv4 requires use of the Transmission Control Protocol (TCP).


Fig. 3.3 The NFS Web Server

INSTALLATION:

Step 1: Install the nfs-utils package:

Open the terminal. Then write the following command to install the nfs-utils

package.

[root@localhost Desktop] # yum install nfs-utils

Once the nfs-utils package is installed properly, go to the next step.


Step 2: Configure the software:

Configuring the software means changing its internal settings, such as the default port number and the default paths it serves. If any of these settings need to be changed, open the configuration file with the following command:

[root@localhost Desktop] # vim /etc/exports
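Each line of /etc/exports names a directory to share and who may mount it. A minimal illustrative entry (the path and network below are assumptions, not values from this project) looks like:

```
/srv/share  192.168.1.0/24(rw,sync)   # share /srv/share read-write with the local subnet
```

After editing the file, `exportfs -r` re-reads it, and a client on that subnet can then attach the share with a command along the lines of `mount server:/srv/share /mnt`.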

Step 3: Starting the service:

Now start the service, i.e. the daemon, by typing the following command:

[root@localhost Desktop] # systemctl start nfs-server

The NFS server service is started.


3.4. NIS:

INTRODUCTION:

NIS (Network Information System) is a network naming and administration

system for smaller networks that was developed by Sun Microsystems. NIS+

is a later version that provides additional security and other facilities. Using

NIS, each host client or server computer in the system has knowledge about

the entire system. A user at any host can get access to files or applications on

any host in the network with a single user identification and password. NIS

is similar to the Internet's domain name system (DNS) but somewhat simpler

and designed for a smaller network. It's intended for use on local area

networks.

NIS uses the client/server model and the Remote Procedure Call (RPC)

interface for communication between hosts. NIS consists of a server, a

library of client programs, and some administrative tools. NIS is often used

with the Network File System (NFS). NIS is a UNIX-based program.

Although Sun and others offer proprietary versions, most NIS code has been

released into the public domain and there are freeware versions available.

NIS was originally called Yellow Pages but because someone already had a

trademark by that name, it was changed to Network Information System. It

is still sometimes referred to by the initials: "YP".

Sun offers NIS+ together with its NFS product as a solution for Windows

PC networks as well as for its own workstation networks.


INSTALLATION:

Step 1: Install the ypserv package:

Open the terminal. Then write the following command to install the ypserv

package.

[root@localhost Desktop] # yum install ypserv

Once the ypserv package is installed properly, go to the next step.

Step 2: Configure the software:

Configuring the software means changing its internal settings, such as the default port number and the NIS domain it serves. If any of these settings need to be changed, open the configuration file with the following command:

[root@localhost Desktop] # vim /etc/yp.conf
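On the client side, /etc/yp.conf simply names the NIS domain and the server to bind to. An illustrative entry (the domain name and address below are assumptions) is:

```
domain example.nis server 192.168.1.10
```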

Step 3: Starting the service:

Now start the service, i.e. the daemon, by typing the following command:

[root@localhost Desktop] # systemctl start ypserv

The NIS server service is started.


3.5. NTP:

INTRODUCTION:

NTP (Network Time Protocol) is a network protocol that enables you to

synchronize clocks on devices over a network. It works by using one or

more NTP servers that maintain a highly accurate time, and allows clients to

query for that time. These client devices query the server, then automatically

adjust their own internal clock to mirror the NTP server. The NetBurner

NTP server obtains highly accurate time by synchronizing its local clock to

GPS satellites. Once plugged in to your network, the NTP device will allow

your devices to maintain synchronized time.

NTP servers are generally categorized into several tiers, referred to as

strata (singular: stratum). As the stratum number increases, the

accuracy of the time generally decreases.

1. Stratum 0 devices are devices such as atomic, GPS, and radio clocks.

These devices offer the highest accuracy, but are not usually publicly

accessible.

2. Stratum 1 devices are network servers that are connected directly to

stratum 0 devices. Some public stratum 1 devices can be found, but they

often come with usage restrictions, including limiting the number of requests

and limiting usage for commercial devices.

3. Stratum 2 devices are network servers that synchronize their time to one

or more stratum 1 or 2 devices. Public, open-use NTP servers often fall into

this category.


Stratum numbers can keep increasing, up to a theoretical stratum 256 device.

However, any device listed as stratum 16 or greater should be considered

inaccurate.

The NetBurner NTP Server is a stratum 1 device connected directly to a

GPS time chip.

Fig. 3.4 The NTP Web Server


INSTALLATION:

Sometimes Internet NTP servers do not meet your needs. The PK70 NTP

device is a low-cost NTP server that can be added to your local network.

Setting up the NetBurner NTP server could not be easier. Unbox the device,

plug in the power cable and network cable, and attach the included antenna.

For optimal usage, the antenna receiver should be placed next to to a

window with a clear view of the sky. Once the device powers up, the red

LED will turn green, indicating the device is synchronized.

Some configuration options, status screens, and XML output can be reached

on the PK70 NTP device by pointing your web browser to the IP address of

the device.

If you are unsure of the local IP address of your NetBurner NTP server,

download IPSetup, which will scan your local network for NetBurner devices

and display their HTTP web address.

Typical Linux distributions include ntpd, the daemon for syncing to an NTP

server. If you are missing ntpd, then you should install ntpd with your

favorite package manager.

Step 1: From the command line, use sudo privileges to edit the /etc/ntp.conf file:

sudo vi /etc/ntp.conf

Step 2: Input one or more NTP servers, one per line, prepending "server" to every address.


Example ntp.conf file

server time.apple.com

server time.nist.gov

server 10.1.1.78

Step 3: Restart ntpd, usually accomplished with /etc/init.d/ntpd restart

Once restarted, you can monitor ntpd with the command ntpq -p. This will

list all of the NTP servers in use, and include diagnostic information for all

known NTP servers. It may take several minutes for an NTP server to be

selected and synchronized with. Once an NTP server is selected, it will be

indicated with a * in the ntpq output.
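As a small illustration of reading that output, the selected peer can be pulled out of saved `ntpq -p` text with awk. The sample output below is made up for the example; on a live system you would pipe `ntpq -p` in instead:

```shell
# Sample `ntpq -p` output; the '*' prefix marks the peer ntpd has selected.
ntpq_output='     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
+time.apple.com  17.253.2.125     1 u   45   64  377   12.345    0.123   0.456
*time.nist.gov   .NIST.           1 u   32   64  377   23.456    0.234   0.567'

# Print the selected peer: first field of the line starting with '*', minus the marker.
selected=$(printf '%s\n' "$ntpq_output" | awk '/^\*/ { print substr($1, 2) }')
echo "$selected"
```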


3.6. SAMBA:

INTRODUCTION:

Samba is a free software re-implementation of the SMB/CIFS networking

protocol, and was originally developed by Andrew Tridgell. Samba provides

file and print services for various Microsoft Windows clients and can

integrate with a Microsoft Windows Server domain, either as a Domain

Controller (DC) or as a domain member. As of version 4, it supports Active

Directory and Microsoft Windows NT domains.

Samba runs on most Unix, OpenVMS and Unix-like systems, such as Linux,

Solaris, AIX and the BSD variants, including Apple's OS X Server, and OS

X client (version 10.2 and greater). Samba is standard on nearly all

distributions of Linux and is commonly included as a basic system service

on other Unix-based operating systems as well. Samba is released under the

terms of the GNU General Public License. The name Samba comes from

SMB (Server Message Block), the name of the standard protocol used by the

Microsoft Windows network file system.

Samba allows file and print sharing between computers running Microsoft

Windows and computers running Unix. It is an implementation of dozens of

services and a dozen protocols, including:


NetBIOS over TCP/IP (NBT)

SMB

CIFS (an enhanced version of SMB)

DCE/RPC or more specifically, MSRPC, the Network Neighborhood

suite of protocols

A WINS server also known as a NetBIOS Name Server (NBNS)

The NT Domain suite of protocols which includes NT Domain

Logons

Security Accounts Manager (SAM) database

Local Security Authority (LSA) service

NT-style printing service (SPOOLSS), NTLM and more recently

Active Directory Logon which involves a modified version of

Kerberos and a modified version of LDAP.

DFS server

All these services and protocols are frequently incorrectly referred to as just

NetBIOS or SMB. The NBT (NetBIOS over TCP/IP) and WINS protocols

are deprecated on Windows.

Samba sets up network shares for chosen Unix directories (including all

contained subdirectories). These appear to Microsoft Windows users as

normal Windows folders accessible via the network. Unix users can either

mount the shares directly as part of their file structure using the smbmount

command or, alternatively, can use a utility, smbclient (libsmb) installed

with Samba to read the shares with a similar interface to a standard

command line FTP program. Each directory can have different access

privileges overlaid on top of the normal Unix file protections. For example:

home directories would have read/write access for all known users, allowing


each to access their own files. However they would still not have access to

the files of others unless that permission would normally exist. Note that the

netlogon share, typically distributed as a read only share from

/etc/samba/netlogon, is the logon directory for user logon scripts.
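As a sketch of the client-side usage described above (the server name, share and user are placeholders, and on modern systems mount -t cifs replaces the older smbmount command):

```shell
# List the shares exported by a server
smbclient -L //fileserver -U alice

# Browse a share with an FTP-like command interface
smbclient //fileserver/homes -U alice

# Mount a share into the local filesystem
mount -t cifs //fileserver/homes /mnt/homes -o username=alice
```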

Samba services are implemented as two daemons:

smbd, which provides the file and printer sharing services, and

nmbd, which provides the NetBIOS-to-IP-address name service.

NetBIOS over TCP/IP requires some method for mapping NetBIOS

computer names to the IP addresses of a TCP/IP network.

Samba configuration is achieved by editing a single file (typically installed

as /etc/smb.conf or /etc/samba/smb.conf). Samba can also provide user

logon scripts and group policy implementation through poledit.

Samba is included in most Linux distributions and is started during the boot

process. On Red Hat, for instance, the /etc/rc.d/init.d/smb script runs at boot

time, and starts both daemons. Samba is not included in Solaris 8, but a

Solaris 8-compatible version is available from the Samba website.

Samba includes a web administration tool called Samba Web Administration

Tool (SWAT). SWAT was removed starting with version 4.1.


Fig 3.5 The Samba Server

INSTALLATION:

Step 1: Install the samba-client package:

Open the terminal. Then write the following command to install the

samba-client package.

[root@localhost Desktop] # yum install samba-client

Once the samba-client package is installed properly then go to the next step.

Step 2: Configure the software:


Configuring the software means changing the internal settings of the

software. For Samba these include the workgroup name, the directories to

share and the access permissions for each share. If you need to change

these settings, type the following command:

[root@localhost Desktop] # vim /etc/samba/smb.conf
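As an illustration, a minimal share definition in smb.conf might look like the following; the share name, path and user names are assumptions, not values taken from this report:

```ini
[global]
        workgroup = WORKGROUP
        security = user

[shared]
        comment = Example shared directory
        path = /srv/samba/shared
        read only = no
        valid users = alice bob
```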

Step 3: Starting the service:

Now start the service i.e. the daemon by typing following command:

[root@localhost Desktop] # systemctl start smb

The Samba service is now started.


3.7. SSH:

INTRODUCTION:

Secure Shell (SSH) is a cryptographic network protocol for operating

network services securely over an unsecured network. The best known

example application is for remote login to computer systems by users.

SSH provides a secure channel over an unsecured network in a client-server

architecture, connecting an SSH client application with an SSH server.

Common applications include remote command-line login and remote

command execution, but any network service can be secured with SSH. The

protocol specification distinguishes between two major versions, referred to

as SSH-1 and SSH-2.

The most visible application of the protocol is for access to shell accounts on

Unix-like operating systems, but it sees some limited use on Windows as

well. In 2015, Microsoft announced that they would include native support

for SSH in a future release.

SSH was designed as a replacement for Telnet and for unsecured remote

shell protocols such as the Berkeley rlogin, rsh, and rexec protocols. Those

protocols send information, notably passwords, in plaintext, rendering them

susceptible to interception and disclosure using packet analysis. The

encryption used by SSH is intended to provide confidentiality and integrity

of data over an unsecured network, such as the Internet, although files

leaked by Edward Snowden indicate that the National Security Agency can

sometimes decrypt SSH, allowing them to read the content of SSH sessions.


SSH uses public-key cryptography to authenticate the remote computer and

allow it to authenticate the user, if necessary. There are several ways to use

SSH; one is to use automatically generated public-private key pairs to

simply encrypt a network connection, and then use password authentication

to log on.

Another is to use a manually generated public-private key pair to perform

the authentication, allowing users or programs to log in without having to

specify a password. In this scenario, anyone can produce a matching pair of

different keys (public and private). The public key is placed on all computers

that must allow access to the owner of the matching private key (the owner

keeps the private key secret). While authentication is based on the private

key, the key itself is never transferred through the network during

authentication. SSH only verifies whether the same person offering the

public key also owns the matching private key. In all versions of SSH it is

important to verify unknown public keys, i.e. associate the public keys with

identities, before accepting them as valid. Accepting an attacker's public key

without validation will authorize an unauthorized attacker as a valid user.
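The manually generated key-pair workflow described above can be sketched as follows; the file paths and the remote host name are illustrative:

```shell
# Generate a key pair with no passphrase (for real use, keep keys under
# ~/.ssh and consider protecting the private key with a passphrase)
ssh-keygen -t rsa -b 4096 -f /tmp/demo_key -N "" -C "demo key"

# Install the public key on the remote host (hypothetical host), after
# which logins with the matching private key need no password:
# ssh-copy-id -i /tmp/demo_key.pub alice@server.example.com
# ssh -i /tmp/demo_key alice@server.example.com

# The private key stays local; only the .pub file is ever copied out
ls -l /tmp/demo_key /tmp/demo_key.pub
```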

SSH is typically used to log in to a remote machine and execute commands,

but it also supports tunneling, forwarding TCP ports and X11 connections; it

can transfer files using the associated SSH file transfer (SFTP) or secure

copy (SCP) protocols. SSH uses the client-server model.

The standard TCP port 22 has been assigned for contacting SSH servers. An

SSH client program is typically used for establishing connections to an SSH

daemon accepting remote connections. Both are commonly present on most

modern operating systems, including Mac OS X, most distributions of


Linux, OpenBSD, FreeBSD, NetBSD, Solaris and OpenVMS. Notably,

Windows is one of the few modern desktop/server OSs that does not include

SSH by default. Proprietary, freeware and open source (e.g. PuTTY and the

version of OpenSSH which is part of Cygwin) versions of various levels of

complexity and completeness exist. Native Linux file managers (e.g.

Konqueror) can use the FISH protocol to provide a split-pane GUI with

drag-and-drop. The open source Windows program WinSCP provides

similar file management (synchronization, copy, remote delete) capability

using PuTTY as a back-end. Both WinSCP and PuTTY are available

packaged to run directly off a USB drive, without requiring installation on

the client machine. Setting up an SSH server in Windows typically involves

installation (e.g. via installing Cygwin ).

SSH is important in cloud computing to solve connectivity problems,

avoiding the security issues of exposing a cloud-based virtual machine

directly on the Internet. An SSH tunnel can provide a secure path over the

Internet, through a firewall to a virtual machine.

SSH is a protocol that can be used for many applications across many

platforms including most Unix variants (Linux, the BSDs including Apple's

OS X, and Solaris), as well as Microsoft Windows. Some of the applications

below may require features that are only available or compatible with

specific SSH clients or servers. For example, using the SSH protocol to

implement a VPN is possible, but presently only with the OpenSSH server

and client implementation.

For login to a shell on a remote host (replacing Telnet and rlogin)

For executing a single command on a remote host (replacing rsh)


For setting up automatic (passwordless) login to a remote server (for

example, using OpenSSH)

Secure file transfer

In combination with rsync to back up, copy and mirror files efficiently

and securely

For forwarding or tunneling a port (not to be confused with a VPN,

which routes packets between different networks, or bridges two

broadcast domains into one).

For using as a full-fledged encrypted VPN. Note that only OpenSSH

server and client supports this feature.

For forwarding X from a remote host (possible through multiple

intermediate hosts)

For browsing the web through an encrypted proxy connection with

SSH clients that support the SOCKS protocol.

For securely mounting a directory on a remote server as a filesystem

on a local computer using SSHFS.

For automated remote monitoring and management of servers through

one or more of the mechanisms discussed above.

For development on a mobile or embedded device that supports SSH.
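A few of the uses listed above, expressed as commands; every host name and port here is a placeholder, so the lines are shown commented out:

```shell
# Run a single command on a remote host (replacing rsh)
# ssh alice@server.example.com uptime

# Forward local port 8080 to port 80 on a host reachable from the server
# ssh -L 8080:intranet.example.com:80 alice@server.example.com

# Open a SOCKS proxy on local port 1080 for browsing through the server
# ssh -D 1080 alice@server.example.com

# Copy a file securely with SCP
# scp report.txt alice@server.example.com:/tmp/
```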


Fig. 3.6 The SSH Server

INSTALLATION:

Step 1: Install the openssh-server package:

Open the terminal. Then write the following command to install the

openssh-server package.

[root@localhost Desktop] # yum install openssh-server

Once the openssh-server package is installed properly then go to the next

step.

Step 2: Configure the software:

Here we do not need to edit the configuration file because the default

settings in /etc/ssh/sshd_config already accept remote connections. The

default configuration is stable and usable on any network, and since all

traffic (including passwords) is encrypted, there is no worry about a

breach of security over an untrusted network.

Step 3: Starting the service:

Now start the service i.e. the daemon by typing following command:

[root@localhost Desktop] # systemctl start sshd

The SSH service is now started.


3.8. TELNET:

INTRODUCTION:

Telnet is an application layer protocol used on the Internet or local area

networks to provide a bidirectional interactive text-oriented communication

facility using a virtual terminal connection. User data is interspersed in-band

with Telnet control information in an 8-bit byte oriented data connection

over the Transmission Control Protocol (TCP).

Telnet was developed in 1969 beginning with RFC 15, extended in RFC

854, and standardized as Internet Engineering Task Force (IETF) Internet

Standard STD 8, one of the first Internet standards.

Historically, Telnet provided access to a command-line interface (usually, of

an operating system) on a remote host, including most network equipment

and operating systems with a configuration utility (including systems based

on Windows NT). However, because of serious security concerns when

using Telnet over an open network such as the Internet, its use for this

purpose has waned significantly in favor of SSH.

The term telnet is also used to refer to the software that implements the

client part of the protocol. Telnet client applications are available for

virtually all computer platforms. Telnet is also used as a verb. To telnet

means to establish a connection with the Telnet protocol, either with

command line client or with a programmatic interface. For example, a

common directive might be: "To change your password, telnet to the server,

log in and run the passwd command." Most often, a user will be telnetting to


a Unix-like server system or a network device (such as a router) and

obtaining a login prompt to a command line text interface or a character-

based full-screen manager.

When Telnet was initially developed in 1969, most users of networked

computers were in the computer departments of academic institutions, or at

large private and government research facilities. In this environment,

security was not nearly as much a concern as it became after the bandwidth

explosion of the 1990s. The rise in the number of people with access to the

Internet, and by extension the number of people attempting to hack other

people's servers, made encrypted alternatives necessary.

Experts in computer security, such as SANS Institute, recommend that the

use of Telnet for remote logins should be discontinued under all normal

circumstances, for the following reasons:

Telnet, by default, does not encrypt any data sent over the connection

(including passwords), and so it is often feasible to eavesdrop on the

communications and use the password later for malicious purposes;

anybody who has access to a router, switch, hub or gateway located

on the network between the two hosts where Telnet is being used can

intercept the packets passing by and obtain login, password and

whatever else is typed with a packet analyzer.

Most implementations of Telnet have no authentication that would

ensure communication is carried out between the two desired hosts

and not intercepted in the middle.

Several vulnerabilities have been discovered over the years in

commonly used Telnet daemons.


These security-related shortcomings have seen the usage of the Telnet

protocol drop rapidly, especially on the public Internet, in favor of the

Secure Shell (SSH) protocol, first released in 1995. SSH provides much of

the functionality of telnet, with the addition of strong encryption to prevent

sensitive data such as passwords from being intercepted, and public key

authentication, to ensure that the remote computer is actually who it claims

to be. As has happened with other early Internet protocols, extensions to the

Telnet protocol provide Transport Layer Security (TLS) security and Simple

Authentication and Security Layer (SASL) authentication that address the

above concerns. However, most Telnet implementations do not support

these extensions; and there has been relatively little interest in implementing

these as SSH is adequate for most purposes.

It is of note that there are a large number of industrial and scientific devices

which have only Telnet available as a communication option. Some are built

with only a standard RS-232 port and use a serial server hardware appliance

to provide the translation between the TCP/Telnet data and the RS-232 serial

data. In such cases, SSH is not an option unless the interface appliance can

be configured for SSH.


Fig. 3.7 The Telnet Server

INSTALLATION:

Step 1: Install the telnet-server package:

Open the terminal. Then write the following command to install the

telnet-server package.

[root@localhost Desktop] # yum install telnet-server

Once the telnet-server package is installed properly then go to the next step.

Step 2: Configure the software:

Here we do not need to edit a configuration file because the default

settings already accept network connections. Note, however, that unlike

SSH the Telnet connection is not encrypted, so it should only be used on a

trusted network, as discussed above.

Step 3: Starting the service:

Now start the service i.e. the daemon by typing following command:

[root@localhost Desktop] # systemctl start telnet.socket

The Telnet service is now started.


3.9. MAIL SERVER:

INTRODUCTION:

Within Internet message handling services (MHS), a message transfer agent

or mail transfer agent (MTA) or mail relay is software that transfers

electronic mail messages from one computer to another using a client–server

application architecture. An MTA implements both the client (sending) and

server (receiving) portions of the Simple Mail Transfer Protocol.

The terms mail server, mail exchanger, and MX host may also refer to a

computer performing the MTA function. The Domain Name System (DNS)

associates a mail server to a domain with an MX record containing the

domain name of the host(s) providing MTA services.

A mail server is a computer that serves as an electronic post office for email.

Mail exchanged across networks is passed between mail servers that run

specially designed software. This software is built around agreed-upon,

standardized protocols for handling mail messages and any data files (such

as images, multimedia or documents) that might be attached to them.

A message transfer agent receives mail from either another MTA, a mail

submission agent (MSA), or a mail user agent (MUA). The transmission

details are specified by the Simple Mail Transfer Protocol (SMTP). When a

recipient mailbox of a message is not hosted locally, the message is relayed,

that is, forwarded to another MTA. Every time an MTA receives an email

message, it adds a Received trace header field to the top of the header of the


message,[4] thereby building a sequential record of MTAs handling the

message. The process of choosing a target MTA for the next hop is also

described in SMTP, but can usually be overridden by configuring the MTA

software with specific routes.

An MTA works in the background, while the user usually interacts directly

with a mail user agent. One may distinguish initial submission as first

passing through an MSA – port 587 is used for communication between an

MUA and an MSA while port 25 is used for communication between MTAs,

or from an MSA to an MTA;[5] this distinction is first made in RFC 2476.

For recipients hosted locally, the final delivery of email to a recipient

mailbox is the task of a message delivery agent (MDA). For this purpose the

MTA transfers the message to the message handling service component of

the message delivery agent. Upon final delivery, the Return-Path field is

added to the envelope to record the return path.

The function of an MTA is usually complemented with some means for

email clients to access stored messages. This function typically employs a

different protocol. The most widely implemented open protocols for the

MUA are the Post Office Protocol (POP3) and the Internet Message Access

Protocol (IMAP), but many proprietary systems exist for retrieving

messages (e.g. Exchange, Lotus Domino/Notes). Many systems also offer a

web interface for reading and sending email that is independent of any

particular MUA.

At its most basic, an MUA using POP3 downloads messages from the server

mailbox onto the local computer for display in the MUA. Messages are


generally removed from the server at the same time but most systems also

allow a copy to be left behind as a backup. In contrast, an MUA using IMAP

displays messages directly from the server, although a download option for

archive purposes is usually also available. One advantage this gives IMAP is

that the same messages are visible from any computer accessing the email

account, since messages aren't routinely downloaded and deleted from the

server. If set up properly, sent mail can be saved to the server also, in

contrast with POP mail, where sent messages exist only in the local MUA

and are not visible by other MUAs accessing the same account.

The IMAP protocol has features that allow uploading of mail messages and

there are implementations that can be configured to also send messages like

an MTA,[6] which combine sending a copy and storing a copy in the Sent

folder in one upload operation.

The reason for using SMTP as a standalone transfer protocol is twofold:

To cope with discontinuous connections. Historically, inter-network

connections were not continuously available as they are today and

many readers didn't need an access protocol, as they could access their

mailbox directly (as a file) through a terminal connection. SMTP, if

configured to use backup MXes, can transparently cope with

temporary local network outages. A message can be transmitted along

a variable path by choosing the next hop from a preconfigured list of

MXes with no intervention from the originating user.

Submission policies. Modern systems are designed for users to submit

messages to their local servers for policy, not technical, reasons. It

was not always that way. For example, the original Eudora email


client featured direct delivery of mail to the recipients' servers, out of

necessity. Today, funneling email through MSA systems run by

providers that in principle have some means of holding their users

accountable for the generation of the email is a defense against spam

and other forms of email abuse.[7]

Fig. 3.8 The Mail Server

INSTALLATION:

Step 1: Install the postfix package:

Open the terminal. Then write the following command to install the postfix

package.


[root@localhost Desktop] # yum install postfix

Once the postfix package is installed properly then go to the next step.

Step 2: Configure the software:

Configuring the software means changing the internal settings of the

software. For Postfix these include the host name, the domains to accept

mail for and the networks allowed to relay. If you need to change these

settings, type the following command:

[root@localhost Desktop] # vim /etc/postfix/main.cf

By default this configuration lets the server send mail to anyone but not

receive it from other hosts. To receive mail, set inet_interfaces = all and

open the SMTP port (25) in the Linux firewall.
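The settings involved can be sketched as a main.cf fragment; the host and domain names are assumptions:

```ini
# /etc/postfix/main.cf (fragment)
myhostname = mail.example.com
mydomain = example.com
myorigin = $mydomain
# Listen on all interfaces so that other hosts can deliver mail to us
inet_interfaces = all
# Domains this server accepts mail for
mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain
```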

Step 3: Starting the service:

Now start the service i.e. the daemon by typing following command:

[root@localhost Desktop] # systemctl start postfix

The Postfix mail service is now started.


3.10. DHCP:

INTRODUCTION:

As long as you're learning about your IP address, you should learn a little

about something called DHCP—which stands for Dynamic Host

Configuration Protocol. Why bother? Because it has a direct impact on

millions of IP addresses, most likely including yours.

DHCP is at the heart of assigning you (and everyone) their IP address. The

key word in DHCP is protocol—the guiding rules and process for Internet

connections for everyone, everywhere. DHCP is consistent, accurate and

works the same for every computer. Remember that without an IP address,

you would not be able to receive the information you requested. As you've

learned (by reading IP: 101), your IP address tells the Internet to send the

information that you requested (Web page, email, data, etc.) right to the

computer that requested it.

Those incredible protocols

There are more than one billion computers in the world, and each individual

computer needs its own IP address whenever it's online. The TCP/IP

protocols (our computers' built-in, internal networking software) include a

DHCP protocol. It automatically assigns and keeps tabs of IP addresses and

any "subnetworks" that require them. Nearly all IP addresses are dynamic, as

opposed to "static" IP addresses that never change.


DHCP is a part of the "application layer," which is just one of the several

TCP/IP protocols. All of the processing and figuring out of what to send to

whom happens virtually instantly.

Clients and servers

The networking world classifies computers into two distinctive categories:

1) individual computers, called "hosts," and

2) computers that help process and send data (called "servers"). A DHCP

server is one computer on the network that has a number of IP address at its

disposal to assign to the computers/hosts on that network. If you use a cable

company for Internet access, making them your Internet Service Provider,

they likely are your DHCP server.

Permission slips

Think of getting an IP address as similar to obtaining a special permission

slip from the DHCP server to use the Internet. In this scenario, you are the

DHCP client—whenever you want to go on the Internet, your computer

automatically requests an IP address from the network's DHCP server. If

there's one available, the DHCP server sends a response containing an IP

address to your computer.

How DHCP works

The key word in DHCP is "dynamic." Because instead of having just one

fixed and specific IP address, most computers will be assigned one that is

available from a subnet or "pool" that is assigned to the network. The

Internet isn't one big computer in one big location. It's an interconnected


network of networks, all created to make one-on-one connections between

any two clients that want to exchange information.

One of the features of DHCP is that it provides IP addresses that "expire."

When DHCP assigns an IP address, it actually leases that connection

identifier to the user's computer for a specific amount of time. The default

lease is five days.

Here is how the DHCP process works when you go online:

1. You go on your computer to connect to the Internet.

2. The network requests an IP address (this is actually referred to as a

DHCP discover message).

3. On behalf of your computer's request, the DHCP server allocates

(leases) to your computer an IP address. This is referred to as the

DHCP offer message.

4. Your computer (remember—you're the DHCP client) takes the first IP

address offer that comes along. It then responds with a DHCP request

message that verifies the IP address that's been offered and accepted.

5. DHCP then updates the appropriate network servers with the IP

address and other configuration information for your computer.

6. Your computer (or whatever network device you're using) accepts the

IP address for the lease term.

Typically, a DHCP server renews your lease automatically, without you (or

even a network administrator) having to do anything. However, if that IP

address's lease expires, you'll be assigned a new IP address using the same

DHCP protocols.


Here's the best part: You wouldn't even be aware of it, unless you happened

to check your IP address. Your Internet usage would continue as before.

DHCP takes place rather instantly, and entirely behind the scenes. We, as

everyday, ordinary computer users, never have to think twice about it. We

just get to enjoy this amazing and instantaneous technology that brings the

Internet to our fingertips when we open our browsers. I guess you could say

DHCP stands for "darn handy computer process"...or something like that.

Fig.3.9 The DHCP Server


INSTALLATION:

Step 1: Install the dhcp package:

Open the terminal. Then write the following command to install the dhcp

package.

[root@localhost Desktop] # yum install dhcp

Once the dhcp package is installed properly then go to the next step.

Step 2: Configure the software:

Configuring the software means changing the internal settings of the

software. For DHCP these include the subnets to serve, the address ranges

(pools) to lease from and the lease times. If you need to change these

settings, type the following command:

[root@localhost Desktop] # vim /etc/dhcp/dhcpd.conf
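A minimal dhcpd.conf might look like the following; the subnet, address pool and lease times are illustrative:

```ini
# /etc/dhcp/dhcpd.conf (fragment)
default-lease-time 600;
max-lease-time 7200;

subnet 192.168.1.0 netmask 255.255.255.0 {
        range 192.168.1.100 192.168.1.200;
        option routers 192.168.1.1;
        option domain-name-servers 192.168.1.1;
}
```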

Step 3: Starting the service:

Now start the service i.e. the daemon by typing following command:

[root@localhost Desktop] # systemctl start dhcpd

The DHCP service is now started.


3.11. DNS:

INTRODUCTION:

The Domain Name System (DNS) is a hierarchical decentralized naming

system for computers, services, or any resource connected to the Internet or

a private network. It associates various information with domain names

assigned to each of the participating entities. Most prominently, it translates

more readily memorized domain names to the numerical IP addresses

needed for the purpose of locating and identifying computer services and

devices with the underlying network protocols. By providing a worldwide,

distributed directory service, the Domain Name System is an essential

component of the functionality of the Internet.

The Domain Name System delegates the responsibility of assigning domain

names and mapping those names to Internet resources by designating

authoritative name servers for each domain. Network administrators may

delegate authority over sub-domains of their allocated name space to other

name servers. This mechanism provides distributed and fault tolerant service

and was designed to avoid a single large central database.

The Domain Name System also specifies the technical functionality of the

database service which is at its core. It defines the DNS protocol, a detailed

specification of the data structures and data communication exchanges used

in the DNS, as part of the Internet Protocol Suite. Historically, other

directory services preceding DNS were not scalable to large or global

directories as they were originally based on text files, prominently the


HOSTS.TXT resolver. The Domain Name System has been in use since the

1980s.

The Internet maintains two principal namespaces, the domain name

hierarchy and the Internet Protocol (IP) address spaces. The Domain Name

System maintains the domain name hierarchy and provides translation

services between it and the address spaces. Internet name servers and a

communication protocol implement the Domain Name System. A DNS

name server is a server that stores the DNS records for a domain; a DNS

name server responds with answers to queries against its database.

The most common types of records stored in the DNS database are for Start

of Authority (SOA), IP addresses (A and AAAA), SMTP mail exchangers

(MX), name servers (NS), pointers for reverse DNS lookups (PTR), and

domain name aliases (CNAME). Although not intended to be a general

purpose database, DNS can store records for other types of data for either

automatic lookups, such as DNSSEC records, or for human queries such as

responsible person (RP) records. As a general purpose database, the DNS

has also been used in combating unsolicited email (spam) by storing a real-

time blackhole list. The DNS database is traditionally stored in a structured

zone file.
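The common record types listed above appear together in a zone file; this fragment is illustrative, using the reserved example.com domain and documentation addresses:

```text
$TTL 86400
@       IN  SOA   ns1.example.com. admin.example.com. (
                  2016101101 ; serial
                  3600       ; refresh
                  1800       ; retry
                  604800     ; expire
                  86400 )    ; minimum
@       IN  NS    ns1.example.com.
@       IN  MX 10 mail.example.com.
@       IN  A     192.0.2.10
mail    IN  A     192.0.2.20
www     IN  CNAME example.com.
```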

An often-used analogy to explain the Domain Name System is that it serves

as the phone book for the Internet by translating human-friendly computer

hostnames into IP addresses. For example, the domain name

www.example.com translates to the addresses 93.184.216.119 (IPv4) and

2606:2800:220:6d:26bf:1447:1097:aa7 (IPv6). Unlike a phone book, DNS

can be quickly updated, allowing a service's location on the network to


change without affecting the end users, who continue to use the same host

name. Users take advantage of this when they use meaningful Uniform

Resource Locators (URLs), and e-mail addresses without having to know

how the computer actually locates the services.
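This name-to-address translation can be observed on any Linux host through the system resolver; a minimal sketch, using localhost so that the lookup succeeds even without a reachable DNS server:

```shell
# Resolve a name through the system resolver (NSS: /etc/hosts first, then DNS).
# localhost is chosen so this works without network access.
getent hosts localhost
```

For an external name such as www.example.com, the same command would consult the configured DNS servers instead of /etc/hosts.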

Additionally, DNS reflects administrative partitioning. For zones operated

by a registry, also known as public suffix zones, administrative information

is often complemented by the registry's RDAP and WHOIS services. That

data can be used to gain insight into, and track responsibility for, a given host

on the Internet.

An important and ubiquitous function of DNS is its central role in

distributed Internet services such as cloud services and content delivery

networks. When a user accesses a distributed Internet service using a URL,

the domain name of the URL is translated to the IP address of a server that is

proximal to the user. The key functionality of DNS exploited here is that

different users can simultaneously receive different translations for the same

domain name, a key point of divergence from a traditional "phone book"

view of DNS. This process of using DNS to assign proximal servers to users

is key to providing faster response times on the Internet and is widely used

by most major Internet services today.


Fig. 3.10 The DNS Server

INSTALLATION:

Step 1 – Install Bind Packages

Bind packages are available in the default yum repositories. To install the packages, simply execute the command below.

# yum install bind bind-chroot

Step 2 – Edit Main Configuration File

The default BIND main configuration file is located under the /etc directory, but in a chroot environment it is located under /var/named/chroot/etc. Edit the main configuration file and update its content as below.

# vim /var/named/chroot/etc/named.conf


Content for the named.conf file

// /var/named/chroot/etc/named.conf
options {
        listen-on port 53 { 127.0.0.1; 192.168.1.0/24; 0.0.0.0/0; };
        listen-on-v6 port 53 { ::1; };
        directory "/var/named";
        dump-file "/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        memstatistics-file "/var/named/data/named_mem_stats.txt";
        allow-query { localhost; 192.168.1.0/24; 0.0.0.0/0; };
        recursion yes;

        dnssec-enable yes;
        dnssec-validation yes;
        dnssec-lookaside auto;

        /* Path to ISC DLV key */
        bindkeys-file "/etc/named.iscdlv.key";

        managed-keys-directory "/var/named/dynamic";
};

logging {
        channel default_debug {
                file "data/named.run";
                severity dynamic;
        };
};

zone "." IN {
        type hint;
        file "named.ca";
};

zone "demotecadmin.net" IN {
        type master;
        file "/var/named/demotecadmin.net.db";
};

include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";
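It can help to confirm which zones a named.conf-style file declares before going further. On the server itself, BIND ships a proper validator, named-checkconf; the grep sketch below is only an illustration that extracts the declared zone names from a sample file, not part of the official tooling:

```shell
# Illustrative only: list the zones declared in a named.conf-style file.
# On the real server you would instead run:
#   named-checkconf /var/named/chroot/etc/named.conf
conf=$(mktemp)
cat > "$conf" <<'EOF'
zone "." IN { type hint; file "named.ca"; };
zone "demotecadmin.net" IN { type master; file "/var/named/demotecadmin.net.db"; };
EOF
zones=$(grep -o 'zone "[^"]*"' "$conf" | cut -d'"' -f2)
echo "$zones"
rm -f "$conf"
```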

Step 3 – Create Zone File for Your Domain

After creating the BIND main configuration file, create a zone file for your domain as referenced in the configuration; in this article it is demotecadmin.net.db.

# vim /var/named/chroot/var/named/demotecadmin.net.db

Content for the zone file


; Zone file for demotecadmin.net
$TTL 14400
@       86400   IN      SOA     ns1.tecadmin.net. webmaster.tecadmin.net. (
                                3013040200      ; serial, today's date + serial number
                                86400           ; refresh, seconds
                                7200            ; retry, seconds
                                3600000         ; expire, seconds
                                86400 )         ; minimum, seconds
demotecadmin.net.       86400   IN      NS      ns1.tecadmin.net.
demotecadmin.net.       86400   IN      NS      ns2.tecadmin.net.
demotecadmin.net.               IN      A       192.168.1.100
demotecadmin.net.               IN      MX      0 mail.demotecadmin.net.
mail                            IN      CNAME   demotecadmin.net.
www                             IN      CNAME   demotecadmin.net.
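The serial field deserves a note: by convention it has the form YYYYMMDDnn (the date plus a two-digit revision number), and it must increase on every edit so that secondary servers notice the change. A sketch of generating one:

```shell
# Build a zone serial in the conventional YYYYMMDDnn form,
# e.g. 2016101101 for the first revision on 2016-10-11.
serial="$(date +%Y%m%d)01"
echo "$serial"
```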

If you have more domains, you must create a zone file for each domain individually.

Step 4 – Add More Domains

To add more domains to the DNS, create a zone file for each domain as above. Then add an entry for each zone in named.conf like the one below, changing demotecadmin.net to your domain name.

zone "demotecadmin.net" IN {
        type master;
        file "/var/named/demotecadmin.net.db";
};

Step 5 – Start Bind Service

Start the named (BIND) service using the following command.

# service named restart
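Before restarting, it is wise to validate the zone so that named does not load a broken file. named-checkzone, shipped with BIND, is the standard validator; the grep fallback below is only an illustrative minimal sanity check, run here against a shortened copy of the example zone above:

```shell
# Sanity-check a zone file before serving it.  On the server, the real
# validator is:
#   named-checkzone demotecadmin.net /var/named/chroot/var/named/demotecadmin.net.db
# The grep check below only confirms the mandatory SOA and NS records exist.
zfile=$(mktemp)
cat > "$zfile" <<'EOF'
$TTL 14400
@  86400 IN SOA ns1.tecadmin.net. webmaster.tecadmin.net. ( 3013040200 86400 7200 3600000 86400 )
@  86400 IN NS  ns1.tecadmin.net.
@        IN A   192.168.1.100
EOF
if grep -q ' SOA ' "$zfile" && grep -q ' NS ' "$zfile"; then
    status="basic records present"
else
    status="missing SOA or NS record"
fi
echo "$status"
rm -f "$zfile"
```

Once the service is running, a query such as `dig @127.0.0.1 demotecadmin.net` on the server itself would confirm that named answers with the configured A record.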


References

I studied PHP, MySQL, and related tools; Dreamweaver CS5 was the main tool used for working with PHP. I also used the Apache server and MySQL to store data in a database. In the making of this report I got a lot of help from books and websites.

The web sources are:

[1] www.techmint.net

[2] www.w3schools.com

[3] www.google.com

The books are:

[4] PHP: A Beginner's Guide (Vikram Vaswani)

[5] PHP 6 and MySQL Bible

[6] Sams Teach Yourself HTML and CSS (Mike Wooldridge & Linda Wooldridge)
