Saturday, 25 April 2015


OpenLDAP Server Configuration on RHEL 7 / CentOS 7

Step 1: Install the following packages:

# yum install -y openldap openldap-clients openldap-servers migrationtools

Step 2: Generate an encrypted LDAP password for the Manager user (here, redhat):

# slappasswd -s redhat -n > /etc/openldap/secret-passwd

Step 3: Configure OpenLDAP Server: 

# vim /etc/openldap/slapd.d/"cn=config"/"olcDatabase={2}hdb.ldif"

(On RHEL 7 / CentOS 7 the default database file is olcDatabase={2}hdb.ldif; on older releases it is olcDatabase={2}bdb.ldif.)

#do the following changes

olcSuffix: dc=example,dc=com

olcRootDN: cn=Manager,dc=example,dc=com

olcRootPW: PASTE YOUR ENCRYPTED PASSWORD HERE from /etc/openldap/secret-passwd

olcTLSCertificateFile: /etc/pki/CA/cacert.pem

olcTLSCertificateKeyFile: /etc/pki/CA/private/cakey.pem

:wq (save and exit)
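As a side note, once slapd is running (Step 10) the same values can also be applied online instead of editing the file directly. A minimal sketch, assuming the database entry is olcDatabase={2}hdb,cn=config as on a default RHEL 7 install (the TLS attributes can be added the same way):

# ldapmodify -Y EXTERNAL -H ldapi:/// <<'EOF'
dn: olcDatabase={2}hdb,cn=config
changetype: modify
replace: olcSuffix
olcSuffix: dc=example,dc=com
-
replace: olcRootDN
olcRootDN: cn=Manager,dc=example,dc=com
-
replace: olcRootPW
olcRootPW: PASTE YOUR ENCRYPTED PASSWORD HERE
EOF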

Step 4: Configure Monitoring Database Configuration file: 

#vim /etc/openldap/slapd.d/"cn=config"/"olcDatabase={1}monitor.ldif"

#do the following change

olcAccess: {0}to * by dn.base="gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth" read by dn.base="cn=Manager,dc=example,dc=com" read by * none


:wq (save and exit)

Step 5: Generate an X.509 self-signed certificate valid for 365 days:

# openssl req -new -x509 -nodes -out /etc/pki/CA/cacert.pem -keyout /etc/pki/CA/private/cakey.pem -days 365

Country Name (2 letter code) [XX]: IN

State or Province Name (full name) []: Delhi

Locality Name (eg, city) [Default City]: New Delhi

Organization Name (eg, company) [Default Company Ltd]: Example, Inc.

Organizational Unit Name (eg, section) []: Training

Common Name (eg, your name or your server's hostname) []:server1.example.com

Email Address []: root@server1.example.com
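Optionally, the generated certificate can be inspected before moving on; this is just a sanity check using standard openssl options:

# openssl x509 -in /etc/pki/CA/cacert.pem -noout -subject -dates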

Step 6: Secure the content of the /etc/pki/CA/ directory:

# cd /etc/pki/CA/

# chown ldap:ldap cacert.pem

 # cd /etc/pki/CA/private/

# chown ldap:ldap cakey.pem

# chmod 600 cakey.pem

Step 7: Prepare the LDAP database:


# cp -rvf /usr/share/openldap-servers/DB_CONFIG.example /var/lib/ldap/DB_CONFIG

# chown -R ldap:ldap /var/lib/ldap/

Step 8: Enable LDAPS: 

#vim /etc/sysconfig/slapd

 #Do the following changes

SLAPD_URLS="ldapi:///   ldap:///   ldaps:///"

:wq (save and exit)


Step 9: Test the configuration:

# slaptest -u
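If the configuration is valid, slaptest should report a success message similar to:

config file testing succeeded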

Step 10: Start and enable the slapd service at boot: 

# systemctl start slapd

# systemctl enable slapd

Step 11: Check the LDAP activity:

# netstat -lt | grep ldap

#netstat -tunlp | egrep "389|636"


Step 12: To start the configuration of the LDAP server, add the following LDAP schemas:

# cd /etc/openldap/schema

# ldapadd -Y EXTERNAL -H ldapi:/// -D "cn=config" -f cosine.ldif

# ldapadd -Y EXTERNAL -H ldapi:/// -D "cn=config" -f nis.ldif

# ldapadd -Y EXTERNAL -H ldapi:/// -D "cn=config" -f collective.ldif

# ldapadd -Y EXTERNAL -H ldapi:/// -D "cn=config" -f corba.ldif

# ldapadd -Y EXTERNAL -H ldapi:/// -D "cn=config" -f core.ldif

# ldapadd -Y EXTERNAL -H ldapi:/// -D "cn=config" -f duaconf.ldif

# ldapadd -Y EXTERNAL -H ldapi:/// -D "cn=config" -f dyngroup.ldif

# ldapadd -Y EXTERNAL -H ldapi:/// -D "cn=config" -f inetorgperson.ldif

# ldapadd -Y EXTERNAL -H ldapi:/// -D "cn=config" -f java.ldif

# ldapadd -Y EXTERNAL -H ldapi:/// -D "cn=config" -f misc.ldif

# ldapadd -Y EXTERNAL -H ldapi:/// -D "cn=config" -f openldap.ldif

# ldapadd -Y EXTERNAL -H ldapi:/// -D "cn=config" -f pmi.ldif

# ldapadd -Y EXTERNAL -H ldapi:/// -D "cn=config" -f ppolicy.ldif

                        ###########################################################
                        # NOTE: You can add schema files according to your needs. #
                        ###########################################################
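If you prefer, the same schema files can be loaded in a single loop; this sketch simply mirrors the list of ldapadd commands above (any schema that is already loaded will just return an error you can ignore):

cd /etc/openldap/schema
for s in cosine nis collective corba core duaconf dyngroup inetorgperson java misc openldap pmi ppolicy; do
    ldapadd -Y EXTERNAL -H ldapi:/// -D "cn=config" -f ${s}.ldif
done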

Step 13: Now use Migration Tools to create LDAP DIT: 

# cd /usr/share/migrationtools

# vim migrate_common.ph

# do the following changes

On line 61, set the group naming context:
  $NAMINGCONTEXT{'group'} = "ou=Groups";

On line 71, set your mail domain:
  $DEFAULT_MAIL_DOMAIN = "example.com";

On line 74, set your base DN:
  $DEFAULT_BASE = "dc=example,dc=com";

On line 90, enable the extended schema:
  $EXTENDED_SCHEMA = 1;


:wq (save and exit)
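A quick optional check after saving: grep the variables you just changed to confirm the new values took effect.

# grep -E 'DEFAULT_MAIL_DOMAIN|DEFAULT_BASE|EXTENDED_SCHEMA' migrate_common.ph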

Step 14: Generate a base.ldif file for your Domain DIT: 

#./migrate_base.pl > /root/base.ldif

Step 15: Load "base.ldif" into LDAP Database: 

#ldapadd -x -W -D "cn=Manager,dc=example,dc=com" -f /root/base.ldif

Step 16: Now create some users and groups, then migrate them from the local database to the LDAP database:

#mkdir /home/guests
#useradd -d /home/guests/ldapuser1 ldapuser1
#useradd -d /home/guests/ldapuser2 ldapuser2
#useradd -d /home/guests/ldapuser3 ldapuser3
#useradd -d /home/guests/ldapuser4 ldapuser4
#useradd -d /home/guests/ldapuser5 ldapuser5

#echo 'password' | passwd --stdin ldapuser1
#echo 'password' | passwd --stdin ldapuser2
#echo 'password' | passwd --stdin ldapuser3
#echo 'password' | passwd --stdin ldapuser4
#echo 'password' | passwd --stdin ldapuser5
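The five accounts above can equally be created with a small loop; this sketch assumes the same user names and the same throwaway password:

for i in 1 2 3 4 5; do
    useradd -d /home/guests/ldapuser$i ldapuser$i
    echo 'password' | passwd --stdin ldapuser$i
done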

Step 17: Now filter these users, their groups, and their passwords (from /etc/shadow) into separate files:

#getent passwd | tail -n 5 > /root/users

#getent shadow | tail -n 5 > /root/shadow

# getent group | tail -n 5 > /root/groups
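Note that tail -n 5 assumes the five LDAP users are the last entries returned by getent. A slightly safer sketch filters by name instead:

# getent passwd | grep ldapuser > /root/users
# getent shadow | grep ldapuser > /root/shadow
# getent group  | grep ldapuser > /root/groups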

Step 18: Now you can delete these users from the local database:

#userdel ldapuser1
#userdel ldapuser2
#userdel ldapuser3
#userdel ldapuser4
#userdel ldapuser5

Step 19: Now create LDIF files for these users and groups using migrationtools:

# cd /usr/share/migrationtools/

# vim migrate_passwd.pl

# on line 188, replace /etc/shadow with /root/shadow

:wq (save and exit)

# ./migrate_passwd.pl /root/users > /root/users.ldif

# ./migrate_group.pl /root/groups > /root/groups.ldif

Step 20: Upload the users and groups LDIF files into the LDAP database:

# ldapadd -x -W -D "cn=Manager,dc=example,dc=com" -f /root/users.ldif

# ldapadd -x -W -D "cn=Manager,dc=example,dc=com" -f /root/groups.ldif

Step 21: Now search LDAP DIT for all records: 

# ldapsearch -x -b "dc=example,dc=com" -H ldap://server1.example.com
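To check a single account rather than the whole tree, you can filter by uid; a small example, assuming the same base DN and server name as above:

# ldapsearch -x -H ldap://server1.example.com -b "dc=example,dc=com" "(uid=ldapuser1)"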

Step 22: Now share the LDAP users' home directories via NFS:

#vim /etc/exports

#Add the following line:

/home/guests    192.168.48.0/255.255.255.0(rw,sync)


:wq (save and exit)

#systemctl start nfs

#systemctl enable nfs
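As a quick, optional check that the export is active (showmount is part of nfs-utils):

# exportfs -v
# showmount -e server1.example.com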

Step 23: Share your CA Certificate to clients via FTP/HTTP: 

#yum install vsftpd httpd -y

# cp -rvf /etc/pki/CA/cacert.pem /var/ftp/pub/

# ln -s /var/ftp/pub/ /var/www/html/

#systemctl start vsftpd

#systemctl enable vsftpd

#systemctl start httpd

#systemctl enable httpd
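A quick way to confirm the certificate is reachable from a client, assuming curl is installed:

# curl -O http://server1.example.com/pub/cacert.pem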

Step 24: Now Go to the client machine and install the following packages: 

#yum install openldap-clients sssd pam_ldap authconfig-gtk -y

Step 25: Run the "authconfig-gtk" command to configure the machine as an LDAP client:

# authconfig-gtk

Click on "Identity & Authentication" Tab

Click on drop down menu in "User Account Database" and Select "LDAP"

in LDAP Search Base DN: dc=example,dc=com

in LDAP Server: ldap://server1.example.com

Select the check Box of "Use TLS to encrypt connections"

Click "Download CA Certificate"

In Certificate URL: type http://server1.example.com/pub/cacert.pem

Authentication Protocol: LDAP Password

Click "OK"


# getent passwd ldapuser1

Step 26: Now configure your client machine to access the LDAP users' home directories from "server1.example.com":

#yum install autofs -y

#vim /etc/auto.master

#add the following line

/home/guests /etc/auto.guests

:wq (save and exit)

#vim /etc/auto.guests

#add the following line

* -rw server1.example.com:/home/guests/&

:wq (save and exit)

Step 27: Now start and enable autofs service at boot: 

#systemctl restart autofs

#systemctl enable autofs

Step 28: Now try to log in as an LDAP user on the client machine:

#ssh ldapuser1@client.example.com

Password: password

[ldapuser1@client.example.com ~]$


You may have issues with the firewall (firewalld/iptables), so either add the required ports/services (LDAP 389, LDAPS 636, NFS, HTTP, FTP) to the firewall or disable it for testing.

############Congratulations, You have configured LDAP server and client##############

Friday, 24 April 2015

How to Configure SSL Certificate in Tomcat



We are assuming that you already have a working Tomcat server installed on your system. If not, you can visit the earlier article, Install Tomcat 7 on CentOS, RHEL or Ubuntu, Debian Systems. This article can be used for both Linux and Windows hosts; the only thing we need to change is the directory path of the keystore.

Step 1. Create Keystore

A Java KeyStore (JKS) is a repository of security certificates. keytool is the command-line utility for creating and managing keystores. It is available with both the JDK and the JRE; we just need to make sure that the JDK or JRE bin directory is configured in the PATH environment variable.
# keytool -genkey -alias svr1.tecadmin.net -keyalg RSA -keystore /etc/pki/keystore
[Sample Output]
Enter keystore password:
Re-enter new password:
What is your first and last name?
  [Unknown]:  Rahul Kumar
What is the name of your organizational unit?
  [Unknown]:  Web
What is the name of your organization?
  [Unknown]:  TecAdmin Inc.
What is the name of your City or Locality?
  [Unknown]:  Delhi
What is the name of your State or Province?
  [Unknown]:  Delhi
What is the two-letter country code for this unit?
  [Unknown]:  IN
Is CN=Rahul Kumar, OU=Web, O=TecAdmin Inc., L=Delhi, ST=Delhi, C=IN correct?
  [no]:  yes

Enter key password for <svr1.tecadmin.net>
        (RETURN if same as keystore password):
Re-enter new password:

Step 2. Get a CA-Signed SSL Certificate [ Self-Signed Users Can Skip This ]

You don't need to do this step if you are going to use a self-signed SSL certificate. If you want to purchase a valid SSL certificate from a certificate authority, you first need to create a CSR. Use the following command to do it.
Create CSR:
# keytool -certreq -keyalg RSA -alias svr1.tecadmin.net -file svr1.csr -keystore /etc/pki/keystore
The above command will prompt for the keystore password and generate the CSR file. Use this CSR to purchase an SSL certificate from any certificate authority.
After the CA issues the certificate, you will have the following files: the root certificate, the intermediate certificate, and the issued certificate. In my case the filenames are
A. root.crt (root certificate)
B. intermediate.crt (intermediate certificate)
C. svr1.tecadmin.net.crt (certificate issued by the CA)
Install the root certificate:
# keytool -import -alias root -keystore /etc/pki/keystore -trustcacerts -file root.crt
Install the intermediate certificate:
# keytool -import -alias intermed -keystore /etc/pki/keystore -trustcacerts -file intermediate.crt
Install the issued certificate:
# keytool -import -alias svr1.tecadmin.net -keystore /etc/pki/keystore -trustcacerts -file svr1.tecadmin.net.crt
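As an optional check, list the keystore to confirm the root, intermediate, and issued certificates were all imported:
# keytool -list -keystore /etc/pki/keystore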

Step 3. Configure Tomcat with Keystore

Now go to your Tomcat installation directory and edit the conf/server.xml file in your favorite editor, updating the configuration as below. You may also change the port from 8443 to some other port if required.
    <Connector port="8443" protocol="HTTP/1.1"
                connectionTimeout="20000"
                redirectPort="8443"
                SSLEnabled="true"
                scheme="https"
                secure="true"
                sslProtocol="TLS"
                keystoreFile="/etc/pki/keystore"
                keystorePass="_password_" />

Step 4. Restart Tomcat

Use your init script (if you have one) to restart the Tomcat service. In my case I use the shell scripts (startup.sh and shutdown.sh) for stopping and starting Tomcat.
# ./bin/shutdown.sh
# ./bin/startup.sh
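An optional check from the shell, assuming Tomcat is listening on 8443 as configured above:
# openssl s_client -connect localhost:8443 -showcerts < /dev/null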

Step 5. Verify Setup

Now that we have done all the required configuration for the Tomcat setup, let's access Tomcat in the browser on the port configured in step 3.

Friday, 10 April 2015

Run a Java Application as a Service on Linux CentOS 6.5

Step 1: Run the jar directly in the background:

java -jar "/your path/WebServer.jar" &

Try the following init-style script if the above is not enough (for example, if you need start/stop/restart control):
#!/bin/sh
# Simple init-style wrapper to start/stop/restart a jar as a background service.
SERVICE_NAME=MyService
PATH_TO_JAR="/your path/MyJar.jar"
PID_PATH_NAME=/tmp/MyService-pid
case "$1" in
    start)
        echo "Starting $SERVICE_NAME ..."
        if [ ! -f "$PID_PATH_NAME" ]; then
            nohup java -jar "$PATH_TO_JAR" /tmp >> /dev/null 2>&1 &
            echo $! > "$PID_PATH_NAME"
            echo "$SERVICE_NAME started ..."
        else
            echo "$SERVICE_NAME is already running ..."
        fi
    ;;
    stop)
        if [ -f "$PID_PATH_NAME" ]; then
            PID=$(cat "$PID_PATH_NAME")
            echo "$SERVICE_NAME stopping ..."
            kill "$PID"
            echo "$SERVICE_NAME stopped ..."
            rm "$PID_PATH_NAME"
        else
            echo "$SERVICE_NAME is not running ..."
        fi
    ;;
    restart)
        if [ -f "$PID_PATH_NAME" ]; then
            PID=$(cat "$PID_PATH_NAME")
            echo "$SERVICE_NAME stopping ..."
            kill "$PID"
            echo "$SERVICE_NAME stopped ..."
            rm "$PID_PATH_NAME"
            echo "$SERVICE_NAME starting ..."
            nohup java -jar "$PATH_TO_JAR" /tmp >> /dev/null 2>&1 &
            echo $! > "$PID_PATH_NAME"
            echo "$SERVICE_NAME started ..."
        else
            echo "$SERVICE_NAME is not running ..."
        fi
    ;;
    *)
        echo "Usage: $0 {start|stop|restart}"
    ;;
esac
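A hedged usage sketch, assuming the script above is saved as /etc/init.d/myservice on CentOS 6:

chmod +x /etc/init.d/myservice
service myservice start
service myservice stop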

ROCKS CLUSTER



3.2. Install and Configure Your Frontend



This section describes how to install your Rocks cluster frontend.
Warning
The minimum requirement to bring up a frontend is to have the following rolls:
  • Kernel/Boot Roll CD
  • Base Roll CD
  • OS Roll CD - Disk 1
  • OS Roll CD - Disk 2
Additionally, the official Red Hat Enterprise Linux 5 (5.8) or 6 (6.3) media can be substituted for the OS Rolls. Also, any true rebuild of RHEL 5 update 8 or RHEL 6 update 3 can be used. If you substitute the OS Rolls with one of the above distributions, you must supply all the CDs from the distribution (which is usually 6 to 9 CDs).
  1. Insert the Kernel/Boot Roll CD into your frontend machine and reset the frontend machine.
    Note
    For the remainder of this section, we'll use the example of installing a bare-bones frontend, that is, we'll be using the Kernel/Boot Roll, base Roll, OS - Disk 1 Roll and the OS - Disk 2 Roll.
  2. After the frontend boots off the CD, you will see the boot screen. When you see it, type:

    build
    Warning
    The "boot:" prompt arrives and departs the screen quickly. It is easy to miss. If you do miss it, the node will assume it is a compute appliance, and the frontend installation will fail and you will have to restart the installation (by rebooting the node).
    Tip
    It is possible to bypass the DHCP process and have the installer ask for network parameters. If you know the name of the device used by the kernel for public access (e.g. eth1, p2p1, ...), then specify it as follows (using p2p1 for the public net): build ksdevice=p2p1 asknetwork
    Tip
    If the installation fails, very often you will see a screen that complains of a missing /tmp/ks.cfg kickstart file. To get more information about the failure, access the kickstart and system log by pressing Ctrl-Alt-F3 and Ctrl-Alt-F4 respectively.
    After you type build, the installer will start running.
  3. Warning
    All screens in this step may not appear during your installation. You will only see these screens if there is not a DHCP server on your public network that answers the frontend's DHCP request.
    If you see the screen below:
    You'll want to: 1) enable IPv4 support, 2) select manual configuration for the IPv4 support (no DHCP) and, 3) disable IPv6 support. The screen should look like:
    After your screen looks like the above, hit "OK". Then you'll see the "Manual TCP/IP Configuration" screen:
    In this screen, enter the public IP configuration. Here's an example of the public IP info we entered for one of our frontends:
    After you fill in the public IP info, hit "OK".
  4. Soon, you'll see a screen that looks like:
    From this screen, you'll select your rolls.
    In this procedure, we'll only be using CD media, so we'll only be clicking on the 'CD/DVD-based Roll' button.
    Click the 'CD/DVD-based Roll' button.
  5. The CD will eject and you will see this screen:
    Put your first roll in the CD tray (for the first roll, since the Kernel/Boot Roll is already in the tray, simply push the tray back in).
    Click the 'Continue' button.
  6. The Kernel/Boot Roll will be discovered and display the screen:
    Select the Kernel/Boot Roll by checking the 'Selected' box and clicking the 'Submit' button.
  7. This screen shows you have properly selected the Kernel/Boot Roll.
    Repeat steps 3-5 for the Base Roll and the OS rolls.
  8. When you have selected all the rolls associated with a bare-bones frontend, the screen should look like:
    When you are done with roll selection, click the 'Next' button.
  9. Then you'll see the Cluster Information screen:
    Note
    The one important field in this screen is the Fully-Qualified Host Name (all other fields are optional).
    Choose your hostname carefully. The hostname is written to dozens of files on both the frontend and compute nodes. If the hostname is changed after the frontend is installed, several cluster services will no longer be able to find the frontend machine. Some of these services include: SGE, NFS, AutoFS, and Apache.
    Fill out the form, then click the 'Next' button.
  10. The public cluster network configuration screen allows you to set up the networking parameters for the ethernet network that connects the frontend to the outside network (e.g., the internet).
    The above window is an example of how we configured the external network on one of our frontend machines.
    Tip
    The installer allows you to select which physical interface is the public interface if there is more than one; the network interface is chosen from a pull-down menu.
  11. The private cluster network configuration screen allows you to set up the networking parameters for the ethernet network that connects the frontend to the compute nodes.
    Note
    It is recommended that you accept the defaults (by clicking the 'Next' button). But for those who have unique circumstances that require different values for the internal ethernet connection, we have exposed the network configuration parameters.
    Note
    If you have only one physical interface, the installer will create a virtual ethernet interface (e.g. eth0:0).
    Warning
    The installer does not check whether you selected the identical interface for your public and private networks. It is an error to do so.
  12. Configure the Gateway and DNS entries:
  13. Input the root password:
  14. Configure the time:
  15. The disk partitioning screen allows you to select automatic or manual partitioning.
    To select automatic partitioning, click the Auto Partitioning radio button. This will repartition and reformat the first discovered hard drive that is connected to the frontend. All other drives connected to the frontend will be left untouched.
    The first discovered drive will be partitioned like:

    Table 3-1. Frontend -- Default Root Disk Partition

    Partition Name                                          Size
    /                                                       16 GB
    /var                                                    4 GB
    swap                                                    1 GB
    /export (symbolically linked to /state/partition1)      remainder of root disk
    Warning
    When you use automatic partitioning, the installer will repartition and reformat the first hard drive that the installer discovers. All previous data on this drive will be erased. All other drives will be left untouched.
    The drive discovery process uses the output of cat /proc/partitions to get the list of drives.
    For example, if the node has an IDE drive (e.g., "hda") and a SCSI drive (e.g., "sda"), generally the IDE drive is the first drive discovered.
    But, there are instances when a drive you don't expect is the first discovered drive (we've seen this with certain fibre channel connected drives). If you are unsure on how the drives will be discovered in a multi-disk frontend, then use manual partitioning.
  16. If you selected manual partitioning, then you will now see Red Hat's manual partitioning screen:
    Above is an example of creating a '/', '/var', swap and '/export' partitions.
    Warning
    If you select manual partitioning, you must specify at least 16 GBs for the root partition and you must create a separate /export partition.
    Warning
    LVM is not supported by Rocks.
    When you finish describing your partitions, click the 'Next' button.
  17. The frontend will format its file systems, then it will ask for each of the roll CDs you added at the beginning of the frontend installation.
    In the example screen above, insert the Kernel/Boot Roll into the CD tray and click 'OK'.
    The contents of the CD will now be copied to the frontend's hard disk.
    Repeat this step for each roll you supplied in steps 3-5.
    Note
    After all the Rolls are copied, no more user interaction is required.
  18. After the last roll CD is copied, the packages will be installed:
  19. Finally, the boot loader will be installed and post configuration scripts will be run in the background. When they complete, the frontend will reboot.

Setup PXE Boot Environment Using Cobbler On CentOS 6.5

In our previous tutorials, we showed you how to set up a PXE environment on Ubuntu 14.04 and CentOS 6.5.
Setting up a PXE server can be very handy when installing a large number of systems; it enables a system administrator to install client systems from a centralized PXE server without the need for CDs/DVDs or USB thumb drives.
In this tutorial, let us see how to set up a PXE boot environment using Cobbler and automate client system installation from the PXE server. For those who don't know, Cobbler is a Linux installation server that allows for rapid setup of network installation environments. It glues together and automates many associated Linux tasks, so you do not have to hop between various commands and applications when deploying new systems and, in some cases, changing existing ones. Cobbler can help with provisioning, managing DNS and DHCP, package updates, power management, configuration management orchestration, and much more.
For the purpose of this tutorial, I will be using a testbox running CentOS 6.5 server to set up the PXE boot server. My testbox IP address is 192.168.1.200/24 (this is the address used throughout the configuration below). Now, let me walk you through the Cobbler installation and configuration on the CentOS server.

Prerequisites

To reduce complexity, I disabled SELinux. But if you want to keep it enabled, refer to this link.
To disable it, edit the /etc/sysconfig/selinux file:
vi /etc/sysconfig/selinux
Set SELINUX value to disabled.
[...]
SELINUX=disabled
[...]
Turn off the iptables.
service iptables stop
chkconfig iptables off
Or, if you want to keep iptables enabled, allow the following ports.
vi /etc/sysconfig/iptables
Allow the HTTP/HTTPS ports (80/443), TFTP port 69, and Cobbler's port 25151.
[...]
-A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 69 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 25151 -j ACCEPT
[...]
Save and close the file. Restart the iptables service to apply the changes.
service iptables restart
Reboot your server for the SELinux and iptables changes to take effect. For the sake of simplicity and testing, I disabled both iptables and SELinux.
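A quick check after the reboot, just to confirm the state before continuing:

getenforce
service iptables status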

Install Cobbler

Cobbler is not available in the default CentOS repositories, so let us add the EPEL repository first and then install Cobbler. To add and enable the EPEL repository, refer to the link below.
Now, install cobbler, the cobbler web interface, and their dependencies as shown below.
yum install cobbler cobbler-web dhcp debmirror pykickstart system-config-kickstart mod_python tftp cman -y

Enable TFTP and rsync

The following changes should be made before starting to use Cobbler.
First of all, we should enable TFTP and rsync in the xinetd configuration.
Edit the file /etc/xinetd.d/tftp:
vi /etc/xinetd.d/tftp
Change disable = yes to disable = no.
 # default: off
 # description: The tftp server serves files using the trivial file transfer \
 #       protocol.  The tftp protocol is often used to boot diskless \
 #       workstations, download configuration files to network-aware printers, \
 #       and to start the installation process for some operating systems.
 service tftp
 {
 socket_type             = dgram
 protocol                = udp
 wait                    = yes
 user                    = root
 server                  = /usr/sbin/in.tftpd
 server_args             = -s /var/lib/tftpboot
         disable                 = no
 per_source              = 11
 cps                     = 100 2
 flags                   = IPv4
 }
Save and close the file. Then, edit the /etc/xinetd.d/rsync file:
vi /etc/xinetd.d/rsync
Change disable = yes to disable = no.
 # default: off
 # description: The rsync server is a good addition to an ftp server, as it \
 #       allows crc checksumming etc.
 service rsync
 {
         disable = no
 flags           = IPv6
 socket_type     = stream
 wait            = no
 user            = root
 server          = /usr/bin/rsync
 server_args     = --daemon
 log_on_failure  += USERID
 }
Save and close the file.

Configure DHCP

Copy the sample dhcpd configuration file.
cp /usr/share/doc/dhcp-4.1.1/dhcpd.conf.sample /etc/dhcp/dhcpd.conf
Edit the dhcpd.conf file:
vi /etc/dhcp/dhcpd.conf
Find the following subnet block and adjust it to suit your configuration. Here is mine.
[...]
# A slightly different configuration for an internal subnet.
subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.254;
  option domain-name-servers server.unixmen.local;
  option domain-name "unixmen.local";
  option routers 192.168.1.1;
  option broadcast-address 192.168.1.255;
  default-lease-time 600;
  max-lease-time 7200;
}
[...]
Now, start all services.
service httpd start
service dhcpd start
service xinetd start
service cobblerd start
Enable all services to start automatically at boot.
chkconfig httpd on
chkconfig dhcpd on
chkconfig xinetd on
chkconfig cobblerd on
Cobbler ships with various sample kickstart templates stored in /var/lib/cobbler/kickstarts/. The default_password_crypted setting (see below) controls what install (root) password is set up for the systems that reference this variable. The factory default is "cobbler", and cobbler check will warn if this is not changed. To generate a new password hash, run the following command:
openssl passwd -1
Sample output:
Password:
Verifying - Password:
$1$U.Svb2gw$MNHrAmG.axVHYQaQRySR5/

Configure Cobbler

Now, we have to edit Cobbler's settings file and make a couple of changes.
vi /etc/cobbler/settings
Find the "default_password_crypted" line and set the new password hash generated with the "openssl passwd -1" command above:
[...]
default_password_crypted: "$1$U.Svb2gw$MNHrAmG.axVHYQaQRySR5/"
[...]
Find the "manage_dhcp: 0" line and change its value to 1 to enable Cobbler's DHCP management features.
[...]
manage_dhcp: 1
[...]
Set your Cobbler server's IP address in the "server" and "next_server" fields.
[...]
next_server: 192.168.1.200
[...]
server: 192.168.1.200
[...]
Once you have modified all the above settings, save and close the file.
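A quick check that the edits in /etc/cobbler/settings took effect:

grep -E '^(default_password_crypted|manage_dhcp|server|next_server):' /etc/cobbler/settings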
Now, edit file /etc/cobbler/dhcp.template,
vi /etc/cobbler/dhcp.template
Make the changes as shown below. Replace the IP range with your own range.
 subnet 192.168.1.0 netmask 255.255.255.0 {
     option routers             192.168.1.1;
     option domain-name-servers 192.168.1.1;
     option subnet-mask         255.255.255.0;
     range dynamic-bootp        192.168.1.100 192.168.1.254;
     default-lease-time         21600;
     max-lease-time             43200;
     next-server                192.168.1.200;
     class "pxeclients" {
         match if substring (option vendor-class-identifier, 0, 9) = "PXEClient";
         if option pxe-system-type = 00:02 {
             filename "ia64/elilo.efi";
         } else if option pxe-system-type = 00:06 {
             filename "grub/grub-x86.efi";
         } else if option pxe-system-type = 00:07 {
             filename "grub/grub-x86_64.efi";
         } else {
             filename "pxelinux.0";
         }
     }
 }
Specify your Cobbler server's IP address in the next-server field. Once you have made all the changes, save and close the file.
Next, we should enable Cobbler’s web interface, and set username and password for Cobbler’s web interface.
To enable, Cobbler’s web interface, edit file /etc/cobbler/modules.conf,
vi /etc/cobbler/modules.conf
Change the following settings as shown below.
[...]
[authentication]
module = authn_configfile
[...]
[authorization]
module = authz_allowall
[...]
Next, we have to set up the username and password for the Cobbler web interface. To do that, run the following command and input your preferred password twice.
htdigest /etc/cobbler/users.digest "Cobbler" cobbler
Here, my cobbler web interface user name is “cobbler”, and its password is “centos”.
Download the required network boot loaders using the following command.
cobbler get-loaders
Sample output:
task started: 2014-07-24_130618_get_loaders
task started (id=Download Bootloader Content, time=Thu Jul 24 13:06:18 2014)
path /var/lib/cobbler/loaders/README already exists, not overwriting existing content, use --force if you wish to update
downloading http://www.cobblerd.org/loaders/COPYING.elilo to /var/lib/cobbler/loaders/COPYING.elilo
downloading http://www.cobblerd.org/loaders/COPYING.yaboot to /var/lib/cobbler/loaders/COPYING.yaboot
downloading http://www.cobblerd.org/loaders/COPYING.syslinux to /var/lib/cobbler/loaders/COPYING.syslinux
downloading http://www.cobblerd.org/loaders/elilo-3.8-ia64.efi to /var/lib/cobbler/loaders/elilo-ia64.efi
downloading http://www.cobblerd.org/loaders/yaboot-1.3.14-12 to /var/lib/cobbler/loaders/yaboot
downloading http://www.cobblerd.org/loaders/pxelinux.0-3.86 to /var/lib/cobbler/loaders/pxelinux.0
downloading http://www.cobblerd.org/loaders/menu.c32-3.86 to /var/lib/cobbler/loaders/menu.c32
downloading http://www.cobblerd.org/loaders/grub-0.97-x86.efi to /var/lib/cobbler/loaders/grub-x86.efi
downloading http://www.cobblerd.org/loaders/grub-0.97-x86_64.efi to /var/lib/cobbler/loaders/grub-x86_64.efi
*** TASK COMPLETE ***
Edit /etc/debmirror.conf,
vi /etc/debmirror.conf
Comment out the 'dists' and 'arches' lines.
[...]
#@dists="sid";
[...]
#@arches="i386";
[...]
Finally, restart all services once or reboot your server.
service httpd restart
service dhcpd restart
service xinetd restart
service cobblerd restart
Then, run the “cobbler check” command to check if everything is OK on the Cobbler server.
cobbler check
Sample result:
No configuration problems found.  All systems go.
If you got the output like above, you’re good to go.
Restart cobblerd service, and then run ‘cobbler sync’ to apply changes.
service cobblerd restart
cobbler sync
Sample output:
task started: 2014-07-24_130807_sync
task started (id=Sync, time=Thu Jul 24 13:08:07 2014)
running pre-sync triggers
cleaning trees
mkdir: /var/lib/tftpboot/pxelinux.cfg
mkdir: /var/lib/tftpboot/grub
mkdir: /var/lib/tftpboot/s390x
mkdir: /var/lib/tftpboot/ppc
mkdir: /var/lib/tftpboot/etc
removing: /var/lib/tftpboot/grub/images
copying bootloaders
trying hardlink /var/lib/cobbler/loaders/pxelinux.0 -> /var/lib/tftpboot/pxelinux.0
trying hardlink /var/lib/cobbler/loaders/menu.c32 -> /var/lib/tftpboot/menu.c32
trying hardlink /var/lib/cobbler/loaders/yaboot -> /var/lib/tftpboot/yaboot
trying hardlink /usr/share/syslinux/memdisk -> /var/lib/tftpboot/memdisk
trying hardlink /var/lib/cobbler/loaders/grub-x86.efi -> /var/lib/tftpboot/grub/grub-x86.efi
trying hardlink /var/lib/cobbler/loaders/grub-x86_64.efi -> /var/lib/tftpboot/grub/grub-x86_64.efi
copying distros to tftpboot
copying images
generating PXE configuration files
generating PXE menu structure
rendering DHCP files
generating /etc/dhcp/dhcpd.conf
rendering TFTPD files
generating /etc/xinetd.d/tftp
cleaning link caches
running post-sync triggers
running python triggers from /var/lib/cobbler/triggers/sync/post/*
running python trigger cobbler.modules.sync_post_restart_services
running: dhcpd -t -q
received on stdout: 
received on stderr: 
running: service dhcpd restart
received on stdout: Shutting down dhcpd: [  OK  ]
Starting dhcpd: [  OK  ]

received on stderr: 
running shell triggers from /var/lib/cobbler/triggers/sync/post/*
running python triggers from /var/lib/cobbler/triggers/change/*
running python trigger cobbler.modules.scm_track
running shell triggers from /var/lib/cobbler/triggers/change/*
*** TASK COMPLETE ***

Importing ISO files to Cobbler server

We have completed all necessary tasks. Now, let us import ISO images of any Linux distribution into Cobbler server.
I already have the CentOS 6.5 ISO image in my Cobbler server's /root directory. Mount the ISO file to any preferred location. For example, I am going to mount it in the /mnt directory.
mount -o loop CentOS-6.5-i386-bin-DVD1.iso /mnt/
Now, let us import the ISO to our cobbler server as shown below.
cobbler import --path=/mnt/ --name=CentOS_6.5
Sample output:
 task started: 2014-07-24_132814_import
 task started (id=Media import, time=Thu Jul 24 13:28:14 2014)
 Found a candidate signature: breed=redhat, version=rhel6
 Found a matching signature: breed=redhat, version=rhel6
 Adding distros from path /var/www/cobbler/ks_mirror/CentOS_6.5:
 creating new distro: CentOS_6.5-i386
 trying symlink: /var/www/cobbler/ks_mirror/CentOS_6.5 -> /var/www/cobbler/links/CentOS_6.5-i386
 creating new profile: CentOS_6.5-i386
 associating repos
 checking for rsync repo(s)
 checking for rhn repo(s)
 checking for yum repo(s)
 starting descent into /var/www/cobbler/ks_mirror/CentOS_6.5 for CentOS_6.5-i386
 processing repo at : /var/www/cobbler/ks_mirror/CentOS_6.5
 need to process repo/comps: /var/www/cobbler/ks_mirror/CentOS_6.5
 looking for /var/www/cobbler/ks_mirror/CentOS_6.5/repodata/*comps*.xml
 Keeping repodata as-is :/var/www/cobbler/ks_mirror/CentOS_6.5/repodata
 *** TASK COMPLETE ***

Start Installing clients Using Cobbler Server

The client may be any system that has the network boot (PXE) option enabled. You can enable this option in your BIOS settings.
Due to lack of physical resources, I will demonstrate using a virtual machine client in Oracle VirtualBox.
Open up the Oracle VirtualBox. Click on the New button in the menu bar. Enter your Virtual machine name.
Enter the Virtual machine RAM size.
Select “Create a virtual hard drive now” option.
Select the virtual hard drive type.
Select whether the new virtual hard drive file should grow as it is used or if it should be created as fixed size.
Enter the virtual hard disk size.
That's it. A new virtual machine has been created. Now, we should make the client boot from the network. To do that, go to the virtual machine's Settings. Select the System tab on the left and enable Network in the boot order on the right side.
Go to the Network tab and select “Bridged Adapter” from the “Attached to” drop down box.
Once you have done all the above steps, click OK to save the changes. That's it. Now power on the virtual client system. You should see the PXE boot screen.
That’s it. Start installing CentOS 6.5 using your Cobbler server.

Adding Kickstart file to Cobbler server

Copy the default kickstart file (anaconda-ks.cfg, generated by anaconda in /root after installation) to the Cobbler kickstarts directory.
cp anaconda-ks.cfg /var/lib/cobbler/kickstarts/centos6.ks
Now, edit the centos6.ks file:
vi /var/lib/cobbler/kickstarts/centos6.ks
Make the following changes so that the installation pulls its packages from your Cobbler server (in particular, check the install, url, and network lines).
# Kickstart file automatically generated by anaconda.

 #version=DEVEL
 install
 url --url http://192.168.1.200/cobbler/ks_mirror/CentOS_6.5/
 lang en_US.UTF-8
 keyboard us
 network --onboot no --device eth0 --bootproto dhcp --noipv6
 rootpw  --iscrypted $6$vfcAiwECqxbydGwi$FSHgxeM9bBaitrkSuoEhIhrfMZZLZGxW8BMsJoyBu3iAanwJLvYDKkzKxHD6i2vEfPn5fSNfKeJ85kCchBARH0
 firewall --service=ssh
 authconfig --enableshadow --passalgo=sha512
 selinux --enforcing
 timezone --utc Asia/Kolkata
 bootloader --location=mbr --driveorder=sda --append="crashkernel=auto rhgb quiet"
 # The following is the partition information you requested
 # Note that any partitions you deleted are not expressed
 # here so unless you clear all partitions first, this is
 # not guaranteed to work
 #clearpart --all --drives=sda
 
 #part /boot --fstype=ext4 --size=500
 #part pv.008002 --grow --size=1

 #volgroup vg_server --pesize=4096 pv.008002
 #logvol / --fstype=ext4 --name=lv_root --vgname=vg_server --grow --size=1024 --maxsize=51200
 #logvol swap --name=lv_swap --vgname=vg_server --grow --size=1248 --maxsize=1248

 repo --name="CentOS"  --baseurl=cdrom:sr0 --cost=100

 %packages
 @base
 @console-internet
 @core
 @debugging
 @directory-client
 @hardware-monitoring
 @java-platform
 @large-systems
 @network-file-system-client
 @performance
 @perl-runtime
 @server-platform
 @server-policy
 @workstation-policy
 oddjob
 sgpio
 device-mapper-persistent-data
 pax
 samba-winbind
 certmonger
 pam_krb5
 krb5-workstation
 perl-DBD-SQLite
 %end
Save and close the file. Add the distribution information to the pxe server.
cobbler distro add --name=CentOS_6.5 --kernel=/var/www/cobbler/ks_mirror/CentOS_6.5/isolinux/vmlinuz --initrd=/var/www/cobbler/ks_mirror/CentOS_6.5/isolinux/initrd.img
And then, add the kickstart file (centos6.ks) to the PXE server.
cobbler profile add --name=CentOS_6.5_KS --distro=CentOS_6.5 --kickstart=/var/lib/cobbler/kickstarts/centos6.ks
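As an optional check, list what Cobbler now knows about before syncing:

cobbler distro list
cobbler profile list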
Restart cobbler once again, and run the "cobbler sync" command to apply the changes.
service cobblerd restart
cobbler sync
Now, boot up the PXE client and you should see the boot menu. Choose the kickstart-backed profile (CentOS_6.5_KS) and start installing CentOS.
After installing the PXE clients, log in with the user name 'root' and the password that you created earlier using the "openssl passwd" command.

Adding Multiple Distributions

If you want to add different distros like Ubuntu, that is also possible. For example, let me add the Ubuntu 14.04 server distribution to the Cobbler server. To do that, first mount the Ubuntu 14.04 ISO to any preferred location:
mount -o loop ubuntu-14.04-server-i386.iso /mnt/
Then, import the Ubuntu 14.04 ISO image to the cobbler server as shown below.
cobbler import --path=/mnt/ --name=Ubuntu14
Now, boot up your PXE client. This time you’ll find the Ubuntu distro has been added to the PXE server.
In this way, you can add as many distributions as you want to the Cobbler server and start installing different distros from a single PXE server. Sounds awesome? Yes, it is.

Cobbler Web interface

If you find it difficult to work on the command line, you can use the simple web interface to configure and manage PXE clients. To access the Cobbler web interface, open up your browser and navigate to: https://ip-address-of-cobbler/cobbler_web.
A login screen should appear. Enter the Cobbler web interface username and password that you created earlier using the "htdigest" command.
Cobbler Dashboard:
This is how my Cobbler dashboard looked.
From here, you can easily create, add, and manage distros, profiles, systems, and kickstart templates.