List TLS versions and ciphers

With this small script you can get a list of all TLS versions and ciphers available when connecting to a remote destination . the challenge in this script is that the number of supported ciphers can be large , and testing them all takes time . the main tool used here is openssl along with GNU parallel . I also added a timeout and a custom port that can be set at run time

The script is built around two loops : one that loops over the TLS versions , and one that loops over the ciphers of each version . the main command generates a file that is later executed using the parallel command . feel free to copy and modify


TARGET="$1"
TARGET_PORT="${2:-443}"
TIMEOUT="${3:-5}"
RUN_F=$(mktemp)
LOG=$(mktemp)

TLS_V="tls1 tls1_1 tls1_2 tls1_3"

for V in $TLS_V; do
	CIPHER_COMMAND="cipher"
	[ "$V" = "tls1_3" ] && CIPHER_COMMAND="ciphersuites"
	TLS_CIPHERS=$(openssl ciphers -s -$V | tr ':' ' ')
	for CIPHER in $TLS_CIPHERS; do
		echo "echo | timeout $TIMEOUT openssl s_client -$V -$CIPHER_COMMAND $CIPHER -connect $TARGET:$TARGET_PORT &>/dev/null && echo \"$V $CIPHER\" >>$LOG" >>$RUN_F
	done
done

parallel --gnu -k -j 100 <$RUN_F
sort $LOG
rm -f $RUN_F $LOG
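Before pointing the script at a remote host, you can preview offline what the inner loop will iterate over; `openssl ciphers` with a protocol flag prints the candidate cipher names (this assumes OpenSSL 1.1.1 or newer, which added the -s and per-protocol flags):

```shell
# list the cipher names the inner loop would test for TLS 1.2,
# converted from openssl's colon-separated format to one name per line
openssl ciphers -s -tls1_2 | tr ':' '\n'
```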

Distributed RAID over iSCSI

In an earlier post , building-raid-over-network-share , I had
shown how to build a distributed RAID over the network using CIFS (Samba or a Windows share) .
now I want to show you a similar method using iSCSI .
iSCSI is a protocol supported by all operating systems : Windows , Linux , BSD , Solaris , AIX , OS X and others .
In this how-to I am going to use CentOS 6 . we will have 3 node servers acting as iSCSI targets and one server as the
initiator , which will manage the RAID and will also act as a target for the created RAID device .
The explanation is very brief and does not cover all iSCSI settings ; for better performance
I suggest reading the documentation of each individual distribution . another suggestion : it is highly recommended to
use a spare network interface with a dedicated VLAN for iSCSI , as it may load the network with heavy traffic .
node settings ( do this step on all nodes )
1. install scsi-target-utils

# yum -y install scsi-target-utils

2. create a device file

# dd if=/dev/zero of=/mydrive.file bs=1M count=1024

3. edit /etc/tgt/targets.conf , and add a new target LUN .
make sure each server uses a unique iqn id

<target iqn.2013-05:node1.0>
    backing-store /mydrive.file
</target>

4. start the tgtd service

# service tgtd start

5. on the initiator server ( our mdadm RAID master ) discover the iSCSI targets .
note that discovery takes the portal address of each node ( <nodeX-ip> below is a placeholder for the node's IP address )

# iscsiadm -m discovery -t sendtargets -p <node1-ip>
# iscsiadm -m discovery -t sendtargets -p <node2-ip>
# iscsiadm -m discovery -t sendtargets -p <node3-ip>

each discovery should report that node's target , e.g. iqn.2013-05:node1.0

6. start the iscsi service and get to know the new devices

# service iscsi start

* the command "fdisk -l" should now show new SCSI devices ; in my case they are sda,sdb,sdc
---- time to build the soft RAID ----
1. install mdadm

# yum -y install mdadm

2. create a new RAID device

# mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc 

from now on you should have a new RAID device called /dev/md0 that you can use .
* update
it is a good idea to add a spare device to the RAID , and even better to use the initiator itself ( the host that runs mdadm ) ,
so I create a new device file using dd , turn it into a loop device , then add it to the RAID

dd if=/dev/zero of=/myfile.img bs=1M count=100
losetup -v /dev/loop1 /myfile.img
mdadm --add /dev/md0 /dev/loop1

So now the RAID may look like this :

    Number   Major   Minor   RaidDevice  State
       0       8       0         0       active sync   /dev/sda
       1       8      16         1       active sync   /dev/sdb
       3       8      32         2       active sync   /dev/sdc
       4       7       1         -       spare         /dev/loop1

Building RAID over network share

This how-to explains how to use remote SMB shares in order to build a RAID device
that can be shared back over NFS/SMB . The idea is very simple : use all the storage resources
on our LAN in order to build a central NFS/SMB share ( can be used for private clouds like CloudStack ) .
This was tested with Debian 6 x86_64 , but you will find all the commands are similar on all Linux distributions .
Note that this configuration is experimental and should not be used in production ( yet )
1. create a device file on each share , execute on all nodes ;
change [X] to match the node number . the size is bs*count , in this case 1024M

~# dd if=/dev/zero of=/myshare/node[X].img bs=1M count=1024
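A variant not used in the original post: dd can also create the file as a sparse file, which is allocated on write and therefore much faster to create than writing a gigabyte of zeros:

```shell
# create a 1 GiB file; count=0 with seek=1024 makes it sparse,
# so no data blocks are written until the RAID actually uses them
dd if=/dev/zero of=node1.img bs=1M count=0 seek=1024
ls -lh node1.img
```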

2. mount these shares on the central server , run only on the master server .
make a directory to hold all the SMB mounts

~# mkdir -p /nodeshares/node{1,2,3}

mount each share into its own directory ; note that the password is the one chosen with the smbpasswd command ( //nodeX/myshare below assumes each node exports the /myshare directory from step 1 )

~# mount -t cifs -o username=myname,password=1 //node1/myshare /nodeshares/node1
~# mount -t cifs -o username=myname,password=1 //node2/myshare /nodeshares/node2
~# mount -t cifs -o username=myname,password=1 //node3/myshare /nodeshares/node3

3. here comes the fun part : create loop devices from the files we created in step 1 .
note that the files are actually on remote hosts .

~# losetup -v /dev/loop1 /nodeshares/node1/node1.img
~# losetup -v /dev/loop2 /nodeshares/node2/node2.img
~# losetup -v /dev/loop3 /nodeshares/node3/node3.img

4. build a RAID device out of all the shared device files . create a RAID level 5 using the loop devices ; the device order does matter

~# mdadm --create /dev/md0 --level=5 --raid-disks=3 /dev/loop1 /dev/loop2 /dev/loop3

watch the build process by viewing mdstat

~# cat /proc/mdstat

5. create a partition on md0 , mount and export the new partition as NFS or SMB back to the network .
--- Persistency ---
In order to keep the RAID once the master server boots , we need to tell the kernel
to look for any MD devices and then refresh mdadm . edit /etc/mdadm/mdadm.conf and add the loop devices by
adding this line :

DEVICE /dev/loop*

then run this command to make the newly created RAID persistent

~# mdadm --detail --scan >> /etc/mdadm/mdadm.conf

--- what happens when a node restarts or gets disconnected ---
mdadm will see the loop device as faulty and mark it as failed , as can be seen in mdstat .
in this example I shut down samba on node3

~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 loop3[3](F) loop1[0] loop2[1]
      1047552 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
unused devices: <none>

To repair it you need to remove the faulty device

~# mdadm --manage /dev/md0 --remove /dev/loop3
mdadm: hot removed /dev/loop3 from /dev/md0

after you remount the node , you can add the device back to the array

~# mdadm --manage /dev/md0 --add /dev/loop3
mdadm: added /dev/loop3

pfsense under KVM with isolated LAN

In this manual I will explain some bugs and tricks for installing the pfsense firewall.
this firewall is based on FreeBSD , so some of the settings can be used for installing FreeBSD hosts as well .
we will start by creating an isolated network ; this network will be used for our LAN . this step
is not mandatory , but if you want to isolate your LAN from other networks/bridges it is recommended .
in this manual I will be using the virsh command line tool because it is faster and easy to understand .
I am running KVM under Ubuntu 11.10 64bit , but the idea is the same on any host server such as CentOS .
creating a private network :
create a template file /root/privlan.xml ;
all we need is to tell virsh to take the next available bridge ,
no need for any other settings .

<network>
  <name>privlan</name>
  <bridge name="virbr%d" />
</network>

Now lets go into virsh and create the network

root~# virsh
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh # net-create /root/privlan.xml
Network privlan created from /root/privlan.xml

virsh # net-list
Name                 State      Autostart
-----------------------------------------
default              active     yes
privlan              active     no

now , due to a bug , virsh doesn't create all the files needed ;
to overcome this we are going to edit the network using net-edit inside virsh.
the change can be very small , like a name or MAC change .

virsh # net-autostart privlan
error: failed to mark network privlan as autostarted
error: Requested operation is not valid: cannot set autostart for transient network

virsh # net-edit privlan
Network privlan XML configuration edited.

virsh # net-autostart privlan
Network privlan marked as autostarted

now we are ready to fire up the install . using virt-install we are going to create a domain called pfsense
that has 2G of memory , 2 virtual CPUs and 2 network interfaces of model e1000.
the first interface is the regular default one , and the second is the network we just created .
note that you can change these settings as needed ;
for example , some may want to use a bridge interface instead of the default NAT network .
unlike Linux , FreeBSD cannot work with virtual disk caching and it does not support virtio .
the best performance I tested was with the scsi bus .

virt-install --connect qemu:///system -n pfsense -r 2048 --vcpus=2 \
    --disk path=/var/lib/libvirt/images/pfsense.img,size=10,cache=none,bus=scsi \
    -c /root/CD_DVD/pfSense-2.0.1-RELEASE-amd64.iso --vnc --os-variant=freebsd8

the pfsense install window will come up ; you can go on and install , just bear in mind that em0 is your LAN .
after the install , pfsense allows connections only on the LAN interface , but we created an isolated network ,
so the trick here is to allow connections on the WAN interface .
when pfsense comes up , go into the shell (8) , then edit config.xml via "viconfig" ,
look for the wan interface section and remove , if present , these 2 lines ( in a default config they are the tags that block private and bogon networks on WAN ) :

<blockpriv/>
<blockbogons/>
then create a new filter rule just after the any/any rule of the lan ; a minimal sketch of such a rule element :

<rule>
	<type>pass</type>
	<interface>wan</interface>
	<source>
		<any/>
	</source>
	<destination>
		<any/>
	</destination>
	<descr><![CDATA[Default allow WAN to any rule]]></descr>
</rule>

Now just reboot pfsense ; the new config refreshes automatically after saving , but reboot just to be sure .
after it comes up you can connect to the new pfsense with a browser ; just make sure to remove or restrict the rule
we just added , to allow only trusted networks into pfsense .

Simple Squid based CDN

Building a self-made CDN based on a Squid server is a very simple task :
all we need to do is tell squid which domains we want to serve , and where
to get their origin files .
The main idea of a CDN is to act as a reverse proxy for the origin servers ;
most of the CDN providers out there work in the same way as we are about to .
the first step is to prepare the server . in this example I used a clean squid.conf
for the simplicity of things ; this was tested on squid version 3.1 but can work with older versions
as well with small changes . so install the squid server , then edit your squid.conf file .
we are going to proxy 2 domains , www.example.com and img.example.com ( placeholder names ) ;
in order to let squid know the origin IPs we are going to use 2 more hostnames , origin-www.example.com and origin-img.example.com . now let's explain how it works in general :
1. the user requests www.example.com
2. DNS resolves the name ( a CNAME pointing at our squid server )
3. squid gets the request for the domain and looks up its origin peer
4. squid acts as a reverse proxy , delivering the files
from the origin back to the user while caching them for the next request .
So now we need to edit squid.conf and make it act as an accelerator ( what used to be called a transparent proxy )
and listen on port 80

# www/img.example.com and the origin-* hostnames are placeholders
http_port 80 accel ignore-cc vhost
acl mysite1 dstdomain www.example.com
acl mysite2 dstdomain img.example.com
cache_peer origin-www.example.com parent 80 0 no-query originserver name=wwwip
cache_peer origin-img.example.com parent 80 0 no-query originserver name=imgip
cache_peer_access wwwip allow mysite1
cache_peer_access imgip allow mysite2
http_access allow mysite1
http_access allow mysite2
cache_mem 1 GB
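On top of the memory cache, a real CDN node usually wants an on-disk cache too. A hedged sketch of extra squid.conf directives (the path and sizes are illustrative values, not from the original setup):

```
# 10 GB disk cache under /var/spool/squid (16 first-level, 256 second-level dirs)
cache_dir ufs /var/spool/squid 10240 16 256
# allow caching of objects up to 100 MB
maximum_object_size 100 MB
# keep static assets (matched by extension) cached between 1 day and 1 week
refresh_pattern -i \.(jpg|jpeg|png|gif|css|js)$ 1440 50% 10080
```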

Installing Oracle RAC 11.2 under CentOS

This tutorial was tested under CentOS x86_64 with kernel 2.6.18-194.3.1.el5 ;
this version is old but it matches the ASMLib packages .
also , this tutorial was deployed on one node only ; if there are more nodes
in the cluster , shared storage should be used . oracle supports NAS , SAN , NFS , raw devices etc.
Table of contents
1. Preparing OS properties
2. Creating users and groups
3. Installing and setting up ASMLib
4. Verifying Grid Infrastructure preparation
5. Installing Grid Infrastructure
6. Installing Database software
7. Creating a database
1. Preparing the operating system parameters
list of packages required to be installed on all nodes ;
the versions should be >=
compat-libstdc++-33-3.2.3 (32 bit)
glibc-2.5-12 (32 bit)
glibc-devel-2.5-12 (32 bit)
libaio-0.3.106 (32 bit)
libgcc-4.1.1 (32 bit)
libstdc++-4.1.1 (32 bit)
libstdc++-devel 4.1.1
yum command :

[root]# yum install compat-libstdc++-33.i386 compat-libstdc++-33.x86_64 elfutils-libelf.x86_64 elfutils-libelf-devel.x86_64 glibc.i686 glibc.x86_64 glibc-common.x86_64 glibc-devel.i386 glibc-devel.x86_64 libaio.i386 libaio.x86_64 libaio-devel.x86_64 libgcc.i386 libgcc.x86_64 libstdc++.i386 libstdc++.x86_64 libstdc++-devel.x86_64 binutils gcc gcc-c++ make -y

1.1 First thing is to set all recommended kernel parameters .
edit the file /etc/sysctl.conf , and change/add these parameters

# Oracle Setting
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65000
net.core.rmem_default = 4194304
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
fs.file-max = 6815744

Then run the command sysctl -p as root
so that the new settings are loaded into the kernel parameters .
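The applied values can be read back to confirm they took effect; for example, for the port range set above:

```shell
# read a value back; `sysctl -n net.ipv4.ip_local_port_range` shows the same
# (under /proc/sys, the dots in the key become slashes)
cat /proc/sys/net/ipv4/ip_local_port_range
```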
1.2 Setting shared partition permissions , in order for ASM/Oracle to gain ownership .
This step depends on the partitions you have attached from the storage .
here we are using local partitions , but it is the same idea ; we are going to use 3 partitions ,
sda6,7,8 , where sda6 will be for voting and OCR as ASM , sda7 will be for undo and sda8 will hold the data

[root@~]# fdisk -l

Disk /dev/sda: 896.9 GB, 896998047744 bytes
255 heads, 63 sectors/track, 109053 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1        2550    20482843+  83  Linux
/dev/sda2            2551        5100    20482875   83  Linux
/dev/sda3            5101        6120     8193150   82  Linux swap / Solaris
/dev/sda4            6121      109053   826809322+   5  Extended
/dev/sda5            6121       18279    97667136   83  Linux
/dev/sda6           18280       18402      987966   83  Linux
/dev/sda7           18403       18525      987966   83  Linux
/dev/sda8           18526       30684    97667136   83  Linux

Create a udev rule for ownership and permissions , by creating the file /etc/udev/rules.d/51-oracle.permissions.rules
and entering these lines :

# OCR disks
KERNEL=="sda6", OWNER="grid", GROUP="asmdba", MODE="0660"
# UNDO disks
KERNEL=="sda7", OWNER="grid", GROUP="asmdba", MODE="0660"
# DATA disks
KERNEL=="sda8", OWNER="grid", GROUP="asmdba", MODE="0660"

after creating the file , the system must be rebooted ( but first create the grid user in step 2 ) ; after the reboot the devices should look like this :

[root@dbtest app]# ls -l /dev/sda*
brw-r----- 1 root disk     8, 0 Mar 30 16:17 /dev/sda
brw-r----- 1 root disk     8, 1 Mar 30 16:17 /dev/sda1
brw-r----- 1 root disk     8, 2 Mar 30 16:17 /dev/sda2
brw-r----- 1 root disk     8, 3 Mar 30 16:17 /dev/sda3
brw-r----- 1 root disk     8, 4 Mar 30 16:17 /dev/sda4
brw-r----- 1 root disk     8, 5 Mar 30 16:17 /dev/sda5
brw-rw---- 1 grid asmdba 8, 6 Mar 30 16:17 /dev/sda6
brw-rw---- 1 grid asmdba 8, 7 Mar 30 16:17 /dev/sda7
brw-rw---- 1 grid asmdba 8, 8 Mar 30 16:17 /dev/sda8

1.2.1 Network settings :
each node must have at least 2 interfaces , one for public and one for the interconnect .
also , each node must have 3 resolvable hostnames , for example
eth0 = dbtest (public)
eth0:0 = dbtest-vip (vip) no need to configure the interface , but the name needs to be resolvable
eth1 = dbtest-priv (private)
/etc/hosts should look like this ( the <...-ip> entries are placeholders for real addresses ) :

<public-ip>    dbtest
<vip-ip>       dbtest-vip
<private-ip>   dbtest-priv
1.3 DNS settings :
for Oracle version 11.2 we need to set a SCAN address and the VIPs in the DNS ;
Oracle recommends that the SCAN name resolves to 3 IPs .
the SCAN name should be part of the host domain , for example ( dbtest-scan and <ipN> are placeholders ) :

dbtest-scan    IN A    <ip1>
dbtest-scan    IN A    <ip2>
dbtest-scan    IN A    <ip3>

* it is a good idea to set all the hostnames in DNS
* If you want to use GNS you need to have DHCP as well ( we are not going to use GNS here )
1.4 fake CentOS as RedHat
to make the OS pass as RedHat , back up these files :
/etc/issue and /etc/redhat-release
edit these files and replace the CentOS line with this line :
Red Hat Enterprise Linux Server release 5 (Tikanga)
also install a dummy redhat-release package ;
it can be built from a spec file , or you can download an RPM compiled from that spec .
the last thing is running these commands , for the grid install process

[root@~]# echo "redhat-release-5Server-5" > /tmp/.linux_release
[root@~]# chattr +i /tmp/.linux_release

2. Creating users and groups .
we need to create 2 users : one for the oracle grid system
and one for the database ; let's call these users grid and oracle for simplicity .
first create the groups , then create the users :

groupadd oinstall
groupadd dba
groupadd asmadmin
groupadd asmdba
groupadd asmoper
groupadd oper
useradd -g oinstall -G dba,oper,asmdba oracle
useradd -g oinstall -G dba,asmoper,asmdba,asmadmin grid

2.1 setting up ssh connections between all nodes with no password .
all nodes must be freely accessible via ssh from the install server ; for that we need to create ssh keys
for both the grid and oracle users , by running ssh-keygen and appending each public key
to the file ~/.ssh/authorized_keys on all nodes
2.2 Setting up limits for the oracle and grid users .
by default CentOS has an open-files limit of 1024 and a similar process limit ;
these limits will not do for oracle .
setting limits is done by editing the file /etc/security/limits.conf ;
the oracle and grid users should get raised process and open-file limits

grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
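Limits from limits.conf are applied at login, so after logging in again as oracle or grid you can verify them with the shell's ulimit builtin:

```shell
# current soft / hard open-file limits and max user processes
ulimit -Sn
ulimit -Hn
ulimit -u
```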

3. Download and install ASMLib ;
select the package matching the current kernel version .
3.1 after installing ASMLib we need to configure it and create our ASM disks ;
make sure you choose the correct user and group for ASM

[root@~]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface [grid]:
Default group to own the driver interface [asmdba]:
Start Oracle ASM library driver on boot (y/n) [y]:
Scan for Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver:                     [  OK  ]
Scanning the system for Oracle ASMLib disks:               [  OK  ]

[root@dbtest app]# /etc/init.d/oracleasm listdisks
[root@dbtest app]# /etc/init.d/oracleasm createdisk OCR /dev/sda6
Marking disk "OCR" as an ASM disk:                         [  OK  ]
[root@dbtest app]# /etc/init.d/oracleasm createdisk UNDO /dev/sda7
Marking disk "UNDO" as an ASM disk:                        [  OK  ]
[root@dbtest app]# /etc/init.d/oracleasm createdisk DATA /dev/sda8
Marking disk "DATA" as an ASM disk:                        [  OK  ]
[root@dbtest app]# /etc/init.d/oracleasm listdisks
DATA
OCR
UNDO
[root@dbtest app]# ls -l /dev/oracleasm/disks/
total 0
brw-rw---- 1 grid asmdba 8, 8 Mar 31 10:33 DATA
brw-rw---- 1 grid asmdba 8, 6 Mar 31 10:33 OCR
brw-rw---- 1 grid asmdba 8, 7 Mar 31 10:33 UNDO

4. Unzip the Grid Infrastructure under the grid user's home directory ,
cd into the grid folder and run the cluster verify utility :
./runcluvfy.sh stage -pre crsinst -n <node name>
you can also let it create a fixup script that helps solve the reported problems
4.1 solving the NTP issue :
edit the file /etc/sysconfig/ntpd
and add -x to the ntp options
OPTIONS="-x -u ntp:ntp -p /var/run/"
5. run the Grid Infrastructure installer and follow these guidelines .
create 2 directories , one for Grid and one for the software , and set permissions ;
note that grid and the software have different homes !

[root@~]# mkdir -p /opt/app/OCR/base
[root@~]# mkdir -p /opt/app/OCR/software
[root@~]# chown -R grid:oinstall /opt/app/OCR
[root@~]# mkdir -p /opt/app/DB
[root@~]# chown -R oracle:oinstall /opt/app/DB

5.1 the next steps are part of the Grid installer
a. choose "Install and configure Grid Infrastructure for a cluster"
b. choose the "Advanced installation"
c. choose English and next
d. set these settings :
Cluster name : dbtest_cluster
SCAN name :
SCAN port : 1521
* uncheck use of GNS
e. select the nodes you want to install
f. make sure the correct interfaces are set with public and private
g. choose the OCR to be installed on ASM
h. now we choose the ASM partition we dedicated for OCR and name the diskgroup OCR
i. set both passwords the same for SYS and ASMSNMP
j. do not install IPMI
k. set the os groups for management
OSDBA= asmdba
OSOPER= asmoper
l. set the oracle base ( /opt/app/OCR/base ) and the software location ( /opt/app/OCR/software ) from step 5.
m. set the Inventory folder ( I chose the default ) /opt/app/oraInventory
o. the check may fail on some parameters ; fix them and continue
p. you will be prompted to run a script from a root terminal ; all should come out clean ,
and it must be run on all nodes before you can continue .
before running the script , you must edit the file $OCR_HOME/lib/
and change these lines to support CentOS , or you will get an error

ADVM/ACFS is not supported on centos-release-5-5.el5.centos

so edit the file and make sure CentOS is one of the supported OSes

if (($release =~ /^redhat-release/) || # straight RH
($release =~ /^enterprise-release/) || # Oracle Enterprise Linux
($release =~ /^centos-release/)) # CentOS

q. the install should now finish with no errors !
6. installing the database ; for that we need to run the installer as the oracle user ( very important ) .
these are the steps from the installer GUI :
a. install the database software only ; later we will create a database
b. choose RAC and the nodes involved , and test ssh connectivity
c. choose English and continue
d. we use Standard edition
e. here we set ORACLE_BASE as we did in step 5 , and ORACLE_HOME under ORACLE_BASE ;
in our case I chose
ORACLE_BASE = /opt/app/DB
ORACLE_HOME = /opt/app/DB/11.2.0
f. choosing OSDBA and OSOPER as dba,oper
g. fix any warnings and continue ( or ignore )
h. during the install you will be requested to run a script as the root user
7. creating ASM disk groups ,
by running $OCR_HOME/bin/asmca as the grid user ;
we will create the +UNDO and +DATA disk groups
8. Create a database by
running $ORACLE_HOME/bin/dbca as the oracle user

How to build LDAP

In this how-to we will build a simple LDAP tree ;
the scope of this how-to is only setting up the LDAP server .
the system used in this how-to is CentOS 5.5 i386
1. required packages : the OpenLDAP server and client tools

# yum -y install openldap openldap-servers openldap-clients

2. building the LDAP tree :
edit the file /etc/openldap/slapd.conf
and put in your domain and suffix , as well as the LDAP root password ;
you can use slappasswd for encrypting the password

suffix          "dc=CentOS"
rootdn          "cn=root,dc=CentOS"
rootpw          {SSHA}BbW/c1wp2uyM+mHR7EN+mVHkfHxBRXmg

* you can test the LDAP server configuration using
slaptest -u
3. create the database config file :
the easy way to do that is to copy the example

cp /etc/openldap/DB_CONFIG.example /var/lib/ldap/DB_CONFIG

now we can start building the tree

~]# service ldap start
Checking configuration files for slapd:
config file testing succeeded                       [  OK  ]
Starting slapd:                                            [  OK  ]

4. creating the base tree :
this tree will include the domain (suffix) , users and groups .
create an ldif base file ( you can use the scripts in /usr/share/openldap/migration/ for that ) ;
a simple base would look something like this ( let's call it base.ldif )

dn: dc=CentOS
dc: CentOS
objectClass: top
objectClass: domain

dn: ou=People,dc=CentOS
ou: People
objectClass: top
objectClass: organizationalUnit

dn: ou=Group,dc=CentOS
ou: Group
objectClass: top
objectClass: organizationalUnit

now add it to the LDAP tree via ldapadd

~]# ldapadd -x -W -D "cn=root,dc=CentOS" -f base.ldif
Enter LDAP Password:
adding new entry "dc=CentOS" 
adding new entry "ou=People,dc=CentOS"
adding new entry "ou=Group,dc=CentOS"

once it's finished we can start adding users and groups .
let's add two groups to our LDAP , by creating a groups.ldif file

dn: cn=group1,ou=Group,dc=CentOS
objectClass: posixGroup
objectClass: top
cn: group1
userPassword: {crypt}x
gidNumber: 5000

dn: cn=group2,ou=Group,dc=CentOS
objectClass: posixGroup
objectClass: top
cn: group2
userPassword: {crypt}x
gidNumber: 5001

now add these groups under the LDAP tree

~]# ldapadd -x -W -D "cn=root,dc=CentOS" -f groups.ldif
Enter LDAP Password:
adding new entry "cn=group1,ou=Group,dc=CentOS" 
adding new entry "cn=group2,ou=Group,dc=CentOS"

now let's add two users :
again , create a users.ldif file ;
the passwords can be created via slappasswd

dn: uid=user1,ou=People,dc=CentOS
uid: user1
cn: My name is user1
objectClass: account
objectClass: posixAccount
objectClass: top
objectClass: shadowAccount
userPassword: {SSHA}cEcqMNFk1Jd1N1L7U1JdybZdsb+5qG2T
shadowLastChange: 14791
shadowMax: 99999
shadowWarning: 7
loginShell: /bin/bash
uidNumber: 6001
gidNumber: 6001
homeDirectory: /home/user1
gecos: My name is user1

dn: uid=user2,ou=People,dc=CentOS
uid: user2
cn: My name is user2
objectClass: account
objectClass: posixAccount
objectClass: top
objectClass: shadowAccount
userPassword: {SSHA}cEcqMNFk1Jd1N1L7U1JdybZdsb+5qG2T
shadowLastChange: 14791
shadowMax: 99999
shadowWarning: 7
loginShell: /bin/bash
uidNumber: 6002
gidNumber: 6002
homeDirectory: /home/user2
gecos: My name is user2

you may wonder why there are so many entries to fill in ;
that is because for each objectClass we add to an entry , we need to fill in
all of its required attributes .
let's add these users :

~]# ldapadd -x -W -D "cn=root,dc=CentOS" -f users.ldif
Enter LDAP Password:
adding new entry "uid=user1,ou=People,dc=CentOS" 
adding new entry "uid=user2,ou=People,dc=CentOS"
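Existing entries can later be changed the same way with ldapmodify and a changetype: modify LDIF; a sketch (the new loginShell value is just an example, call the file modify.ldif):

```
dn: uid=user1,ou=People,dc=CentOS
changetype: modify
replace: loginShell
loginShell: /bin/sh
```

apply it with: ldapmodify -x -W -D "cn=root,dc=CentOS" -f modify.ldif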

and that's about it . in order to manage LDAP in a more friendly manner
you can use one of the many LDAP management tools , like phpldapadmin etc.