Cadaver – A command-line WebDAV client for Unix

What does it do?

It supports file upload and download, on-screen display, namespace operations (move and copy), collection creation and deletion, and locking operations.

How to get it:

You can get it from here. As I am on Gentoo and its repository already carries it, I installed it like below:

bhaskar@bhaskar-laptop_10:47:41_Fri Sep 24:~> sudo emerge -av cadaver

These are the packages that would be merged, in order:

Calculating dependencies... done!
[ebuild  N    ] net-misc/cadaver-0.23.3  USE="nls" 813 kB

Total: 1 package (1 new), Size of downloads: 813 kB

Would you like to merge these packages? [Yes/No] y

>>> Verifying ebuild manifests

>>> Emerging (1 of 1) net-misc/cadaver-0.23.3
>>> Downloading ''
--2010-09-24 10:47:56--
Resolving,,, ...
Connecting to||:80... connected.
HTTP request sent, awaiting response... 302 Found
Location: [following]
--2010-09-24 10:48:00--
Resolving,,, ...
Reusing existing connection to
HTTP request sent, awaiting response... No data received.

--2010-09-24 10:48:06--  (try: 2)
Connecting to||:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 831884 (812K) [application/x-gzip]
Saving to: `/usr/portage/distfiles/cadaver-0.23.3.tar.gz'

==================================>] 831,884     1.15K/s   in 2m 33s

To work with it, I created a directory for it in the web space, like below:

bhaskar@bhaskar-laptop_10:57:11_Fri Sep 24:~>sudo mkdir -p /var/www/localhost/htdocs/webdav

bhaskar@bhaskar-laptop_11:08:07_Fri Sep 24:~>ls -ls /var/www/localhost/htdocs/webdav/
total 0

bhaskar@bhaskar-laptop_11:08:25_Fri Sep 24:~>ls -ld /var/www/localhost/htdocs/webdav/
drwxr-xr-x 2 apache apache 4096 Sep 24 11:08 /var/www/localhost/htdocs/webdav/

Now I have mod_dav built as an Apache module. Here it is:

bhaskar@bhaskar-laptop_11:17:54_Fri Sep 24:/etc/apache2/modules.d>ls | grep mod_dav

And the config look like below:

<IfDefine DAV>
DavLockDB "/var/lib/dav/lockdb"

# The following directives disable redirects on non-GET requests for
# a directory that does not include the trailing slash.  This fixes a
# problem with several clients that do not appropriately handle
# redirects for folders with DAV methods.
<IfModule setenvif_module>
BrowserMatch "Microsoft Data Access Internet Publishing Provider" redirect-carefully
BrowserMatch "MS FrontPage" redirect-carefully
BrowserMatch "^WebDrive" redirect-carefully
BrowserMatch "^WebDAVFS/1.[012345678]" redirect-carefully
BrowserMatch "^gnome-vfs/1.0" redirect-carefully
BrowserMatch "^XML Spy" redirect-carefully
BrowserMatch "^Dreamweaver-WebDAV-SCM1" redirect-carefully
</IfModule>
</IfDefine>
# vim: ts=4 filetype=apache

Now the main Apache config file shows this:

<IfDefine DAV>
LoadModule dav_module modules/
</IfDefine>
<IfDefine DAV>
LoadModule dav_fs_module modules/
</IfDefine>
<IfDefine DAV>
LoadModule dav_lock_module modules/
</IfDefine>

I am going to create a virtual host for cadaver, like below:

<VirtualHost *:80>
    ServerName localhost
    Include /etc/apache2/vhosts.d/default_vhost.include
    DocumentRoot /var/www/localhost/htdocs/webdav/
    <Directory /var/www/localhost/htdocs/webdav>
        Options Indexes MultiViews
        AllowOverride None
        Order allow,deny
        Allow from all
    </Directory>

    Alias /webdav /var/www/localhost/htdocs/webdav
    <Location /webdav>
        DAV on
        AuthType Basic
        AuthName "webdav"
        AuthUserFile /var/www/localhost/htdocs/webdav/passwd.dav
        Require valid-user
    </Location>
</VirtualHost>
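Before restarting Apache it is worth verifying the syntax of the new virtual host; apache2ctl's configtest typically prints "Syntax OK" when the configuration parses cleanly (output shown for illustration):

```
bhaskar@bhaskar-laptop:~> sudo apache2ctl configtest
Syntax OK
```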

Adding a user and password to operate with cadaver:

bhaskar@bhaskar-laptop_11:31:59_Fri Sep 24:/etc/apache2/vhosts.d> sudo /usr/sbin/htpasswd2 -c /var/www/localhost/htdocs/webdav/passwd.dav cadaver
New password:
Re-type new password:
Adding password for user cadaver
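Under the hood, htpasswd2 (Gentoo's name for Apache's htpasswd) stores an Apache-style MD5 (apr1) hash by default. As a non-interactive sketch of what ends up in the file, openssl can generate the same hash format (the path /tmp/passwd.dav and the password "secret" are purely illustrative; openssl is assumed to be installed):

```shell
# Generate an Apache-style apr1 (MD5) hash for the password "secret"
# and store a user:hash line the way htpasswd would.
hash=$(openssl passwd -apr1 secret)
printf 'cadaver:%s\n' "$hash" > /tmp/passwd.dav
cat /tmp/passwd.dav
```

The stored line starts with `cadaver:$apr1$`, which is how Apache recognises the hash variant.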

And change the ownership of that file like below:

bhaskar@bhaskar-laptop_11:35:20_Fri Sep 24:/var/www/localhost/htdocs/webdav>sudo chown root:apache passwd.dav

Now start Apache like this:

bhaskar@bhaskar-laptop_11:51:30_Fri Sep 24:~> sudo /etc/init.d/apache2 start
 * Starting apache2 ...                                                                                                                                                     [ ok ]

Now to test the cadaver command-line client; see below:

bhaskar@bhaskar-laptop_11:55:52_Fri Sep 24:~> cadaver http://bhaskar-laptop/webdav

Authentication required for webdav on server `bhaskar-laptop’:

Username: cadaver



dav:/webdav/> help

Available commands:

ls         cd         pwd        put        get        mget       mput
edit       less       mkcol      cat        delete     rmcol      copy
move       lock       unlock     discover   steal      showlocks  version
checkin    checkout   uncheckout history    label      propnames  chexec
propget    propdel    propset    search     set        open       close
echo       quit       unset      lcd        lls        lpwd       logout
help       describe   about

Aliases: rm=delete, mkdir=mkcol, mv=move, cp=copy, more=less, quit=exit=bye


Now you can use it much like an FTP client.
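A typical session looks roughly like this (the file and collection names are illustrative, and cadaver's exact messages may differ slightly between versions):

```
dav:/webdav/> mkcol backups
Creating `backups': succeeded.
dav:/webdav/> put notes.txt
Uploading notes.txt to `/webdav/notes.txt': succeeded.
dav:/webdav/> get notes.txt
Downloading `/webdav/notes.txt' to notes.txt: succeeded.
dav:/webdav/> quit
```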

Hope this will help.




Security Enhanced Linux, a.k.a. SELinux

Writing about this topic is a gruelling task. I am greatly influenced by the articles mentioned in the resources section at the bottom of this article; I am just trying to put things together for the avid user to read at a glance. So, without much ado, let us get into the facts quickly, which will give you a head start.

But before we dive in, we must make sure that we have enabled the SELinux options in the kernel. In the .config file of the kernel source, it looks like below:

When configuring your kernel do the following:
(Under Networking Options, enable Network Packet Filtering.
Under Security Options, enable Capabilities and enable
both IP Networking and SELinux as built-in options.)

  This means having the following in your /usr/src/linux/.config:

  This release of SE Linux depends on XATTRs.  For the Ext3 file system
  use the following settings:

  not required for SE Linux, but do not do any harm either.
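For reference, on a kernel of that era the options described above correspond to .config entries like the following (a sketch; the exact symbol names vary between kernel versions):

```
CONFIG_NETFILTER=y
CONFIG_SECURITY=y
CONFIG_SECURITY_NETWORK=y
CONFIG_SECURITY_SELINUX=y
CONFIG_EXT3_FS_XATTR=y
CONFIG_EXT3_FS_SECURITY=y
```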

Why SE Linux?

SE Linux offers greater security for your system. Users can be assigned predefined roles so that they can not access files or processes that they do not own. There is no “chmod 777” equivalent operation. This differs from regular Unix permissions in that the user defined roles, or security contexts they are placed in, have limited access to files and other resources but in a far more controlled fashion. Take a user’s .rhosts file on a regular Unix system. If they make it world writeable then anyone can login and do lots of damage. Under SE Linux, you can control whether or not the user has the ability to change the permissions on their .rhosts file, and also prevent other people from writing to it even after the owner has made it world writeable.

A common question is how SE Linux permissions relate to standard Unix permissions. When you do a certain operation, the Unix permissions are checked first. If they allow your operation then SE Linux will check next and allow or deny as appropriate. But if the Unix permissions don’t let you do something, the requested operation stops there and the SE Linux checks aren’t performed.

Another example: if there were an exploitable bug in /usr/bin/passwd which could run chmod 666 /etc/shadow, SE Linux permissions would still prevent anyone from inappropriately accessing the file.

Terminology used in the SELinux parlance


identity

An identity under SE Linux is not the same as the traditional Unix uid (user id). They can coexist on the same system, but are quite different. Identities under SE Linux form part of a security context, which affects what domains can be entered, i.e. what essentially can be done. An SE Linux identity and a standard Unix login name may have the same textual representation (and in most cases they do); however, it is important to understand that they are two different things. Running the su command does not change the user identity under SE Linux.

An unprivileged user with the login name faye runs the id command (under SE Linux) and sees the security context of


The identity portion of the security context in this case is “faye”. Now, if faye su’s to root and runs id, she will see the security context is still


so the identity remains the same, and has not changed to root. However, if identity faye has been granted access to enter the sysadm_r role and does so (with the newrole -r command which will be covered later), and runs id command again, she will now see


So the identity remains the same but the role and domain (second and third fields respectively) have changed. Maintaining the identity in this manner is useful where user accountability is required. It is also crucial to system security in that the user identity will determine what roles and domains can be used.


domain

Every process runs in a domain. A domain directly determines the access a process has. A domain is basically a list of what processes can do, or what actions a process can perform on different types. Think of a domain like a standard Unix uid. Say root has a program and does a chmod 4777 on that program (making it setuid root). Anyone on the system, even the nobody user, can run this program as root, thereby creating a security issue. With SE Linux, however, if a process triggers a transition to a privileged domain and the role of the process is not authorised to enter that domain, then the program can't be run.

Some examples of domains are sysadm_t which is the system administration domain, and user_t which is the general unprivileged user domain. init runs in the init_t domain, and named runs in the named_t domain.


type

A type is assigned to an object and determines who gets to access that object. The definition is roughly the same as for a domain, except that a domain applies to processes while a type applies to objects such as directories, files, sockets, etc.


role

A role determines what domains can be used. The domains that a user role can access are predefined in policy configuration files. If a role is not authorised to enter a domain (in the policy database), access will be denied.

In order to allow a user from the user_t domain (the unprivileged user domain) to execute the passwd command, the following is specified in the relevant config file:

role user_r types user_passwd_t

It shows that a user in the user role (user_r) is allowed to enter the user_passwd_t domain, i.e. they can run the passwd command.
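In the old example policy, this role declaration is paired with a domain-transition rule so that executing the passwd binary actually moves the process into the passwd domain. A rough sketch (the macro and type names follow the example policy; treat it as illustrative):

```
# user_r is allowed to operate in the user_passwd_t domain
role user_r types user_passwd_t;
# executing a file of type passwd_exec_t from user_t
# automatically transitions the process into user_passwd_t
domain_auto_trans(user_t, passwd_exec_t, user_passwd_t)
```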


security context

A security context has all the attributes that are associated with things like files, directories, processes, TCP sockets and so forth. A security context is made up of the identity, role and domain or type. You can check your own current security context by running id under SE Linux.

There is a very important distinction which needs to be made here, between a domain and a type, as it tends to cause a little confusion later on if you don’t understand it from the start.

Processes have a domain. When you check the security context of a process (for an example, see the explanation of “transition”, below), the final field is the domain such as user_passwd_t (if you were running the passwd command).

Objects such as files, directories, sockets etc. have types. When you use the ls --context command on a file, for instance, the final field is the type, such as user_home_t for a file created in the home directory of a user in the user_r role.

Here's where a little confusion can creep in, and that's whether something is a domain or a type. Consider the /proc filesystem. Every process has a domain, and /proc has directories for each process. Each process's entry has a label, or rather, a security context, applied to it as a file. But in the /proc world, the label contains a type, and not a domain. Even though /proc represents running processes, the entries under /proc are considered files and therefore have a type instead of a domain.

Running ls --context /proc shows the following listing for the init process (with a process id of 1):

dr-xr-xr-x  root     root     system_u:system_r:init_t         1

The label, or security context, shows that this file has a type of init_t. However, it also means that the init process is running in the init_t domain. Each file or directory under /proc that has a process id for a filename follows this convention, i.e. the type listed for that process in the output of an ls --context command will also be the domain that process is running in.

Another thing to note is that commands such as chsid (change the security id) and chcon (change context) don’t work on /proc as /proc does not support changing of labels.

The security context of a file, for example, can vary depending on the domain that creates it. By default, a new file or directory inherits the same type as its parent directory, however you can have policies set up to do otherwise.

User faye creates a file named test in her home directory. She then runs the command ls --context test and sees

-rw-r--r--  faye     faye     faye:object_r:user_home_t        test

She then creates a file in /tmp called tmptest and runs the command ls --context /tmp/tmptest. This time, the result is

-rw-r--r--  faye     faye     faye:object_r:user_tmp_t       /tmp/tmptest

In the first example, the security context includes the type “user_home_t”, which is the default type for the home directory of an unprivileged user in the user_r role. After running the second ls --context command, you can see that the type is user_tmp_t, which is the default type used for files created by a user_t process in a directory with a tmp_t type.


transition

A transition decision, also referred to as a labelling decision, determines which security context will be assigned for a requested operation. There are two main types of transition. Firstly, there is a transition of process domains, which is used when you execute a program of a specified type. Secondly, there is a transition of file type, used when you create a file under a particular directory.

For the second type of transition (transition of file type), refer to the example given in the “security context” section above. When running the ls --context commands, you can see what the file types are labelled as (i.e. user_home_t and user_tmp_t in the above examples). So here you can see that when the user created a file in /tmp, a transition to the user_tmp_t type occurred, and the new file has been labelled as such.

For transition of process domains, consider the following example. Run ssh as a non-privileged user, or more specifically, from the user_t domain (remember you can use the id command to check your security context). Then run ps ax --context and note what is listed for ssh. Assuming user faye does this, she sees


as part of the output listing. The ssh process is being run in the user_ssh_t domain because the executable is of type ssh_exec_t and the user_r has been granted access to the user_ssh_t domain.


policy

Policies are a set of rules governing things such as the roles a user has access to, which roles can enter which domains, and which domains can access which types. You can edit your policy files according to how you want your system set up.

Studying the SELinux policy

SELinux makes access decisions based on the security contexts assigned to processes, files, and other objects. SELinux provides interfaces for querying these contexts and, given the required access rights, for setting them. For instance, SELinux reports process contexts through the procattr interface. If you type:

cat /proc/$$/attr/current

You can see the context of the current process ($$). You can easily view the context of all processes on the system.
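As a sketch, the same procattr lookup can be wrapped in a couple of shell lines that degrade gracefully on kernels where SELinux is not available (the "unavailable" fallback string is my own):

```shell
# Read the current process's SELinux context via the procattr interface;
# fall back to a placeholder on kernels without SELinux.
ctx=$(cat /proc/$$/attr/current 2>/dev/null)
ctx=${ctx:-unavailable}
echo "$ctx"
```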

Access control methods

Most operating systems use access controls to determine whether an entity (user or program) can access a given resource. UNIX®-based systems use a form of discretionary access control (DAC). This method restricts access to objects commonly based on the groups to which they belong. For example, files in GNU/Linux have an owner, a group, and a set of permissions. The permissions define who can access a given file: who can read it, who can write to it, and who can execute it. These permissions are split into three sets of users, representing the user (owner of the file), the group (all users who are members of the group), and others (all users who are neither members of the group nor the owner of the file).

Lumping access controls like this creates a problem because an exploited program inherits the access controls of the user. Thus the program can do things at the user’s access level, which is undesirable. Rather than define restrictions this way, it’s more secure to use the principle of least privilege: programs can do what they need to perform their task, but nothing more. For example, if you have a program that responds to socket requests but doesn’t need to access the file system, then that program should be able to listen on a given socket but not have access to the file system. That way, if the program is exploited in some way, its access is explicitly minimized. This type of control is called mandatory access control (MAC).

Another approach to controlling access is role-based access control (RBAC). In RBAC, permissions are provided based on roles that are granted by the security system. The concept of a role differs from that of a traditional group in that a group represents one or more users. A role can represent multiple users, but it also represents the permissions that a set of users can perform.

SELinux adds both MAC and RBAC to the GNU/Linux operating system. The next section explores the SELinux implementation and how security enforcement was transparently added to the Linux kernel.

Should you really disable SELinux?

Be aware that by disabling SELinux you will be removing a security mechanism from your system. Think about this carefully, and if your system is on the Internet and accessed by the public, then think about it some more. Joshua Brindle (an SELinux developer) has comments on disabling SELinux here, which state clearly that applications should be fixed to work with SELinux rather than disabling the OS security mechanism.
You need to decide if you want to disable SELinux temporarily to test the problem, or permanently switch it off. It may also be a better option to make changes to the policy to permit the operations that are being blocked, but this requires knowledge of writing policies and may be a steep learning curve for some people. For the operating system as a whole, there are two kinds of disabling:

  • Permissive – switch the SELinux kernel into a mode where every operation is allowed. Operations that would be denied are allowed and a message is logged identifying that it would be denied. The mechanism that defines labels for files which are being created/changed is still active.
  • Disabled – SELinux is completely switched off in the kernel. This allows all operations to be permitted, and also disables the process which decides what to label files & processes with.

Disabling SELinux could lead to problems if you want to re-enable it later. When the system runs with file labelling disabled, it will create files with no label, which could cause problems if the system is later booted into enforcing mode. A full re-labelling of the file system will then be necessary.

Temporarily switch off enforcement

You can switch the system into permissive mode with the following command:

echo 0 >/selinux/enforce

You’ll need to be logged in as root, and in the sysadm_r role:

newrole -r sysadm_r

To switch back into enforcing mode:

echo 1 >/selinux/enforce

In Fedora Core and RedHat Enterprise Linux you can use the setenforce command with a 0 or 1 option to set permissive or enforcing mode; it's just a slightly easier command than the above.

To check what mode the system is in, run:

cat /selinux/enforce

which will print “0” or “1” for permissive or enforcing mode, likely printed immediately before your next command prompt.

Permanently Permissive

The above will switch off enforcement temporarily – until you reboot the system. If you want the system to always start in permissive mode, then here is how you do it.

In Fedora Core and RedHat Enterprise, edit /etc/selinux/config and you will see some lines like this:

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
# SELINUXTYPE= can take one of these two values:
# targeted - Only targeted network daemons are protected.
# strict - Full SELinux protection.

… just change SELINUX=enforcing to SELINUX=permissive, and you’re done. Reboot if you want to prove it.

For the other Linuxes which don’t have the /etc/selinux/config file, you just need to edit the kernel boot line, usually in /boot/grub/grub.conf if you’re using the GRUB boot loader. On the kernel line, add enforcing=0 at the end. For example,

title SE-Linux Test System
	root (hd0,0)
	kernel /boot/vmlinuz-2.4.20-selinux-2003040709 ro root=/dev/hda1 nousb enforcing=0
	#initrd /boot/initrd-2.4.20-selinux-2003040709.img

Fully Disabling SELinux

Fully disabling SELinux goes one step further than just switching into permissive mode. Disabling will completely disable all SELinux functions including file and process labelling.

In Fedora Core and RedHat Enterprise, edit /etc/selinux/config and change the SELINUX line to SELINUX=disabled:

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
# SELINUXTYPE= can take one of these two values:
# targeted - Only targeted network daemons are protected.
# strict - Full SELinux protection.

… and then reboot the system.

For the other Linuxes which don’t have the /etc/selinux/config file, you just need to edit the kernel boot line, usually in /boot/grub/grub.conf, if you’re using the GRUB boot loader. On the kernel line, add selinux=0 at the end. For example,

title SE-Linux Test System
        root (hd0,0)
        kernel /boot/vmlinuz-2.4.20-selinux-2003040709 ro root=/dev/hda1 nousb selinux=0
        #initrd /boot/initrd-2.4.20-selinux-2003040709.img

You will have to reboot to fully disable SELinux; you can't do it while the system is running.

Re-Enabling SELinux

If you’ve disabled SELinux as in the section above, and you want to enable it again then you’ve got a bit of work to do. The problem will be that files created or changed when SELinux was disabled won’t have the correct file labels on them – if you just reboot in enforcing mode then a lot of stuff won’t work properly.

What you need to do is to enable SELinux by editing /etc/selinux/config (for Fedora/RedHat) or by adding selinux=1 to the kernel boot line, then boot into permissive mode, then relabel everything, and then reboot into (or simply switch to) enforcing mode.

After booting into permissive mode, run :

fixfiles relabel

Alternatively, in Fedora and RedHat Enterprise Linux you can run touch /.autorelabel and reboot or put autorelabel on the boot command line – in both cases the file system gets a full relabel early in the boot process. Note that this can take quite some time for systems with a large number of files.

touch /.autorelabel

After relabelling the filesystem, you can switch to enforcing mode (see above) and your system should be fully enforcing again.

Understanding SELinux modes

Irrespective of the policy or the rules implemented through SELinux Type Enforcement, there are three modes of operation for SELinux:

  1. Disabled
  2. Permissive
  3. Enforcing

Disabled mode implies that SELinux is disabled and not implemented on the host. This has been the most common choice in the installations I have seen. Hopefully, by the end of this series, we shall be able to bring about a change in that practice by encouraging more system administrators to adopt SELinux.

Permissive Mode is similar to a debugging mode. In Permissive Mode, SELinux policies and rules are applied to subjects and objects, but actions (for example, access control denials) are not enforced. The biggest advantage of Permissive Mode is that log files and error messages are generated based on the SELinux policy implemented.

In other words, if the SELinux policy would prevent the httpd subject (Apache Web server) from accessing the object folder /webdata on my system, implementing SELinux in Permissive Mode would let the Apache Web server access the folder /webdata but log a denial in the log files.

This error logging informs the system administrator that if SELinux is activated in the Enforcing Mode, the httpd subject would be disallowed access to the /webdata folder on my system.

Permissive Mode is the starting point for all those wanting to explore the world of Type Enforcement through SELinux. Without blocking access to your favourite programs such as Evolution, etc., it provides you with enough debugging information to fine-tune your policy before deploying it on your system.

Enforcing Mode, as the name signifies, is SELinux in action. All production systems, when hardened, should enable SELinux in Enforcing Mode. SELinux through Access Controls does have a minor performance overhead, but compared to the advantages that it brings to the table, I am sure it will soon become the norm to implement SELinux on production servers.

Now for a few mundane things to know about SELinux on the system. Here they are:

Check the status of SELinux:

bhaskar@bhaskar-laptop_08:06:39_Tue Sep 21:~> sudo sestatus
[sudo] password for bhaskar:
SELinux status:                 enabled
SELinuxfs mount:                /selinux
Current mode:                   enforcing
Mode from config file:          enforcing
Policy version:                 24
Policy from config file:        targeted

Check the current enforcement mode:

bhaskar@bhaskar-laptop_08:15:48_Tue Sep 21:~> sudo getenforce

The file you need to look at for SELinux on a RHEL system is located at /etc/sysconfig/selinux. You can open this file in any editor and change your preferences. By default, two policies are shipped along with Red Hat Enterprise Linux: Targeted and Strict.

The Targeted Policy is the first step in assisting system administrators to understand and implement SELinux. It only ‘targets’ certain network daemons such as the Apache Web server, FTP server, BIND DNS server and a few others, while leaving the vast majority of end-user applications largely untouched. It creates an ‘unconfined’ domain ‘confinement’ (interesting paradox, isn’t it?) and does not apply Access Control Restrictions to most applications in the unconfined domain.

The Strict Policy, on the other hand, is a true restrictive Access Control Policy. Before implementing this policy, make sure you understand SELinux concepts and policies well.

How to manage contexts?

There is a binary called “semanage” which will do the job for us. So here we go to get the details:

bhaskar@bhaskar-laptop_08:38:02_Tue Sep 21:~> sudo semanage fcontext -l | less

SELinux fcontext                                   type               Context

/                                                  directory          system_u:object_r:root_t:s0
/.*                                                all files          system_u:object_r:default_t:s0
/[^/]+                                             regular file       system_u:object_r:etc_runtime_t:s0
/\.autofsck                                        regular file       system_u:object_r:etc_runtime_t:s0
/\.autorelabel                                     regular file       system_u:object_r:etc_runtime_t:s0
/\.journal                                         all files          <<None>>
/\.suspended                                       regular file       system_u:object_r:etc_runtime_t:s0
/a?quota\.(user|group)                             regular file       system_u:object_r:quota_db_t:s0
/afs                                               directory          system_u:object_r:mnt_t:s0

... output snipped for the sake of clarity.
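Unlike chcon (shown next), rules added through semanage are persistent and survive a full relabel. For example, to give Apache's content type to the /webdata folder from the earlier Permissive-mode example (the path and type are for illustration only):

```
root@bhaskar-laptop # semanage fcontext -a -t httpd_sys_content_t "/webdata(/.*)?"
root@bhaskar-laptop # restorecon -Rv /webdata
```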

Change context temporarily:

How do you do that? SELinux comes with a binary called “chcon”, with which we can make this happen. Here is an application of it:

bhaskar@bhaskar-laptop_08:42:42_Tue Sep 21:~> sudo touch /tmp/change_context
[sudo] password for bhaskar:

bhaskar@bhaskar-laptop_08:48:45_Tue Sep 21:~> ls -lZ /tmp/change_context
-rw-r--r--. root root unconfined_u:object_r:user_tmp_t:s0 /tmp/change_context

Here I have created an empty file in /tmp and listed the default context attached to it. Now we are going to change it:

root@bhaskar-laptop_08:52:11_Tue Sep 21:~ # chcon -t unconfined_t /tmp/change_context

root@bhaskar-laptop_08:52:37_Tue Sep 21:~ # ls -Z /tmp/change_context

Restorecon: the healer

Now if something goes wrong, we can get the original context back by running “restorecon” on the affected files and directories:

root@bhaskar-laptop_09:51:42_Tue Sep 21:/home/bhaskar # restorecon -v /fileODir_goes_bad









Hope this will help.




SASL : IMAP authentication system

In this article I am going to take you through building cyrus-sasl with the Postfix mail server. So fasten your seat belt for the ride.

SASL stands for Simple Authentication and Security Layer, and I will integrate it with an IMAP server built with Postfix. SASL is defined in RFC 2222. SASL is a means of authenticating yourself to the server without providing your password in the clear. It can also be used to provide extended capabilities based on your authorization. In plainer words, a SASL mechanism can provide authentication only, or it can also provide integrity checking, and possibly encryption as well.

I do not issue any guarantee that this will work for you.

First and foremost, get the cyrus-sasl source from here, or go to the website dedicated to it.

OK, I am building it on Gentoo, so I will furnish the steps required to get it working there. But the prime focus will be to show you how it works.

bhaskar@bhaskar-laptop_08:22:37_Sat Sep 18:~> sudo emerge -av cyrus-sasl

These are the packages that would be merged, in order:

Calculating dependencies… done!
[ebuild R ] dev-libs/cyrus-sasl-2.1.23-r1 USE="berkdb crypt gdbm ldap pam ssl -authdaemond -java -kerberos -mysql -ntlm_unsupported_patch -postgres -sample -sqlite -srp -urandom" 0 kB

Total: 1 package (1 reinstall), Size of downloads: 0 kB

Would you like to merge these packages? [Yes/No]

I chose N (no) here because I already have it on the system. But when did it get onto the system? Let's find out:

bhaskar@bhaskar-laptop_08:47:55_Sat Sep 18:~> sudo genlop -t cyrus-sasl
* dev-libs/cyrus-sasl

Fri Nov 13 18:08:00 2009 >>> dev-libs/cyrus-sasl-2.1.23-r1
merge time: 2 minutes and 5 seconds.

Right, now to move on. We need to add a user to manage that software. So here we go:

bhaskar@bhaskar-laptop_08:49:18_Sat Sep 18:~> sudo useradd -g mail cyrus

Now you see the user like this:

bhaskar@bhaskar-laptop_08:49:18_Sat Sep 18:~> id cyrus
uid=110(cyrus) gid=12(mail) groups=12(mail)

Next we set the password of the user cyrus, created in the previous step, like this:

root@bhaskar-laptop_09:02:10_Sat Sep 18:/home/bhaskar # passwd cyrus
Retype new password:
passwd: password updated successfully

Cool! You can go ahead and test the user before you start to implement other things with it. Now a few placeholder directories need to be created for it.

Creating the necessary directories

This list of instructions will set up all the directories necessary for imap.

1. mkdir /var/adm

2. touch /var/adm/imapd.log /var/adm/auth.log

3. mkdir /var/imap /var/spool/imap /var/imap/srvtab

4. chown cyrus /var/imap /var/spool/imap /var/imap/srvtab

5. chgrp mail /var/imap /var/spool/imap /var/imap/srvtab

6. chmod 750 /var/imap /var/spool/imap /var/imap/srvtab

These commands are fairly self-explanatory; if not, paste them into your system and observe the effect.
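The six steps above can be collected into one small script. This is only a sketch, assuming the same paths and the cyrus/mail user and group created earlier; the PREFIX variable is a hypothetical addition of mine so you can rehearse the layout in a scratch tree first.

```shell
# Sketch of the six steps above as one script. PREFIX is a
# hypothetical variable added here so you can rehearse the layout in a
# scratch tree (e.g. PREFIX=/tmp/imap-skeleton); set it to the empty
# string when doing it for real, as root.
set -e
PREFIX="${PREFIX:-/tmp/imap-skeleton}"

mkdir -p "$PREFIX/var/adm"
touch "$PREFIX/var/adm/imapd.log" "$PREFIX/var/adm/auth.log"
mkdir -p "$PREFIX/var/imap" "$PREFIX/var/spool/imap" "$PREFIX/var/imap/srvtab"

# Ownership and permissions only make sense on the real tree:
if [ -z "$PREFIX" ]; then
    chown cyrus /var/imap /var/spool/imap /var/imap/srvtab
    chgrp mail  /var/imap /var/spool/imap /var/imap/srvtab
    chmod 750   /var/imap /var/spool/imap /var/imap/srvtab
fi
```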

Now let's work as the user cyrus we created, so there is less chance of interfering with other things:

root@bhaskar-laptop_09:12:35_Sat Sep 18:/home/bhaskar # su cyrus

cyrus@bhaskar-laptop_09:12:39_Sat Sep 18:/home/bhaskar>

Now we are going to put some entries into the syslog configuration file so the logger will act on them.

bhaskar@bhaskar-laptop_09:18:02_Sat Sep 18:/etc/syslog-ng> sudo vim syslog-ng.conf

and we put the following lines into it:

local6.debug /var/adm/imapd.log
auth.debug /var/adm/auth.log
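Note that those two lines are written in classic syslogd syntax. Since the file being edited here is syslog-ng.conf, syslog-ng wants the same routing spelled out as destination/filter/log blocks. A rough sketch, assuming your config already defines a source called src:

```
destination imapd_log { file("/var/adm/imapd.log"); };
destination auth_log  { file("/var/adm/auth.log"); };

filter f_local6 { facility(local6); };
filter f_auth   { facility(auth); };

log { source(src); filter(f_local6); destination(imapd_log); };
log { source(src); filter(f_auth);   destination(auth_log);  };
```

Restart syslog-ng after the change so it rereads the configuration.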

OK, one more thing to do before I jump into the Cyrus part. We need to edit the file /etc/imapd.conf, because we are integrating with an IMAP server.

bhaskar@bhaskar-laptop_09:22:38_Sat Sep 18:~> sudo vim /etc/imapd.conf

Once you are inside the file, please add the lines below and save it:

configdirectory: /var/imap
partition-default: /var/spool/imap
sievedir: /var/imap/sieve
tls_ca_path: /etc/ssl/certs
tls_cert_file: /etc/ssl/cyrus/server.crt
tls_key_file: /etc/ssl/cyrus/server.key
# Don't use an everyday user as admin.
admins: cyrus
hashimapspool: yes
allowanonymouslogin: no
allowplaintext: no

So these are the entries I have put in, since I have an SSL cert too. I use saslauthd to check my password, and the relevant entry in the file looks like this:

# Use saslauthd if you want to use pam for imap.
# But be warned: login with DIGEST-MD5 or CRAM-MD5
# is not possible using pam.
sasl_pwcheck_method: saslauthd

Now we need to check the /etc/services file, which holds information about the system's services. We are looking for the specific lines mentioned below:

pop3 110/tcp
imap 143/tcp
imsp 406/tcp
kpop 1109/tcp
sieve 2000/tcp


root@bhaskar-laptop_09:36:28_Sat Sep 18:/var/spool # grep pop3 /etc/services
pop3 110/tcp pop-3 # Post Office Protocol – Version 3
pop3 110/udp pop-3
pop3s 995/tcp # pop3 protocol over TLS/SSL
pop3s 995/udp


root@bhaskar-laptop_09:37:15_Sat Sep 18:/var/spool # grep imap /etc/services
imap 143/tcp imap2 # Internet Message Access Protocol
imap 143/udp imap2
imap3 220/tcp # Interactive Mail Access
imap3 220/udp
imaps 993/tcp # imap4 protocol over TLS/SSL
imaps 993/udp


root@bhaskar-laptop_09:37:22_Sat Sep 18:/var/spool # grep imsp /etc/services
imsp 406/tcp # Interactive Mail Support Protocol
imsp 406/udp


root@bhaskar-laptop_09:38:29_Sat Sep 18:/var/spool # grep kpop /etc/services
kpop 1109/tcp # Pop with Kerberos


root@bhaskar-laptop_09:39:09_Sat Sep 18:/var/spool # grep sieve /etc/services
cisco-sccp 2000/tcp sieve # Cisco SCCP
cisco-sccp 2000/udp sieve

So things are in place. Cool, looks good indeed. Now we need to modify the configuration of the superserver, called inetd, or on more modern systems xinetd.

imap stream tcp nowait cyrus /usr/cyrus/bin/imapd imapd
pop3 stream tcp nowait cyrus /usr/cyrus/bin/pop3d pop3d
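The two lines above are classic inetd.conf syntax. If your system runs xinetd instead, each service gets its own stanza. Here is a sketch for the IMAP service, reusing the /usr/cyrus/bin path from above (your build may install elsewhere):

```
service imap
{
        socket_type = stream
        protocol    = tcp
        wait        = no
        user        = cyrus
        server      = /usr/cyrus/bin/imapd
        disable     = no
}
```

A matching stanza for pop3 with server = /usr/cyrus/bin/pop3d follows the same shape; reload xinetd afterwards.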

As I said before, we are going to integrate with Postfix, so we need to check this in /etc/postfix/ for the user cyrus:

root@bhaskar-laptop_09:58:51_Sat Sep 18:/etc # grep cyrus /etc/postfix/
cyrus unix – n n – – pipe
# flags=R user=cyrus argv=/cyrus/bin/deliver -e -m ${extension} ${user}
# Also specify in cyrus_destination_recipient_limit=1
#cyrus unix – n n – – pipe
# user=cyrus argv=/cyrus/bin/deliver -e -r ${sender} -m ${extension} ${user}

So on my system it is in place, but if you don't have this set, please do so.

Now we need to add the Cyrus administrator for monitoring and administrative work. Here are the steps:

root@bhaskar-laptop_10:12:00_Sat Sep 18:/etc # /usr/sbin/saslpasswd2 cyrus
Again (for verification):
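Before moving on, you can sanity-check the freshly created credentials. These are stock cyrus-sasl utilities; I'm assuming the default sasldb backend and a running saslauthd here, so adjust to taste:

```
# List the users currently present in the SASL database:
sudo sasldblistusers2

# With saslauthd running, verify a username/password pair end to end
# (testsaslauthd ships with cyrus-sasl):
sudo testsaslauthd -u cyrus -p 'the-password-you-set'
```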

Now it's time to test the server with authentication, so here we go:

cyrus@bhaskar-laptop_10:46:00_Sat Sep 18:/root> cyradm --auth login localhost
verify error:num=18:self signed certificate
IMAP Password:

bhaskar-laptop.localdomain> ?
authenticate, login, auth authenticate to server
chdir, cd change current directory
createmailbox, create, cm create mailbox
deleteaclmailbox, deleteacl, dam remove ACLs from mailbox
deletemailbox, delete, dm delete mailbox
disconnect, disc disconnect from current server
exit, quit exit cyradm
help, ? show commands
info display mailbox/server metadata
listacl, lam, listaclmailbox list ACLs on mailbox
listmailbox, lm list mailboxes
listquota, lq list quotas on specified root
listquotaroot, lqr, lqm show quota roots and quotas for mailbox
mboxcfg, mboxconfig configure mailbox
reconstruct reconstruct mailbox (if supported)
renamemailbox, rename, renm rename (and optionally relocate) mailbox
server, servername, connect show current server or connect to server
setaclmailbox, sam, setacl set ACLs on mailbox
setinfo set server metadata
setquota, sq set quota on mailbox or resource
subscribe, sub subscribe to a mailbox
unsubscribe, unsub unsubscribe from a mailbox
version, ver display version info of current server
xfermailbox, xfer transfer (relocate) a mailbox to a different server

Creating a mailbox for the specified user

bhaskar-laptop.localdomain> cm user.bhaskar
bhaskar-laptop.localdomain> lm
user.bhaskar (\HasNoChildren)

Here “lm” stands for list mailbox, which is available by the help command shown above.

Now you can do many things with the mail server: create users and set quotas for their mailboxes, to name a few. Please look at the commands listed above to make use of them.
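For example, still inside cyradm, you could cap the new mailbox with a quota using the setquota/listquota commands from the help output; the 10000 (kilobytes) below is an arbitrary figure of my own:

```
bhaskar-laptop.localdomain> sq user.bhaskar 10000
bhaskar-laptop.localdomain> lq user.bhaskar
```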

Hope this will help.


Exploring /dev/random vs. /dev/urandom and /dev/zero vs. /dev/null

In this article I will take you through the mysteries behind those files. All of them are very critical and important to most open systems, specifically GNU/Linux, so one has to have an idea of what goes on underneath in order to deal with them and utilise them the proper way.

First I shall explore /dev/random vs. /dev/urandom, so here we go:

One of the key things that comes to mind is generating random numbers during public-private key pair creation. There are many more instances where these files come into play. So how do we go about them?

Linux implements a purely algorithmic random number generator, accessible as /dev/urandom. Its results are good enough for most purposes, but there are times when true randomness is needed. To that end, the kernel attempts to harvest randomness (called "entropy") from its environment. The timing between keystrokes exhibits some randomness. The same is true of, for example, the timing of disk interrupts. The lower bits of the system time stamp counter can also provide a bit of entropy. The kernel collects this entropy into a special pool of bits, and uses this entropy pool when true random numbers (obtained from /dev/random) are required. The amount of accumulated entropy is also tracked; if there is insufficient entropy in the pool to satisfy a random number request, the requesting process will block until the needed entropy arrives.

When we generate random numbers we should do some intensive work to fill the entropy pool: do some disk I/O, move the mouse, punch some keystrokes, and so on.
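You can watch the pool directly: on Linux the kernel exposes its entropy bookkeeping under /proc. A quick sketch:

```shell
# Current entropy estimate (in bits) and the pool size; polling the
# first value while typing or doing disk I/O shows the pool filling up.
cat /proc/sys/kernel/random/entropy_avail 2>/dev/null || echo "n/a"
cat /proc/sys/kernel/random/poolsize 2>/dev/null || echo "n/a"
```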

A first run, with /dev/urandom:

bhaskar@bhaskar-laptop_07:32:47_Wed Sep 15:~> sudo dd if=/dev/urandom of=/tmp/uran
654082+0 records in
654081+0 records out
334889472 bytes (335 MB) copied, 82.0738 s, 4.1 MB/s

And a second, longer run with /dev/urandom:

bhaskar@bhaskar-laptop_09:39:45_Wed Sep 15:~> sudo dd if=/dev/urandom of=/tmp/ran
1377600+0 records in
1377600+0 records out
705331200 bytes (705 MB) copied, 173.111 s, 4.1 MB/s

In both cases the run lasted a couple of minutes, during which I did a lot of disk-intensive work to keep feeding the pool. Now a few facts about those character files:

/dev/random blocks when the entropy pool is exhausted, whereas /dev/urandom draws from the entropy pool until it is depleted and then essentially falls back to a pseudo-random number generator. The entropy pool is preserved between boots in the file /var/lib/random-seed, which is managed by the random init script on RHEL systems.
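If you want to repeat the experiment without filling a disk, bound the copy with count. A sketch, writing 4 KB under /tmp (the filename is arbitrary):

```shell
# Read exactly 8 blocks of 512 bytes (4096 bytes) from /dev/urandom.
# Substituting if=/dev/random can block on older kernels until the
# entropy pool refills, which is exactly the difference described above.
dd if=/dev/urandom of=/tmp/urandom-sample bs=512 count=8 2>/dev/null
wc -c /tmp/urandom-sample
```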

Now it’s time to look into /dev/zero and /dev/null,so here we go:

Writing to the two files is equivalent: both send your output to a black hole. Either will do if you just want to "dump" output to "nowhere." Both should be character (raw) devices, with identical major device numbers, differing only in the minor device number. These numbers vary from OS to OS, but the basic definitions above hold relatively true.

Reading from /dev/null and /dev/zero: This is where the difference between the two files becomes apparent. The most significant difference is exposed in the “reading” since this action highlights the major way in which the two differ.

/dev/null is, essentially, a black hole. Writes to it (as noted above) basically go down the drain: they go nowhere and you can't get them back. When you "read" from /dev/null, the same rule holds true. /dev/null is virtually "nothing," and all reads from it produce no output whatsoever. For instance, Linux's strace shows what happens when /dev/null is read from (e.g. "cat /dev/null"). Below is what you'd see at the command line, followed by the strace output, which ends almost immediately:

bhaskar@bhaskar-laptop_09:54:39_Wed Sep 15:~> sudo cat /dev/null

bhaskar@bhaskar-laptop_11:49:10_Wed Sep 15:~> sudo strace cat /dev/null
execve("/bin/cat", ["cat", "/dev/null"], [/* 16 vars */]) = 0
brk(0)                                  = 0x8269000
mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7816000
access("/etc/", R_OK)      = -1 ENOENT (No such file or directory)
open("/etc/", O_RDONLY)      = 3
fstat64(3, {st_mode=S_IFREG|0644, st_size=117500, ...}) = 0
mmap2(NULL, 117500, PROT_READ, MAP_PRIVATE, 3, 0) = 0xb77f9000
close(3)                                = 0
open("/lib/", O_RDONLY)        = 3
read(3, "\177ELF\1\1\1\3\3\1\320m\1004"..., 512) = 512
fstat64(3, {st_mode=S_IFREG|0755, st_size=1347988, ...}) = 0
mmap2(NULL, 1354184, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0xb76ae000
mmap2(0xb77f3000, 12288, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x145) = 0xb77f3000
mmap2(0xb77f6000, 10696, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0xb77f6000
close(3)                                = 0
mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb76ad000
set_thread_area({entry_number:-1 -> 6, base_addr:0xb76ad6c0, limit:1048575, seg_32bit:1, contents:0, read_exec_only:0, limit_in_pages:1, seg_not_present:0, useable:1}) = 0
mprotect(0xb77f3000, 8192, PROT_READ)   = 0
mprotect(0xb7834000, 4096, PROT_READ)   = 0
munmap(0xb77f9000, 117500)              = 0
brk(0)                                  = 0x8269000
brk(0x828a000)                          = 0x828a000
open("/usr/lib/locale/locale-archive", O_RDONLY|O_LARGEFILE) = 3
fstat64(3, {st_mode=S_IFREG|0644, st_size=1779408, ...}) = 0
mmap2(NULL, 1779408, PROT_READ, MAP_PRIVATE, 3, 0) = 0xb74fa000
close(3)                                = 0
fstat64(1, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 5), ...}) = 0
open("/dev/null", O_RDONLY|O_LARGEFILE) = 3
fstat64(3, {st_mode=S_IFCHR|0666, st_rdev=makedev(1, 3), ...}) = 0
read(3, "", 32768)                      = 0
close(3)                                = 0
close(1)                                = 0
close(2)                                = 0
exit_group(0)                           = ?

/dev/zero, on the other hand, is not the black hole it appears to be when writing to it. When you read from /dev/zero, you get a much different result than when you read from /dev/null. This is most specifically because /dev/zero returns zeros until the cows come home (or until you stop reading from it 😉 ) and does not return EOF the way /dev/null does. It actually returns the ASCII NUL character (0x00) ad infinitum.
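You can see those NUL bytes without reaching for strace; hex-dumping a few bytes makes the point. A minimal sketch:

```shell
# Hex-dump 8 bytes from /dev/zero: every byte is 0x00.
head -c 8 /dev/zero | od -An -tx1
# /dev/null, by contrast, yields immediate EOF, so the same pipeline
# prints nothing at all.
head -c 8 /dev/null | od -An -tx1
```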

bhaskar@bhaskar-laptop_11:54:34_Wed Sep 15:~> sudo strace cat /dev/zero
execve("/bin/cat", ["cat", "/dev/zero"], [/* 16 vars */]) = 0
brk(0)                                  = 0x8f0b000
mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb782b000
access("/etc/", R_OK)      = -1 ENOENT (No such file or directory)
open("/etc/", O_RDONLY)      = 3
fstat64(3, {st_mode=S_IFREG|0644, st_size=117500, ...}) = 0
mmap2(NULL, 117500, PROT_READ, MAP_PRIVATE, 3, 0) = 0xb780e000
close(3)                                = 0
open("/lib/", O_RDONLY)        = 3
read(3, "\177ELF\1\1\1\3\3\1\320m\1004"..., 512) = 512
fstat64(3, {st_mode=S_IFREG|0755, st_size=1347988, ...}) = 0
mmap2(NULL, 1354184, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0xb76c3000
mmap2(0xb7808000, 12288, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x145) = 0xb7808000
mmap2(0xb780b000, 10696, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0xb780b000
close(3)                                = 0
mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb76c2000
set_thread_area({entry_number:-1 -> 6, base_addr:0xb76c26c0, limit:1048575, seg_32bit:1, contents:0, read_exec_only:0, limit_in_pages:1, seg_not_present:0, useable:1}) = 0
mprotect(0xb7808000, 8192, PROT_READ)   = 0
mprotect(0xb7849000, 4096, PROT_READ)   = 0
munmap(0xb780e000, 117500)              = 0
brk(0)                                  = 0x8f0b000
brk(0x8f2c000)                          = 0x8f2c000
open("/usr/lib/locale/locale-archive", O_RDONLY|O_LARGEFILE) = 3
fstat64(3, {st_mode=S_IFREG|0644, st_size=1779408, ...}) = 0
mmap2(NULL, 1779408, PROT_READ, MAP_PRIVATE, 3, 0) = 0xb750f000
close(3)                                = 0
fstat64(1, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 5), ...}) = 0
open("/dev/zero", O_RDONLY|O_LARGEFILE) = 3
fstat64(3, {st_mode=S_IFCHR|0666, st_rdev=makedev(1, 5), ...}) = 0
read(3, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 32768) = 32768
write(1, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 32768) = 32768
read(3, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 32768) = 32768
write(1, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 32768) = 32768
read(3, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 32768) = 32768
write(1, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 32768) = 32768
read(3, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 32768) = 32768
write(1, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 32768) = 32768
read(3, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 32768) = 32768

…output snipped.

With /dev/null you can create a zero-byte file like this:

bhaskar@bhaskar-laptop_11:58:18_Wed Sep 15:~> sudo cat /dev/null > nullfile

bhaskar@bhaskar-laptop_11:59:28_Wed Sep 15:~> ls -al nullfile
-rw-r--r-- 1 bhaskar users 0 Sep 15 11:59 nullfile
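The same zero-byte file can be had without cat at all; redirection alone creates or truncates the file. A small sketch:

```shell
# ':' is the shell's no-op builtin; redirecting its (empty) output
# creates the file if missing, or truncates it to zero bytes.
: > /tmp/nullfile
wc -c < /tmp/nullfile
```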

Basically, people use /dev/zero to fill disk space with zeros. Suppose you want to create a filesystem on a specific partition and want to erase everything it currently holds: you can simply dd from /dev/zero onto that partition, so it gets filled with zeros, and later make the filesystem on it. Like below (sdXn is a placeholder; substitute the real target partition, and triple-check it first):

bhaskar@bhaskar-laptop_11:59:31_Wed Sep 15:~> sudo dd if=/dev/zero of=/dev/sdXn

The partition is then zero-filled and raw; if you wish you can make a filesystem on it to hold data. Please don't try this on an important partition on your box.

Below I have enlisted some URLs for your further understanding; this article was influenced by them.






Hope this will help.



LDAP: The tool to manage enterprise infrastructure

I am provoked, or compelled, or whatever you want to call it, to write about this topic by an article that spurred my interest (here) and many more on the internet by some brilliant guys, and it reminds me of dealing with LDAP on one of my job assignments. Without doubt it is a complex topic to deal with (at least my bent of mind says so; YMMV). But having said that, any administrator handling a large network infrastructure in a corporation absolutely must be well aware of this protocol and its uses.

I do not issue any guarantee that this will work for you.

I am assuming that readers are aware of this protocol; if not, please look at the OpenLDAP website to get an idea of it. OpenLDAP is an open source suite of software that includes the LDAP server daemon (slapd), a replication daemon (slurpd) and a small collection of command line client tools, like ldapsearch and ldapadd, among others. In this article, we'll set up and populate a small but functional LDAP server using the slapd daemon, and start to make use of it with a Linux client.

Definition and Components

LDAP stands for Lightweight Directory Access Protocol, which is to say that, by definition, LDAP is a protocol, and nothing else. However, the protocol exists to perform operations on data, and is really pretty useless without it. This brings up the components that make up an LDAP deployment: client software used to send LDAP requests, the server daemon that handles incoming LDAP requests, and the back-end data store. I will refer to the last two collectively as a “directory service.”

Back-end Data Storage

Of these components, the back-end data storage mechanism is the least relevant to you unless you’re administering a production LDAP deployment. Developers writing code that accesses an LDAP server and end users who access a directory service via some client utility should be happy to let the protocol do the job of getting data to them without knowing anything about the back end. Adding, removing, updating, deleting, and fetching data from a directory service occurs through the LDAP protocol.

Now there are a few variants of the LDAP directory server, such as:

a) OpenLDAP

b) FDS (Fedora Directory Server)

c) Novell's eDirectory

d) Sun Java System Directory Server (formerly iPlanet)

So we will stick with OpenLDAP in this article.

What Is LDAP Used For?

An LDAP directory service stores information for use by systems as well as end users (and their various applications). Probably the most common use of LDAP is for replacing either flat-file authentication (think /etc/passwd) or legacy networked authentication (think NIS). The benefit of any networked authentication mechanism over a flat file system is clearly that it lifts the burden of having to keep files on all of your systems in sync. The benefit of LDAP over, say, NIS is (among other things) a finer-grained control over the data and how it is accessed (and by whom). You can also make encrypted connections to LDAP servers using TLS or SSL, and you never have to muck with flat file “maps” or complicated Makefiles to change the data.

Because LDAP is a transaction-based system, operations that complete successfully are immediately “live.” Modern Unix-based systems (including Linux, BSD, and OS X) can rely on LDAP to get just about any information they could store in flat files or NIS, including hosts, automounter configuration, users, groups, and more. Add to that the ability to have Samba, Apache, PAM, tcpwrappers, Sendmail, and other applications talk to LDAP for authentication, aliases, and other tidbits of useful information, and you have the beginnings of a very well-integrated, easily maintained, authoritative data source for your entire infrastructure.

LDAP is also popular for use as a "white pages" directory for a department or corporation. For example, most email applications, from Mutt and Pine to Outlook, Evolution, and KMail, know how to talk to an LDAP server. This makes it very easy to, for example, tell KMail to autocomplete addresses as you type using an LDAP directory as its address book source instead of (or in addition to) local files. Thunderbird, too, supports the protocol.

A Closer Look at LDAP Data

It’s extremely important when learning about LDAP and how it deals with data to separate the structure (or topology) of the data from the definitions of the objects themselves.

Simply, the structure of LDAP data is a hierarchical collection of objects. Objects can represent anything from people to printers and take their places within the hierarchy using whatever logic you like.


Yes, objects. Each object has a list of attributes associated with it that describe that particular object. When you add or delete an object, make a request for an object, or change the value of an object’s attribute, you do so solely using the LDAP protocol. In short, LDAP exists to manipulate or fetch data about objects.


The layout of the data in an LDAP directory is the Directory Information Tree (DIT). You can customize it to the needs of your organization, but it’s still a hierarchical tree structure. This tree is not dissimilar to a typical filesystem; there’s a “top” or “root” directory, under which are high-level objects (directories in a filesystem). Those help you to categorize the lower level objects that you’re really interested in (in a filesystem, these are the files themselves).

Suppose you want to store information about people using a hierarchical collection of objects. Viewing things as a filesystem, you could create a /People directory, and under that, create a file, /People/whatEverYouWant. That file contains attribute name and value pairs that describe "steve." One attribute might be "firstname," with a value of "Steve." Save the file, and create a new one for each person. Eventually, you have a filesystem with one such file per person.


We might equally create the hierarchy department-wise, with a subtree per department.


LDAP data are represented in LDIF (the LDAP Data Interchange Format).
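A hypothetical LDIF entry for the "steve" example above, placed under a People organizational unit; the dc=example,dc=com suffix and all attribute values here are made up for illustration:

```ldif
dn: uid=steve,ou=People,dc=example,dc=com
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: inetOrgPerson
uid: steve
givenName: Steve
sn: Smith
cn: Steve Smith
mail: steve@example.com
roomNumber: 42
```

Such a file could later be loaded with ldapadd once the server is running.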

What Are Objectclasses?

Objectclasses are prototypes for entries that will actually exist in your directory server. The objectclass definition (which uses ASN.1 syntax) specifies which attributes may or must be used by LDAP entries declared as instances of a particular objectclass.

Get it? Let me explain it backward, in the way that most people get into LDAP: you want to store information about people. The most common attributes associated with people are:

* First name
* Last name
* Email address
* Phone numbers
* Room numbers

These attributes are great for setting up an office whitepages server that users can refer to for information about people in their office. The key now is finding out which objectclass definitions either require or allow for the use of these attributes. When I started with LDAP, I researched this by perusing the actual schema files that come with most (if not all) directory servers. These files are human-readable.

Object Class Definitions

Here’s the definition of the inetOrgPerson objectclass, which is a good place to start:

objectclass ( 2.16.840.1.113730.3.2.2
    NAME 'inetOrgPerson'
    DESC 'RFC2798: Internet Organizational Person'
    SUP organizationalPerson
    STRUCTURAL
    MAY (
        audio $ businessCategory $ carLicense $ departmentNumber $
        displayName $ employeeNumber $ employeeType $ givenName $
        homePhone $ homePostalAddress $ initials $ jpegPhoto $
        labeledURI $ mail $ manager $ mobile $ o $ pager $
        photo $ roomNumber $ secretary $ uid $ userCertificate $
        x500uniqueIdentifier $ preferredLanguage $
        userSMIMECertificate $ userPKCS12 )
    )

The first line states that what follows is an objectclass definition, as opposed to an attributetype definition. The long number is the ASN.1 number assigned to the objectclass. If you create your own objectclasses, this number is significant; it’s where you use your organization’s IANA Enterprise Number to identify any objectclasses that you create.

The NAME line should be self explanatory. It is the name that will appear in your users’ entries to state that the user is of type inetOrgPerson. This line gives you license to use any of the attributes in the objectclass definition to describe the user.

The DESC line is usually a useful description that can help you use this object in a way appropriate to the intent of the definer. You don’t want to use objectclasses in a completely unorthodox way, because when you reach out to others for help, they’ll find themselves asking you more questions than you ask them, which is often a sign that you’ve gone off in the wrong direction.

The SUP line is critical, and the theory is tough to describe without getting pretty verbose. SUP is short for SUPERIOR, and it names another objectclass from which this objectclass inherits. In this case, the superior or parent objectclass is organizationalPerson. The organizationalPerson class inherits from the person objectclass, which inherits from an objectclass called top. If an objectclass has no other superiors, it is always a child of the top objectclass.

It’s an inheritance chain. You need to understand it, because some LDAP servers strictly enforce it, and if you violate it in the creation of your entries, the directory server will unceremoniously spit them back at you.

The MAY line is actually a block. That block (between parentheses) contains a list, delimited with the $ symbol, of all of the attributes that MAY be used to describe an object declared of the type inetOrgPerson.

OK, enough internals. Let's go ahead and install and deploy the thing. For the more curious reader, I will provide the links that influenced this article in the resource section at the end.

Installing OpenLDAP:

You can download OpenLDAP from the OpenLDAP website. While it is certainly possible to obtain precompiled binary distributions of OpenLDAP in RPM, deb, and other package formats, these tend to be somewhat older releases. There are many useful customizations you can make during an OpenLDAP compile, and I’ve never had much trouble compiling OpenLDAP from source, so this is the method I’m advocating.

This is not to say that there are absolutely no dependencies to satisfy. There are two major dependencies, both of which are very easy to handle:

Berkeley DB

The OpenLDAP team strongly recommends using Sleepycat Software’s Berkeley DB as the data storage mechanism for an OpenLDAP deployment. As we mentioned in Part One of the series, LDAP is not a database, but a protocol for accessing and managing data. But the data has to live somewhere, and Berkeley DB is easy to deal with, even for newbie admins. If you’re among those who have nightmares about databases, take heart in knowing that OpenLDAP does a superb job at hiding the fact that you’re even dealing with one. Download the Berkeley DB source from the user-friendly Sleepycat download page. For my test build, I used Berkeley DB 4.1.25 without strong encryption support.

Building Berkeley DB couldn’t be easier. Unpack the tarball, cd to the build_unix directory, and type ../dist/configure, followed by make and make install (the last as root). This will create a directory called /usr/local/BerkeleyDB.4.1, which contains all of the pertinent parts we need for our OpenLDAP installation.
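Spelled out as commands, that build recipe looks like this; the tarball name matches the 4.1.25 release used in the text, so adjust it for whatever you downloaded:

```
tar xzf db-4.1.25.tar.gz
cd db-4.1.25/build_unix
../dist/configure
make
sudo make install   # everything lands under /usr/local/BerkeleyDB.4.1
```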


OpenSSL

If you're using Red Hat, Fedora, Gentoo, Arch, Debian or any number of other recent distributions, OpenSSL is probably already installed. If it isn't, and you wish to enable secure connections to your LDAP server, you need to install it. Luckily, this is a breeze. Grab a source tarball from the OpenSSL download page. Untar it, cd to the resulting directory, and run the standard configure and make commands. I also recommend that you run make test, and then (as root, of course) make install. This puts everything you need in the /usr/local/ssl directory by default.

Anyway, I have created a user and group that will keep the LDAP service running. Here they are:

bhaskar@bhaskar-laptop_14:51:17_Mon Sep 13:~> id ldap
uid=439(ldap) gid=439(ldap) groups=439(ldap)

Now we need to investigate the slapd.conf file; this is the file that drives slapd. So here we go:

Here’s a quick and dirty slapd.conf that gets the daemon up and running and allows an administrative user to manipulate data:

include /etc/openldap/schema/core.schema
include /etc/openldap/schema/cosine.schema
include /etc/openldap/schema/inetorgperson.schema
include /etc/openldap/schema/nis.schema

Schema files define objects and attributes. When the slapd daemon starts, it includes whichever schema files we tell it to here, and that determines the types of objects and attributes supported by that slapd daemon. So, for example, if we did not include the nis.schema file, we would not be able to add typical Unix accounts using only the other schema files we’ve included. Schema files are human-readable, and you could even create your own schema files if you needed some wacky object-types that aren’t already defined.

allow bind_v2
pidfile /var/run/

In newer versions of OpenLDAP, only LDAPv3 binds are allowed by default, which has caused many a mailing-list crisis, since some applications don't support making a version 3 bind to a directory server. In case we come across any in our travels, we have allowed LDAPv2 binds for our proof of concept, as the allow bind_v2 directive above shows.

database bdb
suffix “dc=bhaskar-laptop,dc=localdomain”
rootdn “cn=Manager,dc=bhaskar-laptop,dc=localdomain”
rootpw secret          # paste here the hash generated earlier by slappasswd
directory /var/lib/ldap

Our database backend is the Berkeley database, which OpenLDAP knows as “bdb.” The slapd.conf man page can tell you other possible values for the database directive. Our suffix uses what’s known as the “domain component” model. This model just takes the parts of a domain and references each part of the domain name as a separate domain component (dc). We’ll talk more about this in a future article.

The rootdn and rootpw values define the administrative username and password for performing operations on the directory or its data that require administrative privileges. The username is defined using a common name (cn), and the object entry for that user is stored directly under our top-level entry, hence the trailing domain components. The password is generated using the slappasswd command, which simply prompts you for a password and produces output that can be cut and pasted into the slapd.conf file, as I've done here.

directory tells the slapd daemon where to store the data files for this particular database definition. There can be several database sections in a slapd.conf file. Here, we’re telling slapd to use its home directory as its data storage directory, which is why the ldap user must be able to write there.

index objectClass eq,pres
index ou,cn,mail,surname,givenname eq,pres,sub
index uidNumber,gidNumber,loginShell eq,pres
index uid,memberUid eq,pres,sub

Defining indexes at this early stage won't make a great deal of difference. However, as the directory grows and more demands are placed on it, indexes can mean the difference between users not even noticing that things once handled by, say, NIS are now handled by something else, and a completely unusable directory server.

Now it all boils down to starting the slapd daemon, like below:

/etc/init.d/slapd start     # Gentoo, Debian and most others
/etc/rc.d/slapd start       # Arch and variants
service slapd start         # Fedora and Red Hat variants

It still depends on the OS you sit on, but the intention is the same.
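Whichever way slapd was started, an anonymous base search against the suffix configured earlier is the quickest smoke test; the -x forces a simple bind rather than SASL:

```
ldapsearch -x -b "dc=bhaskar-laptop,dc=localdomain" -s base
```

If the daemon is up and the suffix matches slapd.conf, this returns the base entry (or a clean "no such object" before any data has been loaded) instead of a connection error.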




c) OpenLDAP Install

d) OpenLDAP Install - II

e) OpenLDAP Administration

Hope this will help.


GnuPG : A wonderful security tool

Talking about security is a fad, and I don't like the fellows who hype it. A system is only as secure as one makes it, and it requires constant vigilance to maintain that level. In this article I am going to share my own experience with GnuPG in a precise and compact manner. I will not be able to reveal every detail about it; as you can understand, that is not desirable in this matter.

What is GnuPG? In one phrase, it is the GNU implementation of the OpenPGP standard, which descends from PGP, written by Phil Zimmermann way back in 1991. You may well encounter it every day if you work for a corporation, because the devices (laptops specifically) they provide are often PGP-encrypted, and every time you work on those devices you have to enter a passphrase to unlock them; data is valuable to business corporates. If you are not satisfied with the information I just gave, you can look here for more detailed information.

I shall show you the implementation of GnuPG on Gentoo, but it can be used on any distribution. If the native OS repositories don’t have it, then get the tarball and install it as you would for any other software.

Ok, here we go. First we need to install the software on the system. On Gentoo it is in the repositories, so I will emerge it first:

bhaskar@bhaskar-laptop_06:55:03_Fri Sep 03:~> sudo emerge -av gnupg

These are the packages that would be merged, in order:

Calculating dependencies… done!
[ebuild   R   ] app-crypt/gnupg-2.0.16-r1  USE="bzip2 ldap nls -adns -caps -doc -openct -pcsc-lite (-selinux) -smartcard -static" 0 kB

Total: 1 package (1 reinstall), Size of downloads: 0 kB

Would you like to merge these packages? [Yes/No]

See, I have it already, so what’s the point of getting it once more? I chose No at this prompt. If you don’t have it, you might say yes.

A little bit more info on when it got into my system. Let’s check:

bhaskar@bhaskar-laptop_07:15:24_Fri Sep 03:~> sudo genlop -t gnupg
* app-crypt/gnupg

Mon Jan  4 17:04:45 2010 >>> app-crypt/gnupg-2.0.11
merge time: 1 minute and 57 seconds.

Thu Feb  4 21:40:53 2010 >>> app-crypt/gnupg-2.0.14
merge time: 7 minutes and 14 seconds.

Wed Jun 30 13:15:26 2010 >>> app-crypt/gnupg-2.0.15
merge time: 2 minutes and 12 seconds.

Thu Aug 19 08:08:52 2010 >>> app-crypt/gnupg-2.0.16-r1
merge time: 2 minutes and 9 seconds.

So many entries because of updates. Now let’s move on and find the files installed on the system by this package:

bhaskar@bhaskar-laptop_07:18:16_Fri Sep 03:~> sudo qlist -a gnupg

We need to generate the keys to operate. Create them like this:

bhaskar@bhaskar-laptop_07:18:29_Fri Sep 03:~> gpg --gen-key

The above command will generate the keys, and it will ask you for a passphrase, which will be glued to them. A new key pair is created (key pair: secret and public key). The first question is which algorithm to use. The next question is the key length. This is very user dependent: you need to choose between security and computation time. If a key is longer, the risk of the message being cracked when intercepted decreases, but with a larger key the calculation time also increases. If computing time is an issue, you should still consider that you want to use the key for some time. We all know that arithmetic performance increases very quickly, since new processors keep getting faster, so keep this in mind. The minimal key length GnuPG demands is 768 bits. However, some people say you should have a key size of at least 2048 bits (which is also really the maximum with GnuPG at this moment). For DSA, 1024 bits is the standard size. When security is a top priority and performance is less of an issue, you ought to pick the largest key size available.
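The prompts described above belong to the interactive flow of GnuPG 2.0. On a modern GnuPG (2.1 or later) the same key generation can be scripted non-interactively; the sketch below assumes that newer syntax, uses a throwaway keyring directory, and a hypothetical demo identity. The empty passphrase is for demonstration only, never for a real key:

```shell
#!/bin/sh
# Skip quietly if GnuPG is not installed.
command -v gpg >/dev/null 2>&1 || exit 0

# Use a throwaway keyring directory so ~/.gnupg stays untouched.
export GNUPGHOME="$(mktemp -d)"
chmod 700 "$GNUPGHOME"

# Non-interactive key generation (GnuPG 2.1+ syntax). The empty
# passphrase is only for this demo; always set one in practice.
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Test User <test@example.com>" default default never

# The new key pair should now show up in the listing.
gpg --list-keys > keys.txt
cat keys.txt
```

The `default default never` arguments pick the default algorithm and usage with no expiry; drop `never` if you prefer an expiring key.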

The system now asks you to enter a name, a comment and an e-mail address. The user ID of the key is built from these entries. You can change these settings later.

Finally you have to enter a password (actually, passphrase would be more appropriate, since blanks are allowed). This passphrase is used to unlock the functionality that belongs to your secret key. A good passphrase contains the following elements:

· it is long,

· it has special (non-alphanumeric) characters,

· it is something special (not a name),

· it is very hard to guess (so NOT names, birth dates, phone numbers, number of a credit card/checking account, names and number of children, …)

During key generation you should cause some disk activity or keep the system busy with other work, to feed more randomness into the key. But you SHOULD NOT FORGET YOUR PASSPHRASE; if you do, all of this will be meaningless.

So I have my public key and I am going to reveal it, but you must never disclose the private key it generates. Here we go:

bhaskar@bhaskar-laptop_07:18:29_Fri Sep 03:~> gpg --list-keys
pub   1024D/BC367BF7 2009-11-29
uid                  Bhaskar Chowdhury <>
sub   1024g/489147E2 2009-11-29

So you can see my key details; by default GnuPG stores the keyrings under the ~/.gnupg directory. Right.

Exporting keys

Now the time has come to broaden your horizon by exporting the key to a public key server. How do you do that? Follow this:

bhaskar@bhaskar-laptop_07:28:00_Fri Sep 03:~> gpg --export "Bhaskar Chowdhury"

That’s all you need to do to distribute your public key. Now I will show you the credentials of my public key on one of the key servers, like below:

Please insert the user ID you created earlier, as I am going to enter mine now on the site where I have uploaded my public key. Look below for my public key:

Now if you click on the download button you can get my public key. So if I sign something with my private key and you have my public key, you can verify it.

Let’s go back to basics by showing some tricks:

bhaskar@bhaskar-laptop_07:59:18_Fri Sep 03:~> gpg --list-public-keys
pub   1024D/BC367BF7 2009-11-29
uid                  Bhaskar Chowdhury <>
sub   1024g/489147E2 2009-11-29

Importing keys

When you receive someone’s public key (or several public keys) you have to add them to your key database in order to use them. To import into the database, the command looks like this:

bhaskar@bhaskar-laptop_08:21:23_Fri Sep 03:~> gpg --import othersPublicKeys

If you don’t mention the file name, it will read from stdin.
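Here is a sketch of the whole export/import round trip, again assuming modern GnuPG (2.1+) syntax, hypothetical identities and throwaway keyrings:

```shell
#!/bin/sh
# Skip quietly if GnuPG is not installed.
command -v gpg >/dev/null 2>&1 || exit 0

# Two throwaway keyrings: "Alice" exports her key, "Bob" imports it.
ALICE="$(mktemp -d)"; BOB="$(mktemp -d)"
chmod 700 "$ALICE" "$BOB"

# Hypothetical demo identity; empty passphrase for demonstration only.
GNUPGHOME="$ALICE" gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Alice <alice@example.com>" default default never

# Export the public key in ASCII-armored form to a file...
GNUPGHOME="$ALICE" gpg --armor --export alice@example.com > alice.asc

# ...and import it on the other side (with no file name,
# --import reads from stdin, as noted above).
GNUPGHOME="$BOB" gpg --import alice.asc
GNUPGHOME="$BOB" gpg --list-keys > bob-keys.txt
```

The `--armor` flag makes the export printable text instead of binary, which is what you want for mailing a key or pasting it into a key server form.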

Revoke a key

bhaskar@bhaskar-laptop_08:21:23_Fri Sep 03:~> gpg --gen-revoke BC367BF7

For several reasons you may want to revoke an existing key: for instance, the secret key has been stolen or become available to the wrong people, the UID has been changed, the key is not large enough anymore, etc. But there is a catch: if I do not know the passphrase, the key has become useless, yet I cannot revoke it! To overcome this problem it is wise to create a revocation certificate when you create a key pair. And if you do so, keep it safe! It can be on disk, paper, etc. Make sure that this certificate does not fall into the wrong hands, because someone else could issue the revocation certificate for your key and make it useless.

Key signing

This is about the authenticity of public keys. If you have a wrong public key, you can say bye-bye to the value of your encryption. To overcome such risks there is the possibility of signing keys: you place your signature over the key, stating that you are absolutely positive this key is valid. The signature acknowledges that the user ID mentioned in the key is actually the owner of that key. With that reassurance you can start encrypting.

Using the gpg --edit-key UID command for the key that needs to be signed, you can sign it with the sign sub-command.

You should only sign a key as being authentic when you are ABSOLUTELY SURE that the key is really authentic!!! So sign it only if you got the key yourself (for example at a key-signing party), or if you got it through other means and verified it (for instance by phone) using the fingerprint mechanism. You should never sign a key based on assumption.
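On a modern GnuPG (2.1+) the interactive sign step of --edit-key can also be done unattended with --quick-sign-key. Below is a sketch under the same assumptions as before: hypothetical identities, throwaway keyrings, empty demo passphrases.

```shell
#!/bin/sh
# Skip quietly if GnuPG is not installed.
command -v gpg >/dev/null 2>&1 || exit 0

ALICE="$(mktemp -d)"; BOB="$(mktemp -d)"
chmod 700 "$ALICE" "$BOB"

# Two hypothetical demo identities (empty passphrases, demo only).
GNUPGHOME="$ALICE" gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Alice <alice@example.com>" default default never
GNUPGHOME="$BOB" gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Bob <bob@example.com>" default default never

# Bob imports Alice's public key...
GNUPGHOME="$ALICE" gpg --armor --export alice@example.com > alice.asc
GNUPGHOME="$BOB" gpg --import alice.asc

# ...looks up its fingerprint (in real life: verify it by phone or
# in person first!) and signs it with his own key.
FPR="$(GNUPGHOME="$BOB" gpg --with-colons --list-keys alice@example.com \
       | awk -F: '/^fpr/ { print $10; exit }')"
GNUPGHOME="$BOB" gpg --batch --yes --pinentry-mode loopback --passphrase '' \
    --quick-sign-key "$FPR"
GNUPGHOME="$BOB" gpg --list-sigs alice@example.com > sigs.txt
```

Signing by full fingerprint rather than by name is deliberate: it is the only identifier you actually verified.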

Based on the available signatures and “ownertrusts” GnuPG determines the validity of keys. Ownertrust is a value that the owner of a key uses to determine the level of trust for a certain key. The values are

· 1 = Don’t know

· 2 = I do NOT trust

· 3 = I trust marginally

· 4 = I trust fully

If the user does not trust a signature it can say so and thus disregard the signature. Trust information is not stored in the same file as the keys, but in a separate file.

How do you see the signatures on a created key? One should sign one’s own key at creation time to keep it sane (GnuPG adds this self-signature for you). Here are my key’s signatures:

bhaskar@bhaskar-laptop_08:21:23_Fri Sep 03:~> gpg --list-sigs
pub   1024D/BC367BF7 2009-11-29
uid                  Bhaskar Chowdhury <>
sig 3        BC367BF7 2009-11-29  Bhaskar Chowdhury <>
sub   1024g/489147E2 2009-11-29
sig          BC367BF7 2009-11-29  Bhaskar Chowdhury <>

Encryption and Decryption through GnuPG:

Say you want to encrypt a file; how do you do that? Here is the way to do it.

Let me create a file called testpg:

bhaskar@bhaskar-laptop_08:37:12_Fri Sep 03:~> echo "This a test for GnuPG" >> testpg

Encrypt it by gpg:
bhaskar@bhaskar-laptop_08:38:06_Fri Sep 03:~> gpg --encrypt testpg
You did not specify a user ID. (you may use "-r")

Current recipients:

Enter the user ID.  End with an empty line: Bhaskar Chowdhury

Current recipients:
1024g/489147E2 2009-11-29 "Bhaskar Chowdhury <>"

Enter the user ID.  End with an empty line:

So I did it with the "--encrypt" option, and the created file is saved with a ".gpg" extension. Look below:
bhaskar@bhaskar-laptop_08:38:28_Fri Sep 03:~> ls
303706970325_7.pdf  Documents  RealPlayer  febe googleearth  lsap_tux2.png  sys_info    thunderbird-error A_Ducks_Claw  Downloads  SiteDelta   ff_database_optimize  gtop-www.jpg    ls   puppet-dashboard  testpg      Desktop  Gmail  calibre google-earth  kernel_map_files  lsap_tux.png  start-here.jpg    testpg.gpg

So that file is encrypted. Now how do you decrypt it? Here is the ordinary way of doing it:

bhaskar@bhaskar-laptop_08:38:30_Fri Sep 03:~> gpg --decrypt testpg.gpg

You need a passphrase to unlock the secret key for
user: "Bhaskar Chowdhury <>"
1024-bit ELG key, ID 489147E2, created 2009-11-29 (main key ID BC367BF7)

gpg: encrypted with 1024-bit ELG key, ID 489147E2, created 2009-11-29
"Bhaskar Chowdhury <>"
This a test for GnuPG

It asked me to enter my passphrase to decrypt; I did, and the file content was displayed!!! Cool, right?
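The whole encrypt/decrypt round trip above can also be scripted non-interactively. A sketch assuming modern GnuPG (2.1+), a hypothetical demo key and, once more, an empty demo passphrase:

```shell
#!/bin/sh
# Skip quietly if GnuPG is not installed.
command -v gpg >/dev/null 2>&1 || exit 0

# Throwaway keyring and a hypothetical demo key (demo passphrase only).
export GNUPGHOME="$(mktemp -d)"
chmod 700 "$GNUPGHOME"
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Test User <test@example.com>" default default never

echo "This a test for GnuPG" > testpg

# -r names the recipient up front, avoiding the interactive prompt
# shown in the transcript above; this creates testpg.gpg.
gpg --batch --yes -r test@example.com --encrypt testpg

# Decrypt testpg.gpg back out to a file.
gpg --batch --pinentry-mode loopback --passphrase '' \
    --output decrypted.txt --decrypt testpg.gpg
```

Encrypting to your own key works without a trust prompt because a freshly generated key carries ultimate trust in its own keyring.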

GnuPG with Thunderbird:

I have been using GnuPG with Thunderbird for quite some time now. What you have to do is get an add-on called Enigmail from the Mozilla site and install it, which essentially provides the integration with this mail client. I have it from the Gentoo repository:

bhaskar@bhaskar-laptop_09:02:51_Fri Sep 03:~> sudo genlop -t enigmail
* x11-plugins/enigmail

Sat Nov 28 14:04:46 2009 >>> x11-plugins/enigmail-0.95.7-r5
merge time: 12 minutes and 16 seconds.

Sat Nov 28 17:00:14 2009 >>> x11-plugins/enigmail-0.95.7-r5
merge time: 3 minutes and 53 seconds.

Mon Mar  8 11:31:41 2010 >>> x11-plugins/enigmail-1.0.1-r1
merge time: 26 minutes and 57 seconds.

Fri Apr 16 14:08:07 2010 >>> x11-plugins/enigmail-1.0.1-r3
merge time: 6 minutes and 7 seconds.

Tue Aug  3 17:36:32 2010 >>> x11-plugins/enigmail-1.1.2-r1
merge time: 18 minutes and 41 seconds.

Sun Aug  8 08:33:28 2010 >>> x11-plugins/enigmail-1.1.2-r1
merge time: 6 minutes and 31 seconds.

Tue Aug 10 09:29:38 2010 >>> x11-plugins/enigmail-1.1.2-r1
merge time: 6 minutes and 49 seconds.

Once that is installed and the key is in place, it is just a matter of selecting it in the security tab of the Thunderbird account settings.

GnuPG comes with a hell of a lot of options; yes, the man page is quite intimidating. But please go through it to learn more.

In this article I have assumed many things about the reader: that they are well versed with encryption techniques, or at least know how they work. Whatever the distro, the basic underlying facts are the same. So arm yourself with the knowledge that matters.

Hope this will help.