watch – a useful tool
I’ve been doing a lot of testing on a newly commissioned Linux cluster recently. A lot of the work involves running various scripts and ensuring that the correct output is generated (or that some output is generated). All too often, I end up typing
ls -la
repeatedly to check the output of some test script or to see whether anything is being generated at all. I'd come across watch before, but it had slipped my mind. After rediscovering it the other day, I was again reminded how elegant and powerful the Unix philosophy is: simple commands that each do one thing well, but can be chained together to do very complex things. The task of monitoring a directory is reduced to a single
watch ls -la /directory
Not to mention the reduced RSI! If you haven’t used watch before or have simply forgotten about it – you might want to revisit, especially if you’re doing a lot of repetitive commands to view the status of something on your system.
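If you do pick watch back up, two options are worth knowing about (these are the Linux procps watch flags; check your man page): -n sets the refresh interval in seconds, and -d highlights whatever changed between refreshes, which makes newly appearing files jump out:

```shell
# Re-run ls every second instead of the default 2 seconds (-n 1),
# highlighting anything that changed since the last refresh (-d).
# The directory path is just an example.
watch -n 1 -d ls -la /tmp
```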
On a related note, we wanted to monitor the timestamps of various output files – and the ls command’s usual hour:minute timestamp information lacked the granularity we needed for our measurements. The man page came to the rescue with the following
ls -la --time-style=full-iso
which gives seconds and even sub-second precision in the timestamp (how much sub-second detail you actually get depends on the filesystem storing the files).
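A quick way to see the difference in granularity (the scratch file here is just for illustration):

```shell
# Compare the default timestamp with the full-iso one on a scratch file.
touch /tmp/ts-demo
ls -la /tmp/ts-demo                        # default style, e.g. "Jul  1 12:34"
ls -la --time-style=full-iso /tmp/ts-demo  # e.g. "2008-07-01 12:34:56.123456789 +0100"
```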
Kudos to Blacknight
I’ve been considering moving some of our hosted domains off our office servers for some time. We’ve been hosting our own websites (http://www.aplpi.com, http://blog.aplpi.com, http://www.atlanticlinux.ie and a few others) and email since we first moved into our existing offices over 4 years ago. In those 4 years I think we’ve had maybe 3 or 4 days of outage in total, and most of that at weekends. Our email has been backed up by DynDNS’s excellent MailHop Backup MX service, which has kept incoming email safe while our mail servers were down. Our office servers are, of course, Linux boxes (running Debian GNU/Linux 4.0, Postfix and SpamAssassin for email, and Apache HTTP Server for our web sites) and have proved remarkably stable. To be honest, we don’t receive an earth-shattering volume of web traffic, but thanks to spammers, our mailserver gets plenty of exercise. One day last week, we suffered a major spam attack which resulted in our mailservers processing over 20,000 mails in a 24-hour period. It did take SpamAssassin a while to process the spam backlog (it missed about 300 spams out of 18,000 or so – not bad going) but our mailserver happily chugged its way through the mail in about 8 hours. Not bad for a tiny Linux server sitting at the end of a plain old DSL line.
Despite all this, I’ve been considering moving to a hosted setup for a number of reasons,
- Incoming spam, in particular, is a big consumer of the overall capacity of our office DSL line. It makes me wonder if it wouldn’t make more sense to let an ISP who is geared up to handle this kind of junk filter out most of it for us. This gives us more bandwidth to use productively.
- We’re due to move offices in the next couple of months. Normally, I’d handle this over a weekend and have the infrastructure back up in the new location by the next business day, but it does involve working antisocial hours and you are dependent on all of your service providers having everything set up properly beforehand. I figure if we have the critical stuff hosted, we don’t need to worry about any upheaval during a move.
- As a Linux consulting and support company, it’s important for us to eat our own dog food when it comes to our software and services – we’ve been doing that with these pieces of Linux infrastructure for quite a while now and have learned a lot. But the time we spend managing those can now be spent on newer services and software, if we offload these services to someone else.
- Finally, I’m curious to see how well the big guys handle these services – it’s been a few years since I’ve used any hosting companies.
Never one to rush into anything, I figured we’d start by migrating one of our domains and see how it goes from there. I keep an eye on Blacknight Solutions – they’re an Irish ISP and give good support to various Linux and open source events around Ireland. Also, their MD writes a good blog and he seems to be a Stargate Atlantis fan so they have to be a good company to work with (I suspect I won’t be getting an honorary MBA from anyone for that kind of strategic reasoning – but I’m a firm believer in trusting your gut instincts on these things). Michele recently blogged about their new hosting plans and as it happened, the time is right for us to try one out. I purchased their Minimus hosting package during the week with a view to initially migrating our atlanticlinux.ie domain over to it. If that goes well, I’ll migrate the rest over the next few weeks.
My first impression of Blacknight’s hosting platform is very positive – they have an intuitive web interface that lets you configure pretty much everything without resorting to their support. Not only can you configure the usual web and email services, but they have also included a lovely application installer which lets you install everything from blogging software to shopping cart software.
I did some testing earlier in the week and ironed out a few migration kinks (the main one being that our existing WordPress blogging system needed a PHP timeout extended before I could successfully export my existing blog postings from it) and bit the bullet this evening to do the migration. From start to finish, the entire process took about an hour – and most of that was time spent testing and tweaking one or two small problems. Granted, the atlanticlinux.ie website, email system and blog are pretty basic and don’t have a lot of users – but my god, it really couldn’t have been much simpler. Well done Blacknight!
The icing on the cake for me was the WordPress migration – it took all of 10 minutes to
- Install WordPress on the new site via the Blacknight control panel.
- Export the blog data from our existing office WordPress installation.
- Import the blog data to our new hosted WordPress installation.
- And start posting new blog entries like this one.
I’ve done a few WordPress installs in the past and it is a pretty straightforward app to install, but the Blacknight system really does take the hassle out of it.
I’m not generally one to endorse products or services on our blog – but I feel good services and products should be recognised and so far Blacknight have shone in their delivery – both in the product they have and the support they have offered. In my initial testing of their services, I must have logged about 20 support tickets – most of them were answered within minutes and all of the responses I received were intelligent and helpful.
I’ll be the first to publicly complain if I receive a poor service in the future but so far I’ve been amazed with the quality of service I’ve received, especially considering the price, and no, I’m not receiving any favours to endorse the service. I’m just really impressed with it so far.
Thanks Blacknight.
LDAP Replication on Debian 4.0 (etch) with syncrepl
We’re making increasing use of LDAP in our office infrastructure. I spoke about a simple Samba PDC configuration last year. The Samba team recommend using LDAP as your Samba password backend if you require all of the account capabilities supported by Samba or if you have a large userbase. We initially started using the LDAP backend to give us access to all of these account capabilities including per-user profile capabilities (for setting various user policies).
Since migrating to LDAP, I have planned to move us to a more redundant architecture courtesy of our LDAP server’s replication feature. We’re using the OpenLDAP LDAP server. There are a variety of open source and commercial LDAP servers out there – OpenLDAP is the main open source one. We’ve found it works well for small and medium-sized businesses; some question marks have been raised over its performance in larger environments, but we haven’t run into any problems yet.
Our master LDAP server has been running for some time now and provides a backend for our Samba PDC as well as a login server for our Linux desktops and a backend for various internal tools including our bugzilla system, our subversion server and so on. If you have anything more than a very simple network infrastructure, it makes sense to move to a unified authentication system built around LDAP – it drastically simplifies access management to your various systems (particularly when staff are starting with or departing from your organisation).
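As a rough sketch of the client side of that unification (a minimal fragment, not our full configuration), a Debian box resolves accounts from LDAP by listing it in /etc/nsswitch.conf alongside the local files:

```
# /etc/nsswitch.conf (fragment) -- consult local files first, then LDAP
passwd: files ldap
group:  files ldap
shadow: files ldap
```

With libnss-ldap and libpam-ldap pointed at the directory, every box consults the same user database, which is what makes starters and leavers a one-place change.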
OpenLDAP originally used a push model for replication, where the LDAP master server periodically pushed its data to the LDAP slave server using a module called slurpd. Newer versions of OpenLDAP have introduced a pull model for replication using a module called syncrepl. We’re using Debian 4.0 on our production systems, and it comes with OpenLDAP 2.3, which supports both models. According to the OpenLDAP 2.4 replication documentation, slurpd support has been removed from 2.4 onwards. With this in mind, we opted for the syncrepl approach to LDAP replication, with a view to future-proofing our environment.
Our master LDAP server configuration is as follows (largely similar to the standard configuration supplied by Debian). The main changes from the standard OpenLDAP configuration are for Samba: the samba.schema include line and the access protection for the Samba password attributes.
include /etc/ldap/schema/core.schema
include /etc/ldap/schema/cosine.schema
include /etc/ldap/schema/nis.schema
include /etc/ldap/schema/inetorgperson.schema
include /etc/ldap/schema/samba.schema

pidfile /var/run/slapd/slapd.pid
argsfile /var/run/slapd/slapd.args
loglevel sync
modulepath /usr/lib/ldap
moduleload back_bdb
sizelimit 500
tool-threads 1
backend bdb
checkpoint 512 30

database bdb
suffix "dc=example,dc=com"
directory "/var/lib/ldap"
dbconfig set_cachesize 0 2097152 0
dbconfig set_lk_max_objects 1500
dbconfig set_lk_max_locks 1500
dbconfig set_lk_max_lockers 1500
index objectClass eq
lastmod on

access to attrs=userPassword,shadowLastChange,gecos,sambaNTPassword,sambaLMPassword
        by dn="cn=admin,dc=example,dc=com" write
        by anonymous auth
        by self write
        by * none

access to dn.base="" by * read

access to *
        by dn="cn=admin,dc=example,dc=com" write
        by * read

password-hash {MD5}
To convert this LDAP master to a provider (the OpenLDAP term for the server which provides data to a syncrepl consumer or slave server), we add a read grant for a replication account and the syncprov-related directives, giving the following config:
include /etc/ldap/schema/core.schema
include /etc/ldap/schema/cosine.schema
include /etc/ldap/schema/nis.schema
include /etc/ldap/schema/inetorgperson.schema
include /etc/ldap/schema/samba.schema

pidfile /var/run/slapd/slapd.pid
argsfile /var/run/slapd/slapd.args
loglevel sync
modulepath /usr/lib/ldap
moduleload back_bdb
sizelimit 500
tool-threads 1
backend bdb
checkpoint 512 30

database bdb
suffix "dc=example,dc=com"
directory "/var/lib/ldap"
dbconfig set_cachesize 0 2097152 0
dbconfig set_lk_max_objects 1500
dbconfig set_lk_max_locks 1500
dbconfig set_lk_max_lockers 1500
index objectClass eq
lastmod on

access to attrs=userPassword,shadowLastChange,gecos,sambaNTPassword,sambaLMPassword
        by dn="cn=admin,dc=example,dc=com" write
        by dn="uid=replicant,ou=Users,dc=example,dc=com" read
        by anonymous auth
        by self write
        by * none

access to dn.base="" by * read

access to *
        by dn="cn=admin,dc=example,dc=com" write
        by * read

password-hash {MD5}
rootdn "cn=admin,dc=example,dc=com"
moduleload syncprov
index entryCSN,entryUUID eq
overlay syncprov
syncprov-checkpoint 100 10
syncprov-sessionlog 200
There are a number of changes here:

access to attrs=userPassword,shadowLastChange,gecos,sambaNTPassword,sambaLMPassword
…
        by dn="uid=replicant,ou=Users,dc=example,dc=com" read
…

We have added a new user to our LDAP database, uid=replicant,ou=Users,dc=example,dc=com, and are granting it read access to the various password fields that most LDAP users won’t have access to, but which we allow to be read for the purpose of copying to our consumer or slave server. You can add this user using any of the standard LDAP management tools – we used the IDEALX samba tools as follows,
smbldap-useradd replicant -P
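If you aren’t using the IDEALX tools, an equivalent account can be created with a small LDIF and ldapadd – a minimal sketch (the object classes and plain-text password are illustrative; smbldap-useradd creates a fuller POSIX/Samba account and hashes the password for you):

```
# replicant.ldif -- minimal bind account for replication
dn: uid=replicant,ou=Users,dc=example,dc=com
objectClass: account
objectClass: simpleSecurityObject
uid: replicant
userPassword: ReplicantPassword
```

Load it as the admin user with: ldapadd -x -D "cn=admin,dc=example,dc=com" -W -f replicant.ldif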
We then configured the syncprov overlay, which is responsible for tracking changes to the LDAP database and marking them so that the slave server can identify and copy them. The overlay takes various arguments controlling the number and frequency of checkpoints. Note that we’re also adding indexes for the fields used by this module, to optimise reading of this data during replication operations.
rootdn “cn=admin,dc=example,dc=com”
moduleload syncprov
index entryCSN,entryUUID eq
overlay syncprov
syncprov-checkpoint 100 10
syncprov-sessionlog 200
That’s about it on the master / provider. You should restart the server (/etc/init.d/slapd restart on Debian) after applying these changes.
On the slave server, we took our original master server configuration (the first one above) and added the following section to enable replication
syncrepl rid=1
        provider=ldap://master.example.com:389
        type=refreshAndPersist
        searchbase="dc=example,dc=com"
        filter="(objectClass=*)"
        scope=sub
        schemachecking=off
        bindmethod=simple
        binddn="uid=replicant,ou=Users,dc=example,dc=com"
        credentials=ReplicantPassword
The consumer / slave is configured to connect to the master / provider master.example.com on port 389. The filter and searchbase ensure that all data under the base dc=example,dc=com will be replicated. The consumer connects to the provider as the user uid=replicant,ou=Users,dc=example,dc=com with the password ReplicantPassword – the account we’ve already granted read access on the LDAP master.
After making these changes on the slave, you can restart the slave OpenLDAP server. Note that we have loglevel sync on both servers, which results in detailed logging of sync operations to /var/log/syslog; you should see details of the sync operations occurring there. After a short period, you should be able to retrieve the same details from your consumer / slave server using the ldapsearch tool or similar, e.g.
ldapsearch -H ldap://master.example.com/ -xLLL -b 'dc=example,dc=com' -D "uid=replicant,ou=Users,dc=example,dc=com" -w ReplicantPassword uid=anExampleUid
and
ldapsearch -H ldap://slave.example.com/ -xLLL -b 'dc=example,dc=com' -D "uid=replicant,ou=Users,dc=example,dc=com" -w ReplicantPassword uid=anExampleUid
should provide exactly the same results (except for the entryUUID and modifyTimestamp fields) on both servers. In particular, you should also be seeing the userPassword and gecos fields on both the master and the slave. If you aren’t, you may have omitted the access grant for your replication account.
Once you have this running, all that remains is to modify any services using LDAP to reference both your master and slave LDAP servers.
For Samba:

passdb backend = ldapsam:"ldap://master ldap://slave"

For pam_ldap:

uri ldap://master/ ldap://slave/
and so on.
Finally, when you’ve verified that all these settings have been applied and your configuration is working, try shutting down your master LDAP server and verifying that your systems still allow authentication and that your Samba server continues to allow users access.
Well done, you’ve just increased the redundancy of your IT infrastructure. In my next posting, we’ll look at setting up a Samba Backup Domain Controller (BDC) for even more redundancy.