
Repartitioning modern Linux systems without a reboot

Friday, April 17th, 2009 | galway, linux, useful tools, web | No Comments

This one is for my own future reference as much as anything. Ever since the move to udev in Linux 2.6, I’ve found it necessary to do the very un-Linux-like thing of rebooting after repartitioning before the appropriate device appeared under /dev. This was only an occasional hassle, but still, you shouldn’t need to reboot Linux for such a thing.

Thanks to Robert for his Google magic in turning up partprobe, part of the GNU Parted package. As the Debian man page for partprobe says:

partprobe is a program that informs the operating system kernel of partition table changes, by requesting that the operating system re-read the partition table.

Excellent! Parted is normally installed on Debian and Ubuntu by default anyway; if not, simply run aptitude install parted and you’ll have access to the excellent partprobe.
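
As a minimal sketch of the workflow (assuming the new partition is being created on /dev/sda), it looks something like this:

sudo cfdisk /dev/sda       # create the new partition
sudo partprobe /dev/sda    # ask the kernel to re-read the partition table, no reboot needed
cat /proc/partitions       # the new partition should now show up here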

We were trying to add some additional swap to a running system; the full series of commands needed is as follows (I could have used parted to create the partitions, but the cfdisk tool has a nice interface):

  1. sudo cfdisk /dev/sda (and create new partition of type FD, Linux RAID)
  2. sudo cfdisk /dev/sdb (and create new partition of type FD, Linux RAID)
  3. sudo partprobe
  4. sudo mdadm --create /dev/md3 -n 2 -x 0 -l 1 /dev/sda4 /dev/sdb4 (our swap devices are software RAID1 devices)
  5. sudo /etc/init.d/udev restart (this updates /dev/disk/by-uuid/ with the new RAID device)
  6. sudo mkswap /dev/md3
  7. sudo vi /etc/fstab (and add a new entry for /dev/md3 as a swap device; see the example entry after this list)
  8. sudo swapon -a (to activate the swap device)
  9. sudo swapon -s (to verify it is working)
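
For step 7, the new fstab entry is a single line. A minimal sketch, using the device path directly (a UUID= entry taken from /dev/disk/by-uuid/ would work equally well):

# hypothetical /etc/fstab entry for the new RAID1 swap device
/dev/md3    none    swap    sw    0    0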


Subversion sparse checkouts

Tuesday, April 7th, 2009 | galway, software engineering, useful tools | 5 Comments

I’ve been using Subversion for a few years now but, as with lots of the technology I work with, I’ve learned enough about it to do the job I need to do without ever digging into it exhaustively. It turns out a nice feature called sparse checkouts was introduced in Subversion 1.5. With Subversion, you can either create one repository for each project or use a single repository for multiple projects. I like using a single repository for multiple projects, but there are advantages and disadvantages to both approaches and it’s yet another source of religious debate and flamage, so I won’t suggest which would suit your needs best.

One of the disadvantages of using a single repository for multiple projects is that any time you want to check out part of your repository, you either have to do something like this,

svn checkout http://www.example.com/svn/myrepo

to check out the whole repository (and if it’s a big repository, and you’re on a slow connection, you get to watch the world wide wait in action) or something like this

svn checkout http://www.example.com/svn/myrepo/oneofmyprojects

to check out just a teensy part of your repository, which should happen much faster than the former approach. The disadvantage of the second approach is that you end up with only part of the repository checked out, and if you want another part in the future, you’ll have to check that out separately, like

svn checkout http://www.example.com/svn/myrepo/anotheroneofmyprojects

Pretty soon, you’ll have a directory full of separately checked-out projects, each of which you have to individually svn update, svn commit and so on. Hey, it starts to look like you have one repository for each project. Ideally, what you want is to check out your entire repository but only the bits you are interested in, while keeping the option open of checking out other parts in the future and managing them all as the one repository that they are. Sparse checkouts introduced this functionality.
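
For contrast, here’s a hedged sketch of what keeping those separate working copies up to date looks like (assuming the two example project directories above):

# update each separately checked-out project, one working copy at a time
for d in oneofmyprojects anotheroneofmyprojects; do
  (cd "$d" && svn update)
done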

With svn’s sparse directory support, you can do the following,

svn checkout --depth=immediates http://www.example.com/svn/myrepo

This checks out the myrepo repository, but only to a depth of one level: all files and directories immediately under myrepo, but nothing deeper (the immediate subdirectories are checked out empty). So a directory listing of your checked-out repository might look like,

oneofmyprojects/
anotheroneofmyprojects/
README.txt

This gives you an overview of the myrepo hierarchy without pulling all the files. Furthermore, it is sticky – any subsequent svn update commands you run will honour the scope you set in the first checkout.

If you now want to flesh out parts of the tree, you can do the following

svn update --set-depth=infinity myrepo/oneofmyprojects

This updates the contents of myrepo/oneofmyprojects with all children (files and subdirectories), ensuring you have a full copy of that part of the repository. If you subsequently run an svn update in myrepo, the behaviour for oneofmyprojects continues to be sticky and will result in an update of all its files and subdirectories (while still not checking out the children of any of the other myrepo top-level directories).
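
Putting it all together, here’s a hedged recap of the whole workflow, using the same example URL and project names as above:

svn checkout --depth=immediates http://www.example.com/svn/myrepo
cd myrepo
svn update --set-depth=infinity oneofmyprojects
svn update    # oneofmyprojects gets a full update, the other top-level directories stay shallow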

Unfortunately, you cannot check out a directory with depth=infinity and then update it to a reduced depth (the behaviour only works in the direction of increasing depth for now).

More detail is available at http://svnbook.red-bean.com/en/1.5/svn.advanced.sparsedirs.html

I took a quick look at TortoiseSVN (a very nice graphical Subversion client for Windows) and, if you do an SVN checkout, it has an option for Checkout Depth which I’m guessing provides the same functionality (but I haven’t tested it).


Linux and the Semantic Web

Saturday, March 28th, 2009 | galway, hardware, linux, semantic web | No Comments

I’ve recently (well, back in January, but it took me a while to blog about it) started working with the DERI Data Intensive Infrastructure group (DI2). The Digital Enterprise Research Institute (DERI) is a Centre for Science, Engineering and Technology (CSET) established in 2003 with funding from Science Foundation Ireland. Its mission is to Make the Semantic Web Real – in essence, DERI is working on the theoretical underpinnings of the Semantic Web as well as developing tools and technologies which will allow end-users to utilise the Semantic Web.

The group I’m working with, DI2, has a number of interesting projects, including Sindice, which aims to be a search engine for the Semantic Web, and a forthcoming project called Webstar, which aims to crawl and store most of the current web as structured data. Webstar will allow web researchers to perform large-scale data experiments on this store of data, allowing researchers to focus on their goals rather than spending huge resources crawling the web and maintaining large data storage infrastructures.

Sindice and Webstar both run on commodity hardware running Linux. We’re using technologies such as Apache Hadoop and Apache HBase to store these huge datasets distributed across a large number of systems. We are initially working with a cluster of about 40 computers but expect to grow to a larger number over time.

My role in DI2 is primarily the care of this Linux infrastructure – some of the problems that we need to deal with include how to quickly install (and re-install) a cluster of 40 Linux systems, how to efficiently monitor and manage these 40 systems and how to optimise the systems for performance. We’ll use a lot of the same technologies that are used in Beowulf style clusters but we’re looking more at distributed storage rather than parallel processing so there are differences. I’ll talk a little about our approach to mass-installing the cluster in my next post.
