Jobs in the Jenkins continuous integration software can be configured to link SCM commit details to a repository browser. Unfortunately, the Subversion plugin (and perhaps all the SCM options) allows only one repository browser for all SVN repos in a given job. My project uses two SVN repos, Core and Rind, so this limitation, of course, poses an inconvenience.

Luckily, I was able to find a solution that works for my project. I noticed that the current Core revision numbers are in the 10K range and the Rind revisions are in the 50K range. Aha, an opportunity for an Apache rewrite rule.

I chose WebSVN for the repository browser and installed the WebSVN2 plugin (the WebSVN bundled with Jenkins is for the WebSVN 1.x series). For the browser URL configuration I used a fake repname (TBD) in the query string.

I then used the following Apache RewriteRules to redirect to the proper repo in WebSVN based on the revision number.

RewriteEngine on
RewriteCond %{QUERY_STRING} repname=TBD&(.*)rev=([1234]\d{4})
RewriteRule ^(.+)$ $1?repname=GUS&%1rev=%2 [R,L,NE]
RewriteCond %{QUERY_STRING} repname=TBD&(.*)rev=([56789]\d{4})
RewriteRule ^(.+)$ $1?repname=ApiDB&%1rev=%2 [R,L,NE]

That is, if the rev number is between 10000 and 49999, redirect to the Core repo in WebSVN; if it is between 50000 and 99999, redirect to the Rind repo.
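A quick way to sanity-check the revision ranges is to exercise the same regular expressions outside Apache. Here is a small Python sketch (the helper function and sample query strings are my own illustration, not part of the Jenkins or WebSVN setup):

```python
import re

# The two RewriteCond patterns, verbatim.
core_pat = re.compile(r'repname=TBD&(.*)rev=([1234]\d{4})')   # revs 10000-49999 -> Core (GUS)
rind_pat = re.compile(r'repname=TBD&(.*)rev=([56789]\d{4})')  # revs 50000-99999 -> Rind (ApiDB)

def target_repo(query_string):
    """Return the repname the query string would be redirected to, or None."""
    if core_pat.search(query_string):
        return 'GUS'
    if rind_pat.search(query_string):
        return 'ApiDB'
    return None

print(target_repo('repname=TBD&path=/trunk/&rev=12345'))  # GUS
print(target_repo('repname=TBD&path=/trunk/&rev=54321'))  # ApiDB
print(target_repo('repname=GUS&rev=12345'))               # None (normal browsing, no rewrite)
```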

In both cases the rewrite fires only when the repname is TBD, so rewrites are not triggered during normal WebSVN browsing.

This scheme depends on conditions and assumptions that happen to fit our project:

- We only care about current and future revisions (>10000). That’s ok, Jenkins will not need to link to revisions that predate this setup.

- Rind will always get more commits than Core, such that Core revisions will always lag behind Rind revisions. This has been the case for the past 6 years; I don’t expect it to change.

- Core revisions will not exceed 49999 for the life of this rewrite. If they do (unlikely; it has taken 6 years to reach the 10K range), we can update the rules.

Obviously this is not a suitable solution for everyone.


I initially started with sventon behind an Nginx proxy before deciding that WebSVN was going to be easier for me to maintain in the long run (simply due to my system infrastructure and experience, not due to any complaint against sventon). For the record, this is where I was heading with the Nginx rewrite rules. I did not follow through with testing, so I do not know if it works as is, but perhaps it’s a starting point for someone.

if ($request_uri ~ "revision=(.+)") {
  set $revision $1;
}

if ($revision ~ "[1234]\d{4}") {
  rewrite ^$revision?;
}
if ($revision ~ "[56789]\d{4}") {
  rewrite ^$revision?;
}

While making some custom RPMs and then remaking them with altered build steps, I stumbled across the behaviour that rpm will let you install updated packages even when an older package is already installed, provided that none of the files in the new package have changed.

Here’s a demonstration using a simple spec file (Listing 1) for an RPM package that installs a README file in /tmp/rpmtest/. Note the %install process creates the README file with some text in it.
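The spec file itself isn’t reproduced here, but a minimal rpmtest.spec along these lines would behave the same way. This is my reconstruction, not the original Listing 1, so the line numbers cited later refer to the original listing, not this sketch:

```spec
# Reconstructed sketch of a minimal rpmtest.spec (not the original Listing 1).
Name: rpmtest
Version: 1
Release: 1
Summary: Demo package for rpm install-conflict behaviour
License: Public Domain
BuildRoot: %{_tmppath}/%{name}-%{version}-root

%description
Installs a single README under /tmp/rpmtest/.

%install
rm -rf $RPM_BUILD_ROOT
mkdir -p $RPM_BUILD_ROOT/tmp/rpmtest
echo "This is a readme" > $RPM_BUILD_ROOT/tmp/rpmtest/README

%files
%defattr(-, %(%{__id_u} -n), %(%{__id_u} -n), -)
/tmp/rpmtest/README
```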

Build the RPM package from the spec file:

$ rpmbuild -bb rpmtest.spec 

and install it:

$ sudo rpm -i /scratch/RPM/RPMS/x86_64/rpmtest-1-1.x86_64.rpm

Now, the single README file is installed:

$ ls -l /tmp/rpmtest/README 
-rw-r--r-- 1 cdaily cdaily 10 Nov 14 16:39 /tmp/rpmtest/README

Now we’ll update the rpmtest.spec file to build Version 1, Release 2. Then we change the rpm’s install process but not the README contents. For example, in the rpmtest.spec change the defattr, line 22, from

%defattr(-, %(%{__id_u} -n), %(%{__id_u} -n), -)

to

%defattr(-, root, root)

and change the spec file’s Release, line 4, to

Release: 2

The generated README file will be the same, only the file ownership changes between releases.

Build as before and try to install.

$ sudo rpm -ivh /scratch/RPM/RPMS/x86_64/rpmtest-1-2.x86_64.rpm

That worked. I’m able to install Version 1, Release 2 and it does not conflict with Release 1.
The README is now owned by root.

$ ls -l /tmp/rpmtest/README 
-rw-r--r-- 1 root root 10 Nov 14 16:47 /tmp/rpmtest/README

I have both packages installed:

$ rpm -qa rpmtest

And the README is owned by both packages:

$ rpm -qf /tmp/rpmtest/README 

The same happens if you increment the Version. As long as the checksums for all the package files are unchanged, the rpm install will proceed.
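The rule can be modeled in a few lines. This Python toy is my own sketch of the observed behaviour, not rpm’s actual implementation: an incoming package is installable only when every path it shares with an installed package has identical contents.

```python
import hashlib

def md5(data: bytes) -> str:
    return hashlib.md5(data).hexdigest()

def conflicts(installed: dict, incoming: dict) -> list:
    """Return the shared paths whose checksums differ between two packages."""
    return [path for path, data in incoming.items()
            if path in installed and md5(installed[path]) != md5(data)]

# Each "package" maps file paths to file contents.
release1      = {'/tmp/rpmtest/README': b'This is a readme\n'}
release2_same = {'/tmp/rpmtest/README': b'This is a readme\n'}
release2_diff = {'/tmp/rpmtest/README': b'VERSION 1 RELEASE 2\n'}

print(conflicts(release1, release2_same))  # [] -> install proceeds
print(conflicts(release1, release2_diff))  # ['/tmp/rpmtest/README'] -> conflict reported
```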

Remove them both:

$ sudo rpm -e rpmtest-1-1 rpmtest-1-2

Let’s see what happens when we change a package file. In the rpmtest.spec file, change line 16 from

echo "This is a readme" > $RPM_BUILD_ROOT/tmp/rpmtest/README

to

echo VERSION %{version} RELEASE %{release} > $RPM_BUILD_ROOT/tmp/rpmtest/README

That is, the README contents will now have the version and release numbers and so will change with each release.

After building and installing Release 1, then building Release 2, I am unable to install Release 2.

$ sudo rpm -i /scratch/RPM/RPMS/x86_64/rpmtest-1-2.x86_64.rpm
	file /tmp/rpmtest/README from install of rpmtest-1-2.x86_64 conflicts with file from package rpmtest-1-1.x86_64

This time because the README has changed between releases, rpm reports a conflict.

In contrast to installing, upgrading a package release (rpm -U rpmtest-1-2.x86_64.rpm ) will replace Release 1 with Release 2 even when no files have changed. The upgrade option will also install the rpm if an earlier version/release is not already installed, so if you routinely use the upgrade command instead of the install, you don’t have to be concerned with ending up with multiple package release installations.

In summary, rpm only reports install conflicts if files change; it doesn’t make any comparisons of Version or Release numbers.

This seemed odd to me at first, but after thinking about it more, the flexibility it provides makes sense. So what if multiple packages own the same file? It happens. One common case is packages for multiple architectures. Consider the neon library as a random example. I have versions installed for both the i386 and x86_64 architectures. The documentation is the same between the two; only the shared objects differ, and these are installed in different directories so there’s no conflict.

On the other hand, the observation that a second package installation can change file ownership and permissions of the first package could conceivably be a problem.


Maximum RPM

The Howto:MultiMasterReplication wiki page for the 389 Directory Server documents use of the script for setting up replication.

The --with-ssl option is used to set up SSL for replication. I have seen a couple of postings on various forums reporting that the script hangs when using the --with-ssl option. That happened to me as well at first. I haven’t seen any answers on how to resolve this, so I’m posting my patch to the script. (Maybe someone will submit it to the script’s maintainer, I’m too lazy. Update 1/14/2011: the maintainer, Rich Megginson, accepted the patch so refer to his repository for the latest version.)

The problem is that in the current version of the script the --with-ssl option merely sets the port used for connections – port 636 by default. Just setting the port to 636 is not sufficient to inform the script’s underlying Net::LDAP Perl library that SSL communication is expected. Specifying an ldaps scheme with the host name triggers the required SSL support in Net::LDAP.

So, I prepended ldaps:// to the server in each occurrence of Net::LDAP->new(). For example,

my $ldap = Net::LDAP->new("ldaps://$server", port => $prt) || die "$@";

(full source is provided below)

Note that --with-ssl serves two purposes. One is to tell the script to configure the directory to use SSL for all future replication transactions; that is, the communication between the two directory servers will be encrypted. The other is to encrypt the communications the script itself makes to each directory server during setup and querying (the Net::LDAP business we just resolved). So you will also want to use the --with-ssl option with the script’s other subcommands, such as --display and --remove, so those communications are consistently held over secure channels.
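The general gotcha is worth remembering: a port number by itself carries no TLS signal; the URL scheme does. Here is a tiny Python illustration of scheme-based detection, analogous to what Net::LDAP does (the function is my own, not part of the replication script):

```python
from urllib.parse import urlsplit

def connection_uses_tls(host, port=389):
    """Mimic scheme-based TLS detection: only an explicit ldaps:// scheme
    counts; the port argument is deliberately ignored, just as Net::LDAP
    ignores it when deciding whether to negotiate SSL."""
    scheme = urlsplit(host).scheme if '://' in host else ''
    return scheme == 'ldaps'

print(connection_uses_tls('ldap.example.com', port=636))    # False: port 636 alone is cleartext
print(connection_uses_tls('ldaps://ldap.example.com', 636)) # True: the scheme triggers TLS
```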


Another MMR setup script. I’ve not used it.

This post demonstrates a simple example of using groups with TestNG.

For this exercise I’m using a very simple file layout. The jar file was copied directly from the TestNG package. SampleTest.java and testng.xml are my test and configuration files, respectively.

[23:31 20090914 crashing@desktop ~/simpletestng]
$ ls
SampleTest.java  testng-5.10-jdk15.jar  testng.xml

SampleTest.java has two test methods, annotated with @Test, each with an assigned group name.

import org.testng.annotations.*;

public class SampleTest {

    @Test(groups = { "groupA" })
    public void methodA() {
        assert true;
    }

    @Test(groups = { "groupB" })
    public void methodB() {
        assert true;
    }
}

testng.xml defines these two groups as belonging to a test set.

<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
<suite thread-count="5" verbose="1" name="TrialTest" annotations="JDK">
  <test name="TestSet" enabled="true">
    <groups>
      <run>
        <include name="groupA"/>
        <include name="groupB"/>
      </run>
    </groups>
    <classes>
      <class name="SampleTest"/>
    </classes>
  </test>
</suite>


I compile the SampleTest class,

$ javac -classpath testng-5.10-jdk15.jar SampleTest.java

and then run all the test methods as configured in testng.xml

$ java -classpath .:testng-5.10-jdk15.jar org.testng.TestNG testng.xml

[Parser] Running:


Total tests run: 2, Failures: 0, Skips: 0

Both test methods ran.

Alternatively, I can specify particular groups to run. For this I use the -groups and -testclass options and do not use the testng.xml file.

$ java -classpath .:testng-5.10-jdk15.jar org.testng.TestNG -groups groupB -testclass SampleTest

[Parser] Running:
Command line suite


Command line suite
Total tests run: 1, Failures: 0, Skips: 0

Specifying both the testng.xml and a -groups option will cause both configurations to be processed.

[22:58 20090914 crashing@desktop ~/simpletestng]
$ java -classpath .:testng-5.10-jdk15.jar org.testng.TestNG testng.xml -groups groupA -testclass SampleTest
[Parser] Running:
Command line suite


Total tests run: 2, Failures: 0, Skips: 0


Command line suite
Total tests run: 1, Failures: 0, Skips: 0

This is probably not what you want. So, use either the testng.xml or the -groups and -testclass option, not both.

The output HTML of TestNG’s reporter has the columns “Test method” and “Instance” that document each test run.

“Test method” can be customized by defining a public String getTestName() method in the class providing the test method.

“Instance” can be customized by defining a public String toString() method in the class providing the test method.

In this posting I document extending Selenium server with a custom javascript method, getAllRadios(), to return all radio button elements in a page. It’s based on the getAllButtons() that is stock in Selenium.

I will demonstrate two ways to add this custom method. The first strategy is to add the custom method to the native JavaScript files that ship with Selenium server. This is not the recommended way to extend Selenium, but it is an almost guaranteed way to be sure the server picks up my new method, which frees me to focus on getting the client syntax right for calling the new method via the client’s doCommand() method.

Method 1 – selenium-browserbot.js + selenium-api.js

Unpack the server jar into a temporary directory to get access to the selenium-browserbot.js and selenium-api.js files.

[20:54 20090906 crashing@server /selenium-server-1.0.1/temp]
$ jar xf ../selenium-server.jar

The new custom method is added to core/scripts/selenium-browserbot.js. It is a slightly modified version of getAllButtons() which is defined in the same file.

BrowserBot.prototype.getAllRadios = function() {
    var elements = this.getDocument().getElementsByTagName('input');
    var result = [];
    for (var i = 0; i < elements.length; i++) {
        if (elements[i].type == 'radio') {
            result.push(elements[i].id);
        }
    }
    return result;
};

and then the new method is exposed to the Selenium API in core/scripts/selenium-api.js, again modeling after getAllButtons() in selenium-api.js.

Selenium.prototype.getAllRadios = function() {
    return this.browserbot.getAllRadios();
};

Now rebundle the updated javascript into a new server jar, replacing the original.

[21:01 20090906 crashing@server /selenium-server-1.0.1/temp]
$ jar cmf META-INF/MANIFEST.MF ../selenium-server.jar .

Method 2 – user-extension.js

The better way to extend Selenium is to add methods to the user-extensions.js file. This avoids screwing around with the native server jar. I initially had some trouble getting this working, hence the Method 1 approach, but after a bit of futzing I finally got it to work.

Selenium.prototype.getAllRadios = function() {
    var elements = this.browserbot.getDocument().getElementsByTagName('input');
    var result = [];
    for (var i = 0; i < elements.length; i++) {
        if (elements[i].type == 'radio') {
            result.push(elements[i].id);
        }
    }
    return result;
};

This is basically the same method used in selenium-browserbot.js. The important changed bits are (1) the method is attached to the Selenium object instead of the BrowserBot (Selenium.prototype rather than BrowserBot.prototype) and (2) calling getDocument() on the browserbot instance.

I start the Selenium server with the -userExtensions option pointing to the user-extensions.js file.

java -jar \
/selenium-server-1.0.1/selenium-server.jar \
-userExtensions /user-extensions.js

Client Coding

For either of the above methods of extending the Selenium server, I call this new method in my client code with the doCommand().

/** incomplete code **/

proc = new HttpCommandProcessor(seleniumServerHost,
    seleniumServerPort, browser, seleniumServerStartUrl);
selenium = new DefaultSelenium(proc);

// radio buttons returned as comma-delimited strings
String allRadios = proc.doCommand("getAllRadios", null);

// alternatively, get radio buttons as a String array
String[] radios = proc.getStringArray("getAllRadios", null);
for (String radio : radios) {
    // ... use each radio button id
}

To call custom doCommand() commands, it’s necessary to instantiate with DefaultSelenium(proc), injecting the HttpCommandProcessor into the DefaultSelenium object.



Google has made its search input box bigger and its font size larger. Excuse me, they S-U-P-E-R-sized it! It’s not a mistake; indeed, Google actually seems proud of it.

Fortunately, Safari and Firefox empower mere-mortal users to fix this eyesore with a custom stylesheet. Simply add the following to your userContent.css and restart the web browser. Additional general information on userContent.css, and all the spiffy things you can do with it, is available from the links below.

/* Google input text box */
.lst {
  font-size: 13px ! important;
}
/* Google search type-ahead drop down */
.gac_m td {
  font-size: 13px ! important;
  line-height: 120% ! important;
}
/* Google Search/I'm Feeling Lucky buttons */
.lsb {
  font-size: 11px ! important;
  height: auto ! important;
}
/* Google Search/I'm Feeling Lucky buttons in type-ahead drop down */
.gac_sb {
  font-size: 11px ! important;
  height: auto ! important;
}

With the aforementioned CSS, the submit buttons will still appear in the type-ahead drop down menu. Add the following style to remove them altogether.

/* Turn off Google Search/I'm Feeling Lucky buttons in type-ahead drop down */
.gac_sb {
  display: none ! important;
}

Power to the people.

Related Sites

Customizing Mozilla
Better Ad Blocking for Firefox, Mozilla, Camino, and Safari
Google Chrome Issue 2393: Support user stylesheet

This is how I took an existing directory of files on my desktop and put it into a new git repository on a remote server. I found a number of how-tos online but none worked for me – certainly because I’m such a newbie with git that I wasn’t sufficiently understanding what I was being told to do. The following works for me; clearly this isn’t the only way or the best way to accomplish the task. This is mostly a note to self.

Create an empty directory on the remote server to hold the repository

[11:44 20090902 crashing@server ~]
$ mkdir -p /var/local/git/repos/CrashTesting

Initialize the empty repository

[11:44 20090902 crashing@server /var/local/git/repos/CrashTesting]
$ git --bare init
Initialized empty Git repository in /var/local/git/repos/CrashTesting/

Now I have the necessary repository components.

[11:44 20090902 crashing@server /var/local/git/repos/CrashTesting]
$ ls
branches config description HEAD hooks info objects refs

On my desktop, in the existing directory of files, init the directory

[11:30 20090902 crashing@desktop ~/Desktop/testws]
$ git init
Initialized empty Git repository in /Users/crashing/Desktop/testws/.git/

This created a .git directory with git control files.

Next, I tell my local desktop repository about the remote repo. The remote repo is given the short name origin.

[11:46 20090902 crashing@desktop ~/Desktop/testws]
$ git remote add origin ssh://

Then, I place my local files under version control.

[11:42 20090902 crashing@desktop ~/Desktop/testws]
$ git add .

Now I can commit the local files to the local repository.

[11:45 20090902 crashing@desktop ~/Desktop/testws]
$ git commit -a -m 'initialize repo'
[master (root-commit) 7871087] initialize repo
23 files changed, 500 insertions(+), 0 deletions(-)
create mode 100755
create mode 100755 build.xml

Finally, push the master branch to the remote repository named origin.

[11:46 20090902 crashing@desktop ~/Desktop/testws]
$ git push origin master
Counting objects: 44, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (31/31), done.
Writing objects: 100% (44/44), 1.65 MiB, done.
Total 44 (delta 1), reused 0 (delta 0)
To ssh://
* [new branch] master -> master

I spent the evening compiling mysql 5.1.37 on Fedora 8. I had no trouble compiling on RHEL4 but on Fedora 8 I ran configure

CFLAGS="-O3" CXX=gcc CXXFLAGS="-O3 -felide-constructors \
-fno-exceptions -fno-rtti" ./configure \
--enable-assembler \
--with-mysqld-ldflags="-all-static" \
--with-client-ldflags="-all-static"

followed by make and was getting compile errors

/bin/sh ../libtool --preserve-dup-deps --tag=CXX --mode=link gcc -O3 -felide-constructors -fno-exceptions -fno-rtti -fno-implicit-templates -fno-exceptions -fno-rtti -rdynamic -o mysql mysql.o readline.o sql_string.o completion_hash.o ../cmd-line-utils/libedit/libedit.a -lncursesw -all-static -lpthread ../libmysql/ -lcrypt -lnsl -lm -lz
libtool: link: gcc -O3 -felide-constructors -fno-exceptions -fno-rtti -fno-implicit-templates -fno-exceptions -fno-rtti -rdynamic -o mysql mysql.o readline.o sql_string.o completion_hash.o -static ../cmd-line-utils/libedit/libedit.a -lncursesw -lpthread ../libmysql/.libs/libmysqlclient.a -lcrypt -lnsl -lm -lz
/usr/bin/ld: cannot find -lncursesw
collect2: ld returned 1 exit status
make[1]: *** [mysql] Error 1
make[1]: Leaving directory `/home/crashingdaily/mysql-5.1.37/client'
make: *** [all] Error 2

This was resolved by installing the static ncurses libs via the ncurses-static package.

$ sudo yum install ncurses-static

That then left me with the compile failure (showing only part of the error)

../cmd-line-utils/libedit/libedit.a(term.o): In function `term_echotc':
term.c:(.text+0x1557): undefined reference to `tputs'
term.c:(.text+0x1580): undefined reference to `tgetstr'
term.c:(.text+0x1676): undefined reference to `tgoto'
term.c:(.text+0x169a): undefined reference to `tputs'
term.c:(.text+0x1761): undefined reference to `tgoto'
term.c:(.text+0x1781): undefined reference to `tputs'

This was resolved by adding -ltinfo to the client ldflags in the configure options

CFLAGS="-O3" CXX=gcc CXXFLAGS="-O3 -felide-constructors \
-fno-exceptions -fno-rtti" ./configure \
--enable-assembler \
--with-mysqld-ldflags="-all-static" \
--with-client-ldflags="-all-static -ltinfo"

Now make completed successfully. Lovely.

Today I installed NCBI BLAST on Windows XP SP2 but was unable to run the executables – I was confronted with “the system cannot execute the specified program”.

The fix was to install Microsoft Visual C++ 2005 SP1 Redistributable Package

