Hacking TurboGears: Automatically logging in users

I love the way Drupal handles account activation and password reset. The user just has to click a link they receive via e-mail, and they are automatically logged in.

I wanted to do something similar with one of the applications I’m developing right now using TurboGears. I thought I would write a new identity provider, but instead went about hacking TurboGears itself. I noticed that the TurboGears default soaprovider could be improved by separating user authentication from marking a user as authenticated, hence making it reusable.

In my application’s controller I use this newly introduced method to mark the user as authenticated. I thought someone else might hit the same problem, so I blogged about it.

You can download the patch from http://www.mohanjith.net/downloads/scripts/python/TurboGears/1.0.4.3/soaprovider.diff; it was created against TurboGears 1.0.4.3.

CAS JDBC Service registry trouble

I had an interesting time trying to figure out why the JA-SIG CAS service registry status was being reset whenever I restarted CAS. After much frustration I figured out that the problem was in the schema that Hibernate had automatically created.

I’ll explain my setup: I was using MySQL to store the service registry data via a Spring entity manager and Hibernate.

Hibernate was creating BIT(1) columns for boolean attributes instead of TINYINT(1). Because of this MySQL was not returning anything meaningful for the status. I have now changed the schema, and removed/commented the property hibernate.hbm2ddl.auto in the entity manager bean. It seems to work perfectly.
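
If you run into the same problem, a statement along the following lines, repeated for each boolean column, should fix up the existing schema. The names in brackets are placeholders; check what Hibernate actually generated in your database before running anything.

 mysql> ALTER TABLE [service_registry_table] MODIFY [boolean_column] TINYINT(1) NOT NULL;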

Hope someone in a similar situation will find this information useful.

CAS Server 3.2 Final released

Today, the CAS development team announced the CAS Server 3.2 release. The release includes a number of enhancements, bug fixes, and new features. These include updated dependencies (Spring 2.5.1, Log4j, Acegi Security, Spring LDAP, Spring Web Flow) as well as bug fixes in the SPNEGO module and the Services Management tool. It also includes an enhancement to enable/disable single sign-out at the server level.

Finally, it includes a new Hard Timeout Expiration Policy, an updated Spring Configuration mechanism (with modularized Spring configuration files), as well as a production-ready auditing/statistics tool/API (Inspektr).

You can download the release from the usual location: http://www.jasig.org/products/cas/downloads/

This is a major release, and you should take a look at the major new features (the updated Spring Configuration mechanism and the Inspektr auditing tool) and see how, or whether, they change your deployment.

Great work Scott Battaglia and the others who contributed.

Duplicity chokes on OSError: [Errno 24] Too many open files

It was a little bit scary. The duplicity backup scripts were failing on the EC2 instances again; this time around it was not about being unable to reach S3, but about having too many files open. That was weird, because it had never given such an error in the past. The workaround was to increase the maximum number of file descriptors allowed for the user running the backup script.

However, finding this solution was tough; it was actually a FreeBSD forum that had the answer. I thought I would write it down for Linux.

Step 1: Find out the current limit

To find out the current file descriptor limit for a given user, log in as that user and run the following command.

 $ ulimit -n

By default on Debian it is 1024.

Step 2: Increase the limit

You have to edit /etc/security/limits.conf. You will find details on how to set up different limits in limits.conf itself. The entry that you have to add should look like the following (you may also need a matching soft entry, or use - as the type to set both, so that the new limit becomes the default).

username hard nofile 2048

Step 3: Log out and log back in

Log out and log back in as the user whose file descriptor limit we updated, then run the following command.

 $ ulimit -n

You should see the updated file descriptor limit.

Hope this helps someone like me, desperate to get their backups back on track. I will be doing more investigation into why there are so many files open, and if I find anything interesting I will definitely blog about it. Also, for everyone’s reference, there is a bug filed on the Savannah bug tracker by someone else who ran into the same issue.

Amazon Web Services goes down

Amazon Web Services went down and took out some Web 2.0 sites, but not the sites I was running on EC2. I got a shock when a Google alert arrived with news items saying Amazon Web Services was down. I immediately went over all of the sites I’m responsible for, but all of them were live and kicking. The next stop was checking my mail, and sure enough there were messages about the backup cron job failing.

I was using duplicity to back up the complete file system of the EC2 instances; I have blogged about my approach in Amazon EC2 with rock solid persistent storage. The cron job failed during the S3 downtime, but I kept serving all requests without a hitch.

I suspect the sites that went down were using PersistenceFS. Reading their documentation, they assume that S3 is going to be available at all times despite the 99.99% uptime guarantee. That is a major design flaw. It is also an utter waste of the large storage provided with EC2 instances at runtime.

I’m glad to say that despite the S3 downtime all my sites kept running. I think the sites that went down should reconsider their setup. I also strongly recommend running redundant EC2 instances to anyone planning on hosting sites.

JA-SIG CAS services registry persistence

I was pulling my hair out trying to get JA-SIG CAS to work with a persistent service registry. By default CAS comes with an in-memory services registry, which is not at all acceptable in production, so I went about following the instructions at http://www.ja-sig.org/wiki/display/CASUM/Configuring under ServicesRegistry and Database Connection. I didn’t pay much attention to “Package your webapp and go for a try”; instead I chose to put the necessary jars into the lib/ directory manually.

Even though I had all the necessary jars in place, I was confronted with a “java.lang.NoClassDefFoundError: org/hibernate/ejb/HibernatePersistence”. I double-checked, still to no avail. I tried Googling, but it didn’t lead me to anything very useful.

All the problems went away once I made the changes in the CAS distribution itself and recompiled. Wow, that was a ride of a lifetime. It was weird that the JDBC authentication handler worked fine with the jars dropped into place manually, but the services registry wouldn’t work the same way.
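
If you want to go the same route, it boils down to adding the required dependencies to the CAS webapp’s pom.xml (as the wiki’s “Package your webapp and go for a try” instructions describe) and rebuilding from the root of the distribution; roughly the following, assuming the standard Maven 2 build that CAS 3.x ships with, then deploying the resulting war.

 $ mvn clean package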

I guess this post will help someone who runs into the same problem.

Running Drupal behind a reverse proxy

I had to move one of the Drupal sites I’m maintaining behind a reverse proxy. The migration was as smooth as it could get, but soon throttling became an issue. All requests appeared to come from the proxy server, and Drupal didn’t seem to automagically detect the client IP from the X-Forwarded-For header.

So I set about investigating what could be done. As I discovered, automagic client IP detection is only available in Drupal 6 (still under development at the time of writing). I went about porting the changes to Drupal 5.x. Porting was as easy as it could be, but it was not working.

Further investigation led to one of the site’s configuration settings: I had enabled Normal caching on the Drupal site. Switching off caching was not acceptable, so I went about debugging the code to make it work with caching enabled. Finally I was able to fix the issue; the fix was trivial, but it was not easy to debug. It involved clearing the cached IP address on every request so that the correct IP would be detected. See my patch on drupal.org at http://drupal.org/node/219825.

Hope this helps someone.

Moo your mashups at mooshup.com

Mooshup.com was launched last week, allowing users to create a social network around their mashed-up content. Mashing up content became hot a few years back in the web development community, but with services like mooshup.com even the not-so-technically-inclined can create mashups. For anyone curious, mooshup.com is running WSO2 Mashup Server.

I’m heading back to mooshup.com to moo my first mashup; maybe you would like to do the same. Go to mooshup.com.

Amazon EC2 with rock solid persistent storage

With the power of duplicity and chroot we can make an Amazon EC2 image that is as good as a hardware node, i.e. one with persistent storage. Let me explain how to do it yourself. However, I’ll be leaving out the minute details.

Step 1: Start an instance of a public AMI

I would recommend ami-76cb2e1f because you are able to use the same image when powering up both large and extra-large instances. It also has the EC2 AMI tools installed and patched. Log in to the instance as root using the certificate you provided when starting the instance. Also, do not forget to give the following as user data.

chroot_bucket=[your_bucket_name]

Step 2: Download and install duplicity and boto

You need to install duplicity 0.4.9 or later and boto 1.0 or later.
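
On Debian the packaged versions may well be older than that, in which case installing from the source tarballs is straightforward; roughly the following after downloading the releases (the package names and version numbers here are examples, so adjust them to whatever is current).

 # apt-get install python-dev librsync-dev
 # tar -xzf duplicity-0.4.9.tar.gz && cd duplicity-0.4.9 && python setup.py install && cd ..
 # tar -xzf boto-1.0.tar.gz && cd boto-1.0 && python setup.py install && cd ..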

Step 3: Create a PGP key

Run the following and follow the instructions that will appear.

 # gpg --gen-key

Please note down the key id because we need it later on. It should look something like 860BCFF6.

gpg: key 860BCFF6 marked as ultimately trusted

Step 4: Install libpam-chroot

You have to install libpam-chroot so that users can be pushed inside the chroot when they log in via SSH.
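
On Debian that is a package install plus making sure the chroot module is enabled for SSH sessions; something like the following (depending on the package version, the PAM line may already be in place).

 # apt-get install libpam-chroot
 # echo "session required pam_chroot.so" >> /etc/pam.d/ssh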

Step 5: Create the chroot

Create the chroot and install all the applications you need inside it. Read about how to create a chroot on a Debian system here. Create your users inside the chroot. It is important that you understand how chroot works as well.
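
If you have never built one before, debootstrap is the easiest way to get a minimal Debian system into the chroot; a rough sketch (pick whatever release and mirror you prefer):

 # apt-get install debootstrap
 # mkdir -p /mnt/chroot
 # debootstrap etch /mnt/chroot http://ftp.debian.org/debian
 # mount -t proc proc /mnt/chroot/proc
 # chroot /mnt/chroot /bin/bash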

Step 6: Push the users to chroot

You need to change /etc/security/chroot.conf and add a line similar to the one below.

[username] /mnt/chroot

Step 7: Download the scripts

You need to download the archive that contains the scripts necessary to do all the magic to ensure that data actually persists. Download it from http://www.mohanjith.net/downloads/amazon/ec2/ec2-chroot-persistence-1.0.tar.gz

Step 8: Extract and edit the scripts

Extract the scripts outside the chroot, preferably in /.

 # cd /
# tar -xzf [path_to_archive]/ec2-chroot-persistence-1.0.tar.gz

You need to edit /etc/init.d/ec2 and /etc/ec2/cron and change the lines that look like the ones below.

export AWS_ACCESS_KEY_ID=[your_aws_access_key_id]
export AWS_SECRET_ACCESS_KEY=[your_aws_secret_access_key]
export PASSPHRASE=[your_gpg_passphrase]
export gpg_key=[your_gpg_key_id]

Step 9: Set up the scripts

You will have to set up a cron job outside the chroot to back up the data to S3. The script to invoke is /etc/ec2/cron. I would recommend hourly backups; anything more frequent will be bad because the time it takes to back up will increase drastically.
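
An hourly entry in /etc/crontab outside the chroot is enough; for example:

0 * * * * root /etc/ec2/cron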

You will also have to make sure the ec2 service (/etc/init.d/ec2) is run on power-on, power-off, and restart. To do that, create symlinks to /etc/init.d/ec2 from /etc/rc0.d/K10ec2, /etc/rc3.d/S90ec2, /etc/rc4.d/S90ec2, and /etc/rc6.d/K10ec2.
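
The symlinks can be created like this:

 # ln -s /etc/init.d/ec2 /etc/rc0.d/K10ec2
 # ln -s /etc/init.d/ec2 /etc/rc3.d/S90ec2
 # ln -s /etc/init.d/ec2 /etc/rc4.d/S90ec2
 # ln -s /etc/init.d/ec2 /etc/rc6.d/K10ec2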

Step 10: Where to persist data

Run the command below as root outside the chroot.

curl http://169.254.169.254/2007-08-29/user-data > /tmp/my-user-data

Step 11: Remaster the AMI

Create your machine image from the running instance. Read more about creating a machine image in the Amazon EC2 Getting Started Guide here.
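
With the AMI tools already on the instance, the rough sequence looks like the following; replace the key, certificate, account ID, and bucket placeholders with your own values (ec2-register is run from wherever you have the EC2 API tools installed), and see the Getting Started Guide for the details I am glossing over.

 # ec2-bundle-vol -d /mnt -k /mnt/pk-[xxx].pem -c /mnt/cert-[xxx].pem -u [your_aws_account_id]
 # ec2-upload-bundle -b [your_ami_bucket] -m /mnt/image.manifest.xml -a [your_aws_access_key_id] -s [your_aws_secret_access_key]
 $ ec2-register [your_ami_bucket]/image.manifest.xml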

Step 12: Back up your chroot

Run /etc/ec2/cron to back up the chroot.

Step 13: Power off and power on

Power off the instance running the public image and, when it has properly shut down, start the image we just created in step 11, with chroot_bucket set to the same bucket you provided when you powered up the public image.

All the data in /mnt/chroot is backed up to S3 by /etc/ec2/cron, and when the instance is started after a shutdown, /mnt/chroot is restored from S3. The script is configured to back up on power-down, but it is always recommended to run /etc/ec2/cron just before powering down.

You might also want to set up dynamic DNS for your instance so that you don’t have to struggle to remember the ugly public DNS name provided by Amazon. You can use ddclient to update the dynamic DNS service with your new IP; ddclient can be installed inside the chroot.
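
A minimal ddclient.conf for a DynDNS-style service would look something like the following; the protocol, server, and host here are examples, so adjust them for whatever provider you use.

protocol=dyndns2
use=web, web=checkip.dyndns.com/, web-skip='IP Address'
server=members.dyndns.org
login=[your_login]
password=[your_password]
[your_host].dyndns.org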

This method was tested for more than a month and everything worked smoothly for me, but depending on your configuration your experience may differ. It is always good to test before you use it in a production environment.

Why was I silent last week?

I was extremely busy, without a minute to blog last week. I was assigned the task of coming up with a solution for using Amazon EC2 with persistence such that it feels like a physical server to its users.

I successfully managed to set up such an AMI. Unfortunately I’m not in a position to make the image public, but I’ll definitely be blogging about how you can set up a similar AMI.

Stand by for the all new discovery!