Moo your mashups at mooshup.com

Mooshup.com was launched last week, allowing users to create a social network around their mashed-up content. Mashing up content became hot a few years back in the web development community, but with services like mooshup.com even the not-so-technically-inclined can create mashups. For anyone curious, mooshup.com runs on the WSO2 Mashup Server.

I’m heading back to mooshup.com to moo my first mashup; maybe you would like to do the same. Go to mooshup.com.

Amazon EC2 with rock solid persistent storage

With the power of duplicity and chroot we can make an Amazon EC2 image that is as good as a hardware node, i.e. one with persistent storage. Let me explain how to do it yourself; however, I’ll be leaving out the minute details.

Step 1: Start an instance of a public AMI

I would recommend ami-76cb2e1f because you are able to use the same image when powering up both large and extra-large instances. It also has the EC2 AMI tools installed and patched. Log in to the instance as root using the key pair you provided when starting the instance. Also do not forget to give the following as user data.

chroot_bucket=[your_bucket_name]

Step 2: Download and install duplicity and boto

You need to install duplicity 0.4.9 or later and boto 1.0 or later.
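
On a Debian-based image something like the sketch below should do; the package names are an assumption about what your image ships, and if your distribution carries older versions you can always build both from source.

 # apt-get update
 # apt-get install duplicity python-boto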

Step 3: Create a PGP key

Run the following and follow the instructions that will appear.

 # gpg --gen-key

Please note down the key ID because we will need it later on. It should look something like 860BCFF6.

gpg: key 860BCFF6 marked as ultimately trusted

Step 4: Install libpam-chroot

You have to install libpam-chroot so that users can be pushed inside the chroot when they log in via SSH.
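
On an apt-based image, installing it is a one-liner (a sketch, assuming the package is available in your sources):

 # apt-get install libpam-chroot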

Step 5: Create the chroot

Create the chroot and install all the applications you need inside it. Read about how to create a chroot on a Debian system here. Create your users inside the chroot. It is important that you understand how chroot works as well.
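
As a sketch, on a Debian image you could bootstrap the chroot with debootstrap; the suite (etch) and mirror below are assumptions, so adjust them to your setup:

 # apt-get install debootstrap
 # debootstrap etch /mnt/chroot http://ftp.debian.org/debian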

Step 6: Push the users into the chroot

You need to change /etc/security/chroot.conf and add a line similar to the one below.

[username] /mnt/chroot

Step 7: Download the scripts

You need to download the scripts archive that contains the scripts necessary to do all the magic to ensure that data actually persists. Download it from http://www.mohanjith.net/downloads/amazon/ec2/ec2-chroot-persistence-1.0.tar.gz

Step 8: Extract and edit the scripts

Extract the scripts from outside the chroot, preferably in /.

 # cd /
 # tar -xzf [path_to_archive]/ec2-chroot-persistence-1.0.tar.gz

You need to edit /etc/init.d/ec2 and /etc/ec2/cron and change the lines that look like the ones below.

export AWS_ACCESS_KEY_ID=[your_aws_access_key_id]
export AWS_SECRET_ACCESS_KEY=[your_aws_secret_access_key]
export PASSPHRASE=[your_gpg_passphrase]
export gpg_key=[your_gpg_key_id]

Step 9: Set up the scripts

You will also have to set up a cron job outside the chroot to back up the data to S3. The script to invoke is /etc/ec2/cron. I would recommend hourly backups; anything more frequent is a bad idea because the time it takes to back up will increase drastically.
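
For example, the hourly entry in root's crontab (added with crontab -e) would look something like this:

 0 * * * * /etc/ec2/cron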

You will also have to make sure the ec2 service (/etc/init.d/ec2) is run on power on, power off and restart. To do that, create symlinks to /etc/init.d/ec2 from /etc/rc0.d/K10ec2, /etc/rc3.d/S90ec2, /etc/rc4.d/S90ec2, and /etc/rc6.d/K10ec2.
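
The symlinks can be created like so:

 # ln -s /etc/init.d/ec2 /etc/rc0.d/K10ec2
 # ln -s /etc/init.d/ec2 /etc/rc3.d/S90ec2
 # ln -s /etc/init.d/ec2 /etc/rc4.d/S90ec2
 # ln -s /etc/init.d/ec2 /etc/rc6.d/K10ec2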

Step 10: Where to persist data

Run the command below as root outside the chroot.

curl http://169.254.169.254/2007-08-29/user-data > /tmp/my-user-data
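
The file fetched above contains the chroot_bucket=[your_bucket_name] line you gave as user data in step 1, which tells the scripts which S3 bucket the chroot is persisted to. If you want to pull the value out by hand, here is a quick sketch based on that key=value format:

 # grep chroot_bucket /tmp/my-user-data | cut -d= -f2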

Step 11: Create your machine image

Read more about creating a machine image in the Amazon EC2 Getting Started Guide here.

Step 12: Back up your chroot

Run /etc/ec2/cron to back up the chroot.

Step 13: Power off and power on

Power off the instance you are running with the public image, and when it has properly shut down, start the image we just created in step 11, giving the same chroot_bucket user data you provided when you powered up the public image.

All the data in /mnt/chroot is backed up to S3 by /etc/ec2/cron, and when the instance is started after a shutdown /mnt/chroot is restored from S3. The script is configured to back up on power down, but it is always recommended to run /etc/ec2/cron just before powering down.

You might also want to set up dynamic DNS for your instance so that you don’t have to remember the ugly public DNS name provided by Amazon. You can use ddclient to update the dynamic DNS service with your new IP, and you can install ddclient inside the chroot.
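
For the record, here is a minimal /etc/ddclient.conf sketch for a DynDNS-style service; the server, login, password and hostname below are placeholders you will need to replace with your provider's details:

protocol=dyndns2
use=web
server=members.dyndns.org
login=[your_login]
password=[your_password]
[your_hostname]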

This method was tested for more than a month and everything worked smoothly for me, but depending on your configuration your experience may differ. It is always good to test before you use it in a production environment.

Why was I silent last week?

I was extremely busy, with not a minute to blog last week. I was assigned the task of coming up with a solution for using Amazon EC2 with persistence such that it feels like a physical server to the users.

I successfully managed to set up such an AMI. Unfortunately I’m not in a position to make the image public, but I’ll definitely be blogging about how you can set up a similar AMI.

Stand by for the all-new discovery!

Turn your computer into an Internet TV – Miro

I always wanted to have just an Internet TV: no cable, no satellite, no terrestrial. Even when it comes to TV, I want to watch what I want when I want, not when the broadcaster wants. Internet TV was the ideal. The main thing I missed was really good content at really good quality. The only hope was torrents, but it was not the same experience: you have to download the movie file using a torrent client and then start watching it in a media player. YouTube videos were a big pain to save and share with my friends. Video feeds were another story; I couldn’t find a single decent client that would just work.

Finally there is a free and open source media player plus Internet TV: Miro. You can download Miro from http://www.getmiro.com/. It is released under the GNU GPL; you are free to use, change and redistribute it.

Miro is a great piece of software that makes Internet TV a reality. It is capable of playing most video files if not all, has a built-in guide to video feeds and podcasts that it can play itself, can play and save YouTube videos, can download torrents and play them, and best of all gives access to a great variety of HD content. All this makes Miro the only media player you will ever want.

Miro binaries are available for all major flavors of Linux, Mac OS and Windows. If you are not able to use any of the binaries you can always compile and install from source 😉

I’m impressed by what Miro can do, and I think you will be too. Download Miro now from http://www.getmiro.com/.

Automagically ping blog search engines

I wanted to automatically ping Technorati, Icerocket, and Google Blog Search; that is, the blog search engines should be pinged with no intervention from me. I was alright with a delay of up to 15 minutes.

So I went about exploiting the XML-RPC services provided by the blog search engines and came up with the Python script below. I set up a cron job to invoke the script every 15 minutes.
[sourcecode language='py']#!/usr/bin/python

import xmlrpclib
import urllib2
import os

from hashlib import md5

feed_url = '[Your feed url]'
blog_url = '[Your blog url]'
blog_name = '[Your blog name]'
hash_file_path = os.path.expanduser("~/.blogger/")

def main():
    # Fetch the current feed content
    req = urllib2.Request(feed_url)
    response = urllib2.urlopen(req)
    feed = response.read()
    hash_file_name = hash_file_path + md5(blog_url).hexdigest()

    # Read the hash of the feed as it was at the last run, if any
    if os.path.exists(hash_file_name):
        hash_file = open(hash_file_name, "r+")
        last_digest = hash_file.read(os.path.getsize(hash_file_name))
    else:
        if not os.path.exists(hash_file_path):
            os.makedirs(hash_file_path)
        hash_file = open(hash_file_name, "w")
        last_digest = ''

    curr_digest = md5(feed).hexdigest()

    # Ping only if the feed has changed since the last run
    if curr_digest != last_digest:
        ping = Ping(blog_name, blog_url)
        responses = ping.ping_all(['icerocket', 'technorati', 'google'])
        # Overwrite the stored hash instead of appending to it
        hash_file.seek(0)
        hash_file.truncate()
        hash_file.write(curr_digest)

    hash_file.close()

class Ping:
    def __init__(self, blog_name, blog_url):
        self.blog_name = blog_name
        self.blog_url = blog_url

    def ping_all(self, down_stream_services):
        responses = []

        for down_stream_service in down_stream_services:
            # Look up the matching _service method by name
            method = getattr(self, '_' + down_stream_service)
            responses.append(method())

        return responses

    def _icerocket(self):
        server = xmlrpclib.ServerProxy('http://rpc.icerocket.com:10080')
        response = server.ping(self.blog_name, self.blog_url)
        return response

    def _technorati(self):
        server = xmlrpclib.ServerProxy('http://rpc.technorati.com/rpc/ping')
        response = server.weblogUpdates.ping(self.blog_name, self.blog_url)
        return response

    def _google(self):
        server = xmlrpclib.ServerProxy('http://blogsearch.google.com/ping/RPC2')
        response = server.weblogUpdates.ping(self.blog_name, self.blog_url)
        return response

main()[/sourcecode]
Whenever the script is invoked it fetches the post feed content, creates an MD5 hash of it, and compares that hash against the last known hash; if they differ it pings the given list of services.

This is very convenient if you have someplace to run the cron job. Even your own machine is sufficient if you can keep it on for at least 15 minutes after a blog post is made.
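
For example, the crontab entry could look like the line below; the path is a placeholder, and I'm assuming you saved the script as blog-ping.py and made it executable:

*/15 * * * * /path/to/blog-ping.py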

To run the script you need Python 2.4 or later and the Python package hashlib. Hope you will find this useful.

Epiphany Anti-Phishing extension

I’m happy to announce the release of safe-browsing 0.0.1 for Epiphany, the GNOME Web browser. It tries to match the URL you are accessing against the Google Safe Browsing blacklist of URLs. On a match, the view pane is disabled so that the user can view the page but not interact with it; the user is free to browse away from the page by typing a new URL. The extension also adds an indicator to the browser status bar. In the next release an error page will be displayed instead of the evil page being shown at all.

You can also report phishing sites by clicking Help -> Report Web Forgery…

You can download the extension safe-browsing-0.0.1.tar.gz

Follow the steps below to install the extension. I’m assuming you have already installed epiphany and epiphany-extensions.

Step 1 – Download the extension archive

 $ wget http://www.mohanjith.net/downloads/gnome/epiphany/extensions/safe-browsing/safe-browsing-0.0.1.tar.gz

Step 2 – Extract the extension archive to epiphany extensions directory

 $ cd /usr/lib/epiphany/2.20/extensions/
 $ tar -xzvf [Location_to_archive]

Step 3 – Restart epiphany and enable Safe browsing extension

Go to Tools -> Extensions and then select the check box against Safe browsing.

Step 4 – Go to a phishing site

Go to a phishing page, e.g. http://202.168.224.161/c.html at the time of posting.

Hope this extension will make your browsing experience safer.

GNOME Web browser Creative Commons extension

I’m happy to announce the release of cc-license-viewer 1.1.0 for Epiphany, the GNOME Web browser. It is capable of detecting Creative Commons licensed web pages, either through RDF metadata or through the license badge from creativecommons.org, and displaying an icon on the status bar.

This is a modified version of the cc-license-viewer released by Jaime Frutos Morales. His extension is not capable of detecting web pages that carry only the Creative Commons license badge.

You can download the extension cc-license-viewer-1.1.0.tar.gz

Follow the steps below to install the extension. I’m assuming you have already installed epiphany and epiphany-extensions.

Step 1 – Download the extension archive

 $ wget http://www.mohanjith.net/downloads/gnome/epiphany/extensions/cc-license-viewer/cc-license-viewer-1.1.0.tar.gz

Step 2 – Extract the extension archive to epiphany extensions directory

 $ cd /usr/lib/epiphany/2.20/extensions/
 $ tar -xzvf [Location_to_archive]

Step 3 – Restart epiphany and enable CC extension

Go to Tools -> Extensions and then select the check box against Creative Commons license viewer.

Step 4 – Go to a Creative Commons page

Go to a CC licensed page, e.g. http://creativecommons.org

My next plan is to extend the functionality of this extension so that a more informative icon is shown on the status bar. For the time being, have fun with this extension.

The Google Highly Open Participation Contest

Following on from the success of the Google Summer of Code program, Google is pleased to announce a new effort to get young people involved in open source development. Google has teamed up with open source projects such as the Apache Software Foundation, Drupal, GNOME, Joomla!, MoinMoin, Mono, Moodle, Plone, Python, and SilverStripe to give student contestants the opportunity to learn more about and contribute to all aspects of open source software development, from writing code and documentation to preparing training materials and conducting user experience research.

If you’re a student age 13 or older who has not yet begun university studies, you could help out these projects. In return, you’ll learn more about all aspects of developing software – not just programming – and you’ll be eligible to win cash prizes and the all-important t-shirt! You will, of course, need your parent or guardian’s permission to participate where applicable.

To read more and take part, go to http://code.google.com/opensource/ghop/2007-8/.

If you have already begun university studies like me you could always spread the word 🙂

Using GNOME remotely via SSH

Have you ever wished that you had a GUI on a remote Linux server without using VNC? Actually, you can use GNOME or any other desktop environment on a remote server via SSH. Yep, I’m not joking.

You need to have SSH and X11 running on both the client and the server. In addition, GNOME should be installed on the server and the SSH daemon should be running.

Step 1 – Turn on X11 forwarding:

Add the following to /etc/ssh/ssh_config if it doesn’t exist, or just change no to yes, and save the file. Note that ssh_config is the client-side configuration, so this goes on the machine you are connecting from.

ForwardAgent yes
ForwardX11 yes
ForwardX11Trusted yes

Add the following to /etc/ssh/sshd_config on the server if it doesn’t exist, or just change no to yes, and save the file.

X11Forwarding yes

Step 2 – Connect to the remote server via SSH with X11 forwarding

In order to enable X11 forwarding when you connect to a remote server via SSH, you need to provide the command-line option -X. See the example below.

 $ ssh -X username@server

Step 3 – Start GNOME Session

You need to start a GNOME session for the GUI to show; by default a GNOME session is not started for remote connections. It might take a while for anything to appear, after which you should see the GNOME startup sequence on the client and a couple of messages in your terminal.
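
In the SSH session you opened in step 2, starting the session is usually just a matter of running gnome-session (the exact command can vary with your GNOME version):

 $ gnome-session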

However, I do not recommend running X11 or GNOME on a production server; this should be handy if you want to connect to your home computer from the office, for example.

TopCoder Open 2008 – Open for registrations

For those of you who don’t know about TopCoder: TopCoder is one of the leagues for programming competitions, and it is fast becoming a major league. They conduct online competitions at least once a week and two major online and on-site competitions. Read more at topcoder.com

TopCoder Open, more commonly known as TCO, is one of the two annual online and on-site competitions. It is open to any TopCoder member who is 18 or above. In this year’s TCO there will be four competitions: the algorithm competition, the component design and development competitions, the marathon competition, and the studio competition. Only the top 120 will make the TCO.

TCO08 is open for registration till February 1, 2008. This time around there is a prize purse of $260,000 and, most importantly, the title of TCO08 champion.

If you do register yourself as a new TC member, do not forget to mention my TC member handle in the referrer field; my handle is mohanjith.