It was a little too scary: the Duplicity backup scripts were failing on the EC2 instances again. This time around it was not about being unable to reach S3, but about having too many files open. That was weird, because it had never thrown such an error before. The workaround, however, was to increase the maximum number of file descriptors allowed for the user running the backup script.
Finding this solution was tough, though; it was actually a FreeBSD forum that had the answer. I thought I would write it down for Linux.
Step 1: Find out the current limit
To find out the current file descriptor limit for a given user, log in as that user and run the following command.
$ ulimit -n
By default on Debian it is 1024.
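If you want to see the soft and hard limits separately, ulimit can report each of them (assuming a bash-like shell where ulimit is a builtin):

$ ulimit -Sn
$ ulimit -Hn

The soft limit is what ulimit -n reports and what is actually enforced; the hard limit is the ceiling up to which a non-root user may raise the soft limit.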
Step 2: Increase the limit
You will have to edit /etc/security/limits.conf. Details on how to set the different limits are documented in limits.conf itself. The entry you have to add should look like the following.
username hard nofile 2048
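Note that ulimit -n reports the soft limit, so if the change does not seem to take effect after logging back in, adding a matching soft entry as well should do it. A sketch, with username as a placeholder for the actual account name:

username soft nofile 2048
username hard nofile 2048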
Step 3: Log out and log back in
Log out and log back in as the user whose file descriptor limit we updated; limits.conf is applied at login (via pam_limits), so the change will not affect existing sessions. Then run the following command.
$ ulimit -n
You should see the updated file descriptor limit.
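If you want to confirm that an already-running process (say, the backup script itself) picked up the new limit, you can check its entry under /proc. A sketch, with PID as a placeholder for the actual process id:

$ grep 'open files' /proc/PID/limits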
Hope this helps someone who, like me, is desperate to get their backups back on track. I will be investigating why there are so many files open in the first place, and if I find anything interesting I will definitely blog about it. For everyone's reference, there is also a bug filed at the Savannah bug tracker by someone else who ran into the same issue.
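For that investigation, counting the files a user or process currently has open can be a starting point. A rough sketch, assuming lsof is installed and with username and PID as placeholders; lsof lists more than plain file descriptors (memory-mapped files and so on), so treat the counts as estimates:

$ lsof -u username | wc -l
$ ls /proc/PID/fd | wc -l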