Tuesday, March 12, 2013

Bash script self-updates from github

Recently the systems engineering team @ Cloud9ers ($dayjob) has been busy building the infrastructure for a very ambitious, large-scale project for one of our customers. The project involves tons of distributed programming as well as lots of systems work. I'd like to share a small tidbit of information with the community. It's often the small things that one pauses a moment to admire, and hereby shares!

The problem

  • We are building a private cloud. The Ubuntu server instances running on it are cloned from a master template
  • While most clouds provide what I call "Instance Identity" information through some pre-known web service URI, in our case we use VMware's "Invoke-VMScript" API to run scripts inside the cloned template, customizing it and giving it its identity; puppet then takes it from there. Note that we're using plain vCenter+ESXi (no vCloud stuff here), no shared storage even!

Ok the real problem

  • Editing the template and publishing it across the cloud (a topic for another post) takes considerable time! I wanted a way to quickly update my identity scripts without having to re-build and re-publish images

Solution

  • A script with a trivially simple (thus mostly fixed) "bootstrap" section, which auto-updates itself from github and relaunches its new self!

Code

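Here's a minimal sketch of the idea; the raw github URL and script path are hypothetical placeholders, so adjust to taste:

#!/bin/bash
# Self-updating bootstrap sketch. SELF_URL is a hypothetical placeholder;
# point it at the raw github URL of this very script.
SELF="$(readlink -f "$0")"
SELF_URL="https://raw.github.com/example/scripts/master/identity.sh"

if [ "$1" != "--no-update" ]; then
    # Grab the latest copy next to ourselves; on success swap it in
    # and re-exec the new self, passing --no-update to avoid a loop
    if curl -fsSL "$SELF_URL" -o "$SELF.new"; then
        chmod +x "$SELF.new"
        mv "$SELF.new" "$SELF"
        exec "$SELF" --no-update "$@"
    fi
fi

# --- the real identity work starts here ---
# e.g. set the hostname and IPs, then let puppet take it from there
echo "identity script running on $(hostname)"
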
Nothing ground breaking, but still a cool trick eh! Yeah, I could have broken this into multiple scripts and maybe run-parts them. Of course this script is quite basic, but it's meant to remain that way since the heavy lifting is puppet's responsibility!
Note: To run this successfully, the user the script runs under should have passwordless sudo rights to run "ip" and to run the script itself :)
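
For reference, the passwordless sudo bit could look something like this (a sketch; the username and script path are made up):

# /etc/sudoers.d/identity -- hypothetical user and script path
clouduser ALL=(root) NOPASSWD: /sbin/ip, /usr/local/bin/identity.sh

Got cool ideas, thoughts or comments? Leave me a message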

Wednesday, December 26, 2012

LXC lies about /.. inode number making FlexLM unhappy

One of our customers wanted to run a FlexLM-licensed tool on the Amazon EC2 cloud. He turned to us over here at Cloud Niners and we started doing the work. The approach we chose was to start an EC2 Ubuntu precise machine, host a CentOS LXC container on it and run the license daemon inside that. The reason for the CentOS choice is that the tools and license daemons are only certified on RHEL, and CentOS is the next closest thing

Trying to start the license daemon, I hit the following error

 8:59:27 (TOOLNAME) Cannot open daemon lock file

Checking the usual suspects, permissions and the like, led nowhere. Google immediately led me to similar problems for people wanting to run FlexLM tools under Solaris zones (which are quite similar to Linux LXC containers). I was reading about a guy facing a similar problem, and how Solaris legend Brendan Gregg wrote dtrace scripts to patch memory structures at run time to resolve the issue.

At first I was dismissive that this was what I was actually seeing. A quick "ls -lai /" seemed to confirm that /. and /.. actually had the same inode number, and I was sure it was something else

# ls -lai / | grep '\.$'
  256 dr-xr-xr-x   1 root root   212 Dec 26 09:08 .
  256 dr-xr-xr-x   1 root root   212 Dec 26 09:08 ..

But I had one of those hmm moments after stracing the binaries and confirming they were actually failing immediately after calling getdents on "/". This sounded suspiciously close to what the Solaris folks were seeing. I grabbed gcc and built the sample code from the getdents(2) man page (thankfully!). And much to my surprise, the inode numbers for /. and /.. were actually different!

# ./a.out / | grep '\.$'
     256  directory    24            1  .
   42669  directory    24            2  ..

Of course this makes sense, since the LXC guest is just another directory on the host, but I didn't suspect ls -i was actually lying inside guests!
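
By the way, if you'd rather not compile anything, strace can decode the raw dirents for you. Something like this (a sketch, assuming a strace recent enough to expand the structs with -v) shows the real d_ino for "..":

# strace -v -e trace=getdents,getdents64 ls -a / 2>&1 | grep -o 'd_ino=[0-9]*[^}]*d_name="\.\."'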

At this point, I'm not exactly sure how to resolve the issue, apart from reinstalling the LXC guest on a separate block device (like an LVM volume), which I think should resolve it. This blog post is simply to confirm the issue, and to gather feedback and potential solutions from the smart people reading this. Shoot me a comment

Thursday, October 11, 2012

Celebrating Cloud9ers becoming MEA's leading Amazon Consulting Partner

It's been quite a while since I last blogged, I think it was over a year ago! I've been taking a break from blogging and focusing on Cloud Niners, my baby startup that I co-founded with a bunch of lifelong friends.

While many things are interesting over here at Cloud Niners, this blog post is specifically to celebrate the fact that we've just been upgraded to become an Amazon Consulting Partner. I believe this makes us the first active Middle East Amazon AWS cloud consulting partner, and certainly the best :) I do hear a lot from Amazon that we are the cloud leader in the Middle East! I am quite happy that we've achieved this milestone. I'm a big believer in cloud computing, and I simply enjoy helping our clients understand how the cloud (and Ubuntu server) can help them achieve more!

My team and I do custom cloud-application development, where we write top-notch HTML5/CSS3/JS web applications that run on and make use of the Amazon AWS scalable cloud infrastructure. We also do mobile application backends and front-ends (Android, iOS). We usually use the frankly excellent Ubuntu server, unless the customer forces us otherwise! Ubuntu server is simply the best OS for the cloud IMO. Our DevOps team helps our customers migrate and run their apps on the cloud (and on Ubuntu server) at top quality!

If you've read this far, here's a gift for you! I'm offering my (and my team's) help (up to 24 hours) for free in the next 30 days. If you're thinking about coding a custom cloud app, or have questions about any Amazon services, I'll give you free consultation and answer any questions. Shoot me a comment over here, or contact the team. Oh yeah, feel free to share this blog post with friends, to let them have the free goodies!

Friday, September 23, 2011

Oneiric server, Deploy Server fleets p2

Welcome to the second installment of this article series. In the first part we installed an Ubuntu server instance and turned it into an Orchestra installation server. If this is new to you, Orchestra is a new Oneiric server feature that enables admins to very easily deploy fleets of Ubuntu servers. Let's pick up where the first article left off

First, let's check where we are. You see, installing the orchestra server automatically downloads and imports the various Ubuntu server ISOs and creates all the needed structure (distros, profiles ...etc) in the underlying cobbler system. Let's see what we have

$ sudo cobbler list
distros:
   hardy-i386
   hardy-x86_64  
   lucid-i386
   lucid-x86_64  
   maverick-i386 
   maverick-x86_64
   natty-i386
   natty-x86_64  
   oneiric-i386  
   oneiric-x86_64

profiles:
   hardy-i386
   hardy-i386-juju
   hardy-x86_64
   hardy-x86_64-juju
   lucid-i386
   lucid-i386-juju
   lucid-x86_64
   lucid-x86_64-juju
   maverick-i386
   maverick-i386-juju
   maverick-x86_64
   maverick-x86_64-juju
   natty-i386
   natty-i386-juju
   natty-x86_64
   natty-x86_64-juju
   oneiric-i386
   oneiric-i386-juju
   oneiric-x86_64
   oneiric-x86_64-juju

systems:

repos:
   hardy-i386
   hardy-i386-security
   hardy-x86_64
   hardy-x86_64-security
   lucid-i386
   lucid-i386-security
   lucid-x86_64
   lucid-x86_64-security
   maverick-i386
   maverick-i386-security
   maverick-x86_64
   maverick-x86_64-security
   natty-i386
   natty-i386-security
   natty-x86_64
   natty-x86_64-security
   oneiric-i386
   oneiric-i386-security
   oneiric-x86_64
   oneiric-x86_64-security

images:

mgmtclasses:
   orchestra-juju-acquired
   orchestra-juju-available

Whoa! That sure makes my life easier. If you're interested in seeing where the ISOs were downloaded (like I was), here you are
$ ls /var/lib/cobbler/isos/
hardy-i386-mini.iso    lucid-i386-mini.iso    maverick-i386-mini.iso    natty-i386-mini.iso    oneiric-i386-mini.iso
hardy-x86_64-mini.iso  lucid-x86_64-mini.iso  maverick-x86_64-mini.iso  natty-x86_64-mini.iso  oneiric-x86_64-mini.iso

Let's create a new VirtualBox VM to serve as our new "server" that needs to be installed. Here's how it looks for me
[screenshot: 12-oneiric01-vboxsettings]

One thing worth noting, however: the NIC is placed on the "intnet" network, which has the IP range 192.168.77.0/24 that we configured in the first part of this article
[screenshot: 13-vbox-natty01-netsettings]

Now the only "real" thing you have to do is to add a system record on the orchestra server for your new bare server. The record binds its MAC address to a name and an installation profile (think OS to install, kickstart ...etc)

sudo cobbler system add --name="oneiric01.ubuntu.lan" --mac-address="08:00:27:B7:76:2A" --ip-address="192.168.77.33" --dns-name="oneiric01.ubuntu.lan" --hostname="oneiric01.ubuntu.lan" --profile="oneiric-x86_64-juju" --mgmt-classes="orchestra-juju-available" --kopts=" DEBCONF_DEBUG=developer netcfg/dhcp_timeout=120 netcfg/choose_interface=eth0"
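
If you want to double-check the record you just added, cobbler can report it back (a quick sanity check, using the same name as above):

sudo cobbler system list
sudo cobbler system report --name="oneiric01.ubuntu.lan"
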
Boot the server and choose PXE (for vbox that's F12, then "l", that's an L)

[screenshot: 14-natty-PXEbooting]

Watch the installer fly by (look ma, hands-free!)

[screenshot: 15-installer-running]

and your box is ready!
[screenshot: 16-Oneiric01-ready]

That's how easy it is to install a fresh server off your orchestra box! So basically the only thing you need to do per server is to attach it to a profile, and that's it. Boot it and it installs whatever you provisioned for it. Of course any good admin already did that manually before, but it took effort and it wasn't standardized. Now you can count on Ubuntu server covering your back when you're tasked with installing a hundred servers

How cool was that! Got thoughts, comments or rotten tomatoes? Shoot me a comment

Wednesday, September 21, 2011

Oneiric server, Deploy Server fleets p1

I'm gonna be posting a series of articles on new features and cool technology bits that are landing in Ubuntu Oneiric (11.10) server. Why? I like servers, I like cloud, I like Ubuntu, it all mixes well, what's not to like :)

During this first article, I'll be demoing (in a graphically intensive way :) what it takes (hint: not much!) to deploy a server fleet with Oneiric server. Orchestra is the name of a wonderful piece of technology landing in Oneiric, built on top of the open-source cobbler project. Orchestra is super easy to install and get started with, and enables you to very rapidly deploy tens or hundreds of physical servers. I'll be using virtualbox to build a small test "lab" on my laptop for the purposes of this article. I did actually try KVM first, but faced some trouble getting PXE to boot reliably, so I opted for virtualbox, which worked flawlessly (kudos vbox guys, you rock!)

Let's get started. I created a VM to represent the very first "head node", the one that will install all the remaining nodes. Here is a summary of its configuration
[screenshot: 1-orchestra]
Pop in the virtual CD, boot it, press F6, and add "priority=critical locale=en_US url=http://bit.ly/uquick" (Thanks Dustin!) so it looks like
[screenshot: 2-orchestra-bootoptions]
The uquick profile answers all the installer questions, such that the installation is fully automatic. Since the VM contains two NICs however, we'll need to select a primary one (eth0 in my case)
[screenshot: 3-orchestra-whicheth]
The installation runs like a champ, fully automated. Give it a few minutes till it finishes everything and reboots into the server OS (oh, that was easy!)
[screenshot: 4-orchestra-login]
Now I configure eth1 to have a static IP address of 192.168.77.1/24 (I just made an address up); here is a snapshot of /etc/network/interfaces, and I started eth1 using ifup
[screenshot: 5-orchestra-eth1up]
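
In case the snapshot doesn't render for you, the eth1 stanza looks roughly like this (a sketch matching the addresses above):

# /etc/network/interfaces (eth1 stanza)
auto eth1
iface eth1 inet static
    address 192.168.77.1
    netmask 255.255.255.0

sudo ifup eth1
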
At this stage, I rebooted the server but you definitely don't have to. Let's start actually installing Orchestra

sudo apt-get update
sudo apt-get install ubuntu-orchestra-server -y

Everything proceeds automatically; for any question you get during package installation, I'll provide a picture with the answer :)
[screenshot: 6-cobbler-password]
[screenshot: 7-nextserver]
[screenshot: 8-enable-dns-dhcp]
[screenshot: 9-dhcp-range]
[screenshot: 10-dhcp-gw]
[screenshot: 11-domain-name]

That's it! You've just installed and configured your first Ubuntu Orchestra server, and you're now ready to install a fleet of Ubuntu servers the easy way! In part 2 of this article, I'll go through creating a second server, PXE booting it and installing it from the orchestra server. (Extra credit: if you can't wait, try PXE booting a fresh server right now. Note that after installation, orchestra downloads and auto-imports a few Ubuntu mini ISOs, so it will need a few minutes depending on your internet connection speed)

So, what do you think of this coolness? Is this easier than the last time you tried building yourself an automated network installer? Shoot me a comment, let me know what you think

Monday, September 12, 2011

Torrent download Cloud appliance

The Why

A friend of mine, who's a Linux systems geek as well, was tasked with building a library of Linux distro ISOs. This involved downloading tens of ISOs, many of which were only offered in torrent form. Even if that were not the case, it would still be good practice to download such large binaries over torrent to avoid loading any single mirror too much. Anyway, we were chatting about it, and since where I live bandwidth (especially upload) is a scarce resource, he was considering paying some service to download the torrents he needed and convert them to HTTP!

I mentioned I could build something to do just that in about an hour! It wouldn't even be complex. Armed with Ensemble, I can very simply launch an EC2 instance, install rtorrent (my fav CLI torrent client) and rtgui (an rtorrent web UI) and have it ready to crunch on any of your torrenting needs. We both became interested in seeing how well that would work, and so here we go...

The How

I'll assume you already know how to get started with Ensemble. Let's see what it takes to deploy my torrent appliance

$ bzr branch lp:~kim0/+junk/rtgui
$ ensemble bootstrap
# Wait for ec2 to catch up (2~5 mins)
$ ensemble deploy --repository . rtgui
$ ensemble expose rtgui

That is basically all you need to "use" this appliance! Give another few minutes for the rtgui appliance to boot, install and configure itself. You can check status with

$ ensemble status 
2011-09-12 13:23:53,868 INFO Connecting to environment.
machines:
  0: {dns-name: ec2-50-19-19-234.compute-1.amazonaws.com, instance-id: i-2c37ee4c}
  1: {dns-name: ec2-107-20-96-125.compute-1.amazonaws.com, instance-id: i-1c28f17c}
services:
  rtgui:
    exposed: true
    formula: local:rtgui-9
    relations: {}
    units:
      rtgui/0:
        machine: 1
        open-ports: [80/tcp, 55556/tcp, 55557/tcp, 55558/tcp, 55559/tcp, 55560/tcp,
          6881/udp]
        relations: {}
        state: started
2011-09-12 13:24:01,334 INFO 'status' command finished successfully

The important bit to watch for is "state: started"; if it's anything else, the EC2 instance is still being configured. It's nice to note that ports 55556-55560 have been opened, since rtorrent is configured to use those ports; port 80 was opened for the web UI, and port 6881 UDP was opened for the DHT network. I am in no way a torrent expert, so this could be completely unoptimized, but hey, it seems to work
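
For the curious, the relevant bits of the rtorrent configuration look something like this (a sketch using standard .rtorrent.rc option names; the exact file shipped in the formula may differ):

# ~/.rtorrent.rc (excerpt, sketch)
# Listening ports for incoming peer connections
port_range = 55556-55560
# Upload cap in KB/s (tweakable from the web UI)
upload_rate = 100
# DHT support, over UDP port 6881
dht = auto
dht_port = 6881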

Ready to test? Machine 1 runs rtgui, so go ahead and visit it in a browser; for me that's http://ec2-107-20-96-125.compute-1.amazonaws.com/rtgui (replace that DNS name with the right one for your instance, and don't forget the trailing /rtgui like I always do). Click "Add torrent" and pass it the URL to a torrent file; I'm gonna be testing with the Ubuntu 11.10 beta1 amd64 torrent file. Once the torrent is added, click the green play button to start it. Since EC2 instances have quite some bandwidth available to them, this Ubuntu torrent downloaded in just a few seconds. I am shipping a default configuration with rtorrent that limits upload speed to 100KB/s (since you're paying for bandwidth), but you can change that from the web UI. Here's how the whole thing looks

[screenshot: the rtgui web UI]
Once a torrent file is downloaded, you can download it through http://ec2-107-20-96-125.compute-1.amazonaws.com/complete (again replace the machine DNS name, with the correct name in your case)
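
If you'd rather grab completed files from the command line, something like this should work too (a sketch, assuming directory indexing is enabled on /complete):

$ wget -r -np -nH http://ec2-107-20-96-125.compute-1.amazonaws.com/complete/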

A single torrent appliance is of course not limited to a single torrent! You can keep adding as many as you want; however, eventually you're going to hit some limit (disk IO, network IO, disk space ...etc). As such (probably only if you're downloading a really large number of torrents), you may need to "scale up" this torrent download appliance (well, it's a cloud for God's sake!). If that's what you wish for, you only need to
$ ensemble add-unit rtgui

Simple as that, as with everything Ensemble! So now you know how you can download your 11.10 copy without loading Ubuntu's servers; actually, you'd be helping them and the millions of Ubuntu users if you use this method on release day. Once you're done playing with the appliance, you need to destroy it (to stop paying Amazon for the machines)

$ ensemble destroy-environment
WARNING: this command will destroy the 'sample' environment (type: ec2).
This includes all machines, services, data, and other resources. Continue [y/N]y
2011-09-12 13:53:33,018 INFO Destroying environment 'sample' (type: ec2)...
2011-09-12 13:53:36,641 INFO Waiting on 2 EC2 instances to transition to terminated state, this may take a while
2011-09-12 13:54:18,617 INFO 'destroy_environment' command finished successfully

Want to improve it?

Things I wish I had time to improve:
  • Once a file is downloaded, upload it to S3. You can then terminate the appliance, and still download the files at your own pace (see the sketch after this list)
  • Parameterize the rtorrent rc configuration file, such that you can pass it parameters from Ensemble (such as upload rate ...etc)
  • Integrate notification upon download completion (SMS me, email me, IM me ...etc)
  • Add an auto-redirect to /rtgui :)
  • Figure out a way to download completed files from within the rtgui web UI
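
For the first item above, a minimal sketch could be as simple as a periodic s3cmd sync (assuming s3cmd is installed and configured on the instance; the bucket name and download directory are made-up placeholders):

#!/bin/bash
# Push finished downloads to S3, so the appliance can be terminated
# early while you fetch the files at your own pace.
COMPLETE_DIR="/var/lib/rtorrent/complete"
BUCKET="s3://my-torrent-drop"
s3cmd sync "$COMPLETE_DIR/" "$BUCKET/"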

If you're interested in improving that appliance, drop by in #ubuntu-ensemble on IRC/Freenode and ping me (kim0) or any of the friendly folks around.

What kind of skills do you need to hack on that project? Just bash shell scripting fu! The feature I love most about Ensemble as a cloud orchestration tool is that it doesn't twist you into using some abstracted syntax. You get to write in whatever language you feel like using; for me that's bash. You can find the script that does all of the above right here.

Interested in learning more about Ensemble and automating Ubuntu server deployments in the cloud or on physical servers?
Want to hack on this torrent appliance, or do something similar?
Have comments or a better idea?

Let me know about it, just drop me a comment right here! You can also grab me (kim0) over Freenode irc

Sunday, August 21, 2011

Battling Hunger in the Horn of Africa

Hungry children in the Horn of Africa

A humanitarian crisis has slowly unfolded in the Horn of Africa. Drought, conflict, and rising food prices have affected more than 13 million people in the region. On 20 July, famine conditions were declared in several southern regions of Somalia. The Food Security and Nutrition Analysis Unit (FSNAU) forecasts that famine conditions will spread if humanitarian assistance does not increase. In response, WFP is planning to feed over 11.5 million people, including 3.7 million people in Somalia, 3.7 million in Ethiopia, and 2.7 million in Kenya.

Restricted aid access

Access to some vital areas is restricted for humanitarian aid organizations. The hatched area on the map shows areas in which some aid organizations are unable to work, including the places where people are most in need of assistance.

Operational efficiency

The figure of USD 0.50 per person per day is based on the average combined daily costs of the World Food Programme's operations within Somalia, Ethiopia, and Kenya, as well as the number of people reached by those efforts.


If you cannot see the embedded map above, click here: http://horn.wfp.org/main.html

Save a child today!