Friday, December 31, 2010

Cloud Community Flash, Adnane

As this is my last post of 2010 (Hello 2011!) I wanted it to be special, so I would like to thank every member and contributor of the Ubuntu cloud community, the whole Ubuntu community, and more generally Linux and the rest of the open-source world!

Let's meet Adnane Belmadiaf. Adnane is a committed Ubuntu community member who has been rocking for the past few months, doing some great work on the Ubuntu cloud portal. I asked him for a few words about himself, and here's what he had to say


AB: Hi, I'm Adnane, a 22-year-old Ubuntu user based in Morocco. I was born to be a Web developer (yes, I am a CSS ninja XD) and have been a happy Linux user since 2008. I started actively contributing in 2009, participating in many Ubuntu projects such as the Ubuntu Manual Project, the LoCo Directory, and the Ubuntu Cloud Portal. It has been a very good experience for me because I have learned, and am still learning, a lot from others! The Ubuntu community is something that I'm very passionate about. It's just awesome to see how it grows and evolves. It's an inspiring environment to be part of!

AK: What keeps you motivated to contribute to Ubuntu?

There are a lot of things that help me stay motivated, but first and foremost it's the great people I am working with; the atmosphere is always fun and everybody around you is there for support. You know that you're not alone. There is also the appreciation for the work I have done - when a project is finished, there is that quite nice "Thank You" that keeps me motivated all the time. I'm constantly challenged to learn new things, and I simply enjoy coding and solving problems. I would tell everyone who wants to get involved in the community that patience and good work are the keys to success. It's been almost one year since I joined the community and the results are quite surprising. I now think I am ready to apply for an Ubuntu membership, which I am currently pursuing. Wish me luck :)

Thursday, December 30, 2010

Cloud Instance with Cloud-Init on KVM

I wanted to run an Ubuntu server cloud instance locally on the KVM hypervisor. I also wanted to run cloud-init on the local setup in order to experiment with it a bit. That means downloading a UEC image, booting it under KVM, and passing cloud-init parameters to it as it boots. Much to my surprise, things were far easier than I expected, all thanks to our rocking Ubuntu cloud team. Here's the script I cobbled together; most of it is basically ripped off from this UEC Images wiki page.

So what this script does is:
  • Downloads a daily natty server i386 UEC image, if it doesn't already exist
  • Sets a few variables
  • Creates a qcow disk image, shadowing/cowing/differencing the downloaded image. This is to keep the originally downloaded image pristine
  • Downloads some sample user-data and meta-data (warning, this runs arbitrary commands and injects keys inside your VM, only use for testing!). To experiment with cloud-init you'd have to modify the user-data to your liking
  • Runs a local simple webserver (port 8000)
  • Boots the cow image in a local kvm. As it boots, you'll notice on your terminal the following two requests being made "GET /meta-data" and "GET /user-data". Cloud-init uses this data to customize the image as it boots
  • Once you close kvm, kills the web-server
So all you need to do is create an empty directory, put this script in it, and run it
#!/bin/bash
# Download the daily natty server i386 UEC image (if not already present)
uecnattyservercurrent=http://uec-images.ubuntu.com/server/natty/current/natty-server-uec-i386.tar.gz
tarball=$(basename ${uecnattyservercurrent})
[ -f ${tarball} ] || wget ${uecnattyservercurrent}
# Unpack the tarball, recording the list of extracted files
contents=${tarball}.contents
tar -Sxvzf ${tarball} | tee "${contents}"
# Derive the image, kernel and floppy names from the extracted files
base=$(sed -n 's/\.img$//p' "${contents}")
kernel=$(echo ${base}-vmlinuz-*)
floppy=${base}-floppy
img=${base}.img
# Create a qcow2 copy-on-write image so the downloaded image stays pristine
qemu-img create -f qcow2 -b ${img} disk.img
# Grab the sample seed data (warning: runs arbitrary commands, testing only!)
wget http://smoser.brickies.net/ubuntu/uec-seed/meta-data
wget http://smoser.brickies.net/ubuntu/uec-seed/user-data
# Serve the current directory over HTTP on port 8000
python -m SimpleHTTPServer &
websrvpid=$!
# Boot the image; cloud-init fetches meta-data/user-data from our webserver
kvm -drive file=disk.img,if=virtio,boot=on -kernel "${kernel}" -append "ro init=/usr/lib/cloud-init/uncloud-init root=/dev/vda ds=nocloud-net;s=http://192.168.122.1:8000/ ubuntu-pass=ubuntu"
# Once kvm exits, kill the webserver
kill ${websrvpid}
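
If you'd rather not use smoser's sample seed (remember the warning above), you can write your own two files instead. Here's a minimal sketch of what they might contain; the instance-id, hostname and command are made-up values, and this assumes the image's cloud-init understands cloud-config user-data:

cat > meta-data <<'EOF'
instance-id: iid-local01
local-hostname: cloudtest
EOF

cat > user-data <<'EOF'
#cloud-config
# runcmd entries run once at first boot -- testing only!
runcmd:
 - echo "hello from cloud-init" > /etc/motd
EOF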

Monday, December 27, 2010

Ubuntu Cloud Tech Content

I had sent out calls to the Ubuntu cloud community asking what kind of technical content or training materials (tutorials, screencasts, etc.) they were interested in seeing. I got quite a bit of feedback, which boils down to the following top 10 topics
  • Modifying Ubuntu images and rebundling to EC2
  • Creating images from scratch with vmbuilder
  • P2V and V2V conversions (Physical, VirtualBox, VMware...)
  • Advanced cloud-init (custom handlers, multi-server, includes)
  • Provisioning and deploying Web applications (e.g. rails, django) to the cloud
  • Best practices for upgrading a server install cross-release
  • Load balanced LAMP multi-tier installation
  • Best practices around creating cloud instance snapshots
  • Backing up live cloud instances
  • Restoring cloud servers from backup/snapshots

Now, what better way to spend your holidays than hacking together content addressing those needs! Yup, nothing beats helping your fellow Ubuntuians :) So if you feel like contributing content on any of the above, drop me a comment or shoot me an email (kim0 AT ubuntu.com). If you'd like to contribute some other content, do grab me as well! If you're unsure, or want to talk about how you can get involved (there's always a way), tune in to the weekly cloud community hour

Wednesday, December 15, 2010

Ubuntu Cloud Community Meeting

Just a quick update: the weekly cloud community meeting time has been changed to 6pm UTC, so that the US is awake. Everyone is invited, it's a free party! Once in the meeting, we'll all be helping each other out, trying to answer questions, having discussions, agreeing, disagreeing, but most certainly having fun :) If you're not sure whether or not you should attend, trust me, you should. It's a free-form meeting where anything goes! So come along and bring your friends.

More details on how to join are at: https://wiki.ubuntu.com/UbuntuCloudMeeting

Monday, December 13, 2010

Fix Eucalyptus for Natty yourself

In a previous post I mentioned how the Ubuntu server community needs your help fixing some broken packages on the next shiny Ubuntu release, "Natty". I wanted to see just how difficult it could be to actually fix those bugs, so I decided to go crunching on a couple. Much to my surprise, fixing those bugs turned out to be much easier than I thought. I created a couple of patches, mostly just adding one parameter or moving things around to make the build process happy, and submitted the branches for review; et voila, they get reviewed and merged, and the bug attached to your branch gets automatically closed. Quite an easy process indeed! Once the bug closes, you get an exceptionally warm fuzzy feeling that you just helped the world and contributed to making millions of Ubuntu users happy!

Which brings us to this post: I see the Eucalyptus package is broken on Natty as well. The list of remaining packages is at: http://tinyurl.com/server-ftbfs As can be seen, the Eucalyptus packages are "High Importance", so this is quite a significant contribution from anyone who cares about helping the cloud community (or the server community in general). I would like to encourage you to start fixing those bugs; if you'd like help, the #ubuntu-motu channel should be very helpful
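
If you're curious what the mechanics look like, the branch-and-propose loop is roughly the following sketch (the bug number and Launchpad id are placeholders):

bzr branch lp:ubuntu/natty/eucalyptus
cd eucalyptus
# hack on the packaging until the build is happy, then record the fix
bzr commit -m "Fix FTBFS (LP: #123456)"
bzr push lp:~your-launchpad-id/ubuntu/natty/eucalyptus/fix-ftbfs
# finally, propose the branch for merging from its Launchpad page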

Friday, December 10, 2010

Help Fix Ubuntu Server Packages FTBFS

With every transition to a new Ubuntu release, certain packages break for various reasons, such as newer versions of components, an updated tool-chain and so on. Since we all love and care about Ubuntu Server, here is a list of Ubuntu server packages that currently FTBFS (Fail To Build From Source)
http://tinyurl.com/server-ftbfs

This means that those packages are not compiling and building their intended binary packages on Natty. This is a great way for you to get involved! If you know how to build packages from source code (the trio ./configure; make; make install) and can debug things like missing development packages, missing headers, etc., then this will be a walk in the park for you :) Everyone's favorite Daniel Holbach is working on a proper guide for fixing these things; for now, you can find lots of useful information at
https://wiki.ubuntu.com/Bugs/HowToFix
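
As a rough sketch, the local build-test loop for a failing package looks something like this (the package name is a placeholder):

sudo apt-get build-dep somepackage   # pull in the declared build dependencies
apt-get source somepackage           # fetch and unpack the source package
cd somepackage-*/
debuild -us -uc                      # attempt the build and watch where it breaks
# fix the missing header/flag/packaging bit, rebuild, repeat until it succeeds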

In case you need help, the channel #ubuntu-motu on freenode IRC is a good place to ask

Wednesday, December 8, 2010

Introducing Ubuntu Cloud-Init Technology

Ubuntu Cloud-init is an awesome piece of software that helps Ubuntu run as well as it does on the cloud. Cloud-init kicks in as the server boots, and starts converting your server from the generic template it was started from into the server image you need. Coupled with the easy-to-use cloud-config syntax, getting this rolling is quick and easy. Check out this screencast, where I introduce cloud-init and demo what it can be used for
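
To give a taste of that cloud-config syntax, here's a small illustrative sketch of the kind of user-data you might pass at instance launch (the package choice and command are made up, and assume a cloud-init recent enough to support these modules):

#cloud-config
# install a package and drop a file on first boot
packages:
 - apache2
runcmd:
 - echo "configured by cloud-init" > /var/www/index.html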



Questions? Comments? I'm all ears; leave me a note in the comments section and I'll reply back
Want to create your own screencasts (it's real easy)? Ping me (kim0 on IRC) and I'll be sure to help you publish them

Friday, December 3, 2010

Announcing Ubuntu Cloud Portal

Another rocking day for the Ubuntu Cloud community! The Ubuntu Cloud Portal has just been launched, hooray! The portal helps newcomers to the Ubuntu cloud community quickly find interesting information they may care about, such as documentation to read/edit, projects that may interest them, and so on. In this first release the following is available

Front page: Lists important news for the Ubuntu Cloud community, plus the latest tweets and happenings (RSS). The front page also features the widely anticipated (drum roll please...) AMI Locator application, which helps anyone using Ubuntu on the EC2 cloud quickly locate the AMI they need. Go ahead and bookmark this right now
Documentation: Quick links to useful documentation pages on the wiki. If you'd like to start contributing, let me know
Community: Pointers to where you should be, mailing lists you should subscribe to, forums and IRC rooms that should interest you
Developer: Once you're ready to start contributing, this page should list all open-source projects that relate to Ubuntu and the cloud. I have added quick links to locate code/bugs/community/features for every project
Planet: Collects every word written about Ubuntu Cloud on the world wide web (RSS). If you think your blog should be aggregated here, let me know!

This is only the first release, so I'm sure lots could be added. Please do ping me if you would like to help improve the portal in any way, or generally if you'd like to contribute to Ubuntu Cloud efforts. I've put together a quick video demo'ing the portal



I would like to thank the Ubuntu server and cloud teams for their help and guidance. I would also like to thank Adnane Belmadiaf (daker on IRC) for his continued contributions to this portal. Adnane is a 22-year-old Ubuntu user living in Salé, Morocco, where he works as a Web developer. He's a member of the Ubuntu Moroccan Team, and involved in different projects such as the Ubuntu Manual Project, the LoCo Directory, and the elementary project. It's that warm fuzzy feeling of contributing to Ubuntu that keeps its community rocking. Let me know what you guys think in the comments.

Ubuntu Cloud Forums a great place

The Ubuntu Cloud Forum is a great place to be. We launched the forum about 3 months ago, thinking it might not see much activity. Surprisingly, however (in a positive sense), we're getting tens of active threads per month, each with hundreds if not thousands of views! Thanks to everyone in the Ubuntu forums community for making this such a great place to be!

In case you're wondering, you can easily access the Ubuntu Cloud Forum at:
http://ubuntuforums.org/forumdisplay.php?f=392&order=desc

And if you're an efficiency superhero who wants the information to come right at your fingertips, you can follow the forums RSS feed at:
http://ubuntuforums.org/external.php?type=RSS2&forumids=392

The Cloud Forum is not only for people wanting to ask questions (although it's great for that!), it is also a great place to share experience and knowledge. If you have just set up UEC on a brand new server farm in some fancy way, or if you've built your infrastructure on top of Ubuntu on EC2 and you're proud of what you've built, chances are others, like yourself, are going to want to learn how to do the same. So join in and share the love

Wednesday, December 1, 2010

What Cloud sessions interest you

In case you didn't know yet, the Ubuntu cloud community is invited to a weekly gathering on IRC in #ubuntu-cloud. You can find more details here. Once inside, feel free to shoot any question you may have about Ubuntu Enterprise Cloud, or about running Ubuntu as a guest OS on top of a commercial cloud such as EC2 or otherwise. I also think it'd be a good idea to start hosting some technical sessions through those meetings. I am volunteering to start hosting, but please grab me if you'd like to host some sessions yourself. I'm reaching out to the community to identify topics you would be interested in discussing through those sessions

Leave me a comment mentioning what sort of topics you'd be interested in seeing discussed
Leave me a comment if you're willing to host one of those sessions
Leave me a comment if you're running an Ubuntu cloud setup and would like to share your experiences

You get the idea: leave me a comment, because every piece of feedback is valuable. Grab kim0 on IRC in #ubuntu-cloud, or shoot me an email at kim0 _AT_ ubuntu.com

Tuesday, November 30, 2010

Ubuntu Server in EC2 Cloud, Easy!

I am starting a series of screencasts to demonstrate various topics relating to running Ubuntu on the cloud, or as the cloud. The first video demos how easy it is to start your very first Ubuntu server in the Amazon EC2 cloud. If you ever wanted to play with Ubuntu server in the cloud and had any doubts, this video should put them to rest :)



If you think that is cool, and if you want to contribute your own, please grab me. It's real easy to create such screencasts, and I can help get you kick-started. And hey, you'd be helping the Ubuntu community, and on your way to becoming an online celebrity; what's not to like! If you'd like to follow similar screencasts, subscribe to this youtube channel

Let me know in the comments what you would like to see in future screencasts, or whether certain topics interest you. If you are using Ubuntu server in the cloud professionally, I'm very interested to hear from you. Grab me (kim0) on #ubuntu-cloud on IRC (freenode) for a chat. Awaiting the flood of screencasts :)

Wednesday, November 24, 2010

Ubuntu Cloud Q+A weekly meeting

The Ubuntu cloud community is coming together today [3pm UTC/GMT, 7am Pacific, 10am Eastern] for its first weekly Q+A meeting. If you use Ubuntu as a guest OS on a public cloud, or if you have built your own private cloud infrastructure on top of Ubuntu, or even if you're just interested in any of that cloud babble, please do join us in this first meeting for a great chance to connect with other users and developers of Ubuntu cloud technology. Through those online meetings you will get a chance to connect with the rest of the Ubuntu cloud community, share experiences, ask questions, find areas that interest you, and perhaps start contributing to them

Information on how to connect and details can be found here

Monday, November 22, 2010

Ubuntu Cloud Screencasts Volunteers

Interested in the Ubuntu cloud community? Want to help? Awesome! Here is your chance

Screencasts are a great way to introduce newcomers to something new. I always find it helpful to view a couple of short videos to "get a feel" of thingX before I actually start reading about and working on it. That's why I'd like to start a screencast series introducing running Ubuntu in the cloud. The target is to start with simple stuff (no voodoo here, sorry) in order to demo how simple running Ubuntu in the cloud really is. Of course this can grow into a gigantic series, but for starters I'd like to focus on basic and very common use cases. Here are a few casts I would like to begin with:
  • Creating your first Ubuntu server in the cloud (GUI, CLI or both)
  • Introducing Ubuntu Cloud-Init technology
  • Customizing (Re-bundling) available Ubuntu images (AMIs)
  • Launching a LAMP app on the cloud
  • Backing up your Ubuntu LAMP cloud instance
  • Creating and Load Balancing a multi-tier LAMP app
This list is by no means set in stone :) It is a dynamic list that will change according to feedback. Feel free to join the ubuntu-cloud mailing list at https://lists.ubuntu.com/mailman/listinfo/Ubuntu-cloud to discuss and change those topics.

If you're interested in recording any of those casts, please do shout at me! You can email me at kim0 [AT] ubuntu.com or grab me for a chat in the #ubuntu-cloud IRC channel on Freenode

If you're new to all this cloud stuff and would like to see a screencast covering a certain topic, please let me know in the comments (or email, mailing list, IRC...). If there's some demand for a specific topic, I'll try to cover it. Of course, if you can contribute and cover it yourself, that would be awesome indeed. After all, it's all about the community. Those wanting more information about recording screencasts can read more here

Awaiting the flood of excited contributors :)

Thursday, November 18, 2010

Cloud Computing 101, p2

Continuing my part-1 post about cloud computing basics, this second post defines the different types of "clouds", as well as what you gain and lose by using them

If you look at a cloud solution, it's really a bunch of software layers stacked on top of each other. You have the hardware (servers, disks, switches, routers), bare-metal operating systems, hypervisors, virtual servers, and inside those you have programming languages (python, java, php), development frameworks, database servers, and your own business logic code living on top! Clouds are categorized as either IaaS, PaaS or SaaS. The type of cloud is basically defined by which layers of the stack the cloud abstracts away from you, and which layers you "own" and control. Another categorization scheme is private, public and hybrid clouds. Let's take a quick tour of what each of those cloud types means

IaaS is Infrastructure as a Service. The cloud abstracts away as little as possible. Basically, the cloud provides you with virtual servers, networking and storage, and that's it. You use those building blocks just as you would in any physical datacenter to build your own compute infrastructure. The only difference is that you don't worry about how the servers are powered or cooled, what brand of disk or SAN is used, etc. All you care about is your provider's SLA, as mentioned in part-1. Other than that, it's business as usual

PaaS is Platform as a Service. The cloud abstracts away the infrastructure and then some. The cloud in this case is no longer composed of virtual servers and disks; it is rather a "development framework". When you write code, you are coding against the platform, against the cloud itself. A PaaS cloud, assuming you're creating a web application, would tell you how to route requests to your handlers and how to write code to handle specific requests, would provide an API for storage, and would perhaps provide an API for a database (SQL or noSQL doesn't matter here). Your application code is written against the API of the cloud. As such, you have no idea about "low level" details such as networking, IP addresses, failed servers, or even the number of virtual servers running your code! So essentially you upload your code archive and it just runs on the cloud, no questions asked

SaaS is Software as a Service. In this case, you're only using software running on the cloud that someone else has written. If you've used Facebook, Gmail, LinkedIn, Google Docs, SalesForce, etc., that's SaaS. In essence the service you're getting is the actual final "application" you need. This is the highest level of abstraction. You do not concern yourself with infrastructure, nor with code to build an application. You pay to use the application itself, and the SLAs you get cover the application's availability and your data's availability

Which type of cloud suits you best is a question that needs some thought, and that depends on one's set of requirements. IaaS clouds provide the least abstraction and the most control! They are a good first step to migrating off-the-shelf software to the cloud and benefiting from cheap, on-demand, elastic infrastructure. Since they provide the least abstraction, if you'd like a scalable infrastructure, you have to do all the work yourself. It is generally not so painful to migrate from one IaaS cloud to another. PaaS clouds, however, since they provide higher levels of abstraction, are much easier to manage and scale. They essentially auto-scale, delivering the holy grail of cloud computing. However, the big price is that you generally have to rewrite your application for the particular cloud platform. Not only is that painful, but it also may lock you in to the cloud vendor, making it extremely hard to change vendors afterwards. Which is why I think the open-source world needs great open-source PaaS cloud frameworks (have a favorite? drop me a line in the comments section). If a SaaS application meets your needs at a good price point, then the only potential disadvantages would be data lock-in, as well as the (in)ability to mash up the SaaS application with other tools. A good piece of advice here is to choose SaaS applications that provide full API access to your data, such that you can easily pull off all data and meta-data should you need to.

A different categorization of clouds is private vs public. Private simply means that the cloud infrastructure is built in-house, behind the firewall. For example, you could turn your corporate datacenter into a private cloud. The benefits are that you gain better efficiency and datacenter utilization across different departments, as well as being able to provide an elastic and fast response to your enterprise's departmental IT needs. Should you want to start playing with a private cloud solution, Ubuntu Enterprise Cloud is a good start. Public clouds pertain to a cloud run by a third-party service provider. A public cloud is either IaaS, PaaS or SaaS, or even a mix of some. Why would you want to migrate some workloads to a public cloud? Simply because public cloud vendors, due to their economies of scale, are able to provide equivalent if not better service at a significantly lower price point, coupled with the ability to instantly grow. A hybrid cloud, on the other hand, is a private cloud that can "burst" to a public cloud when its resources are exhausted. The goal is to bring the best of both worlds: the control and data-security of private clouds with the elasticity and economies of large-scale public clouds. More and more workloads are being migrated to the cloud, and it's all just starting.

Has your organization migrated some workloads to the cloud already, or are you planning to? Are you planning on building your own private cloud? Please let me know in the comments; let me know the motivations and the challenges you faced. If you have any questions in general, let me know as well

Friday, November 12, 2010

Show Off Ubuntu Desktop on Cloud

Want to show off your Ubuntu desktop in the cloud? Perhaps you want to demo it to some Windows or OSX friends. Perhaps new users at your LoCo event want to play with Ubuntu for a bit. Well, look no further. In this article I will create an Ubuntu Maverick 10.10 desktop in the Amazon EC2 cloud and connect to it using the x2go terminal server, which leverages the excellent NX remote display libraries

Start by launching the following AMI (ami-1a837773). I chose the official Ubuntu 32-bit AMI so that we can run it on an m1.small instance. If you're not sure how to launch this instance, you might want to review my point-n-click guide. After launching the instance and logging in, I do my customary

ssh ubuntu@xxxxx   #replace with your instance's public dns name
sudo -i
screen
apt-get update && apt-get dist-upgrade -y

Let's install x2go terminal server
# gpg --keyserver wwwkeys.eu.pgp.net --recv-keys C509840B96F89133
# gpg -a --export C509840B96F89133 | apt-key add -
# echo "deb http://x2go.obviously-nice.de/deb/ lenny main" >> /etc/apt/sources.list
# apt-get update
# apt-get install x2goserver-home

Optional step: Switch system to libjpeg-turbo

I like to break my Ubuntu system by installing unsupported software, so I will be switching the system's default libjpeg to a newer variant that utilizes your CPU's SIMD instruction set to provide better performance. Since connecting to a desktop remotely heavily utilizes jpeg compression, I suspected this step would provide a performance boost. It is however not recommended, especially to someone who wouldn't be comfortable fixing his system from the console only. You need to do the following on the EC2 server and on your own system. I am assuming 32-bit systems; you can find 32/64-bit versions here
# wget 'http://sourceforge.net/projects/libjpeg-turbo/files/1.0.1/libjpeg-turbo_1.0.1_i386.deb/download' -O libjpeg-turbo_1.0.1_i386.deb
# dpkg -i libjpeg-turbo_1.0.1_i386.deb
Selecting previously deselected package libjpeg-turbo.
(Reading database ... 25967 files and directories currently installed.)
Unpacking libjpeg-turbo (from libjpeg-turbo_1.0.1_i386.deb) ...
Setting up libjpeg-turbo (1.0.1-20100909) ...

# ls -l /usr/lib/libjpeg.so.62
lrwxrwxrwx 1 root root 17 2010-11-12 12:35 /usr/lib/libjpeg.so.62 -> libjpeg.so.62.0.0
# rm -rf /usr/lib/libjpeg.so.62
# ln -s /opt/libjpeg-turbo/lib/libjpeg.so.62.0.0 /usr/lib/libjpeg.so.62
End-Of-Optional-Step

Install the Ubuntu desktop itself (The GUI)
apt-get install ubuntu-desktop
This takes a good 10-15 minutes, after which your system is ready. Grab yourself a favourite x2go client here. Send your friends links to the Windows and OSX clients and let them see the light :) In my case I just used my Ubuntu system to connect remotely, so I added the same repo we added before and installed "x2goclient", which is a Qt4 client.
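
In case you want the exact client-side steps, it's roughly the same dance as on the server; a sketch, assuming you import the repository key the same way as above:

echo "deb http://x2go.obviously-nice.de/deb/ lenny main" | sudo tee -a /etc/apt/sources.list
sudo apt-get update
sudo apt-get install x2goclient

Here are the settings I used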

x2go-client-settings

I am using my ssh key to log in to the Ubuntu virtual desktop. If you're using Win/OSX and perhaps don't want to use the ssh key, reset the Ubuntu user's password and connect using the password. Once connected, we see our familiar and beautiful Ubuntu desktop

ubuntu-desktop-x2go-ec2

I was pleasantly surprised to hear the drum-beating sound of Ubuntu booting! Wow! That was just awesome. x2go uses pulseaudio to remotely connect and bring audio right to your desktop. I could also easily forward my local files to the instance in the cloud. Anyone already using an Ubuntu desktop in a cloud? Let me know about it! What kind of use cases would you use such a setup for? If you have some fancy setup, let me know about it as well

Wednesday, November 10, 2010

OpenStack dev env on EC2

Just as I previously blogged about running your own UEC on top of EC2 (cloud on cloud), here is another cloud-on-cloud post showing you how to run an OpenStack compute development environment on top of EC2. All of the heavy lifting is really done by the awesome novascript! I started by launching Ubuntu server 10.10 64-bit (ami-688c7801) on an m1.large instance. If you're not sure how to get this done, please check my visual pointnclick guide to launching Ubuntu VMs on EC2

Once ssh'ed into my Ubuntu server instance, I fire off an update
sudo -i
apt-get update && apt-get dist-upgrade

Let's see the available ephemeral storage
root@ip-10-212-187-80:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             9.9G  579M  8.8G   7% /
none                  3.7G  120K  3.7G   1% /dev
none                  3.7G     0  3.7G   0% /dev/shm
none                  3.7G   48K  3.7G   1% /var/run
none                  3.7G     0  3.7G   0% /var/lock
/dev/sdb              414G  199M  393G   1% /mnt

As you can see, /mnt is auto-mounted for us. We don't really need this. For nova (the OpenStack compute component) to start, it needs an LVM volume group called "nova-volumes", so we unmount /mnt and use sdb for our LVM purposes

# umount /dev/sdb
# apt-get install lvm2

root@ip-10-212-187-80:~# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created
root@ip-10-212-187-80:~# vgcreate nova-volumes /dev/sdb
  Volume group "nova-volumes" successfully created
root@ip-10-212-187-80:~# ls -ld /dev/nova*
ls: cannot access /dev/nova*: No such file or directory
root@ip-10-212-187-80:~# lvcreate -n foo -L1M nova-volumes
  Rounding up size to full physical extent 4.00 MiB
  Logical volume "foo" created
root@ip-10-212-187-80:~# ls -ld /dev/nova*
drwxr-xr-x 2 root root 60 2010-11-10 10:27 /dev/nova-volumes

I had to create an arbitrary volume named "foo" just to get /dev/nova-volumes created. If there's some better way, let me know, folks. Let's go check out the novascript. You need to do that somewhere that has more open permissions than /root :) so /opt is perhaps a good choice

# cd /opt
# apt-get install git -y
# git clone https://github.com/vishvananda/novascript.git
Initialized empty Git repository in /opt/novascript/.git/
remote: Counting objects: 121, done.
remote: Compressing objects: 100% (114/114), done.
remote: Total 121 (delta 42), reused 0 (delta 0)
Receiving objects: 100% (121/121), 16.62 KiB, done.
Resolving deltas: 100% (42/42), done.

From here, we simply follow the novascript instructions to download and install all components
# cd novascript/
# ./nova.sh branch
# ./nova.sh install
# ./nova.sh run

Watch huge amounts of text scroll by as all components are installed. The final "run" line starts a GNU screen session with all nova components running in screen windows. That is just awesome! For some reason though, my first run was unsuccessful. I had to detach from screen and ctrl-c kill it. I then tried starting the nova-api component manually, which did work fine! I then tried to run the script again, and strangely enough this time it worked flawlessly. Probably just an initialization thing. Thought I'd mention this in case any of you face this issue. Here's what I did, which you may or may not have to do
./nova/bin/nova-api --flagfile=/etc/nova/nova-manage.conf
# ./nova.sh run   # works this time .. duh

Almost there! Nova's components are now running inside screen. You're dropped into screen window number 7. From there we proceed to create some keys, launch a first instance, and watch it spring to life

# cd /tmp/
# euca-add-keypair test > test.pem
# euca-run-instances -k test -t m1.tiny ami-tiny
RESERVATION     r-yehvnkwa      admin
INSTANCE        i-3fxfo2        ami-tiny        10.0.0.3        10.0.0.3        scheduling      test (admin, None)      0               m1.tiny 2010-11-10 10:50:27.337898                      
# euca-describe-instances
RESERVATION     r-yehvnkwa      admin
INSTANCE        i-3fxfo2        ami-tiny        10.0.0.3        10.0.0.3        launching       test (admin, ip-10-212-187-80)  0               m1.tiny 2010-11-10 10:50:27.337898                      
# euca-describe-instances
RESERVATION     r-yehvnkwa      admin
INSTANCE        i-3fxfo2        ami-tiny        10.0.0.3        10.0.0.3        running test (admin, ip-10-212-187-80)  0               m1.tiny 2010-11-10 10:50:27.337898

Let's ssh right in
# chmod 600 test.pem
# ssh -i test.pem root@10.0.0.3
The authenticity of host '10.0.0.3 (10.0.0.3)' can't be established.
RSA key fingerprint is ab:96:c3:ee:22:84:28:2f:77:ad:d9:a9:52:63:7c:f9.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.0.0.3' (RSA) to the list of known hosts.
--
-- This lightweight software stack was created with FastScale Stack Manager
-- For information on the FastScale Stack Manager product,
-- please visit www.fastscale.com
--
-bash-3.2# #Yoohoo nova on ec2

Once you detach from screen, all nova services are killed one by one to clean things up. With this setup, you can immediately hack on the code, then re-launch the nova components to see the effect. You can use bzr to update the codebase and so on. In case you're wondering whether this works on KVM on your local machine, it does, beautifully! Of course, instead of the LVM setup on the ephemeral storage step, you'd have to pass a second KVM disk to the VM (a sketch of that follows below). Other than that, it's about the same. How awesome is that. Let me know if you have any questions or comments; also feel free to jump on IRC in #ubuntu-cloud and grab me (kim0). Have fun
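
For reference, that local-KVM variant might look roughly like this (image names and sizes are made up):

# create a second disk and hand both disks to the VM
qemu-img create -f qcow2 nova-disk.img 20G
kvm -m 2048 -drive file=ubuntu-server.img,if=virtio -drive file=nova-disk.img,if=virtio
# inside the VM the second disk appears as /dev/vdb:
#   pvcreate /dev/vdb && vgcreate nova-volumes /dev/vdb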

Friday, November 5, 2010

Cloud Computing 101

I get asked every now and then: what is this cloud thing, why is it cool, why is everyone talking about it, why should I care! As you can see, that's a lot of whys! In this post I attempt to put an end to those whys with some answers. Most of the blogosphere around cloud computing gets caught up in fine details and the latest bits and pieces of technology, while ignoring newcomers who are not quite sure why everyone is so hyped about cloud to begin with. I hope to help newcomers gain a better view of what the fuss is all about

Let's assume for a moment that you're called Jim and you're the IT manager at a fictional organization. Your boss walks in and tells you the development team is ready to deploy their ultra-scalable video sharing web application. It's very hard to determine how well the market will accept the new web app; we could be the next YouTube, or we could have a much harder start. We estimate we'll need anywhere between 10 and 100 servers for the first couple of weeks, and anywhere between 1 and 50 terabytes of storage, depending on market demand. The boss opens the door ready to leave, then he turns around and tells you: can you please have that ready by the end of this week! Talk about poor management in this hypothetical company; in reality things are not that bad, well, in many cases not much better either. So, if you were Jim, you would now probably be thinking of ways to end your life, or at least you'd be writing a farewell email! With the advent of cloud computing, however, you have other options. You can snap your fingers and have a hundred servers created, snap them again and have 50TB of storage appear right next to them, ready to serve you. If you think that's more "magical" than the iPad launch, you would be right, although I'm sure Steve Jobs would disagree. That magic is what hypes many IT people about clouds! Well, technically, instead of snapping your fingers, you'd perform an API call to a cloud provider. That means you either run a command or click a button in some management tool, and those resources spring to life! Can you already feel how enabling this cloud thing is!

Cloud is called "cloud" because you don't really know what's inside it or how it is built. A cloud icon was, and is, the standard representation of the "Internet", or a remote network that you don't really care about or don't control. It is in essence a black box to you, with traffic going in and coming out the other end. You should not know or care how it is built, and you're not involved in its daily operation. It provides you "services" that you use. In Jim's case, those services were large numbers of servers, storage and of course networking (you still need a way to access those remote resources anyway!). Jim requested his 50TB of storage and got them; he does not really know what the physical backing store for this storage service is. Are those terabytes of storage (which are holding his company's most precious data) living on fiber-connected high-end SAN storage, a low-end SAN, or a NAS filer? How are the servers accessing the storage network: is it high-performance InfiniBand? Fiber connections? iSCSI? AoE? Lots of options, but Jim doesn't really know. Whether or not he cares is a different story. I would say he should not care about the technology used to build the solution; however, he should care about the SLA his money is buying him. i.e. When you buy storage you are not only buying capacity, you're also buying redundancy and performance. Which is probably why many IT people care about the brand of the SAN storage, and the server-to-storage connectivity, to begin with. What they really care about is "Is my data safe on that storage?" and "Will that storage deliver the performance I need?". They really care about the SLA the cloud services provider is able to achieve

A common misconception about "cloud" is that cloud equals virtualization. This is not really true. You could very well build a cloud solution that does not use virtualization; instead, a physical server would be powered on, PXE booted, deployed, and made ready to serve you! It would probably end up being too expensive and inflexible, with limited billing options, but it would not be impossible to build. That's why most commercial cloud vendors end up using some kind of virtualization technology as a building block in their "cloud compute service", i.e. the CPU and memory cloud layer. Virtualization is a neat trick to split up a physical server into multiple virtual machines, each running its own operating system and each having its own completely separate software stack. It enables the cloud service provider to carve up different sizes of virtual servers from the underlying physical servers. As a cloud consumer, you end up paying for only the size your workload needs.

The reason why cloud computing usually ends up being compared to the electricity grid is that both provide you with on-demand services meeting a certain SLA, and you end up paying for only the amount you used. In the case of electricity, you don't care what equipment the electricity company is using to generate your current; you only care that it meets a certain SLA (say 220V, able to pump current of up to 100A, and being online 99.999% of the time). You could run your own generators, but it would be inefficient (expensive) to do so, it would be a hassle to keep everything running, it would require skilled workers keeping everything online, and it would not scale if you suddenly need more current! That is why most people do not run their own electricity generators and instead depend on the grid. However, with all the disadvantages mentioned, some businesses still choose to own and operate their own diesel engines for generating electricity, at least as a backup solution. Why that is the case is because those businesses are seeking more "security". They want to be in control; they don't want the electricity company to control such an important resource for their business. Everything mentioned so far about the electricity company applies to cloud vendors as well. Cloud vendors are the IT equivalent of the electricity grid. Running your own datacenter is the analogue of owning a diesel engine. Of course, almost every business nowadays owns, builds and operates its own datacenter (diesel engine). However, that might be changing rapidly; we're already seeing signs of workloads shifting into the cloud, which is what all the fuss is about. Cloud is the electricity grid of the IT world, and perhaps in the not too distant future it will be powering the vast majority of our personal and professional IT needs. Cloud is all about the commoditization of IT resources and services, coupled with a new business model for consumption, lowering the entry barrier for smaller businesses and helping them focus on their core competency instead of focusing on IT

I hope to have helped shed some light on the topic; I'll probably be writing a part-2 soon, touching on the types and key properties of a cloud as well as adoption barriers and compromises. I understand many "cloud people" disagree about what qualifies as a cloud, its definition, and, well, basically everything about cloud is debatable, so do let me know (politely :) if you disagree with any of the points mentioned. Let me know in the comments what you think are the key properties of a cloud

Update: Continue reading part 2

Tuesday, November 2, 2010

Egypt LoCo Maverick release party

The fun is everywhere :)

Toulan طولان
A link to the whole set
http://www.flickr.com/photos/maggieosama/sets/72157625277893568/show/

Ubuntu is free, fun and global!

Friday, October 29, 2010

Ubuntu Cloud Community needs You

"I'm interested in Ubuntu and the cloud, how do I get involved" is a question I got a few times already. I thought it would be a good idea to answer this as a blog post. I believe one of the very first things you'd want to do, is to make sure you're on the main communication channels, talking to the community, asking questions, seeing other questions being answered, trying to answer some yourself, sharing opinions and generally "connecting" with the rest of the community. That is a great first step. So I'll highlight the main communication venues for the Ubuntu cloud community, as well as way to get kick-started.

Places to be
  • Ubuntu Cloud Forums: while pretty young, there has been a pretty good stir in the forums. While IRC and mailing lists may be more focused on "asking questions", the forums are a great way to get in touch with other community members, and to share your experience building your private clouds: the hardware used, software configuration, tuning and optimization, challenges faced, etc. Come join in; whether you would like to ask questions or share opinions, tips or tricks, get on the forums and make some splash :)
  • The Ubuntu-Cloud mailing list is a great technical resource where most of the experts and developers are subscribed. For very technical discussions, questions, feature suggestions, RFEs, development discussions the mailing list is a great resource.
  • The EC2Ubuntu mailing list is a great resource that focuses on running Ubuntu in the Amazon EC2 public cloud. This list is active with a wealth of info on the topic
  • IRC chat has long been a primary real-time communication tool used by free software enthusiasts. The Ubuntu cloud IRC room is (surprise, surprise) #ubuntu-cloud on Freenode. Jump in and engage
Once connected, things you can do include playing with the latest technology, such as creating yourself a private UEC cloud, verifying that the latest features work as advertised, reporting and fixing bugs, suggesting features, and designing and implementing new projects to advance the state of Ubuntu on the cloud. While the community is very welcoming, I definitely understand we need to create better newcomer-friendly engagement paths, more hand-holding if you will. A better mentoring program from senior members, as well as identifying low-hanging fruit, are things the Ubuntu cloud and server communities need to improve to make it easier to attract and engage fresh talent

Wednesday, October 27, 2010

PointnClick guide to running Ubuntu in the cloud

My previous post about running UEC on EC2 drew some comments as being a fairly complex process. That may be true, because essentially you're attempting to hack together a configuration which is not entirely supported. Anyway, that led me to want to demo a "point-n-click" guide to running your first Ubuntu server in the EC2 cloud, i.e. no command line allowed, just point and click :) It doesn't get any easier than this, so let's hit it

Assuming you have set up an account with Amazon and can log in to the Amazon console, you should see
pointnclick-guide-ubuntinthecloud-1

Before we begin, let's set up a "key pair", which is basically an ssh public/private key pair that enables you to log in to your instance once launched. Click on "Key Pairs", then click "Create key pair"; I'm gonna name my key "ubuntu" and click create

pointnclick-guide-ubuntinthecloud-2

The key is promptly created and pushed for download through your browser. Let's proceed with another "preparatory" setup. By default, EC2's firewall denies all inbound traffic to the instance, which means you cannot even ssh into your instance. Let's open port 22 for ssh, and 80 just in case we wanna test by running an apache2 server. So, add your from-to ports and source IPs as per the next screenshot and "Save" it

pointnclick-guide-ubuntinthecloud-3

Great, now we're ready to actually launch an instance. Click the "EC2 Dashboard" link, then click the "Launch Instance" button; a wizard starts, asking which AMI we would like to use. An AMI is a template that will be copied to your instance and used to start it. Click "Community AMIs"; it may take a moment for the AMIs to load. Now here's a trick: to quickly "zoom in" on the official Ubuntu AMIs, use the search string "ubuntu-images/" to locate EBS-based images, and "ubuntu-images-us/" to locate instance-store-based images. This is in no way a "supported" feature of either Amazon or Ubuntu; it's just a convenient hack that works today and may not work tomorrow. We choose an EBS-based AMI because we plan on launching a micro instance, and those require EBS AMIs

pointnclick-guide-ubuntinthecloud-4

click the "Select" button beside it. Click continue and choose a t1.micro instance and click continue a couple of times

pointnclick-guide-ubuntinthecloud-5

pointnclick-guide-ubuntinthecloud-6

pointnclick-guide-ubuntinthecloud-7

You're now on the key-pair page; we simply choose our previously generated "ubuntu" key pair and click continue

pointnclick-guide-ubuntinthecloud-8

On the firewall configuration page, we choose the "default" security group, since this is what we had configured to open ports 22 and 80 previously

pointnclick-guide-ubuntinthecloud-9

Voila, we're ready. Review and confirm the settings on the page and click "Launch"

pointnclick-guide-ubuntinthecloud-10

The cloud starts deploying your virtual server and you get the following message

pointnclick-guide-ubuntinthecloud-11

In a minute your instance should be up and running. Let's locate it and log in to it. Click the "Instances" link, then click our only instance so far and locate its "Public DNS" entry, which allows us to ssh to it

pointnclick-guide-ubuntinthecloud-12

Now we're all set; let's jump to our terminal to ssh into the instance. It seems the "ubuntu.pem" key-pair the browser downloaded gets, by default, permissions that are too open for ssh's taste. Thus we need to "chmod" it to 700 (this is the part where I lied about not using any CLI, but hey, chmod doesn't really count ;). Let's then ssh straight into our instance using the ubuntu.pem key-pair
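
In shell terms, that amounts to the following (replace the hostname with your instance's Public DNS):

chmod 700 ubuntu.pem   # ssh refuses private keys with overly open permissions
ssh -i ubuntu.pem ubuntu@ec2-xx-xx-xx-xx.compute-1.amazonaws.com   # placeholder hostname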

pointnclick-guide-ubuntinthecloud-13

Awesome! We're in! Let's do what the greeting message tells us: "sudo tasksel --section server" and choose to install a LAMP server. We get asked for a MySQL password twice, and the system installs and configures a full LAMP stack for us. Let's visit the Apache web server

pointnclick-guide-ubuntinthecloud-14

That concludes this graphical guide. As you can see, launching your first Ubuntu server instance in the cloud can't really get much easier than this :) Before you go for a cup of coffee, do not forget to "Terminate" your instance. If you don't, you keep getting billed by the hour for it.

pointnclick-guide-ubuntinthecloud-15

Let me know in the comments if a step was unclear. Also, let me know if there are other topics you'd want me to demo. If you're interested in running Ubuntu in a cloud context and have any doubts, drop by the IRC channel #ubuntu-cloud on freenode and grab me, "kim0"

Friday, October 22, 2010

Cloud on Cloud, UEC on EC2

So you wanted to play with Ubuntu Enterprise Cloud (UEC), but didn't have a couple of machines to play with? Want to start a UEC instance right now? No problem. You can use an Amazon EC2 server instance as your base server to install and run UEC on! Of course the EC2 instance is itself a virtual machine, so running a VM inside it would require nested virtualization, which AFAIK wouldn't work on EC2. The trick here is that we temporarily switch UEC's hypervisor to qemu. Of course this won't win any performance competitions; in fact it'd be quite slow in production, but for playing around it fits the bill just fine.

If you're thinking doing all that is gonna be complex, you'd have a point, except it won't be! In fact it'll be very easy, thanks to the efforts of Ubuntu's always awesome Scott Moser. Scott has written a script that automates all the needed steps. But wait, it gets better: we're not even going to run this script ourselves. We're passing it as a parameter to the EC2 instance invocation, and thanks to Ubuntu's cloud-init technology, that script is going to be run upon instance boot-up, doing its work automagically. Now let's get started

On your local machine, let's install bzr and get the needed script

cd /tmp
sudo apt-get install bzr -y
bzr branch lp:~smoser/+junk/uec-on-ec2
cd uec-on-ec2/

The file "maverick-commands.txt" contains the script needed to turn the generic Ubuntu image on ec2 into a fully operational single-node UEC install. If you don't have "ssh keys" (seriously?) generate some

ssh-keygen -t rsa

Now let's do a neat trick to import the keys into EC2, marking them with the name "default"

for r in us-east-1 us-west-1 ap-southeast-1 eu-west-1; do ec2-import-keypair --region $r default --public-key-file ~/.ssh/id_rsa.pub ; done

Let's open a few needed ports in EC2's default security group

for port in 22 80 8443 8773; do ec2-authorize default -p $port ; done

Very well. We now launch our EC2 instance, passing in the "maverick commands" file. What happens is: the server instance is created and booted, Ubuntu's cloud-init reads the maverick-commands script we passed to it and executes it, and it starts downloading, installing and configuring UEC in the background while you ssh into your new EC2 instance

ec2-run-instances ami-688c7801 --instance-type m1.large -k default --user-data-file=maverick-commands.txt

Give it a minute to boot, then try to ssh in
ec2-describe-instances
ssh ubuntu@ec2-a-b-c-d.compute-1.amazonaws.com

Replace the DNS name for the EC2 instance with the proper one you get from ec2-describe-instances. Once logged into the EC2 instance, I start byobu and tail the log file to monitor progress. The whole thing takes less than 5 minutes

byobu
tailf uec-setup.log

Once the script has finished configuring UEC, it actually downloads a tiny Linux distro and registers its image in UEC, so that you can start your own instances! You know the script has finished when you see a line that looks like
emi="emi-FDC21818"; eri="eri-53721963"; eki="eki-740D19EC";

UEC is now up and running; let's check the web interface! You log in with the default credentials admin/admin

uec-on-ec2-1

Let's navigate to the Images tab to get the EMI ID (the equivalent of an AMI ID)

uec-on-ec2-2

Is that cool or what? Hell yes! Now let's start our own VM inside UEC, which is inside EC2. Remember to replace emi-FE03181A with the EMI ID you got from the web interface

euca-run-instances --key mykey --addressing private emi-FE03181A

You can use "euca-describe-instances" to get the new instance internal IP address and ssh to that

ubuntu@domU-12-31-38-01-85-91:~$ ssh -i euca/mykey.pem ubuntu@172.19.1.2
The authenticity of host '172.19.1.2 (172.19.1.2)' can't be established.
RSA key fingerprint is db:9b:47:a4:06:81:26:d7:cf:38:a4:0e:6c:05:54:0d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '172.19.1.2' (RSA) to the list of known hosts.

Chop wood, carry water.

$ uname -r
2.6.35-16-virtual
$ df -h                                                                                                                                                                                                                                     
Filesystem                Size      Used Available Use% Mounted on
/dev/sda1                23.2M     14.1M      7.9M  64% /
tmpfs                    24.0K         0     24.0K   0% /dev/shm

Et voila, you're ssh'ed into a ttylinux instance running inside qemu, managed by UEC, running over EC2 :) If you do find that cool, what about contributing back? How about you start hacking on that script to make it even more awesome, such as maybe installing over multiple nodes, or whatever crazy idea you can think of! If you're interested in starting to hack, drop me a hi in #ubuntu-cloud on Freenode

Free Ubuntu Server for a year at Amazon

Yes, you can get your very own free Ubuntu server in the clouds for one full year! The folks at Amazon have just announced:

"Beginning November 1, new AWS customers will be able to run a free Amazon EC2 Micro Instance for a year, while also leveraging a new free usage tier for Amazon S3, Amazon Elastic Block Store, Amazon Elastic Load Balancing, and AWS data transfer. AWS’s free usage tier can be used for anything you want to run in the cloud: launch new applications, test existing applications in the cloud, or simply gain hands-on experience with AWS"

You can find the details of the offer at: http://aws.amazon.com/free/

Ubuntu, being arguably the most popular Amazon guest image, is also available to you for free today! Get the list of official Ubuntu images created by Ubuntu's very own server team at

Maverick http://uec-images.ubuntu.com/server/maverick/current/
Lucid http://uec-images.ubuntu.com/server/lucid/current/

For the free offer, you will want to launch a t1.micro instance, and it seems you will want to wait till Nov 1st to register your account (credit card needed). This is great news! If you ever wanted to play with Ubuntu server on the cloud, now is the best time to get started!
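
If you'd rather use the command line than the console, launching the free micro instance looks roughly like this; the AMI ID below is a placeholder, so pick an EBS-backed image from the lists above (t1.micro requires EBS-based AMIs):

ec2-run-instances ami-xxxxxxxx --instance-type t1.micro -k mykey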

Update: The terms page mentions "Only accounts created after October 20, 2010 are eligible for the Offer"

Update2: Please read Scott's comment below on why you will be charged $0.50/mo if you run the standard Ubuntu image (or Amazon's own AMIs)

Hello Planet-Ubuntu

Hello World! I just became an Ubuntu member yesterday, which means I will no longer be on mute ;) I'm working as the Ubuntu Cloud community liaison, which means I can finally bring some cloud and server content to this space, instead of it all being about fluffy desktops :) If you're interested in using or contributing to Ubuntu in a "cloud" context, feel free to grab me (kim0 on Freenode #ubuntu-cloud). Feel free to drop in, say hi, ask questions or whatever you need. Besides this blog, you may also be interested in my tweets. I'm very proud to be part of this great community. Rock on!

Friday, October 15, 2010

OpenWeek Introduction to Cloud IRC session logs

If you couldn't attend the Ubuntu Open Week session live on IRC, don't despair! You can read the logs at
http://irclogs.ubuntu.com/2010/10/13/%23ubuntu-classroom.html#t17:01

Monday, September 20, 2010

what I do

I started with Linux and open source software around 1999; since then and to this day, I have been amazed by the helpful communities that form around different open source projects. In the early days, whenever I'd hit a bug or didn't know how to do something, I'd jump over to IRC, and someone was always there to help debug the problem! This just felt amazingly empowering. I remember thinking to myself, hell, this is way better than what you get with commercial software! Within a few years, I transformed from a passive user into someone who tries to help IRC users every time someone has a problem. The culture of open source got me, I guess. I started giving presentations at universities and writing for local IT magazines for free to help spread what FOSS is all about. I always wanted to contribute much more to the free software world; however, limited free time was always an issue. Now that my day job is to work with and help grow the community around Ubuntu, I feel extremely excited and thankful to Canonical for giving me this opportunity.

Being the newest member of the "horsemen" team, having joined a little over a month ago, I feel like I haven't done enough yet. Nevertheless, I'd like to mention a few of the things that have been keeping me busy the past few weeks

  • The very first thing I did at Canonical was to write the Ubuntu Server Map application. The idea behind it is to encourage and spur on the community behind Ubuntu server. Basically, the aim is to make Ubuntu server users feel part of a huge user community all over the world. What the application does is detect the visiting user's city from his IP address, after the user accepts (an anonymous process), and then mark that city with a cute little orange Ubuntu logo. So far the map is full of orange Ubuntu logos. It really feels great to be part of such a worldwide community of Ubuntu server and cloud users and contributors

  • As part of helping the Ubuntu cloud community grow around open source cloud technologies, I have focused on consolidating the fragmented communication paths available. Thus far, the Ubuntu cloud community has #ubuntu-cloud as the official IRC channel for everything Ubuntu cloud related, the recently created Ubuntu Cloud Forum as the official Ubuntu cloud forum, and the Ubuntu Cloud mailing list as an alternate community communication venue

  • Recently I've been putting a lot of focus on creating an Ubuntu cloud portal, which aims to be a central hub for the Ubuntu cloud community. You can read all about the portal specs and give me feedback over IRC (kim0 in #ubuntu-cloud). The portal should provide rolling news relating to Ubuntu cloud, helping interested community members stay on top of all the new happenings. It also helps community newcomers by becoming their one-stop shop, with links to all documentation and support channels, as well as guiding new contributors on how to get involved

  • Something which I am thinking about, and which will definitely take a lot of focus soon, is studying potential hurdles in the way of new contributors to Ubuntu server and cloud: basically, how to make it easy and more fun for newcomers to join in, find all the information they need in one place, and start contributing and engaging with the community. Part of that is giving tutorials over IRC or other mediums, as well as sponsorship and guidance along the way

It is definitely great being part of a community as great as Ubuntu's, and I hope the next period is going to be very exciting for the open source cloud communities in general

Monday, September 6, 2010

Why our Internet2.0 is broken

The modern Internet, which I'll refer to as Internet2.0, is being seen as a new applications platform. It is no longer a series of "pages" that you click through; it is rather a collection of applications. I just needed this intro in case you still thought of the Internet as pages. So the Internet is now the operating system, and different web sites (Gmail, Facebook, Twitter...etc) are the new applications, if you will. I've got news for you: this Internet2.0 thing is horribly broken! Here is why:

To begin understanding the kind of problems we have to go through using the online systems of today, let us apply the same online mechanisms to standard old and boring desktop apps. Imagine the following workflow: you are one of three friends working on an important report, and you all share your progress online, each person via his blog:

  • You login to your computer, you start your email application. You find your friend has edited the report, emailed you the new copy and blogged about his progress

  • You check your email, find the attachment, and download it

  • You must copy the attachment over a USB stick in order to be able to open it in any other application. You copy it to the USB stick

  • You open your word processor. You authenticate to it!

  • You plugin your USB stick, open the document, edit it, save it

  • Re-copy it to the USB stick

  • Re-open your email application, upload the attachment, send it

  • You visit your first friend's blog, you add a comment that you updated the report.

  • Your third friend is not notified of your comment


You get the idea. If that user experience sounds horrible, it is because it is pretty similar to what we face today with web applications, especially if you want to use different applications from different providers in concert. It is just plain broken. Here is my criticism, point by point:

  • Why do I have to login separately to each and every web application (Google, Facebook, Zoho, Twitter, MS...)? On my Linux PC I don't have to go through that

  • Why do I have to teach each web application my social graph, reconnecting to all my friends on Facebook, then Twitter, then Google Buzz...? The connection between me and my friend belongs to us; it does not belong to Facebook. Any other app on the Internet which I allow to access this data should be able to. It should not be held captive by the likes of Facebook

  • Assume I love to use a live.com email address (which I don't :) and like to use Google Docs to edit my email attachments. Why do I have to download the attachment to my PC first, re-upload it to Google to edit, save, download, upload, email... yikes! The Unix architecture, designed tens of years ago, was all about sharing data between apps (pipes); why, in this modern age, are we unable to easily connect different web apps, especially from different providers?

  • When I write a comment on my friend's blog, the comment is MINE. I created it, I own it. My friend's blog may be displaying it currently, but it should not own it! If my friends are interested in seeing my comments on every website I visit, and if I allow them to, they should be able to do so very easily. The 20 comments I've written today should not die if the websites I wrote them on decide to die

  • The same actually goes for using online editors a la Google Docs. Google should not "own" the document. The document is mine. It should be stored in a place I control. I may "allow" the Google web based editor to read/write it now, because I "choose" to, because it may be the best editor around; not because I have to, and not because I need to "migrate" my data off of Google should I want to use something else. If I decide to use the Zoho editor tomorrow, I should be able to allow it to access my data in-place, just like how on the desktop one can use MS Office, OpenOffice and Mac Pages to open a presentation on one's desktop



When a new application pops up, say Apple's new Ping service, you need to (again) rebuild your social graph, teaching it who all your friends are. Afterwards, any music you purchase will show on Ping, but won't really show up on Facebook, because Apple and Facebook couldn't agree on that. Can you say that again? "I" purchased a music track, and I want to tell "MY" friends about it, and I can't, because Apple and Facebook couldn't agree! Can you see how horribly broken our Internet2.0 is?


Here is how I see things could improve. Each user needs to own a certain web exposed storage space somewhere. All of my online trails (documents, friendships, Likes, comments, blogs...) should live there. I should "allow" Google Docs to read/write those documents if I like it; tomorrow I could revoke that access and allow Zoho, for example, or any other online editor. My connections to my friends should be stored in that storage space, and any application that I allow can see who all or some of my friends are. For example, I can allow LinkedIn to access my work friends, while allowing Ping to access my music friends. If I make a comment somewhere on the Internet, this is content I created; if I Facebook-style "Like" something, it is again content that revolves around me. It should be stored in my storage area, and Facebook should be notified of it and allowed to display it should I want it to.

This architecture makes it quite easy for a new young startup to create a Facebook or Gmail killer tomorrow, since the application from day one will have access to all my data: emails, friends, comments, blogs...etc, i.e. as much content as I want to give it. We are no longer locked into different online service providers; we can switch at will. This is how it can and should be done.

The Diaspora and FreedomBox folks are doing some awesome work and designing their systems along similar lines of thought. They are however more ambitious: they are after user data confidentiality, while all I'm asking for is open-data access and portability. They want to replace all of today's web applications with distributed clones that you can run inside your home. That would be fantastic; however, I still see it as advantageous and easier if a user can use "closed" cloud apps like Gmail or Facebook should she want to, and switch to a different provider at will, while still owning her online digital trail and every piece of content she ever created
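Funnily enough, the desktop already has a primitive for this allow/revoke model. As a loose analogy only (a sketch, where "googledocs" is a made-up local user standing in for a web editor), POSIX ACLs let the owner of a file grant and revoke another application's access, while the file stays put:

# "allow" the editor to read/write my document
setfacl -m u:googledocs:rw ~/report.odt
# revoke that access when I switch to another editor
setfacl -x u:googledocs ~/report.odt

The point is that the grant lives with the data's owner, not with the application. The web needs the same primitive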

Monday, August 9, 2010

Ubuntu Server 10.04.1 virtual release party

Ubuntu Server LTS 10.04.1 is targeted for release this Thursday, the 12th of August. This point release is especially important for the "server" variant of Ubuntu, because many conservative sysadmins wait for the first point release before deploying a server OS. The idea being that any bugs the fine QA people at Canonical might have missed would be caught by the community during the period between the GA and point releases. This means that for many, this next 12th of August is the "actual" release date for Lucid server LTS! In short, on the 12th, Ubuntu Server 10.04.1 is ready to rock the world!

In order to celebrate this awesomeness, we'll be having our own little server release party in the clouds :) What a mouthful! Well, basically I've put up a nice little web application that tracks all the cities around the world that are running Ubuntu Server 10.04. Of course, you will need to hit that web app with your server box first, so that your city becomes registered and shows up on the map! So if you're already running 10.04 server, be sure to hit that application. If you're not, then you should be! Go grab yourself the 10.04 server iso, install it, then hit the app with it

But wait a minute, you say you don't have a web browser on your server box? No problemo. I'll list a few cool ways (well, in a serverish way at least) to hit that web app without leaving your comfortable bash shell! Hey community, if you can think of cooler CLI methods, be sure to put them in the comments. If something is very cool, I'll add it to this post. Here we go:


1- elinks --dump http://maps.ubuntu.com/hit/

2- curl http://maps.ubuntu.com/hit/

3- wget http://maps.ubuntu.com/hit/

4- python -c 'import urllib; urllib.urlopen("http://maps.ubuntu.com/hit/").read()'

5- /bin/echo -e 'GET /hit/ HTTP/1.0\r\nHost: maps.ubuntu.com\r\n\r\n' | socat - tcp:maps.ubuntu.com:80


Well, I guess that should be enough to whet your appetite. Of course, "technically" you don't have to visit the web app using your server box; hitting it from your Ubuntu desktop has the same effect, but where's the fun in that! If you're running Ubuntu Servers in multiple cities, you are encouraged to hit the app from all the cities in which you operate. If you coordinate the hit with some cool stuff (maybe a puppet recipe, or a bounce over ec2, or something of that sort) let me know about it :)
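For instance, a quick sketch of a multi-city hit could be as simple as the loop below (the host names are placeholders; list one box per city you run, reachable over ssh):

# placeholder host names below, one server per city
for host in london.example.com cairo.example.com tokyo.example.com; do
    ssh "$host" 'wget -q -O /dev/null http://maps.ubuntu.com/hit/'
done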

Whichever CLI way you choose to hit the app with, you'd be missing out on seeing lots of cute little Ubuntu logos over a world-wide Google map. Now that is something you wouldn't wanna miss, so be sure to visit the app from your graphical browser as well, to view the list of marked cities running Ubuntu Server. On release day (12th of Aug) it would be very cool to have every major city in the world marked on that map, so please spread the word as much as you can (yes, that means retweet it, digg it, slashdot it, tell your mom about it ...etc)

Tuesday, August 3, 2010

bash oneliner, get GPS location + street address

Hey there folks,

It's been quite some time since I last blogged; I've just been busy with $reallife. Today I'll show you how to answer the eternal question of "Where am I" from the comfort of your bash shell

The following code only works if:
- You run it in a root shell (Ubuntu users do: "sudo -i" then paste it)
- You are connected via Wifi to some access point
- Your wireless adapter is called wlan0 (otherwise replace it with the correct name)
- You're using a Linux system or something similar (i.e. Windows won't really work here)

et voila


/bin/echo '{"version": "1.1.0","host": "maps.google.com","request_address": true,"address_language": "en_GB", "wifi_towers": [{"mac_address": "' $( iwlist wlan0 scan | grep Address | head -1 | awk '{print $5}' | sed -e 's/ //g' ) '","signal_strength": 8,"age": 0}]}' | sed -e 's/" /"/' -e 's/ "/"/g' > /tmp/post.$$ && curl -X POST -d @/tmp/post.$$ http://www.google.com/loc/json | sed -e 's/{/\n/g' -e 's/,/\n/g'


I didn't really spend much time cleaning it up, since I'm busy, so that's like the first thing that worked. If any of you guys can skip writing to the file, let me know and I'll update it
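One possible way to skip the temp file (an untested sketch, relying on curl's -d @- option to read the POST body from stdin, and making the same root/wlan0 assumptions as above):

# grab the MAC of the first access point, build the JSON with printf, pipe it straight into curl
mac=$(iwlist wlan0 scan | awk '/Address/ {print $5; exit}')
printf '{"version": "1.1.0", "host": "maps.google.com", "request_address": true, "address_language": "en_GB", "wifi_towers": [{"mac_address": "%s", "signal_strength": 8, "age": 0}]}' "$mac" | curl -s -X POST -d @- http://www.google.com/loc/json | sed -e 's/{/\n/g' -e 's/,/\n/g'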

PS: If the returned information is wrong, or some kind of "unknown"... consider yourself lucky, very lucky! That means Google does not (yet?) know where your wifi AP is. For the rest of us... tin-foil all the way