Thursday, January 31, 2008

LCA2008: Hacking on lguest

Great introduction and tutorial about lguest from Rusty Russell at LCA2008 today (Thursday). Lguest (formerly "lhype") is a very fast and lightweight virtualization approach which concentrates on "running Linux on Linux". New lguest VMs are started by the lguest launcher utility, which basically reserves a specific memory region in the host kernel and then re-maps the guest's hypercalls.

The lguest tutorial provided a ready-prepared Qemu filesystem, so the participants could bring up their own lguest hosts easily and start hacking on the different tutorial tasks. What fun! Even I was able to implement a new hypercall printing out a string on the lguest console at startup (and I do not consider myself a kernel hacker).
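The idea behind the tutorial task can be sketched in a few lines: the guest issues a hypercall by number, and the host side looks the number up in a dispatch table and runs a handler. This is only a conceptual simulation in Python; the names (`HCALL_PRINT`, `Hypervisor`) are invented for illustration and are not lguest's real C API, where hypercalls trap into the host kernel.

```python
# Conceptual sketch of hypercall dispatch, loosely modeled on how a
# lguest-style hypervisor routes guest hypercalls to host handlers.
# All names here (HCALL_PRINT, Hypervisor) are hypothetical.

HCALL_PRINT = 42  # made-up hypercall number for the "print a string" task

class Hypervisor:
    def __init__(self):
        self.console = []  # stands in for the lguest console output
        self.handlers = {HCALL_PRINT: self._handle_print}

    def _handle_print(self, arg):
        # Host-side handler: write the guest's string to the console
        self.console.append(arg)

    def hypercall(self, number, arg):
        # The real mechanism traps into the host kernel; here we just
        # look the call number up in a dispatch table.
        handler = self.handlers.get(number)
        if handler is None:
            raise ValueError(f"unknown hypercall {number}")
        handler(arg)

hv = Hypervisor()
hv.hypercall(HCALL_PRINT, "hello from the guest")
print(hv.console[0])
```

Adding a new hypercall then amounts to picking an unused number and registering one more handler, which is roughly the shape of the tutorial exercise.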

Lguest is definitely an interesting project, and I am curious how it will find its place among the other virtualization technologies in use today.

Wednesday, January 30, 2008

Fire alarm at LCA2008

We just experienced a fire alarm at the Old Arts building used by LCA2008.
Luckily it was resolved quite quickly, nobody got hurt, and the talks are continuing.

Funny thing was that the alarm bell seemed to be in a kind of high-availability mode, because it tried to restart again and again even after it had been turned off.

.... reminded me of yesterday, when a small piece of bread on the toaster in the hotel room immediately set off the fire alarm. I flew downstairs to the reception, because I had read before that it is directly connected to the fire brigade and that you have to pay a really high fine if they come out. Luckily it was only the smoke alarm in the room and not the fire alarm of the hotel. Puuuah.

Now I am looking forward to the Penguin Dinner tonight, which will take place at the night market in Melbourne. The social events (and of course the partners program) of LCA rock!

Combining the virtual and real world

Today Jonathan Oxer presented at LCA2008 about "Joining Second Life to the Real World". Really cool stuff!

He demonstrated how to interact with virtual objects inside Second Life from "real world" objects, using various small hardware hacks and making use of the famous Arduino microcontroller. One example was to use a serial RFID reader, connected to a network socket, to open/close a virtual door within the Second Life virtual reality (Jon actually has an RFID chip implanted in his arm).
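The plumbing of that demo can be sketched as: a tag read on the RFID reader gets forwarded over a socket, and a listener toggles the state of the virtual door. The tag ID, the wire format, and the authorization set below are all made up for illustration; the real setup drives a Second Life object rather than a Python variable.

```python
# Hedged sketch of the RFID-to-virtual-door idea: a reader forwards a
# tag ID over TCP, and a listener toggles a "door" when the tag is known.
# Tag IDs and protocol are hypothetical, invented for this sketch.

import socket
import threading

AUTHORIZED = {"04A1B2C3"}   # hypothetical implanted-chip ID
door_open = False

def listener(server):
    """Accept one connection and toggle the door on an authorized tag."""
    global door_open
    conn, _ = server.accept()
    tag = conn.recv(64).decode().strip()
    if tag in AUTHORIZED:
        door_open = not door_open   # would open/close the Second Life door
    conn.close()

# "Door" side: listen on an ephemeral local port
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=listener, args=(server,))
t.start()

# "Reader" side: send a tag ID as if it just came in over the serial line
client = socket.socket()
client.connect(("127.0.0.1", port))
client.sendall(b"04A1B2C3\n")
client.close()

t.join()
server.close()
print("door open:", door_open)
```

The nice property of putting a socket in the middle is that the reader hardware and the virtual-world script don't need to know anything about each other beyond this tiny protocol.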

Actually his talk reminded me of my "Data-center within Second Life" project, which I was working on about a year ago. My idea at that time was to rebuild a "real world" data-center infrastructure inside Second Life, connect it to an openQRM server and enable the administration of real servers from within Second Life.

Basically it is about using the 3D virtual reality of SL as a user interface for real world objects and/or applications. It worked out pretty well, and I even gave a presentation about this openQRM sub-project, including a live demonstration, within Second Life.

Here are some links to videos of this virtual presentation:
part 1
part 2
part 3

Please find some pictures from this event here.

...... mhmmmm, guess I was "too early" with my idea ;)

Linus Torvalds at LCA2008 in Melbourne

Here is a pic of Linus Torvalds enjoying LCA2008.

... a quote from my wife:

"Until now, no groupies; seems Linus is safe this year"

Tuesday, January 29, 2008

Pictures from Profoss

Raphael put pictures from the event online at Flickr.

Here I am giving the presentation about "Conform deployment of virtual and physical machines with openQRM and Xen".

Tuesday, January 8, 2008

Interview about openQRM presentation at Profoss

Please find an interview about the topics of the upcoming openQRM presentation

"Conform deployment of virtual and physical machines with openQRM and Xen"

taking place at Profoss in Brussels on the 22nd and 23rd of January here.

Here is some more info about this upcoming event:

The European virtualisation conference organised by Profoss will take place on 22 and 23 January in Brussels, featuring non-commercial, informative talks by specialists in their fields (CEOs, analysts, technology specialists, developers, ...). The first day will focus on business and management aspects, the second day on technical aspects. Most of the important players in the market will be present. With the multiple discussion and coffee breaks, you should have time to meet and talk with speakers, sponsors and attendees.

The schedule is available at
and the speakers' profiles are published at

Many thanks to Raphael Bauduin for organising this event!

Hope to see you all there !


Saturday, January 5, 2008

CIOs and IT Managers: Are you willing to adopt virtualization now?

Tarry pointed out an interesting question on LinkedIn:

CIOs and IT Managers: Are you willing to adopt virtualization now?

I fully agree with him about the challenge of adopting virtualization today. It adds another layer of complexity to the IT infrastructure, which current tools are not ready to manage in a way that is flexible and transparent for system administrators.

Nowadays a common data-center is normally still very static: powered by lots of physical systems with custom installations, configured and maintained by a couple of loosely connected tools.

Virtualization claims to provide the option to move to an appliance-based deployment via virtual machines, to ease deployment and make better use of the resources of the data-center. Since bringing up a virtual machine more or less just requires space on a storage system for the virtual disk, the need for high-end NAS and SAN data-storage systems rises.

On the other hand, virtualization adds another level of complexity. Systems and services migrated to virtual machines still need to be set up, monitored and maintained in just the same way as physical servers. With a rising number of servers (physical and virtual), it gets harder to fulfill this task in a successful and efficient way.

Migration itself (e.g. from physical systems to virtual machines) can be a really tricky thing too, since most virtualization technologies still lack this feature.

Another difficult task for the IT department is to decide which of the currently available virtualization technologies to use.

This raises the question: is it not a disadvantage to be limited to a single virtualization technology?

To my mind it is, for two reasons:

1) Different virtualization technologies are available today.
There is "full virtualization" (e.g. Qemu/KVM), "para-virtualization" (e.g. Xen) and "light virtualization", which is in most cases based on process isolation (e.g. Linux-VServer, Solaris Zones).
Each technology has its advantages but also its limitations.
Now, in a common data-center we normally find lots of different applications with custom needs: e.g. a web hosting company may have hundreds of "idle" customer web servers plus a couple of in-house Oracle database servers.
For the web servers it would make the most sense to choose one of the "light" or "para" virtualization technologies, to keep the virtualization overhead to a minimum.
Using light or para-virtualization, a single physical machine can easily host several hundred virtual partitions.
(please see "Building a virtualized web-farm with openQRM")

On the other hand, the database servers are CPU, IO and network intensive, so choosing one of the "full virtualization" technologies would be a benefit.
So one reason why it is a disadvantage for the IT department to limit itself to a single virtualization technology is that the virtualization type should be selected according to the application's needs.

2) The other reason is simply to avoid vendor lock-in.

So if it is best to choose the virtualization technology according to the applications and services, the IT infrastructure does not only need to support several different virtualization options, but also has to provide a way to move from one virtualization type to another, plus the possibility to migrate a virtual machine back to a physical system.

Today, if you try to move e.g. a VMware partition to Linux-VServer, or a Xen partition back to a physical machine, there may well be some custom utilities, scripts and howtos available which will help, but all in all it will be an "adventure".

Both situations, the common static data-center using physical servers and the new virtual-appliance-based infrastructure, are already quite tricky to manage on their own; but with the move to virtualization, the system administrator today even needs to handle "mixed environments", both physical and virtual.

To my mind we need a tool which seamlessly integrates with the different virtualization technologies and unifies resource planning, deployment and management of physical systems and virtual machines. This tool should also integrate with modern storage systems to take full advantage of fast-cloning servers for rapid and automated provisioning. It should have an open and pluggable architecture, so that custom third-party utilities can be integrated in an easy way and combined into a single management console for the whole IT infrastructure.
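The pluggable part of that idea can be sketched as a small registry: one management layer, several virtualization back-ends registered as plugins, and the deployment call routed to whichever technology fits the application. The class and method names below are invented for this sketch and are not openQRM's real plugin API.

```python
# Sketch of a pluggable management layer: virtualization back-ends
# register themselves, and the manager dispatches deployments to the
# technology chosen per application. All names here are hypothetical.

class Backend:
    """Base class every virtualization plugin implements."""
    name = "base"
    def deploy(self, appliance):
        raise NotImplementedError

class XenBackend(Backend):
    name = "xen"
    def deploy(self, appliance):
        return f"para-virtualized '{appliance}' on {self.name}"

class VServerBackend(Backend):
    name = "vserver"
    def deploy(self, appliance):
        return f"isolated '{appliance}' in a {self.name} partition"

class Manager:
    """Single management console over all registered back-ends."""
    def __init__(self):
        self.backends = {}
    def register(self, backend):
        self.backends[backend.name] = backend
    def deploy(self, appliance, technology):
        # Pick the back-end according to the application's needs
        return self.backends[technology].deploy(appliance)

mgr = Manager()
mgr.register(XenBackend())
mgr.register(VServerBackend())
print(mgr.deploy("webserver-image", "vserver"))
```

Because the manager only talks to the `Backend` interface, adding support for another virtualization technology (or a P2V/V2P path) means registering one more plugin rather than changing the console.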

If you are interested in such a tool, you may want to take a look at the openQRM project, which provides exactly this kind of framework.

Enjoy !
