Wednesday, June 18, 2008

Red Hat announces "next-generation" virtualization based on KVM

Today, at the Red Hat Summit, Red Hat announced three virtualization initiatives including oVirt. The press release is here.

Some choice quotage:

KVM technology has rapidly emerged as the next-generation virtualization technology, following on from the highly successful Xen implementation.

Another good one:

We continue to see huge improvements in functionality, performance and time to market because of our close relationship with our open source partners. For example, Intel and IBM have worked with us for many years covering virtualization technologies that span from Red Hat Enterprise Linux 5 to today's KVM-based announcements.

And of course:

"IBM works closely with Red Hat and the open source community to drive innovation within the Linux kernel," said Daniel Frye, vice president, open systems development at IBM. "IBM has a heterogenous approach toward virtualization, with KVM one of several options. KVM leverages the core features of the Linux kernel, including paravirtualization interfaces contributed by IBM engineers. By combining Linux virtualization infrastructure with open management interfaces such as CIM and libvirt, we gain a solution that eliminates lock-in and open source community innovations, we are able to offer our customers a solution with outstanding performance, scalability and agility."

If you want to see what all the fuss is about, check out KVM.

Monday, June 09, 2008

KVM and Green Computing

I ran across this article today from Tom Henderson that draws attention to the fact that most existing hypervisors (ESX, Xen, Hyper-V) do not support frequency scaling and therefore are not very eco-friendly.

This is partly true. There has been some recent work in Xen to add deep sleep state support and I believe even some work on frequency scaling. It is certainly not true though that virtualization and power-consciousness are at odds with each other. KVM is able to leverage all of the work that's been invested into Linux to manage power wisely. Good power management does not cause any sort of performance drop. Reducing the performance of your workload is only going to make the machine run longer and consume more power.
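
To make that concrete, here's a small illustrative sketch (plain Python reading the standard cpufreq sysfs files; exactly which files exist depends on your kernel and CPU driver) showing the frequency-scaling policy a KVM host gets for free simply because the host is Linux:

    import glob
    import os

    def read(cpu_dir, name):
        """Read a single cpufreq attribute for one CPU."""
        with open(os.path.join(cpu_dir, name)) as f:
            return f.read().strip()

    # Walk every CPU's cpufreq directory and print its scaling policy.
    # The same governors that throttle an idle laptop also throttle an
    # idle KVM host; nothing virtualization-specific is needed.
    for cpu_dir in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq")):
        cpu = cpu_dir.split("/")[-2]
        print("%s: governor=%s cur=%s kHz min=%s kHz max=%s kHz" % (
            cpu,
            read(cpu_dir, "scaling_governor"),
            read(cpu_dir, "scaling_cur_freq"),
            read(cpu_dir, "scaling_min_freq"),
            read(cpu_dir, "scaling_max_freq")))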

The reason most hypervisors don't support power management is that it's very hard. When inventing a new Operating System, there are a lot of things you have to focus on before you can even start looking at power management. Again, we see the benefits of using an existing Operating System for virtualization.

Friday, May 09, 2008

The truth about KVM and Xen

When I saw this article in my inbox, I knew I shouldn't bother reading it. I really couldn't help myself though. I'm weak for gossip and my flight was delayed so boredom got the best of me.

I can't blame the tech media for the wild reporting though. The situation surrounding KVM, Xen, and Linux virtualization is pretty confused right now. I'll do my best to clear things up. I'll make an extra disclaimer though that these are purely my own opinions and do not represent any official position of my employer.

I think we can finally admit that we, the Linux community, made a very big mistake with Xen. Xen should have never been included in a Linux distribution. There, I've said it. We've all been thinking it, have whispered it in closed rooms, and have done our best to avoid it.

I say this, not because Xen isn't useful technology and certainly not because people shouldn't use it. Xen is a very useful project and can really make a huge impact in an enterprise environment. Quite simply, Xen is not, and will never be, a part of Linux. Therefore, including it in a Linux distribution has only led to massive user confusion about the relationship between Linux and Xen.

Xen is a hypervisor that is based on the Nemesis microkernel. Linux distributions ship Xen today, install a Linux guest (known as domain-0) by default, and do their best to hide the fact that Xen is not a part of Linux. They've done a good job; most users won't even notice that they are running an entirely different Operating System. The whole situation is somewhat absurd though. It's as if the distributions shipped a NetBSD kernel and automatically switched to using it when you wanted to run a LAMP stack. We don't ship a plethora of purpose-built kernels in a distribution. We ship one kernel and make sure that it works well for all users. That's what makes a Linux distribution Linux. When you take away the Linux kernel, it's not Linux any more.

There is no shortage of purpose-built kernels out there. NetBSD is a purpose-built kernel for networking workloads. QNX is a purpose-built kernel for embedded environments. VxWorks is a purpose-built kernel for real-time environments. Being purpose-built doesn't imply superiority, and Linux is currently very competitive in all of these areas.

When the distros first shipped Xen, it was done mostly out of desperation. Virtualization was, and still is, the "hot" thing. Linux did not provide any native hypervisor capability. Most Linux developers didn't even really know that much about virtualization. Xen was an easy-to-use, purpose-built kernel with a pretty good community. So we made the hasty decision to ship Xen instead of investing in making Linux a proper hypervisor.

This decision has come back to haunt us now in the form of massive confusion. When people talk about Xen not being merged into Linux, I don't think they realize that Xen will *never* be merged into Linux. Xen will always be a separate, purpose-built kernel. There are patches to Linux that enable it to run well as a guest under Xen. These patches are likely to be merged in the future, but Xen will never be a part of the Linux kernel.

As a Linux developer, it's hard for me to be that interested in Xen--for the same reasons I have no interest in NetBSD, QNX, or VxWorks. The same is true for the vast majority of Linux developers. When you think about it, it is really quite silly. We advocate Linux for everything from embedded systems, to systems requiring real-time performance, to high-end mainframes. I trust Linux to run my dvd player, my laptop, and the servers that manage my 401k. Is virtualization so much harder than every other problem in the industry that Linux is somehow incapable of doing it well on its own? Of course not. Virtualization is actually quite simple compared to things like real-time.

This does not mean that Xen is dead or that we should have never encouraged people to use it in the first place. At the time, it was the best solution available. At this moment in time, it's still unclear whether Linux as a hypervisor is better than Xen in every scenario. I won't say that all users should switch en masse from Xen to Linux for their virtualization needs. All of the projects I've referenced here are viable projects that have large user bases.

I'm a Linux developer though, and just like other Linux hackers who are trying to make Linux run well on everything from mainframes to dvd players, I will continue to work to make Linux work well as a hypervisor. The Linux community will work toward making Linux the best hypervisor out there. The Linux distros will stop shipping a purpose-built kernel for virtualization and instead rely on Linux for it.

Looking at the rest of the industry, I'm surprised that other kernels haven't gone in the direction of Linux in terms of adding hypervisor support directly to the kernel.

Why is Windows not good enough to act as a hypervisor, such that Microsoft had to write a new kernel from scratch (Hyper-V)?

Why is Solaris not good enough to act as a hypervisor, requiring Sun to ship Xen in xVM? Solaris is good enough to run enterprise workloads but not good enough to run a Windows VM? Really? Maybe :-)

Forget about all of the "true hypervisor" FUD you may read. The real question to ask yourself is what is so wrong with these other kernels that they aren't capable of running virtual machines well and instead have to rely on a relatively young and untested microkernel to do their heavy lifting?

Update: modified some of the text for clarity. Flight delayed more so another round of editing :-)

Monday, April 07, 2008

KVM Forum 2008 Call For Presentations

This is the Call for Presentations for the second annual KVM Developer's Forum, to be held on June 10-13, 2008, in Napa, California, USA [1]. We are looking for presentations on KVM development, quality assurance, management, security, interoperability, architecture support, and interesting use cases. Presentations are 50 minutes in length; there are also 25-minute mini-presentation slots available.

KVM Forum presentations are an excellent way to inform the KVM development community about your work, and to gather valuable feedback about your approach.

Please send your presentation proposal to the KVM Forum 2008 Content Committee at kf2008-cfp@qumranet.com by April 20th.

KVM Forum 2008 Content Committee:

  • Dor Laor
  • Anthony Liguori
  • Avi Kivity

[1] http://kforum.qumranet.com/KVMForum/about_kvmforum.php

On a personal note, I found KVM Forum 2007 to be one of the best run conferences I've attended. The facilities were great and each talk was interesting. There was a great deal of discussion during each talk. Definitely worth the trip.

Sunday, April 06, 2008

KVM for the Mainframe

kvm-65 was released today. The most interesting feature in this release is support for the s390 architecture, more specifically, the System z9 line of mainframes.

The s390 is the grand-daddy of virtualization. Everything started there. In so many ways, everything we're doing with x86 virtualization is just playing catch-up. The exciting new features, like hardware virtualization support and hardware paging support, have been in s390 forever.

s390 clearly has a very mature hypervisor. What many people may not know though is that it's normal to run two hypervisors at any given time on s390. At the bottom level, there's PR/SM which divides the machine into rather coarse partitions. Within a PR/SM partition, you can run z/OS or Linux. You can also run z/VM within a PR/SM partition. z/VM is another hypervisor that allows for much more sophisticated features like memory overcommit and processor overcommit. The user has the ability to decide how much hypervisor they need to maximize the efficiency of their workloads.

Within a z/VM partition, you can run z/OS or Linux. The beauty of s390 is that this configuration has been supported in the hardware for many years and is very fast.

When Linux adopted native support for virtualization, it became obvious that this could be easily supported on the s390. The hardware has long supported this sort of nested virtualization and the implementation turned out to be very straightforward. It helps that the x86 virtualization extensions were inspired by a paper written about s370 almost 30 years ago :-)

What do you get from a platform that has supported virtualization for longer than I've been alive? This very first release of KVM for s390 already supports 64-way guests. After two years of development, we've just gotten to supporting 16-way guests on x86.

Wednesday, March 19, 2008

Exploiting live migration

Apparently at this year's BlackHat, someone presented a paper about attacking live migration traffic. The paper describes a tool called Xensploit which uses a man-in-the-middle attack on live migration traffic to do all sorts of bad things. The core problem is that Xen live migration is not encrypted. Neither is VMotion traffic, so the exploits are equally applicable.

While there's already been a lot of commentary suggesting that live migration shouldn't happen over insecure networks, that's not good enough for me. If you are sending the memory of a VM over the network unencrypted, you might as well not have any passwords on any of your machines since you are exposing all of the VM's sensitive data to anyone on the network.

For IBM Director Virtualization Manager, we go to great lengths to ensure that Xen live migration traffic is always encrypted. As far as I know, no other Xen management tool is capable of encrypting live migration traffic. If you are using Virtualization Manager, you are protected from Xensploit-style attacks.

For KVM, we were careful not to make the same mistakes that had been made for Xen. KVM supports live migration over SSH by default and provides a mechanism for third parties to encrypt migration traffic in any way they please.
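
As a rough illustration of the idea (not an official recipe), here's one way to drive a KVM live migration from the libvirt Python bindings over an SSH-tunnelled connection. The guest name and destination host below are placeholders, and how much of the migration data actually flows through the SSH tunnel depends on your libvirt version and configuration:

    import libvirt

    # Connect to the local KVM host and to the destination host over SSH.
    # "myguest" and "dst.example.com" are placeholder names.
    src = libvirt.open("qemu:///system")
    dst = libvirt.open("qemu+ssh://root@dst.example.com/system")

    dom = src.lookupByName("myguest")

    # VIR_MIGRATE_LIVE keeps the guest running while its memory is copied.
    dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

The point is that the channel carrying a guest's memory is something you get to choose and secure, rather than something that crosses the wire in the clear by design.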

Saturday, January 12, 2008

A preview of gtk-vnc v0.3.3

Since Dan beat me to blogging about the gtk-vnc 0.3.2 release, I decided to co-opt him for 0.3.3 and post a full two weeks before the release actually happens :-)

The 0.3.3 release will add support for the Tight encoding which is perhaps the most widely supported compressed encoding out there. This was really the last piece in making gtk-vnc a first class VNC client supporting all the protocol options that one would expect a good client to support. Much to my surprise, 0.3.3 will also contain a Firefox plugin that allows a VNC widget to be embedded within your web browser thanks to Rich Jones.

At first, a VNC web-browser plugin may sound like a silly idea. But both RealVNC and TightVNC ship a Java applet VNC client, so clearly there is demand for embedding a VNC session within a web browser. Besides the obvious concerns about performance, Java applets are severely limited in what they can do. You cannot grab the mouse and you cannot grab arbitrary key events. You really can't build a first class VNC client as a Java applet.

With a gtk-vnc based plugin, you can have a first class VNC client in your web browser. An exciting application of such a technology would be a rich web-based management application for virtualization. Things that were not possible in Java, like full-screening a VNC session, supporting copy/paste and drag-n-drop, are all within the realm of possibility using a gtk-vnc plugin.
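
For the curious, this is roughly what embedding the widget looks like from PyGTK. It's a sketch modelled on the gtk-vnc example viewer; the host and port are placeholders:

    import gtk
    import gtkvnc

    win = gtk.Window()
    vnc = gtkvnc.Display()
    win.add(vnc)

    # The grabs below are exactly the sort of thing a Java applet can't do.
    vnc.set_pointer_grab(True)   # grab the mouse when it enters the widget
    vnc.set_keyboard_grab(True)  # deliver all key events, including shortcuts

    vnc.open_host("localhost", "5900")

    vnc.connect("vnc-disconnected", lambda src: gtk.main_quit())
    win.connect("destroy", lambda w: gtk.main_quit())

    win.show_all()
    gtk.main()

The Firefox plugin embeds essentially this same widget in the browser, which is why it inherits these capabilities for free.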

There's still a fair bit of work to do to harden the plugin and gtk-vnc, such that it could be trusted to be invoked by any web page, but I'm looking forward to seeing what this leads to.