Today's columnist is Carlo Daffara from Conecta. He writes:
I have had the opportunity to work on many different migration experiments in the financial, educational, industrial, and health care sectors. The reasons for migration are varied, but the single strongest motivator is usually: "let's save some money." Some migrations succeed and some do not, or at least not completely. All of them are difficult.
Yes, all of them. Even with best practices (and we have written some), the effort required to migrate is always substantial, because you are replacing something that works with something that may or may not work better. Humans estimate risk poorly, and in IT the devil you know remains the preferred one, even if you have to reboot that devil daily to make it work. A CIO faces a conundrum similar to a roulette player's. Where the roulette player says, "I have lost so much, I can't stop now or I will lose everything!", the CIO says, "we have worked so hard to keep this house of cards standing, we can't stop now or all that work will have been for nothing!"
In reality, our research has shown that it is usually much better to avoid trying to supplant something that exists and works; it is much better to invent something new and different. The huge success of the iPad is not due to the fact that it replaces netbooks, notebooks, or PCs: it is a different medium. Just as TV did not replace radio but created a different channel, you have to step outside the basic competition model to find a market that may be much easier to capture.
A fitting example may be the mobile environment, where Android surged rapidly by filling the need for a low-cost, easily sourced mobile operating system that allows personalization and adaptation. These are fundamental differentiation factors, since most phones share similar hardware functionality. At the same time, I believe that simply trying to replace Windows on the desktop is in itself an uphill battle. This doesn't mean it shouldn't be done, but it may be easier to find other routes.
One such route is to create an intermediate level between thin clients (easily managed, high structural cost, low flexibility), virtual desktop infrastructure (VDI; moderately easier to manage, high structural cost, high flexibility), and traditional PCs (difficult to manage, high flexibility, low structural cost). The problem with "high structural cost" is that performing an adoption or migration experiment requires you to bring in the whole enchilada: virtualization or remotization infrastructure, servers, licenses, and a great deal of effort to tie all the pieces together. One of the reasons for the great success of the PC was that adding one more PC was possible at limited cost – basically the cost of the hardware and software itself, plus some configuration work. A good example of the costs and infrastructure necessary for VDI can be found here, to which project management, licenses, and consulting work must be added.
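To make the "structural cost" asymmetry concrete, here is a toy back-of-envelope comparison in Python. All figures are invented placeholders for illustration, not real quotes or benchmarks; the point is the shape of the numbers, not their values.

```python
# Toy marginal-cost comparison: adding one more seat to a PC fleet
# versus standing up the *first* VDI seat. All figures are invented
# placeholders, not real quotes.

PC_SEAT = {
    "hardware": 600,        # one desktop PC
    "software": 150,        # OS and office licenses
    "setup_effort": 100,    # a few hours of configuration work
}

VDI_FIRST_SEAT = {
    "server": 8000,         # virtualization host
    "storage": 5000,        # shared storage for the images
    "licenses": 3000,       # hypervisor plus connection broker
    "integration": 6000,    # project management and consulting
    "thin_client": 300,     # the endpoint itself
}

print("Marginal cost of one more PC:       ", sum(PC_SEAT.values()))
print("Up-front cost of the first VDI seat:", sum(VDI_FIRST_SEAT.values()))
```

With placeholder numbers like these, a PC experiment costs hundreds while a VDI experiment costs tens of thousands before the first user logs in, which is exactly why small-scale trials favour the PC model.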
Also, the recent trend towards remotization (such as Terminal Services and Citrix) and VDI is encountering an unexpected difficulty: the rise of Web applications, which are inherently location-transparent and run perfectly well on the client. Rich interfaces, Flash, and Java applets place heavy demands on the server when remotized and are best executed directly on the client. So you pay extra for software like Citrix HDX that basically moves some of the execution work back to the client. And if you need a "detached mode" and have a large enough hard disk, you can use local virtualization and have the virtual image streamed to your PC for local execution – at a price, of course.
What I propose is a different approach: a mid-zone between a rich, full-install client and a thin client. Most applications would be executed locally, particularly the important ones such as the browser, OpenOffice, and conferencing applications. Remotized applications could still be accessed and, where necessary, Windows applications could be executed almost seamlessly using virtualization tools like VirtualBox or KVM. The Windows VM image could be stored remotely at merely the cost of storage and replicated across multiple sites using a WAN-capable file system like XtreemFS. This would ensure that any change to the image is transparently replicated to another office, allowing nomadic users to execute those applications that are not portable or web-accessible.
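As a rough sketch of how these pieces could fit together, the following Python script mounts a replicated XtreemFS volume and boots a Windows image from it under KVM. The directory-service host, volume name, and image path are placeholders rather than a real deployment, and it assumes the XtreemFS client tools and QEMU/KVM are installed on the host.

```python
#!/usr/bin/env python3
"""Sketch: boot a remotely stored Windows image under KVM.

Assumes the XtreemFS client (mount.xtreemfs) and QEMU/KVM are
installed; the directory-service host, volume, and image name
below are placeholders, not a real deployment.
"""
import subprocess
from pathlib import Path

DIR_SERVICE = "dirservice.example.org"  # XtreemFS directory service (placeholder)
VOLUME = "desktops"                     # replicated volume holding the VM images
MOUNTPOINT = Path("/mnt/xtreemfs")
IMAGE = MOUNTPOINT / "windows-base.qcow2"

def mount_volume() -> None:
    """Mount the XtreemFS volume at MOUNTPOINT."""
    MOUNTPOINT.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["mount.xtreemfs", f"{DIR_SERVICE}/{VOLUME}", str(MOUNTPOINT)],
        check=True,
    )

def start_vm() -> None:
    """Boot the Windows image locally under KVM."""
    subprocess.run(
        ["qemu-system-x86_64",
         "-enable-kvm",          # use hardware-assisted virtualization
         "-m", "2048",           # 2 GB of guest RAM
         "-drive", f"file={IMAGE},format=qcow2"],
        check=True,
    )

if __name__ == "__main__":
    mount_volume()
    start_vm()
```

Because the image lives on the replicated volume, any office with a replica can boot the same desktop; the local host only needs the client tools and a hypervisor.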
With this method, the install image can be very, very small. It also means that you can store it on a small USB key, and the user can try it without having to install anything. If they like it, they need only image their hard disk and move it to a remote VM. They can live off that USB key, which can travel with them anywhere. Windows remains, but as more and more applications become web-enabled or are ported to Linux, its role becomes smaller and smaller – up to the point where it can disappear.
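For the "image their hard disk" step, something like the following would work, assuming qemu-img is available and the replicated volume is already mounted; the device and path names are placeholders.

```python
# One possible way to capture an existing Windows disk as a VM image
# and place it on the replicated volume. Device and path names are
# placeholders; assumes qemu-img is installed and the XtreemFS
# volume is mounted at /mnt/xtreemfs.
import subprocess

SOURCE_DISK = "/dev/sda"                           # the physical Windows disk (placeholder)
TARGET_IMAGE = "/mnt/xtreemfs/windows-base.qcow2"  # destination on the mounted volume

# qemu-img reads the raw device and writes a compact qcow2 image;
# -O selects the output format, -c compresses the result.
subprocess.run(
    ["qemu-img", "convert", "-c", "-O", "qcow2", SOURCE_DISK, TARGET_IMAGE],
    check=True,
)
```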
I believe that this can be a worthwhile experiment. In fact, as a spin-off of our EU research activities, I started doing something similar a few months ago with a fully open source project called EveryDesk. I hope that others may find this approach interesting – and join us in making it useful.