Monday, November 9, 2009
Cloud computing and the big rethink: Part 5
To date, this series has tried to guide you through the changes happening from the infrastructure, developer, and end-user perspectives that signal the demise of the full-featured server operating system and the virtual server. Virtualization, and the large-scale, multi-tenant operations model we know and love as "cloud computing," are enabling IT professionals to rethink the packaging, delivery, and operation of software functionality in extremely disruptive--and beneficial--ways.
So, what does this mean to the future of information technology? How will the role of IT, and the roles within IT, change as a result of the changing landscape of the technology it administers? What new applications--and resulting markets--are enabled by the "big rethink"?
Here are just a few of my own observations on this topic:
Software packaging will be application focused, not server focused. As anyone who has deployed a distributed application in the last two decades can tell you, the focus of system deployment has been the server, not the application, for some time now. In the highly customized world of IT systems development before virtualization and the cloud, servers were acquired, software was installed upon the servers in very specific ways, and the entire package was managed and monitored largely from the perspective of the server (e.g. what processes are running, how much CPU is being used, etc.).
As OS functionality begins to get wrapped into application containers, or moved onto the hardware circuitry itself, the packaging begins to be defined in terms of application architecture, with monitoring happening from the perspective of software services and interfaces rather than the server itself. These packages can then be moved around within data centers, or even among them, and the focus of management will remain on the application.
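To make that shift concrete, here is a minimal sketch of what an application-centric package might declare; the descriptor format and field names are my own invention for illustration, not any vendor's standard. Note what's absent: no host names, no OS version, no device paths.

```python
# Hypothetical application-centric package descriptor (illustrative only).
# Deployment and monitoring are defined by services and interfaces;
# no server, OS version, or device is named anywhere.
app_package = {
    "name": "order-processing",
    "services": [
        {"name": "order-api",
         "interface": "http",
         "health_check": "/health",           # monitored as a service...
         "resources": {"cpu": 2, "memory_mb": 2048}},
        {"name": "order-worker",
         "interface": "queue",
         "consumes": "orders-inbound",
         "resources": {"cpu": 1, "memory_mb": 1024}},
    ],
}

# Monitoring iterates over declared services, wherever they happen to run,
# instead of asking "what processes are on server X?"
for service in app_package["services"]:
    print("watching %(name)s via %(interface)s" % service)
```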
That's not to say that no one will be watching the hardware. Infrastructure operations will always be a key function within data centers. However, outside of the data center operations team, it will matter less and less.
Enterprise IT will begin to bend enterprise and solutions architectures to align better with what is offered from the cloud. I may not agree with some that the cloud will stifle differentiation in software systems, but one thing is very true.
As end users select software-as-a-service applications to run core pieces of their business, meet integration and operations needs from the cloud, and generally move from systems providers to service providers, the need to reduce customization will be strong. This is both to reduce costs and strengthen system survivability in the face of constant feature changes on the underlying application system.
The changing relationship between software and hardware will result in new organizational structures within the IT department. When it comes to IT operations--specifically data center operations--we've generally lived with administrative groups divided along server, storage, and network lines from before the dawn of client-server application architectures.
This organization, however, is an artifact of a time when applications were tightly coupled to the hardware on which they were deployed. In such a static deployment model, expertise was needed to customize these technologies in pursuit of meeting specific service-level goals.
When you decouple software deployment from underlying hardware, it begins to allow for a re-evaluation of these operational roles. Today, most companies are already in a transition in this respect, with increasing reliance on roles like "virtualization administrator" and "operations specialist" to fulfill changing needs.
The changing landscape of software development platforms will result in new philosophies of software architecture, deployment, and operations. I'm thinking here primarily of two things.
First, agility will become king in large-scale systems development for classes of applications ranging from web applications to data processing to core business systems. Agility from the service provider's perspective, in the frequency with which they can release features and fixes. Agility from the perspective of the enterprise developer, in how rapidly they can iterate over the write-build-test cycle. Agility from the perspective of the entrepreneur, in that data center services are now a credit card away.
Second, I think project management, whether for commercial offerings or for custom enterprise applications, will see rapid change. Agile programming and project management methods make a ton of sense in the cloud, as do service-oriented approaches to software and systems architecture. Project managers wondering what cloud computing will do to their day-to-day jobs should consider what happens if development can outpace a Gantt chart.
The need for tactical systems administrators will be reduced. I've written about this in the past, but the tactical system administrator--the man or woman who grabs a trouble ticket from the top of the queue, takes care of the request, closes the ticket, then takes the next ticket from the queue--is going to largely (though probably not entirely) go away.
Why? Automation. Most of the tasks such an admin does day to day are highly automatable: provisioning, failure recovery, scaling, infrastructure management and so on. These administrators are among the last "clerks" in business, and a result of the unfortunate fact that IT has been excellent at automating everything in business--except IT.
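As a toy illustration of that automation, here is a sketch of a remediation loop that does what the tactical admin's ticket queue used to do; check_service and replace_instance are hypothetical stand-ins for whatever monitoring and provisioning APIs an organization actually runs, not calls from a real library.

```python
import time

def check_service(name):
    # Hypothetical probe: a real version might do an HTTP GET against the
    # service's published health-check interface.
    return True

def replace_instance(name):
    # Hypothetical provisioning call: tear down the failed capacity and
    # stand up a replacement. No ticket is ever opened.
    print("replacing capacity for", name)

SERVICES = ["order-api", "order-worker"]

def remediation_loop(poll_seconds=30):
    # The loop is the clerk: take the next problem, fix it, move on.
    while True:
        for name in SERVICES:
            if not check_service(name):
                replace_instance(name)
        time.sleep(poll_seconds)
```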
Where tactical systems administration will still be needed, however, is in what I like to call the "private cloud operations center," a concept similar to the network operations centers that exist in many Fortune 500 companies today. There, the administrator would monitor overall performance of applications running in the cloud (on both internal and external resources), as well as monitoring the performance of the cloud providers themselves.
There are a lot more forward-thinking thoughts that you and I could probably come up with when we think of the demise of traditional IT in favor of a lean, tight, cloud-oriented IT model. However, the great thing about being involved in cloud today is that the ground is shifting so fast that I find myself changing many of the long-term predictions I made last year. I wouldn't presume to be able to see the future clearly in the face of cloud computing, but many of the key drivers are already out there.
The trick is to be open-minded about what you see, and to be willing to "rethink"...big.
Cloud computing and the big rethink: Part 4
So far in this series, I've described why the very form of application infrastructure delivery will change in the coming years, and why both infrastructure and software development will play a major role in that. These are powerful forces already at work, and you can already see their effects on the way enterprise IT and consumer Web applications are operated.
There is one more key force that will change the way we acquire, build, and consume enterprise application functionality and data, however. It is the very reason that enterprise IT exists. I am speaking, of course, of the users--the business units and individuals that demand IT give them increased productivity and competitive advantage.
How is it that end users could affect cloud-based architectures? After all, isn't one of the key points about cloud computing that it hides infrastructure and operations from hosted applications and services? The answer is simple: the need for cloud-operated infrastructure comes from the need for more efficient application delivery and operations, which in turn comes from the accelerated need for new software functionality driven by end users.
The most obvious place where this is the case is software as a service. Cloud applications and services that fall under this category are targeted at end users; they deliver computing and storage functionality that meet specific business needs (such as customer relationship management (CRM) or application development and testing).
Here's the thing about most business applications, though, regardless of how they are delivered: they are almost never used out of the box, as is, without some form of customization. I worked for a short time at enterprise content management vendor Alfresco, and I don't think there were any "as is" deployments. Every engagement involved customization.
For CRM vendor Salesforce.com, the evidence is the importance and success of its Force.com cloud development platform, as well as its AppExchange marketplace. Both allow users to customize or extend Salesforce.com for their needs, and even build new business applications that leverage customer data.
The result of this is that the cloud itself must be not only elastic, but agile. It must bend at all levels to the will of its users, and the degree and ease of configuring and customizing will quickly become competitive differentiators for vendors in all categories of cloud computing.
What are the best ways to accommodate this agility at scales large enough to meet the needs of cloud computing? Well, today that would be two technologies:
Virtualization--the abstraction of computing, storage, and networking resources from underlying infrastructure
Automation--the elimination of the need for human intervention in common, repeatable tasks and decisions (a sketch of how the two compose follows this list)
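Here is that sketch: a deliberately generic pairing of the two, with class and function names of my own making rather than any product's API. Virtualization presents a uniform pool of abstract capacity; automation turns observed load into allocation decisions against that pool, with no human in the loop for the routine case.

```python
class VirtualPool:
    """Stand-in for virtualization: callers request abstract units of
    capacity and never learn which physical box supplies them."""

    def __init__(self, total_units):
        self.total = total_units
        self.allocated = {}

    def allocate(self, app, units):
        if sum(self.allocated.values()) + units > self.total:
            raise RuntimeError("pool exhausted")
        self.allocated[app] = self.allocated.get(app, 0) + units

    def release(self, app, units):
        self.allocated[app] = max(0, self.allocated.get(app, 0) - units)

def autoscale(pool, app, observed_load, load_per_unit=100):
    """Stand-in for automation: translate load into an allocation,
    scaling the app up or down without human intervention."""
    needed = -(-observed_load // load_per_unit)   # ceiling division
    current = pool.allocated.get(app, 0)
    if needed > current:
        pool.allocate(app, needed - current)
    elif needed < current:
        pool.release(app, current - needed)

pool = VirtualPool(total_units=32)
autoscale(pool, "crm-customization", observed_load=450)
print(pool.allocated)   # {'crm-customization': 5}
```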
Now, if you are going to virtualize and automate infrastructure in support of a customization of a SaaS application, do you need an entire virtual server with a full-featured operating system? Of course not. In fact, I would argue that you need least-common-denominator systems infrastructure to enable the customization to work. Otherwise you are creating unnecessary storage and computing baggage.
I think in many ways only the cloud-computing model enables this degree of efficiency in running customized business systems for end users. Because the service vendors (be it software, platform, or infrastructure services) are able to optimize for all customers at once, a given advancement in efficiency pays off much more (and much faster) for the service provider than it would for a single customer. Multi-tenancy is what makes the economics work for both the business user and the service provider.
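A stylized back-of-the-envelope illustrates the point; the numbers below are invented purely for illustration. An optimization costs the same engineering effort either way, but the multi-tenant provider amortizes it across every customer at once.

```python
# Invented, purely illustrative numbers.
engineering_cost = 200_000        # one-time cost to build the optimization ($)
annual_spend_per_customer = 50_000
improvement = 0.05                # 5% efficiency gain
customers = 2_000                 # tenants sharing the provider's platform

single_customer_savings = annual_spend_per_customer * improvement   # $2,500/yr
provider_savings = customers * single_customer_savings              # $5,000,000/yr

print(engineering_cost / single_customer_savings)  # ~80 years to pay off alone
print(engineering_cost / provider_savings * 365)   # ~15 days for the provider
```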
My next and final post in the series will attempt to wrap all of this up, and to present a vision of what the cloud of the future may look like when the evolution and/or demise of the operating system and virtual server is complete. Though I harbor no illusions about it happening all at once, or being a pain-free transition, I, for one, am excited about the new technologies this future may enable. I hope you are, too.
Cloud computing and the big rethink: Part 3
In the second part of this series, I took a look at how cloud computing and virtualization will drive homogenization of data center infrastructure over time, and how that is a contributing factor to the adoption of "just enough" systems software. That, in turn, will signal the beginning of the end for the traditional operating system, and in turn, the virtual server.
However, this change is not simply being driven by infrastructure. There is a much more powerful force at work here as well--a force that is emboldened by the software-centric aspects of the cloud computing model. That force is the software developer.
Let me explain. Almost 15 years ago, I went to work for a start-up that was trying to change the way distributed software applications were developed forever. The company was Forte Software, since acquired by Sun (itself soon to be acquired by Oracle), and its CTO, Paul Butterworth, and his team were true visionaries when it came to service-oriented software development (pre-"SOA"), event-driven systems, and business process automation.
What I remember most about Forte's flagship product, a fourth-generation language programming environment and distributed systems platform, was the developer experience:
Write and test your application on a single machine, naming specific instances of objects that would act as services for the rest of the application.
Once the application executed satisfactorily on one system, use a GUI to drag the named instances to a map of the servers on your network, and push a single button to push the bits, execute the various services, and test the application.
Once the application tested satisfactorily, create a permanent partitioning map of the application, and push a single button to distribute the code, generate and compile C++ from the 4GL if needed, and run the application.
This experience was amazingly productive. The only thing it could have used was automation of the partitioning step (with runtime determination of scale, etc.), and the ability to get capacity for the application dynamically from a shared pool. (The latter was technically possible if you used a single Forte environment to run all of the applications that would share the pool, but there still would be no automation of operations.)
I have spent the last 10 years trying to re-create that experience. I also believe most distributed systems developers (Web or otherwise) are looking for the same. This is why I am so passionate about cloud computing, and why I think developers--or, perhaps more to the point, solutions architects--will gain significant decision-making power over future IT operations.
I look at it this way: if an end user is looking for an IT service, such as customer relationship management, a custom Web application, or even a lot of servers and storage for an open-source data processing framework, meeting that need almost always takes the knowledge and skills of someone who can create, compose, integrate, or configure software systems.
Furthermore, there remains a lot of reliance by nontechnical professionals on their technical counterparts to determine how computing can solve a particular problem. For the most part, in most corporate and public sector settings, the in-house IT department has traditionally been the only choice for any large-scale computing need.
Until recently, if a business unit hired a technologist to look for alternatives to internal IT, the cost of any other "IT-as-a-service" offering (outsourcing, service bureaus, etc.) was extremely high and would immediately have to be rationalized against internal IT--usually to the detriment of the alternative. On top of that, all of those alternatives required long-term commitments, so "trying things out" wasn't really an option.
The economics of the cloud change things dramatically. Now those services are cheap, their costs can be borne for very short periods of time, and they can all be put on a credit card and expensed. A business unit can go a long way toward proving the economic advantages of a cloud-based alternative to internal IT before its budget is significantly impacted.
Developers are increasingly choosing alternative operations models to internal IT, and will continue to do so while the opportunity is there. Internal IT ultimately has to choose between competing with public clouds, providing services that embrace them, or both.
(There are often reasons why internal IT can and should provide alternatives to public cloud computing services. See just about the entire debate over the validity of private clouds.)
So, how does the cloud accommodate and attract software developers? I believe the key will be the development experience itself; key elements like productivity, flexibility, types and strength of services, and so on will be critical to cloud providers.
We need more development tools that are cloud-focused (or cloud extensions to the ones we have). We need more of an ecosystem around Ruby on Rails and Java, currently the two most successful open development platforms in the cloud, or innovative new approaches to cloud development. We need to tighten up the development and testing experience of PaaS options like Google App Engine, making things "flow" as seamlessly as possible.
We need more IaaS providers to think like Amazon Web Services. We always hold up AWS as the shining light of Infrastructure as a Service, but the truth is that AWS is actually a cloud platform that happens to have compute and storage services in its catalog. How much more powerful is AWS with other developer-focused services, such as DevPay, Simple Queue Service, and Elastic MapReduce? This attracts developers, which in turn attracts CPU-hours and GB-hours.
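To make that concrete, this is roughly what wiring an application to Simple Queue Service looked like with the open-source boto library; the sketch is from memory of boto's classic interface, so treat the exact calls as an assumption and check the docs. The point is what's missing: no queue server to size, install, patch, or fail over.

```python
import boto
from boto.sqs.message import Message

# Credentials are placeholders; boto can also read them from the environment.
conn = boto.connect_sqs(aws_access_key_id="YOUR_KEY",
                        aws_secret_access_key="YOUR_SECRET")

queue = conn.create_queue("orders-inbound")   # idempotent: creates or fetches

# Producer: enqueue a unit of work.
msg = Message()
msg.set_body("order-12345")
queue.write(msg)

# Consumer: pull, process, delete.
received = queue.read(visibility_timeout=60)
if received is not None:
    print(received.get_body())
    queue.delete_message(received)
```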
How does all of this affect the virtual server and operating system, the topic of this series? Well, if the application developer is getting more services directly from the development platform, what is the need for a bevy of advanced services in the operating system? And if that platform is capable of hiding the infrastructure used to distribute application components--or even hide the fact that the application is distributed altogether--then why use something that represents a piece of infrastructure to package the bits?
Next in the series, I want to consider the role of the business users themselves in rethinking enterprise architectures. In the meantime, you can check out part 1 of this series about how cloud computing will change the way we deliver distributed applications and services; and part 2 about how server virtualization is evolving.
Cloud computing and the big rethink: Part 2
In the opening post of this series, I joined Chris Hoff and others in arguing that cloud computing will change the way we package server software, with an emphasis on lean, "just enough" systems software. This means that the big, all-purpose operating system of the past will either change dramatically or disappear altogether, as the need for a "handle all comers" systems infrastructure is redistributed both up and down the execution stack.
The reduced need for specialized software packaged with bloated operating systems in turn means the virtual server is a temporary measure: a stopgap until software "containers" adjust to the needs of the cloud-computing model. In this post, I want to highlight a second reason why server virtualization (and storage and network virtualization) will give way to a new form of resource virtualization.
I'll start by pointing out one of the unexpected (for me at least) effects of cloud computing on data center design. Truth be told, this is actually an effect of mass virtualization, but as cloud computing is an operations model typically applied to virtualization, the observation sticks for the cloud.
Today's data centers have been built piecemeal, very often one application at a time. Without virtualization, each application team would typically identify what servers, storage and networking were needed to support the application architecture, and the operations team would acquire and install that infrastructure.
Specific choices of systems used (e.g. the brand of server, or the available disk sizes) might be dictated by internal IT "standards," but in general the systems that ended up in the data center were far from uniform. When I was at utility computing infrastructure vendor Cassatt, I can't remember a single customer that didn't need their automation to handle a heterogeneous environment.
But virtualization changes that significantly, for two reasons:
The hypervisor and virtual machine present a uniform application programming interface and hardware abstraction layer for every application, yet can adjust to the specific CPU, memory, storage, and network needs of each application.
Typical virtualized data centers are inherently multitenant, meaning that multiple stakeholders share the same physical systems, divided from one another by VMs, hypervisors, and their related management software.
So, the success of applications running in a virtualized environment is not dependent on the specialization of the underlying hardware. That is a critical change to the way IT operates.
In fact, in the virtualized world, the push is the opposite: toward an infrastructure that is as homogeneous as possible. Ideally, you rack the boxes, wire them up once, and sit back as automation and virtualization tools give the illusion that each application is getting exactly the hardware and networking that it needs.
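A toy placement routine makes the payoff visible; this is entirely my own sketch, not any vendor's scheduler. With a uniform fleet, placing an application is simple capacity arithmetic; with a heterogeneous one, every loop below would need brand, firmware, and compatibility checks.

```python
HOST_CAPACITY = {"cpu": 16, "memory_gb": 64}   # every host looks identical

def first_fit(apps, num_hosts):
    """Assign each app's resource request to the first host with room.
    Homogeneity reduces placement to arithmetic over two numbers."""
    hosts = [dict(HOST_CAPACITY) for _ in range(num_hosts)]
    placement = {}
    for name, req in apps.items():
        for i, free in enumerate(hosts):
            if all(free[k] >= req[k] for k in req):
                for k in req:
                    free[k] -= req[k]
                placement[name] = i
                break
        else:
            raise RuntimeError("no capacity left for " + name)
    return placement

apps = {
    "web":   {"cpu": 4, "memory_gb": 8},
    "db":    {"cpu": 8, "memory_gb": 32},
    "batch": {"cpu": 6, "memory_gb": 16},
}
print(first_fit(apps, num_hosts=2))   # {'web': 0, 'db': 0, 'batch': 1}
```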
Now, if the physical architecture no longer needs to be customized for each application, the question quickly becomes: what is the role of the virtual server in delivering what the application needs? Today, virtual machines are required because applications are written against operating systems as their deployment frameworks, so to speak, and the operating systems are tuned to distribute hardware resources to applications.
But imagine if applications could instead be built against more specialized containers that handled both "glue" functions and resource management for that specialization--e.g., a Web app "bundle" that could deal with both network I/O and storage I/O (among other things) directly on behalf of the applications it hosts. (Google App Engine, anyone?)
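For a feel of what such a bundle looks like from the application's side, here is a bare WSGI handler (WSGI being Python's standard web-container interface, and the model App Engine exposed); save_order is a hypothetical stand-in for a bundle-provided storage call. The application never opens a socket or touches a disk; the container performs all I/O on its behalf.

```python
def save_order(key, value):
    # Hypothetical bundle-provided storage call: the container, not the
    # application, decides where and how the bytes persist.
    pass

def application(environ, start_response):
    # By the time this runs, the bundle has already done the network I/O:
    # the request arrives as plain data, and the response goes back the
    # same way.
    order_id = environ.get("PATH_INFO", "/").strip("/")
    save_order(order_id, environ.get("QUERY_STRING", ""))
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"accepted\n"]

if __name__ == "__main__":
    # Local experimentation only; in the bundle model, even this server
    # belongs to the platform rather than to the application.
    from wsgiref.simple_server import make_server
    make_server("localhost", 8000, application).serve_forever()
```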
A homogeneous physical architecture simplifies the task of delivering these distributed computing environments greatly, as there is a consistency of behavior from both a management and execution perspective. However, as it turns out, a homogeneous virtual container environment has the same effect.
So, if the VM isn't hiding diversity at the hardware layer, or diversity at the software layer (which is hidden by the "middleware"), what is its purpose? Well, there is still a need for a virtual container of some sort, to allow for a consistent interface between multiple types of cloud middleware and the hardware. But it doesn't need to look like a full-fledged server at all.
Thus, the VM is a stopgap. Virtual containers will evolve to look less and less like hardware abstractions, and more and more like service delivery abstractions.
In my next post, I want to look at things from the software layers down, and get into more detail about why applications will be created differently for the cloud than they were for "servers." Stay tuned.
Cloud computing and the big rethink: Part 1
Chris Hoff, my friend and colleague at Cisco Systems, has reached enlightenment regarding the role of the operating system and, subsequently, the need for the virtual machine in a cloud-centric world.
His post last week reflects a realization attained by those who consider the big picture of cloud computing long enough.
He summarizes his thoughts nicely at the opening of the post:
Virtual machines (VMs) represent the symptoms of a set of legacy problems packaged up to provide a placebo effect as an answer that in some cases we have, until lately, appeared disinclined and not technologically empowered to solve.
If I had a wish, it would be that VM's end up being the short-term gap-filler they deserve to be and ultimately become a legacy technology so we can solve some of our real architectural issues the way they ought to be solved.
Hoff goes on to note that the real problem isn't the VM, but the modern operating system:
The approach we've taken today is that the VMM/Hypervisor abstracts the hardware from the OS. The applications are still stuck on top of operating systems that don't provide much in the way of any benefit given the emergence of development frameworks/languages such as J2EE, PHP, Ruby, .NET, etc. that were built around the notions of decoupled, distributed and mashable application "fabrics."
My own observation here is that our current crop of operating systems was designed when competitors were pushing to use the OS as a differentiator--a way of distinguishing one company's product experience from another. OSes started out being targeted at software, providing a way for applications to use a generalized API to acquire and consume the resources they needed.
At the time, computers had one CPU and the logical thing to do was to design a single OS that could run multiple applications, preferably at once. This created the need for additional functionality to both manage resources and manage the applications themselves.
Furthermore, the operating system increasingly targeted not the needs of software, but the needs of people; more specifically, the needs of computing buyers. Take a look at OS X, or Windows, or even "enterprise" Linux distributions today. The number of features and packages that are included to entice software developers, system administrators, or even consumers to consume the product is overwhelming.
However, any given application doesn't need all those bells and whistles, and most OSes are unfortunately not designed to adjust their footprint to the needs of a specific application.
So, the problem isn't that OS capabilities are not needed, just that they are ridiculously packaged, and could in fact be wrapped into software frameworks that hide any division between the application and the systems it runs on.
By the way, this is exactly why EMC purchased Fastscale last month, as noted by Chuck Hollis, EMC's CTO of global marketing, on the day the acquisition was announced. Simon Crosby, CTO of the data center and cloud division at Citrix, also notes that this change is coming but sees the OS playing a more important transitional role.
This is a critical concept for application developers wondering how cloud computing will affect software architectures. It is also a critical concept for why IT operations professionals need to understand that their roles and responsibilities are changing.
Because of this, I'll be following up with a few posts this week that will expand on this concept and give you much more of a sense of why the operating system, along with most server, network, and storage virtualization, is a stopgap measure as we move to a cloud experience centered on the application user and the developer.
Next on the list is an explanation of why cloud computing drives infrastructure toward homogeneity (at least within a data center) and why that is the bane of server virtualization.