Tuesday, May 29, 2012

Utility Computing

 


Most people point to Douglas Parkhill's 1966 book The Challenge of the Computer Utility as the origin of the cloud computing concept. The book details many of the foundational elements of today's cloud computing - elastic provisioning, online delivery, and the perception of infinite supply. It just took 34 years for the infrastructure to catch up with the original vision.


In the book Behind the Cloud: The Untold Story of How Salesforce.com Went From Idea to a Billion-Dollar Company, founder Marc Benioff describes the conversations he had with Tom Siebel about creating an online CRM product. Traditionally licensed software sold for extraordinary amounts of money: a low-end product could start around $1,500 per user per license. Worse, buying pricey software wasn't the only expense. There could be an additional $54,000 for support; $1,200,000 for customization and consulting; $385,000 for the basic hardware to run it; $100,000 for administrative personnel; and $30,000 in training. The total cost for 200 people to use a low-end product in the 1990s could exceed $1.8 million in the first year alone.
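Those line items are easy to sanity-check. Here is a quick sketch using the figures quoted above (the headcount and per-item costs come straight from the book):

```python
# Back-of-the-envelope check of the first-year figures quoted above.
users = 200
license_per_user = 1_500

costs = {
    "licenses": users * license_per_user,    # $300,000
    "support": 54_000,
    "customization_consulting": 1_200_000,
    "hardware": 385_000,
    "admin_personnel": 100_000,
    "training": 30_000,
}

total = sum(costs.values())
print(f"First-year total: ${total:,}")  # First-year total: $2,069,000 -- well past $1.8M
```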


Most egregious of all, the majority of this expensive (and even more expensively managed) software became "shelfware": according to the research group Gartner, 65 percent of Siebel licenses were never used.


We have all seen products come into our companies with millions of dollars invested in the initiative, only to find that they didn't suit the business need, were too complex, or couldn't be integrated with existing systems. Unfortunately, the sunk investment made it very hard to walk away from the solution, even when it was obvious that it wouldn't work.


Marc Benioff wanted to change that with a SaaS CRM solution. He envisioned subscribers paying a monthly fee, the way they would pay a utility company, at half the investment of traditional software. Providing subscription applications over the internet wasn't unique - it had been done by Prodigy, CompuServe, AOL, and online gaming services. What was unique was taking a business application and hosting it on the internet to replace traditional corporate software.


In May 2003, Nicholas Carr published the article "IT Doesn't Matter" in the Harvard Business Review. Most IT professionals and executives were very critical of it. Carr argued that corporate computer systems weren't important to a company's success. They were necessary - you couldn't operate without them - but most systems had become so commonplace that they no longer gave one company an edge over its competitors. In his view, information technology had become inert: just the cost of doing business.


SaaS-based solutions changed the landscape. By offering a utility-based computing model charged as a monthly subscription, they reduced the initial investment and provided the elasticity to expand when required. The SaaS cloud service provider operated the infrastructure and software, so companies didn't need to worry about the complexity of implementing a system like CRM. Additionally, a robust system could be up and running in a matter of days instead of the 12 to 18 months typical of traditional software development.
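To see why the subscription model was so disruptive, compare a hypothetical monthly fee against the licensed-software total computed above. The $65-per-user monthly fee here is an illustrative assumption, not Salesforce's actual pricing:

```python
# Hypothetical subscription vs. licensed comparison. The $65/user/month
# fee is an assumption for illustration, not actual Salesforce pricing.
users = 200
monthly_fee = 65  # assumed

first_year_saas = users * monthly_fee * 12
first_year_licensed = 2_069_000  # from the sketch above

print(f"First-year SaaS cost:     ${first_year_saas:,}")      # $156,000
print(f"First-year licensed cost: ${first_year_licensed:,}")  # $2,069,000
```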


Corporate executives are embracing the utility-based computing business model to outsource certain applications to cloud partners. Skip Tappen from NWN described this transition as TaaS (Technology as a Service) at the recent 2012 IT Summit and Expo. He said that IT services form a continuum from traditional IT through SaaS, and that delivering the technology has two components: the physical layer (infrastructure, platform, and software) and the service.


SaaS solutions present a compelling opportunity for small, medium, and large corporations. They can provide a competitive advantage by reducing complexity, enhancing speed to market, and lowering capital costs; however, they should be implemented for edge-based software solutions. Core applications - those that define the organization's heart and soul and its distinct product characteristics - should stay internal. Cloud service providers offer great solutions for software that isn't a core component, but they don't provide innovative strategic value for your company, and that is why core applications should stay in-house.


IT must define the criteria for assessing which applications that provide strategic value can move to the cloud and which are better suited to staying on premise. They should start with "greenfield" applications, use a cloud questionnaire for an initial assessment of each application, and establish a cloud solutions team that performs a business and risk assessment.
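As a rough illustration of what that initial questionnaire might look like, here is a minimal scoring sketch. The questions, weights, and threshold are assumptions for illustration; a real assessment would be tuned by the cloud solutions team:

```python
# Illustrative cloud questionnaire: the questions, weights, and
# threshold are assumptions a real cloud solutions team would tune.
QUESTIONS = [
    # (question, weight) -- a "yes" favors moving to the cloud
    ("Is this a greenfield application?", 3),
    ("Is it an edge (non-core) system?", 3),
    ("Can it tolerate WAN latency to back-end systems?", 2),
    ("Is the data free of regulatory constraints?", 2),
    ("Is it loosely coupled to on-premise systems?", 1),
]

def assess(answers):
    """answers: one boolean per question, in order."""
    score = sum(w for (q, w), yes in zip(QUESTIONS, answers) if yes)
    verdict = "cloud candidate" if score >= 8 else "deeper business/risk review"
    return score, verdict

print(assess([True, True, True, False, True]))  # (9, 'cloud candidate')
```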


Devising a Cloud Application Onboarding Strategy by Alessandro Perilli of Gartner provides lessons learned from field research across 17 worldwide organizations. He dives into qualification questionnaires and business impact assessments. It is a fantastic document that can help frame an initial approach to a cloud strategy.


Alessandro points out that developing an application onboarding strategy is a fundamental aspect of any cloud adoption initiative. Organizations need an efficient approach to identifying, prioritizing, and facilitating the onboarding process while addressing organizational and cultural changes.

Monday, May 21, 2012

Hybrid Clouds



Last week at the Boston 2012 VMware Forum I attended a presentation on hybrid clouds. During the presentation, a solution architect took the stage and showed how easy it is to move applications from your existing infrastructure to an external hosting provider. Click a button and shazam - your application has been magically transported to the ethereal plane of a cloud partner. Apparently cloud applications now come with an interesting new wormhole feature that lets you defy the laws of physics.


The NIST defines a hybrid cloud as a composition of two or more distinct cloud infrastructures (private, community, or public) that remain unique entities, but are bound together by standardized or proprietary technology that enables data and application portability.

Gartner’s conclusion about moving applications to the cloud is that IT organizations desire to conduct V2C (virtual machine to cloud) migrations, but today's market lacks maturity for most scenarios. Additionally, these migrations are time consuming and problematic. Most IT organizations should wait for tools to mature in terms of automation and performance before committing to V2C. As for now, organizations that need to migrate applications should do so by manually redeploying applications to the cloud provider.

That doesn't disqualify hybrid clouds for some useful applications, as long as you are realistic in your approach. First, if you move your virtual machine templates to a cloud partner and build up the infrastructure, you can leave it turned off in a "pay as you go" model for bursting capabilities. This does require you to build out the virtual machines and then reload your applications, but they would be available when you need additional capacity for peak usage.

A recurring example of an organization that would benefit from a hybrid cloud solution is Ticketmaster. Theoretically, Ticketmaster must have massive utilization requirements when tickets go on sale for concerts or sporting events. Let's say it is Friday morning and Bruce Springsteen tickets go on sale at 10 am. The instant those tickets are available, Ticketmaster will require an enormous amount of IT infrastructure to support the number of transactions that occur over the next few hours. How can Ticketmaster afford infrastructure for a business model driven by peaks like a Springsteen on-sale? They could rely on a cloud service provider.
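The economics are easy to see with a little arithmetic. All numbers below are hypothetical; the point is the ratio of peak to baseline demand:

```python
# Hypothetical numbers behind the Ticketmaster scenario; the point is
# the ratio of peak to baseline demand, not the absolute figures.
baseline_tps = 500      # assumed steady-state transactions per second
peak_tps = 25_000       # assumed on-sale spike
tps_per_server = 250    # assumed capacity of a single server

baseline_servers = baseline_tps // tps_per_server  # 2
peak_servers = peak_tps // tps_per_server          # 100

print(f"Servers needed at baseline: {baseline_servers}")
print(f"Servers needed at peak:     {peak_servers}")
# Owning hardware sized for the peak means ~98% of it idles most of the
# week -- exactly the case for pay-as-you-go burst capacity.
```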

Building out your virtual environment on a cloud partner's infrastructure may save you the capital expense of purchasing the underlying hardware, but it certainly doesn't absolve you of supporting the virtual instances. Even though these virtual machines may lie dormant, you still need to ensure that they are kept up to date with the latest security patches and software updates. Additionally, you need to make sure baseline infrastructure components like corporate security authentication and name resolution work in your partner's multi-tenant environment.

When starting to assess applications for a hybrid cloud solution, it is good to begin with lab and development workloads until your company is mature enough to move production instances. One scenario that does not work well in a hybrid cloud model is an application that relies on back-end infrastructure at the company's datacenter and is sensitive to latency. Applications with high I/O that must traverse the network can leave your users suffering poor performance and application time-outs; the rough model below shows why. It doesn't matter how much you save on infrastructure costs - if your customers start to experience poor performance in outward-facing applications, the savings are not justified.
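Here is that rough model: a front end in the cloud making sequential calls to an on-premise back end. The round-trip count and latencies are illustrative assumptions:

```python
# Rough model of a chatty application split across a hybrid cloud.
# Round-trip count and latencies are illustrative assumptions.
lan_rtt_ms = 0.5   # same-datacenter round trip
wan_rtt_ms = 40.0  # assumed cloud-to-datacenter round trip
round_trips = 60   # assumed sequential back-end calls per page load

print(f"All on-premise: {lan_rtt_ms * round_trips:.0f} ms on the wire")  # 30 ms
print(f"Hybrid split:   {wan_rtt_ms * round_trips:.0f} ms on the wire")  # 2400 ms
# The same page that felt instant on the LAN now spends 2.4 seconds on
# the network alone, before any server-side work is counted.
```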

Gartner’s take-aways for IT organizations that are considering migrating applications to a cloud service provider:
  • Migrating applications to the cloud normally demands a manual process of deploying fresh cloud templates, reinstalling applications, and moving data.
  • Emerging V2C migration tools attempt to automate a migration from a traditional server virtualization environment into a cloud environment. These tools are not enterprise-ready because they are limited by hypervisor type, guest OS, cloud provider, and VM size:
    •  The V2C migration process is time consuming and prone to failure; the VM size and movement across networks are major contributing factors.
    •  Existing V2C migration tools such as Amazon's VM Import and VMware's vCloud Director are nascent and do not provide much visibility, insight, or assistance to IT organizations.
    •  Before selecting a CSP, IT organizations should ensure importing VMs is on the CSP’s 12-month road map.
    •  Hybrid cloud software and migration tools are emerging, but they are point-to-point, often unidirectional.

In early May, Riverbed Technology announced a partnership with VMware developed to help enterprises accelerate their journey to the cloud. With this collaboration, Riverbed WAN optimization increases the speed of virtual machines moving between clouds (private, public, and hybrid) with VMware vCloud Connector. The combination of Riverbed and VMware solutions can enable cloud service providers to maximize their cloud computing offerings by empowering customers to utilize their existing IT investments. This seems very promising, but the technology was only recently released, and very few cloud service providers have invested in the capabilities.

When moving to a hybrid cloud model, be realistic about the capabilities in the space before you sell a solution that doesn’t currently exist.

Thursday, May 10, 2012

vCloud Networks


vCloud Director has a layered network structure. There are three primary networking layers:

•  External
•  Organizational
•  vApp


Most organizations will use external and organizational networking; I don’t see many companies taking advantage of vApp networking outside of hosting providers. I will give you a more detailed explanation after we discuss the fundamentals of each networking component.
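Before digging into each layer, here is a rough mental model of how the three layers nest, expressed as simple data structures. The class names and fields are illustrative only - they are not the vCloud API:

```python
# Illustrative model of the three vCD networking layers; the class
# names and fields are not the vCloud API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExternalNetwork:          # provider-maintained; maps to a port group
    name: str
    vsphere_port_group: str

@dataclass
class OrgNetwork:               # provider-maintained, scoped to one org
    name: str
    mode: str                   # "direct", "internal", or "nat_routed"
    external: Optional[ExternalNetwork] = None  # None when internal-only

@dataclass
class VAppNetwork:              # created by the organization owner
    name: str
    mode: str
    org_network: Optional[OrgNetwork] = None

internet = ExternalNetwork("internet", vsphere_port_group="dvPG-External")
org_net = OrgNetwork("finance-direct", mode="direct", external=internet)
vapp_net = VAppNetwork("dev-vapp", mode="direct", org_network=org_net)
```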


External Networks

Surprise… Surprise… Guess what external networks do? Give up? They are the means of connecting to the outside infrastructure. If you don't have an External Network set up in vCloud Director, then your Organizations and vApps can't connect to the outside world. External Networks are maintained by the Cloud Providers (IT Infrastructure Staff). To create a vCD External Network, you point to an existing vSphere port group.



Organizational Networks

Organization Networks are where things get a little more dynamic. If you remember from my previous post, an organization is the workspace owned by an IT business partner or some other tenant. The organization is where you partition and allocate infrastructure resources so that the organizational owner can provision vApps.

The two simplest forms of the network construct are the External Organization Network (Direct Connect) and the Internal Organization Network. External Organization Networks (Direct Connect) are pretty straightforward; they simply use the External Network to connect to the Internet. Internal Organization Networks are only available internally to the organization; they do not have access to the External Network.

The more complex option is the External Organization Network (NAT/Routed). This option is required if you are going to transfer your OVF-format vApps to a hybrid cloud partner. It provides its own private IP schema, which the Organization can choose freely, carried on a dedicated layer 2 segment. The private network is then routed to the External Network.

If you launch your vSphere client, you will see that a dedicated port group is created to support the organizational segment and that a vShield Edge appliance is automatically deployed. vShield Edge provides network services such as NAT, firewall, and DHCP to protect and serve this dedicated layer 2 segment.

When working with your external cloud partner, you will use the vShield Edge to create a secure VPN tunnel. In this deployment, the NAT device translates the VPN address of a vShield Edge into a publicly accessible address facing the Internet. Remote VPN routers use this public address to access the vShield Edge. Remote VPN routers can be located behind a NAT device as well. In this case, IT must provide both the VPN native address and the NAT public address to set up the tunnel. On both ends, static one-to-one NAT is required for the VPN address.
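A small sketch of that static one-to-one NAT requirement may help. The addresses are illustrative (public addresses from the RFC 5737 documentation ranges, private addresses from RFC 1918):

```python
# Illustration of static one-to-one NAT for the VPN endpoints. Public
# addresses are from RFC 5737 documentation ranges; private addresses
# are RFC 1918. None of these are real endpoints.
static_nat = {
    "10.0.0.1": "203.0.113.10",    # our vShield Edge: VPN native -> public
    "10.99.0.1": "198.51.100.20",  # partner's Edge, itself behind a NAT
}

# Each side configures the peer's *public* address to reach it, but
# authenticates against the peer's *native* VPN address -- which is why
# IT must hand over both addresses when either end sits behind NAT.
for native, public in static_nat.items():
    print(f"peer native {native} is reachable at public {public}")
```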

Like External Networks, Organization Networks are maintained by the Cloud Providers (IT Infrastructure Staff).

vApp Networks

vApp networks have the same three types of network options - vApp Network (Direct Connect), Internal vApp Network, and vApp Network (NAT/Routed). A vApp Network is set up by the organization owner for a vApp. The reason I don't see this being prevalent in most large companies is that I am skeptical most consumers would have the desire or need to carve up their own network.

One good scenario for vApp Networks is fencing in two development vApp regions so that you can run the same virtual machine in both instances.
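Here is a minimal sketch of that fencing scenario, with illustrative addresses: both vApps contain an identical VM (same internal IP), and each vApp network NATs its copy to a different external address so both can run at once:

```python
# Illustrative fencing example: both vApps run a VM with the same
# internal IP; the vApp network NATs each to a distinct external IP.
fenced_vapps = {
    "dev-region-a": {"internal_vm_ip": "192.168.1.10", "external_ip": "10.20.0.41"},
    "dev-region-b": {"internal_vm_ip": "192.168.1.10", "external_ip": "10.20.0.42"},
}

for vapp, net in fenced_vapps.items():
    print(f"{vapp}: {net['internal_vm_ip']} (inside fence) -> {net['external_ip']}")
# Without fencing, the duplicate 192.168.1.10 addresses would collide
# on the organization network.
```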



There are limitless scenarios you can design with this layered network approach; from a practical standpoint, most large organizations will implement organization networks that are direct connect.

Friday, May 4, 2012

Operational Model



Cloud computing focuses on technology solutions for cost savings, cost avoidance, and business agility, but in large enterprises cloud computing will also be a catalyst for organizational realignment that converges traditional infrastructure silos. A new operations model is needed to bring insight into IT costs so that IT executives can become brokers for internal and external cloud solutions.

Even though logic may point to a realignment of traditional silos, the shift is a challenge because it will impact span of control for IT executives. In my experience, functional silos become barriers to innovation. Innovation thrives in environments that nurture ideas, collaboration, and diverging viewpoints. When departments run in silos, they are not looking at the broader aspects of organizational activities.



Businesses tend to structure their IT departments based on specific functional roles. These departments may include Wintel servers, mid-range, mainframe, storage and recovery, and networking. I am going to focus on one of the fundamental building blocks of cloud computing - virtualization.
