Monday, November 26, 2012

VMware Project NEE - Online Training Delivered From Cloud




Project NEE is VMware's next generation education environment that is powered by vCloud Director. It is a new VMware Labs project providing a richly featured, powerful online learning and lab environment delivered from the cloud to any device, anywhere, anytime. Project NEE will provide online labs, live chat, video, social media, and access to real virtual machines.

I was fortunate enough to have the opportunity to try the VMware vSphere: Install, Configure, Manage lab in Project NEE. When you first start the console you are presented with a Windows Server 2003 instance. This instance provides VMware vSphere Client access to your ESXi hosts and vCenter Server.

You will notice that there are two buttons on the side of the browser - Consoles and Manual. The Consoles tab provides you with access to the other virtual instances in the lab environment, and the Manual tab gives you the step-by-step instructions for the class.



Tuesday, November 20, 2012

Moving vSphere and ESXi Hosts to 10 Gig

Let's face it, networking is one of the most important aspects of setting up a hosted environment. In the end, the virtual machines you deploy are useless until networking is configured and they can communicate across the enterprise. VMware networking provides a tremendous amount of flexibility with virtual standard switches (vSS), virtual distributed switches (vDS), and the vShield products. It allows you to create a secure multi-tenant environment that can be configured at the virtual switch level, or all the way down to the individual port if you are using the virtual distributed switch.

Let's have a brief networking overview of vSphere.

A vSphere standard switch works much like a physical switch. It is a software-based switch that keeps track of which virtual machines are connected to each of its virtual ports and uses that information to forward traffic to other virtual machines. A vSphere standard switch (vSS) can be connected to a physical switch through physical uplink adapters; this gives the virtual machines the ability to communicate with the external networking environment and other physical resources. Even though the vSphere standard switch emulates a physical switch, it lacks most of the advanced functionality of physical switches. A vSphere distributed switch (vDS) is a software-based switch that acts as a single switch providing traffic management across all associated hosts in a datacenter. This enables administrators to maintain a consistent network configuration across multiple hosts.
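
To make the distinction concrete, here is a minimal sketch using pyVmomi, VMware's Python SDK for the vSphere API; the vCenter address and credentials are placeholders, and the script simply lists each host's standard switches and their physical uplink adapters:

```python
# Sketch only: assumes pyVmomi is installed and that the vCenter address,
# user name, and password below are replaced with real values.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab shortcut; use proper certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ctx)

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    # host.config.network.vswitch covers standard switches only; distributed
    # switches are managed at the datacenter level rather than per host.
    for vswitch in host.config.network.vswitch:
        print(host.name, vswitch.name, "uplinks:", vswitch.pnic)

Disconnect(si)
```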

Friday, November 16, 2012

Windows Server 2012 on ESXi Hosts



VMware issued an interesting knowledge base article stating that snapshots, checkpoints, and vMotion actions of virtual machines running Windows 8 or Windows Server 2012 are not compatible between ESXi hosts that implement different versions of the virtual machine generation counter specification. This could cause an interesting dilemma for administrators who upgrade their clusters in a phased approach. In essence, you will need to do a cold migration of the virtual machine to the new host before you can perform the upgrade. This is something you want to keep in mind before you start deploying Windows Server 2012 and Windows 8 in your environment.



Details

Due to changes in Microsoft's virtual machine generation counter specification that were introduced in the Windows 8 Release Preview and Windows Server 2012 RC, corresponding changes were also required in the virtual machine BIOS. Snapshots, checkpoints, and vMotion actions of virtual machines with these versions of Windows are not compatible between ESXi hosts that have implemented different revisions of Microsoft's virtual machine generation counter specification.

Saturday, September 29, 2012

Microsoft's Plan To Boost Skilled Workforce Shows Promise

On April 2, 2012, I wrote a blog article called IT's Lost Generation that discussed the talent drain we will experience over the next 10 to 12 years as 10,000 Baby Boomers retire each day. The recent news from Microsoft shows that the STEM crisis is hitting much earlier than expected and will only deepen.

In my article I discussed what I thought contributed to the cause: "Generally speaking, this growing issue is becoming very apparent to most large companies. What has caused this problem? Here are some key contributors from the past 10 years - depleted staffing levels due to anemic budgets, outsourcing, off-shoring, and stagnant growth opportunities. The recent economic downturn has exacerbated the issue. With an 8.3% unemployment rate there is a deep pool of 35 to 50 year old men and women seeking jobs. They are people who are willing to take significantly less money because they have been unemployed for a substantial amount of time. Moreover, it has caused immobility for the current IT staff. People are staying in their current jobs longer, not seeking other opportunities, and they are not being promoted up the ranks because of the extensive layoffs over the past 3 years. In many large companies, the same people have staffed entry-level IT positions for 10 years; there isn't a healthy infusion of young talent entering IT."

Wednesday, September 19, 2012

Microsoft's Bold New Approach



It has been a few weeks since I have posted, sometimes life just gets in the way of my desire to participate in social media. But, today, I wanted to give my thoughts on Microsoft's new business approach with the upcoming release of several products and the strong push into cloud computing with Microsoft Azure.

Let me just say, I think this is the most exciting update to the Microsoft family of products since Windows 95. Do you remember the hype around Windows 95? Microsoft licensed the Rolling Stones' "Start Me Up" for its advertisements. On August 24, 1995, news reports showed lines around the block of consumers eagerly waiting to get a copy of the new operating system. I wasn't among the die-hard people waiting at midnight to pick up a copy at CompUSA, but I did get a copy the very next morning. The 32-bit operating system included several new features; however, I was a gamer, and the most important feature for me was plug and play. Being a gamer required third-party video and sound cards, and with Windows 3.x there was the painful process of working with BIOS settings, motherboard pin positions, and IRQ settings to get all your devices working on your desktop; it was never a pleasant experience. In Windows 95, the new plug and play feature orchestrated all of this. Huzzah!



Although future versions of Windows became more cosmetically polished, the GUI presentation remained relatively the same. Windows 8 is a bold new approach. The Metro-style interface was inspired by the German modernist movement of the 1920s and 1930s, the Bauhaus school. It focuses on the essential nature of the object: simplicity and functionality. Sam Moreau stated it was, "Reducing down to the most beautiful form and function - that's what the Bauhaus was all about."

Tuesday, August 7, 2012

Cloud Computing Use Case - Part 3



An application programming interface (API) is a specification intended to be used as an interface by software components to communicate with each other. An API may include specifications for routines, data structures, object classes, and variables. Open cloud APIs are currently a heavily sought-after way to interconnect cloud applications in a more fluid manner. They also enable collaborative services between cloud service and deployment models for interoperability and data portability.

The primary mechanism for building cloud computing solutions is the APIs provided by cloud providers. The Cloud Computing Use Case group states that cloud APIs work at four different levels, and they fall into five basic categories.

Levels of APIs

There are four different levels of APIs. Each level requires the developer to focus on different tasks and data structures.

Level 1 – The Wire: At this level, the developer writes directly to the wire format of the request. If the service is REST-based, the developer creates the appropriate HTTP headers, creates the payload for the request, and opens an HTTP connection to the service. The REST service returns data with an accompanying HTTP response code. Because of the straightforward nature of many REST services, it is possible to be relatively efficient while writing code at this level.
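
To give a feel for what Level 1 looks like, here is a minimal sketch in Python using only the standard library; the endpoint, path, and token are placeholders rather than a real provider's API:

```python
# Level 1 sketch: talking to a REST endpoint at the wire level. The host,
# path, and bearer token below are placeholders for illustration only.
import http.client
import json

conn = http.client.HTTPSConnection("cloud.example.com")
headers = {
    "Authorization": "Bearer EXAMPLE_TOKEN",   # placeholder credential
    "Content-Type": "application/json",
}
payload = json.dumps({"name": "web-01", "flavor": "small"})

conn.request("POST", "/v1/servers", body=payload, headers=headers)
resp = conn.getresponse()                      # the HTTP status code accompanies the body
print(resp.status, resp.reason)
print(resp.read().decode())
conn.close()
```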

If the service is SOAP-based, the developer creates the SOAP envelope, adds the appropriate SOAP headers, and fills the body of the SOAP envelope with the data payload. The SOAP service responds with a SOAP envelope that contains the results of the request. Working with SOAP services requires parsing the XML content of the envelopes; for that reason, most SOAP services are invoked with a higher-level API.
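
And here is a comparable wire-level SOAP sketch, again with a made-up endpoint and operation, which shows why most developers prefer a higher-level toolkit for SOAP services:

```python
# Level 1 SOAP sketch: the envelope is assembled and parsed by hand.
# The endpoint, namespace, and operation names are placeholders.
import http.client

envelope = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header/>
  <soap:Body>
    <GetServerStatus xmlns="http://cloud.example.com/api">
      <serverId>web-01</serverId>
    </GetServerStatus>
  </soap:Body>
</soap:Envelope>"""

conn = http.client.HTTPSConnection("cloud.example.com")
conn.request("POST", "/soap", body=envelope,
             headers={"Content-Type": "text/xml; charset=utf-8",
                      "SOAPAction": "GetServerStatus"})
resp = conn.getresponse()
print(resp.status, resp.read().decode())   # the response is another envelope to parse as XML
conn.close()
```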

Level 2 – Language-Specific Toolkits: Developers at this level use a language-specific toolkit to work with SOAP or REST requests. Although developers are still focused on the format and structure of the data going across the wire, many of the details (handling response codes and calculating signatures, for example) are handled by the toolkit.
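
For comparison, here is the same kind of request made through the requests library, a language-specific toolkit that takes care of the connection, serialization, and status handling; the URL and token are still placeholders:

```python
# Level 2 sketch: a language-specific toolkit (requests) hides most wire details.
import requests

resp = requests.post(
    "https://cloud.example.com/v1/servers",         # placeholder endpoint
    json={"name": "web-01", "flavor": "small"},      # toolkit serializes and sets headers
    headers={"Authorization": "Bearer EXAMPLE_TOKEN"},
    timeout=30,
)
resp.raise_for_status()                              # response-code handling is one call
print(resp.json())
```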

Level 3 – Service-Specific Toolkits: The developer uses a higher-level toolkit to work with a particular service. Working at this level, the developer is able to focus on business objects and business processes. A developer can be far more productive when focusing on the data and processes that matter to the organization instead of focusing on the wire protocol.
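
As a Level 3 illustration, here is a sketch using boto3, Amazon's later Python SDK, purely as an example of a service-specific toolkit; the bucket and file names are placeholders and credentials are assumed to be configured in the environment:

```python
# Level 3 sketch: a service-specific toolkit exposes business objects rather
# than wire formats. boto3 is used only as an illustration; the bucket name,
# local file, and default region are assumptions.
import boto3

s3 = boto3.resource("s3")
bucket = s3.create_bucket(Bucket="example-reports-bucket")   # placeholder name
bucket.upload_file("quarterly.csv", "reports/quarterly.csv") # assumes the local file exists

for obj in bucket.objects.filter(Prefix="reports/"):
    print(obj.key, obj.size)   # the developer works with objects, not HTTP requests
```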

Thursday, July 26, 2012

Cloud Computing Use Case - Part 2

In the last post we discussed the Cloud Computing Use Case group's Cloud Taxonomy; today we are going to dive into their Standards Taxonomy. There are four different ways standards will affect use case scenarios. Standards play a role within a type of cloud service, across different types of cloud services, between the enterprise and the cloud, and within the private cloud of an enterprise.


Standards Across Cloud Service Models

Cloud computing is being adopted by organizations of all sizes. This revolutionary shift in information technology is transforming the way we work, and it is clearly illustrated by cloud solutions like Salesforce.com, Google Apps, and Concur for SaaS; Microsoft Azure and Cloud Foundry for PaaS; and Amazon EC2 and Rackspace for IaaS. Standards for how these different types of cloud service models work together will provide cloud consumers with value.

Standards Within Cloud Service Models

Within each layer of the cloud service model (SaaS, PaaS, IaaS), open standards help prevent vendor lock-in.

Open standards and standard APIs are all about portability. The most applicable open standards for cloud computing are those adopted by the Open Cloud Initiative (OCI), a non-profit advocate of open cloud computing that was launched at OSCON in 2011.

OCI's requirements for an open cloud are:

  1. Open Formats: All user data and metadata must be represented in open standard formats
  2. Open Interfaces: All functionality must be exposed by the way of open standard interfaces
For Infrastructure as a Service, a standard set of APIs to work with cloud databases would allow applications to work with data from multiple vendors. That common API would give users the freedom to move to another cloud database provider without major changes, and it would make it much easier to integrate new data sources with existing applications. Common APIs for other cloud infrastructure services such as storage, message queues or MapReduce would provide similar benefits, as would common formats for data and data interchange. In the case of virtual machines, a common virtual machine format is crucial. Users should be able to take a VM built and deployed with one cloud provider and deploy it to another cloud provider without changes.
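
The portability argument can be sketched in code. The provider classes below are entirely hypothetical; the point is that an application written against a common interface does not change when the provider does:

```python
# Hypothetical sketch of portability through a common API. Both provider
# classes and their behavior are made up for illustration.
from abc import ABC, abstractmethod

class CloudObjectStore(ABC):
    """A common storage API that any conforming provider could implement."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class ProviderAStore(CloudObjectStore):
    def put(self, key, data): print(f"PUT {key} to provider A")
    def get(self, key): return b"data-from-provider-a"

class ProviderBStore(CloudObjectStore):
    def put(self, key, data): print(f"PUT {key} to provider B")
    def get(self, key): return b"data-from-provider-b"

def archive_report(store: CloudObjectStore) -> None:
    # Application logic depends only on the common interface.
    store.put("reports/2012-q2.csv", b"revenue,region\n...")

archive_report(ProviderAStore())   # swapping providers requires no application change
archive_report(ProviderBStore())
```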

Friday, July 20, 2012

Cloud Computing Use Case - Part 1


The Cloud Computing Use Case group brought together some cloud consumers and cloud vendors to provide several use case scenarios for cloud computing that focused on ensuring interoperability, ease of integration, and portability. They wanted to make certain the case studies didn't use closed and proprietary technologies, but centered on an open environment, minimizing vendor lock-in, and increasing customer choice. Open source solutions and open standards deliver innovation and choice for cloud computing consumers.

  
With several different open standards available in cloud computing, it is important for corporate IT departments to understand the landscape and different approaches to open solutions.
  
As we start to migrate our data from traditional datacenters to cloud subscription-based utility providers it is imperative that the data remains portable. If the vendor is not using open standards then typically you will need an intermediary service to broker the data conversion.
  
The white paper's use cases:
  • Provide a practical, customer-experience-based context for discussions on interoperability and standards.
  • Make it clear where existing standards should be used.
  • Focus the industry's attention on the importance of Open Cloud Computing.
  • Make it clear where there is standards work to be done. If a particular use case can't be built today, or if it can only be built with proprietary APIs and products, the industry needs to define standards to make that use case possible.

Thursday, July 12, 2012

Cloud Cube Model

The cloud ecosystem is accelerating at an astounding rate and adopting cloud computing is a complex decision that includes several factors. The Jericho Forum has designed the Cloud Cube Model to help select cloud formations for secure collaboration. Their intriguing cloud model helps IT managers and business leaders assess the benefits of cloud computing.

The Cloud Cube Model looks at several "cloud formations". They amount to the cloud service and deployment models. According to the NIST guidelines there are 3 service models, which include Software as a Service, Platform as a Service, and Infrastructure as a Service; and there are 4 deployment models, which include Public, Private, Community, and Hybrid. Each of these models provides different variations of agility, flexibility, risk, and responsibility.




Tuesday, July 10, 2012

Certificate of Cloud Security Knowledge



The Certificate of Cloud Security Knowledge (CCSK) by the Cloud Security Alliance (CSA) is the first certificate that focuses on cloud computing security. It is currently the most prestigious cloud certificate available. The certificate demonstrates that the IT professional has the conceptual knowledge and implementation skills to deploy a cloud solution with a security risk based approach. 

The Cloud Security Alliance is a not-for-profit organization with a mission to promote the use of best practices for providing security assurance within cloud computing. The CCSK is strongly supported by a broad coalition of experts and organizations from around the world. The collaboration between CSA and ENISA means that the world’s two leading organizations for vendor-neutral cloud security research are providing the foundation for the industry’s first cloud security certification.

To study for the exam you need to have a comprehensive knowledge of the CSA v2.1 Guidance document and the ENISA whitepaper. Both papers can be downloaded from the  Cloud Security Alliance website.
  • Gain competency in the 13 domain topics of the CSA Guidance For Critical Areas of Focus in Cloud Computing V2.1
  • Show understanding of ENISA Cloud Computing: Benefits, Risks and Recommendations for Information Security
  • Be aware of applied knowledge as it relates to: classifying cloud providers into S-P-I model, redundancy, securing popular cloud services, vulnerability assessment considerations, and practical encryption use cases.

Thursday, June 28, 2012

vCloud Director Allocation Models



VMware vCloud Director comes with three different allocation models - Allocation Pool, Pay-As-You-Go, and Reservation Pool. I am going to give a brief overview of the allocation models, and give you my perspective on the best option for an internal corporate environment.

Within an organization vDC you have multiple resource allocation methods, each with its own inherent characteristics, that can be placed in one of two categories: VM-based or resource pool-based.

Allocation Pool Model

The Allocation Pool model permits an organization to acquire a given amount of resources, yet retain the capability to "burst" higher. Because the allocation pool only guarantees a specified percentage of the allocated resources, the remainder is not guaranteed and there is potential for contention with demand from other consumers.
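
A quick back-of-the-envelope sketch with made-up numbers shows how the guarantee works:

```python
# Illustrative numbers only: how much of an allocation pool is actually reserved.
allocation_ghz = 20.0    # CPU allocated to the organization vDC
guarantee_pct = 0.75     # percentage the provider actually reserves

guaranteed = allocation_ghz * guarantee_pct
burstable = allocation_ghz - guaranteed          # available, but subject to contention
print(f"guaranteed: {guaranteed} GHz, burstable (contended): {burstable} GHz")
# guaranteed: 15.0 GHz, burstable (contended): 5.0 GHz
```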


Friday, June 22, 2012

The 10 Laws of Cloudonomics



Cloudonomics Law #1: Utility services cost less even though they cost more.
An on-demand service provider typically charges a utility premium — a higher cost per unit time for a resource than if it were owned, financed or leased. However, although utilities cost more when they are used, they cost nothing when they are not. Consequently, customers save money by replacing fixed infrastructure with clouds when workloads are spiky, specifically when the peak-to-average ratio is greater than the utility premium.
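
A small worked example with illustrative numbers makes the break-even point clear: utility wins when the peak-to-average ratio (here 5) exceeds the utility premium (here 2):

```python
# Law #1 sketch with made-up figures: owning means paying for peak capacity
# all the time; utility means paying a premium, but only for what you use.
peak_units = 100           # capacity needed at peak
avg_units = 20             # capacity actually used on average
owned_cost_per_unit = 1.0  # cost per unit-hour if you own the hardware
utility_premium = 2.0      # on-demand costs twice as much per unit-hour

owned_cost = peak_units * owned_cost_per_unit                      # sized for the peak, 24x7
utility_cost = avg_units * owned_cost_per_unit * utility_premium   # pay only for use
print(owned_cost, utility_cost)   # 100.0 vs 40.0: utility wins because 100/20 > 2
```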

Cloudonomics Law #2: On-demand trumps forecasting.
The ability to rapidly provision capacity means that any unexpected demand can be serviced, and the revenue associated with it captured. The ability to rapidly de-provision capacity means that companies don’t need to pay good money for non-productive assets. Forecasting is often wrong, especially for black swans, so the ability to react instantaneously means higher revenues, and lower costs.

Cloudonomics Law #3: The peak of the sum is never greater than the sum of the peaks.
Enterprises deploy capacity to handle their peak demands – a tax firm worries about April 15th, a retailer about Black Friday, an online sports broadcaster about Super Sunday. Under this strategy, the total capacity deployed is the sum of these individual peaks. However, since clouds can reallocate resources across many enterprises with different peak periods, a cloud needs to deploy less capacity.
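
A tiny numeric sketch with made-up demand curves illustrates the law:

```python
# Law #3 sketch: three tenants whose peaks do not coincide. The provider only
# has to size for the peak of the combined curve. Demand figures are invented.
tax_firm    = [10, 10, 90, 10]
retailer    = [10, 80, 10, 10]
broadcaster = [70, 10, 10, 10]

sum_of_peaks = max(tax_firm) + max(retailer) + max(broadcaster)
peak_of_sum = max(a + b + c for a, b, c in zip(tax_firm, retailer, broadcaster))
print(sum_of_peaks, peak_of_sum)   # 240 vs 110: shared capacity can be much smaller
```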

Wednesday, June 20, 2012

Cloud Procurement

There is a new dynamic to IT infrastructure's role when it takes on the responsibility of becoming the cloud broker for the organization: the art of procurement. This new skill set becomes self-evident when negotiating with an external cloud provider for new services.

In his technical brief Cloudonomics, Ben Kepes says some of the economic benefits of cloud computing include:

  • Lowering the opportunity cost of running technology
  • Allowing for a shift from capital expenditure to operating expenditure
  • Lowering total cost of ownership of technology
  • Giving organizations the ability to add business value by renewed focus on core activities

Marc Benioff from Salesforce.com outlines the same benefits of cloud computing, "Our definition of Cloud Computing is multi-tenant, it's faster, half the cost, pay as you go, it grows as you grow or shrinks as you shrink. It is extremely efficient."

Saturday, June 9, 2012

IT Job Landscape



Recently there have been several interesting articles about the changing job landscape. This tectonic shift is being accelerated by large-scale movement to cloud computing and the aging IT population. There are two interesting sides to the conversation, both focusing on the dark clouds rolling in over the horizon. But, if we think about it critically, both aspects complement each other.

Talent Drain

Massachusetts's Lieutenant Governor Tim Murray, who was recently honored for his leadership in STEM, stated, "Retirements are expected to deplete the science and technology workforce by 50% over the next decade." 

I wrote about this topic back in April; in 10 to 12 years most of the people I know in information technology, from infrastructure support to developers to IT leaders, will be near the end of their careers or retired. Scary! No really, it is very scary. A recent conversation I had with one of my colleagues reflects this sentiment: "I can see the light at the end of the tunnel. Only a couple more years before I retire." Fortunately, if history serves us right, when the marketplace opens up due to the significant retirements (estimated at 10,000 to 12,000 a day for the next 12 years) there will be a flood of ambitious young individuals diving into the IT job pool to take advantage of the burgeoning salaries. But will they have the skills required?

Tuesday, June 5, 2012

Cloud Contract Considerations




There is another paradigm shift happening in business, and in some ways it is very similar to the early days of virtualization: senior leadership didn't trust virtualization with mission-critical applications until it was reassured that the technology provided the security, stability, and performance it had come to expect on a conventionally provisioned server. The same holds true for cloud service providers. In all likelihood, your IT executives will need to become comfortable with the perceived risks of moving an application to an external provider before they pursue the economic gains.
There are several aspects of cloud contracts that you should consider when working with a cloud service provider to move an application away from servers physically located within your datacenter. The contract with a cloud service provider is vital. The National Outsourcing Association (NOA) points out that cloud computing is not like normal outsourcing: cloud service provider contracts tend to be far less rigorous than those of traditional outsourcing partners.

Many cloud services providers reserve the right to change all, or part of, the agreement once it is signed.

Saturday, June 2, 2012

vCloud Director Limitations

 


While designing our vCloud Director environment we discovered several limitations you need to take into consideration. These limitations could have an impact on your overall design.

  • No support for multi-site vCloud Director deployments. For organizations that have multiple campus locations, you can't use a single vCloud Director instance to manage your entire hosting platform.
  • Moving a vApp with vCloud Connector is slow and requires an outage. We tested a single medium-sized virtual machine move on our internal network and it took 45 minutes!
  • There is an 8-node cluster limitation when using Fast Provisioning with VMFS. It is the same limitation as VMware View. That limitation doesn't apply if you are using NFS; with NFS you can continue with the vSphere 5.0 limit of 32 hosts in a single cluster.
  • Fast Provisioning doesn't give you the same management capabilities that are found in VMware View. There is no centralized deployment method for application updates to linked clones. It makes sense: when you recompose with VMware View you lose all the saved data on the linked image, and that would be bad in a server environment!
  • There is a 30-clone limitation when using Fast Provisioning, and as you get close to that limitation you can suffer latency issues.
  • There is no ability to assign different datastores and storage tiers to VMs with diverse storage requirements within a vApp. That can become a limitation if you are trying to create a vApp for a marketing application that has a database VM requiring tier 1 storage and a web application that needs tier 3 storage. In this instance you would have to split your application up between multiple vApps based on the appropriate service class.
  • vCloud Director is not storage clustering or storage DRS aware.

Tuesday, May 29, 2012

Utility Computing

 


Most people point to Douglas Parkhill's 1966 book The Challenge of the Computer Utility as the origin of the cloud computing concept. The book details many of the foundational elements of today's cloud computing - elastic provisioning, online delivery, and the perception of infinite supply. It just took 34 years for infrastructure to support the original vision.


In the book Behind the Cloud: The Untold Story of How Salesforce.com Went From Idea to a Billion-Dollar Company, founder Marc Benioff describes how he had a number of conversations with Tom Siebel about creating an online CRM product. Traditionally licensed software sold for extraordinary amounts of money. The low-end product could start around $1,500 per user per license. Worse, buying pricey software wasn't the only expense. There could be an additional $54,000 for support; $1,200,000 for customization and consulting; $385,000 for the basic hardware to run it; $100,000 for administrative personnel; and $30,000 in training. The total cost for 200 people to use a low-end product in the 1990s could exceed $1.8 million in the first year alone.


Most egregious was that the majority of this expensive (and even more expensively managed) software became "shelfware," as 65 percent of Siebel licenses were never used, according to the research group Gartner.


We have all seen products come into our companies with millions of dollars invested in the initiative, only to find out that they didn't suit the business need, were too complex, or couldn't be integrated into existing systems. Unfortunately, the investment made in the product made it very hard to walk away from the solution even when it was obvious that it wouldn't work.


Marc Benioff wanted to change that with a SaaS CRM solution. He envisioned subscribers who would pay a monthly fee, just as they would to a utility company, and that it would require half the investment. Providing subscription applications over the internet wasn't unique; it had been done with Prodigy, CompuServe, AOL, and online gaming. What was unique was taking a business application and hosting it on the internet to replace traditional corporate software.


In May of 2003 Nicholas Carr wrote an article IT Doesn't Matter in the Harvard Business Review. Most IT professionals and executives were very critical of his article. He argued that corporate computer systems weren't important to a company's success. They were necessary - you couldn't operate without them - but most systems had become so commonplace that they no longer provided one company with an edge over its competitors. He thought that information technology had become inert. It was just the cost of doing business.


SaaS-based solutions changed the landscape. By offering a utility-based computing model charged by monthly subscription, you could reduce your initial investment and have the elasticity to expand when required. The SaaS cloud service provider operated the infrastructure and software, and companies didn't need to worry about the complexity of implementing a system like CRM. Additionally, you were able to have a robust system up and running in a matter of days instead of the 12 to 18 months required with traditional software development.


Corporate executives are embracing the utility-based computing business model to outsource certain applications to cloud partners. Skip Tappen from NWN described this transition as TaaS (Technology as a Service) at the recent 2012 IT Summit and Expo. He said that IT services are a continuum from traditional IT through SaaS. There are two components to delivering the technology: the first is the physical layer (infrastructure, platform, and software) and the second is the service.


SaaS solutions present a compelling opportunity for small, medium, and large corporations. They can provide a competitive advantage by reducing complexity, enhancing speed to market, and lowering capital costs; however, they should be implemented for edge-based software solutions. Core applications that define the organization's heart and soul and its distinct product characteristics should stay internal. While cloud service providers offer great solutions for software that isn't a core component, they don't provide innovative strategic value for your company, and that is the reason core applications should stay in-house.


IT must define the criteria for assessing which applications provide strategic value when moved to the cloud and which applications are better suited to staying on premises. They should start with "greenfield" applications, the process should include a cloud questionnaire for an initial assessment of the application, and there should be a cloud solutions team that performs a business and risk assessment.


Devising a Cloud Application Onboarding Strategy by Alessandro Perilli from Gartner can provide you with some lessons learned from field research of 17 world-wide organizations. He dives into qualification questionnaires and business impact assessments. It is a fantastic document that can help frame an initial approach for a cloud strategy.


Alessandro points out that developing an application onboarding strategy is a fundamental aspect of any cloud adoption initiative. Organizations need an efficient approach to identifying, prioritizing, and facilitating the onboarding process while addressing organizational and cultural changes.

Monday, May 21, 2012

Hybrid Clouds



Last week at the Boston 2012 VMware Forum I attended a presentation on Hybrid Clouds. During the Hybrid Cloud presentation, a solution architect takes to the stage and shows how easy it is to move applications from your existing infrastructure to an external hosting provider. Click a button and shazam your application has been magically transported to the ether plane of a cloud partner. Apparently cloud applications have come with an interesting new wormhole feature that lets you defy the laws of physics.


The NIST defines a hybrid cloud as a composition of two or more distinct cloud infrastructures (private, community, or public) that remain unique entities, but are bound together by standardized or proprietary technology that enables data and application portability.

Gartner’s conclusion about moving applications to the cloud is that IT organizations desire to conduct V2C (virtual machine to cloud) migrations, but today's market lacks maturity for most scenarios. Additionally, these migrations are time consuming and problematic. Most IT organizations should wait for tools to mature in terms of automation and performance before committing to V2C. As for now, organizations that need to migrate applications should do so by manually redeploying applications to the cloud provider.

That doesn’t disqualify some useful applications for hybrid clouds as long as you are realistic in your approach. First, if you move your virtual machine templates to a cloud partner and build up the infrastructure then you can leave it turned off in a “pay as you go” model for bursting capabilities. This does require you to build out the virtual machines and then reload your applications, but they would be available when you need additional capacity for peak usage.

A recurring example of an organization that would benefit from a hybrid cloud solution is Ticketmaster. Theoretically, Ticketmaster must have massive utilization requirements when tickets go on sale for concerts or sporting events. Let's say it is Friday morning and Bruce Springsteen tickets are going on sale at 10 am. The instant those tickets are available, Ticketmaster will require an enormous amount of IT infrastructure to support the volume of transactions that occur over the next few hours. How can Ticketmaster afford the infrastructure for a business model that requires peak capacity for ticket sales like Bruce Springsteen's? They could rely on a cloud service provider.

Building out your virtual environment on a cloud partner's infrastructure may save you the capital expense of purchasing the underlying hardware, but it certainly doesn't absolve you of supporting the virtual instances. Even though these virtual machines may lie dormant, you still need to ensure that they are kept up to date with the latest security patches and software updates. Additionally, you need to make sure baseline infrastructure components like corporate security authentication and name resolution work in your partner's multi-tenant environment.

When starting to assess applications that are suitable for a hybrid cloud solution, it is good to start with lab and development workloads until your company is mature enough to move production instances. One scenario that does not work well in a hybrid cloud model is an application that relies on back-end infrastructure at the company's datacenter and is sensitive to latency. Applications with high I/O that must traverse the network can leave your users suffering poor performance and application time-outs. It doesn't matter how much you save on infrastructure costs; if your customers start to experience poor performance for outward-facing applications then the savings are not justified.

Gartner’s take-aways for IT organizations that are considering migrating applications to a cloud service provider:
  • Migrating applications to the cloud normally demands a manual process of deploying fresh cloud templates, reinstalling applications, and moving data.
  • Emerging V2C migration tools attempt to automate a migration from a traditional server virtualization environment into a cloud environment. These tools are not enterprise-ready because they are limited by hypervisor type, guest OS, cloud provider, and VM size:
    •  The V2C migration process is time consuming and prone to failure; the VM size and movement across networks are major contributing factors.
    •  Existing V2C migration tools such as Amazon's VM Import and VMware's vCloud Director are nascent and do not provide much visibility, insight, or assistance to IT organizations.
    •  Before selecting a CSP, IT organizations should ensure importing VMs is on the CSP’s 12-month road map.
    •  Hybrid cloud software and migration tools are emerging, but they are point-to-point, often unidirectional.

 In early May, Riverbed Technology announced a partnership with VMware, which was developed to help enterprises accelerate their journey to the cloud. With the latest collaboration, Riverbed WAN optimization increases the speed of virtual machines moving between clouds (private, public and hybrid) with VMware vCloud Connector. The combination of Riverbed and VMware solutions can enable cloud service providers to maximize their cloud computing offerings by empowering their customers to utilize their existing IT investments. This seems very promising, but it is a technology that was recently released and there are very few cloud service providers that have invested in the capabilities.

When moving to a hybrid cloud model, be realistic about the capabilities in the space before you sell a solution that doesn’t currently exist.

Thursday, May 10, 2012

vCloud Networks


vCloud Director has a layered network structure. There are three primary networking layers:

•  External
•  Organizational
•  vApp


Most organizations will use external and organizational networking; I don’t see many companies taking advantage of vApp networking outside of hosting providers. I will give you a more detailed explanation after we discuss the fundamentals of each networking component.


External Networks

Surprise… Surprise… Guess what external networks do? Give up? They are the means of providing connectivity to the outside infrastructure. If you don't have an external network set up in vCloud Director then your Organizations and vApps can't connect to the outside world. External Networks are maintained by the Cloud Providers (IT Infrastructure Staff). To create a vCD External Network you point to an existing vSphere port group.



Organizational Networks

Organization Networks are where things get a little more dynamic. If you remember from my previous post, an organization is the workspace owned by an IT business partner or some other tenant. The organization is where you partition and allocate infrastructure resources so that the organizational owner can provision vApps.

The two simplest forms of the network construct are External Organization Network (Direct Connect) and Internal Organization Network. External Organization Networks (Direct Connect) are pretty straightforward; it just uses the External Network to connect to the Internet. Internal Organization Networks are only available internally to the organization; they do not have access to the External Network.

The more complex option is External Organization Network (NAT/Routed). This option is required if you are going to transfer your OVF-format vApps to a hybrid cloud partner. It provides its own private IP schema that the Organization can choose freely on a dedicated layer 2 segment. The private network is then routed to the External Network.

If you launch your vSphere client, you will see that a dedicated port group is created that supports the organizational segment and that a vShield Edge appliance is automatically deployed. vShield Edge provides network services such as NAT, Firewall and DHCP functionalities to protect and serve this dedicated layer 2 segment.

When working with your external cloud partner, you will use the vShield Edge to create a secure VPN tunnel. In this deployment, the NAT device translates the VPN address of a vShield Edge into a publicly accessible address facing the Internet. Remote VPN routers use this public address to access the vShield Edge. Remote VPN routers can be located behind a NAT device as well. In this case, IT must provide both the VPN native address and the NAT public address to set up the tunnel. On both ends, static one-to-one NAT is required for the VPN address.

Like External Networks, Organization Networks are maintained by the Cloud Providers (IT Infrastructure Staff).

vApp Networks

vApp networks have the same 3 types of network options - vApp Network (Direct Connect), Internal vApp Network, and vApp Network (Nat/Routed). A vApp Network is setup by the organization owner for a vApp. The reason I don't see this being prevalent in most large companies is because I am skeptical that most consumers would have the desire or need to carve up their own network.

One good scenario for vApp Networks is to fence in two development vApp regions so that you can have the same virtual machine in both instances.



There are limitless scenarios that you can design with this layered network approach; from a practical standpoint, most large organizations will implement organization networks that are direct connect.

Friday, May 4, 2012

Operational Model



Cloud computing focuses on technology solutions for cost savings, cost avoidance, and business agility, but in large enterprises cloud computing will be a catalyst for organizational realignment to converge traditional infrastructure silos. A new operations model is needed to bring insight into IT costs so that IT executives can become the brokers for internal and external cloud solutions.

Even though logic may point to a realignment of traditional silos, the shift is a challenge considering it will impact span-of-control for IT executives. In my experience, functional silos become barriers to innovation. Innovation thrives in environments that nurture ideas, collaboration, and diverging viewpoints. When departments run in silos they are not looking at broader aspects of organizational activities.



Businesses tend to structure their IT departments based on specific functional roles. These departments may include Wintel servers, mid-range, mainframe, storage and recovery, and networking. I am going to focus on one of the fundamental building blocks of cloud computing - virtualization.

Monday, April 30, 2012

The Fear of Clouds


Lucy Van Pelt: Are you afraid of responsibility? If you are, then you have hypengyophobia. 
Charlie Brown: I don't think that's quite it. 
Lucy Van Pelt: How about cats? If you're afraid of cats, you have ailurophasia. 
Charlie Brown: Well, sort of, but I'm not sure.
Lucy Van Pelt: Are you afraid of staircases? If you are, then you have climacaphobia. Maybe you have thalassophobia. This is fear of the ocean, or gephyrobia, which is the fear of crossing bridges. Or maybe you have Nephophobia. Do you think you have Nephophobia? 
Charlie Brown: What's Nephophobia? 
Lucy Van Pelt: The fear of clouds. 
Charlie Brown: THAT'S IT!


Cloud computing is a herald of fear for many IT professionals. But, if IT history is any indicator, the impact it will have on IT jobs is overstated. 

A Tale of Two Trends

Outsourcing: The IT fashion trend at the beginning of the century was outsourcing of IT operations. Outsourcing IT operations to a strategic partner would help companies focus on their "core competencies". Furthermore, outsourcing to a company like IBM Global Services would cut labor costs and training costs, and promised superior technical solutions because such firms could attract top IT talent.

Wednesday, April 25, 2012

Catalogs and Fast Provisioning




Catalogs are a collection of VM templates, vApps, and OS media that are available to organizations for deployment. There are typically two catalogs, the first catalog is the master catalog that is maintained by the cloud providers (IT infrastructure staff). Typically these consist of the server operating system (Windows and Linux) loaded with infrastructure monitoring and security tools. Templates could also include core infrastructure components like IIS and SQL. The intention of the master catalog is to provide templates to the users in an Organization, but they can't edit them.

You are going to want to create a master organization for your master catalog. This organization should be set to PAYG (pay as you go) so it does not consume any resources from the cluster, because the templates are never powered on.

Organizations then make copies of these templates and customize them to meet their specific business unit needs. These modified templates become a part of the organization catalog.

Saturday, April 21, 2012

vApps



We are finally going to dive into VMware vCloud Director vApps. Yeah! vApps consist of one or more virtual machines that are packaged and maintained as a single entity. vApps act like a wrapper for multi-tier applications. For example, if you had a web application for client information called Contact that consisted of 3 web servers, 1 application server, and 1 database server, they would all be contained in a single vApp.

This diagram gives a depiction of the Contact vApp:


vApps aren't new to the VMware ecosystem, they existed in vSphere, but the entire vCloud Director infrastructure is designed to work with vApps. Even if you deploy a single virtual machine it will be in a vApp.

Your vApps are going to map to Organization vDCs that are supplied by the Cloud Provider (infrastructure staff). In this example we have allocated a subset of the Provider vDC resources in Gold, Silver, and Bronze to the Sales Organization. The Organization Administrator can then place the vApp applications in the appropriate Organization vDC based on SLEs provided by the cloud provider.

Monday, April 16, 2012

vDC and Cluster Design Options for vCloud Director


Ruminate: To engage in contemplation.

I know.. I know... I said I was going to write about Catalogs and vApps a few posts back, but over the weekend I was thinking about design scenarios for vCloud Director. Not everyone is going to want to use SLEs or SLAs for their design. Instead you may decide to set up your environment based on application life-cycle. In the diagram below, we leave production and acceptance with traditional governance managed by the infrastructure operations staff. We include acceptance in this model to ensure that it mirrors the production environment.


Like I said in my previous post, enterprise organizations still get the benefit of self-service deployment that comes with IaaS for 60% to 70% of their server infrastructure by enabling vCloud Director in development.

Furthermore, this design removes the complexity of coming up with service classes and the show-back charges associated with the service levels. Let's face it, if you don't come up with a show-back or charge-back model when using service offerings, everyone is going to select gold. Do you blame them? If you went to the Ford dealership and they were offering a Shelby Mustang for the price of an Escort, wouldn't you buy the Mustang?
   
However, I am a strong advocate of service class offerings that are typically associated with IaaS. What is "as a service" with no service classes?

Below I illustrate how you can design your clusters as part of your service class offering. These would feed into your vCloud Director Provider vDCs. The gold offering would include the latest model servers, tier 1 storage, and N+2 cluster redundancy. Silver and Bronze would use older equipment, tier 2 and tier 3 storage, and N+1 cluster redundancy.

Friday, April 13, 2012

Multi-Tenant Infrastructure in vCloud Director

Yesterday on my blog we discussed using VMware vCloud Director as a solution for IaaS. The scenarios covered the basic concepts of Virtual Datacenters (vDCs). One aspect I want to expand upon is the multi-tenant role Organization vDCs play within Organizations. If you remember, the NIST definition for cloud computing states the provider's computing resources are pooled to serve multiple consumers using a multi-tenant model.

Pooled resources have been the foundation of virtualization, but in a cloud environment that enables self-service capabilities (another NIST characteristic of cloud computing) you need to design your infrastructure so that your consumers don't adversely affect the other hosted tenants.

As discussed yesterday, the cloud provider (infrastructure IT staff) partitions the Provider vDCs resource allocations with Organization vDCs. These are then presented to the Organizations.


Thursday, April 12, 2012

IaaS Design with vCloud Director



In the next few posts, I plan to describe design solutions for using VMware vCloud Director as an internal IaaS layer.

Now, I think it is important for me to give my perspective on IaaS with self-provisioning in a corporate ecosystem. I think it is a great solution for development. If you run a multi-tier environment (production, acceptance, development), I think your production and acceptance environments should stay with a traditional governance model.

A large part of your IT infrastructure can still reap the benefits of IaaS with self-provisioning. Gartner estimates that 60% to 70% of server infrastructure in a global company is used for development purposes. IaaS in the development stack will help your enterprise become more agile and flexible, and speed applications to market.

Monday, April 9, 2012

Is The Cloud Today's Dot-Com?




Here is what Wikipedia says about the dot-com bubble:

Venture capitalists saw record-setting growth as dot-com companies experienced meteoric rises in their stock prices and therefore moved faster and with less caution than usual, choosing to mitigate the risk by starting many contenders and letting the market decide which would succeed. The low interest rates in 1998–99 helped increase the start-up capital amounts. Although a number of these new entrepreneurs had realistic plans and administrative ability, many more of them lacked these characteristics but were able to sell their ideas to investors because of the novelty of the dot-com concept.

Is today's cloud phenomenon a mirror image of the dot-com boom of the mid-1990s? IDC estimates that spending on public IT cloud services in 2011 was $28 billion. It is already having an economic impact on the technical marketplace. Furthermore, IDC estimates that last year alone, IT cloud services helped organizations of all sizes and all vertical sectors around the world generate more than $400 billion in revenue and 1.5 million new jobs. In the next four years, the number of new jobs will surpass 8.8 million.

Wednesday, April 4, 2012

The Solution



To be honest, there is no easy solution to the talent drain we are facing in the next 10 to 12 years. That is in part because companies have found more efficient ways to do the work with the current staffing levels. For example, 8 to 10 years ago a single infrastructure engineer would support 25 to 30 physical servers, and now with a highly virtualized environment that same server engineer can support 100 to 250 servers. Additionally, the people currently in the jobs are clinging on to them until they retire.

But ponder this scenario: you work for a Fortune 250 company and over half of your IT staff (developers, infrastructure engineers, DBAs, IT leadership) will be gone in 12 to 15 years.

So what to do? I think it is imperative that companies get directly involved in their local schools. We need to sponsor technology initiatives at both the high school and college level, and start advocating the opportunities that will be available in the near future. I have been working with Worcester Technical High School. They have a fantastic program teaching inner-city kids technology as a trade. They teach computer support, networking, development, database concepts, and of course my favorite subject virtualization. Here is their mission statement:

Monday, April 2, 2012

IT's Lost Generation



Hopefully I am not the only one who has noticed, but we will be facing a serious shortage of IT professionals in the next 10 to 12 years. I often quip about having been the youngest person in IT at our Worcester campus for 22 years; I am now 40. I think that was reasonable when I was in my 20's or even pushing early 30's, but it isn't an encouraging sign that the youngest person on our IT staff is 40. There really is a lost generation of IT professionals.

Generally speaking, this growing issue is becoming very apparent to most large companies. What has caused this problem? Here are some key contributors from the past 10 years - depleted staffing levels due to anemic budgets, outsourcing, off-shoring, and stagnant growth opportunities. The recent economic downturn has exacerbated the issue. With an 8.3% unemployment rate there is a deep pool of 35 to 50 year old men and women seeking jobs. They are people who are willing to take significantly less money because they have been unemployed for a substantial amount of time. Moreover, it has caused immobility for the current IT staff. People are staying in their current jobs longer, not seeking other opportunities, and they are not being promoted up the ranks because of the extensive layoffs over the past 3 years. In many large companies, the same people have staffed entry-level IT positions for 10 years; there isn't a healthy infusion of young talent entering IT.

Thursday, March 29, 2012

Constructing Your Ground Floor



In today's blog post, I want to share something I provided for my team that I believe is the cornerstone of success for any IT organization. I am not talking about technology, although that is a key component of any IT professional's job; I want to discuss the personality traits that are critical for the people who make up a team that strives to rise above mediocrity.

Honesty

As you look ahead in your career and advance into higher ranks or management, consider this: Is any position or role in the company sustainable without honesty? If you are not a person of your word, you will quickly find yourself not meeting your business partners' expectations, and it could have a lasting effect on you and your team. It is important to give candid feedback and ensure that you do not fall short of your intended goals. That requires providing positive and negative opinions on target dates, infrastructure needs, and work assignments. Of course, we have to be flexible enough to know when we have to move forward with the assignment after we have given our advice.

The Serenity Prayer by Reinhold Niebuhr states this message well:



It's simple: approach your colleagues and business partners with honesty, truthfulness, and an attitude of providing the best service possible, and you will have nothing to fear when something doesn't meet expectations.

Tuesday, March 27, 2012

Internal PaaS




I was recently asked "The discussion we are having right now is about whether anyone really needs a Platform as a Service. What kind of PaaS would be useful to you for hosting applications? Do you think there is a future for the Platform as a Service? Are PaaS offerings like Microsoft’s Azure and Salesforce’s Heroku simply doomed because they are trying to fill a niche for which there is little need? "

I actually think there is a future for "internal" PaaS solutions. It is an excellent complement to both private internal and external IaaS platforms.

Last year we deployed VMware Lab Manager as an IaaS solution for our internal developers. The solution was very well received because it provided our developers with their own virtual datacenter to deploy virtual machines when they needed to test new applications. However, that also means they inherit the responsibility of maintaining those virtual machines. Here is the written clause we have in our development IaaS solution:

All the management duties of running the server remain with the Lab Manager Workspace Owner. The Workspace Owner, in this case, updates all application-related software on their own, applies necessary application software patches, installs and upgrades applications, and monitors application performance.

Thursday, March 22, 2012

All You Need is Pixie Dust!




You have decided to go with a public cloud provider and your company has moved your enrollment system out to a third-party vendor. Three years down the road the relationship sours because they have doubled their hosting charges and you terminate the contract. How do you move the data back? Is that spelled out clearly in the contract?  Believe it or not, it can be extremely complex to move your data back in-house after you have decided to use a public SaaS or PaaS proprietary environment.

Let's think of it in terms of last decade's outsourcing. In 2001 the insurance company I worked for decided to outsource with IBM Global Services. They kept all the infrastructure on premise, but the operational support was provided by IBM. After a 5-year marriage they decided the love affair had gone stale, and operational support was brought back in-house. The transition back wasn't easy from a personnel standpoint; less than a quarter of the original staff decided to return to our organization, which was a significant talent drain. But the infrastructure was in our data center, which ensured no disruption of service to the business.

Now that we have that in perspective, what happens when you deploy an application to a public SaaS or PaaS provider with proprietary infrastructure? There is a strong possibility that you will have to recreate your own system, and all the development talent you had will be working for other companies.

Tuesday, March 20, 2012

Keep An Eye On the Horizon When Looking At The Clouds




Nobody can deny there are several benefits to public cloud offerings. But, when looking to partner with an outside cloud vendor you need to tread wisely.

Businesses should look at cloud alternatives when they are looking to implement a strategic solution they currently do not employ in-house or do not have the technical expertise to deploy. A good example would be a new company seeking to start a customer relationship management (CRM) system or an existing company looking to enhance its current capabilities.  In many companies CRM is often disjointed between sales people, regional office leads, and department executives. Sales tracking and customer relations for many companies are on antiquated systems, spreadsheets, and compiled through e-mails which can place a lot of pressure on sales forecasting.



In comes a solution like Salesforce.com, the well-known cloud solution provider for sales reporting. Now company executives have real-time sales tracking, and they can enable their sales force with Salesforce.com's Chatter for internal social media to provide sales tips, best practices, and leads. This type of 'dashboard' into sales can help corporate leaders make strategic decisions on current market data.

A good analogy would be giving your corporate executives a TomTom to help chart their path through downtown Boston instead of providing them a map of the city from before the Big Dig. While the map delivers a general idea of downtown Boston, the street information isn't accurate, which could waste valuable time when trying to navigate to your destination. In today's hyper-accelerated business world that isn't an option.
