By Randy Clark
People don’t talk about grid computing much these days, but most application teams that require high performance from their infrastructure are actually addicted to it -- whether they know it or not.
Gone are the days of requiring a massive new SMP box to get to the next level of performance. But in today’s world of tight budgets and diverse application needs, the linear scalability inherent in grid technologies means little once no more blades can be added.
This constraint has led grid managers and solution providers to search for new ways to squeeze more capacity from their existing infrastructures, within tight capital expenditure budgets. [Disclosure: Platform Computing is a sponsor of BriefingsDirect podcasts.]
The problem is that grid infrastructures are typically static, with limited-to-no flexibility in changing the application stack parameters – such as OS, middleware, and libraries – and so resource capacity is fixed. By making grids dynamic, however, IT teams can provide a more flexible, agile infrastructure, with lower administration costs and improved service levels.
So how do you make a static grid dynamic? Can it be done in an easy-to-implement and pragmatic, gradual way, with limited impact on the application teams?
By introducing private cloud management capabilities, armed with standard host-repurposing tools, any type of grid deployment can go from static to dynamic.
For example, many firms have deployed multiple grids to serve the various needs of application teams, often using grid infrastructure software from multiple vendors. Implementing a private cloud enables consolidation of all the grid infrastructures to support all the apps through a shared pool approach.
The pool then dynamically allocates resources via each grid workload manager. This provides a phased approach to creating additional capacity through improved utilization, by sharing infrastructure without impacting the application or cluster environments.
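As a minimal sketch of the shared-pool idea, assuming each grid’s workload manager can report its pending demand (the function and its inputs are illustrative, not a real private cloud API):

```python
def allocate_shared_pool(idle_hosts, demand):
    """Hand idle hosts from the shared pool to grids, always serving
    the grid with the largest remaining backlog first.
    `demand` maps grid name -> number of pending jobs."""
    assignments = {grid: [] for grid in demand}
    remaining = dict(demand)
    for host in idle_hosts:
        # Pick the grid with the biggest unmet demand.
        grid = max(remaining, key=remaining.get)
        if remaining[grid] <= 0:
            break  # every grid's backlog is satisfied
        assignments[grid].append(host)
        remaining[grid] -= 1
    return assignments
```

A real private cloud manager would layer ownership, lending limits, and time-of-day policies on top of this, but the core idea is the same: capacity follows demand rather than sitting in fixed silos.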
The beginning of queue sprawl
Take another example. What if the grid teams have already consolidated using a single workload manager? This approach often results in “queue sprawl,” since resource pools are reserved exclusively for each application’s queues.
But by adding standard tools, such as virtual machines (VMs) and dual-boot, resources can be repurposed on demand for high-priority applications. In this case, the private cloud platform dictates which application stack image should be running at any given time. The result is dynamic application stacks across the available infrastructure, such that any suitable physical machine in the cluster can be repurposed on demand for additional capacity.
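The repurposing decision can be sketched as follows. This is a hypothetical policy, not a vendor API: the machine states, application names, and priority table are all assumptions, and the returned "switches" stand in for whatever a VM manager or dual-boot tool would actually execute.

```python
# Lower number = higher priority (illustrative applications).
PRIORITY = {"risk-analytics": 1, "batch-reporting": 2}

def repurpose_idle(machines, backlog):
    """Return {hostname: image} switches so that idle machines are
    rebooted into the stack of the highest-priority application
    that still has queued work.
    `machines` maps host -> {"status", "image"}; `backlog` maps
    application -> queued job count."""
    switches = {}
    # Visit applications in priority order.
    for app in sorted(backlog, key=lambda a: PRIORITY.get(a, 99)):
        for host, state in machines.items():
            if backlog[app] <= 0:
                break
            if state["status"] == "idle" and state["image"] != app:
                switches[host] = app          # boot this host into app's stack
                state["status"] = "claimed"   # don't hand it out twice
                backlog[app] -= 1
    return switches
```

For example, an idle machine currently imaged for batch reporting would be flagged to reboot into the risk-analytics stack whenever that higher-priority queue has work waiting.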
Once an existing grid infrastructure is made dynamic and all available capacity is put to use, grid managers can still consider other non-capital spending sources to increase performance even further.
The first step is to scavenge underutilized internal resources that are not owned by the grid team. These can range from employee desktop PCs to VDI farms, disaster-recovery infrastructure, and low-priority servers. Grid workloads can be launched within a VM on the "scavenged" machines, then stopped immediately when the owning application or user resumes.
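The scavenging rule above is simple enough to sketch directly. The `start_vm`/`stop_vm` hooks are placeholders for whatever VM tooling is in use; the host state fields are assumptions for illustration:

```python
def scavenge_step(host, start_vm, stop_vm):
    """Advance one borrowed host's scavenging state and return the
    action taken. Grid work runs in a VM only while the owner is
    away, and is stopped the instant the owner returns."""
    if host["owner_active"] and host["vm_running"]:
        stop_vm(host)                 # give the machine back immediately
        host["vm_running"] = False
        return "stopped"
    if not host["owner_active"] and not host["vm_running"]:
        start_vm(host)                # borrow the idle machine for grid work
        host["vm_running"] = True
        return "started"
    return "no-op"
```

In practice this check would run on a short polling interval (or on session events), so the owning user never notices the borrowed cycles.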
The second major step, once these higher levels of infrastructure productivity are achieved, is to direct IT operating budget to external services such as Amazon EC2 and S3. A private cloud solution can centrally manage the integration with, and metering of, public cloud use (so-called hybrid models), providing additional capacity for “bursty” workloads or full application environments. And because access to the public cloud is controlled and managed by the grid team, application groups receive a seamless service experience -- with higher performance for their total workloads.
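A bursting policy with metering might look like the following sketch. The hourly rate and budget mechanics are assumptions for illustration, and the returned "cloud" list stands in for jobs that would actually be submitted to a public cloud provider:

```python
HOURLY_RATE = 0.10   # assumed per-instance cost, used for metering only

def burst(pending_jobs, free_internal_slots, budget_remaining):
    """Split pending work between free internal grid slots and
    metered public cloud capacity, bursting only what the
    remaining operating budget allows."""
    internal = pending_jobs[:free_internal_slots]
    overflow = pending_jobs[free_internal_slots:]
    affordable = int(budget_remaining // HOURLY_RATE)
    cloud = overflow[:affordable]      # overflow the budget can cover
    deferred = overflow[affordable:]   # the rest waits for internal capacity
    cost = len(cloud) * HOURLY_RATE
    return internal, cloud, deferred, cost
```

Centralizing this decision in the private cloud layer is what lets the grid team meter and cap external spend while application groups simply see their queues drain faster.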
While many grid professionals already consider their grid environments cloud-like, the advent of mature cloud computing models can help make grid environments more completely dynamic, providing new avenues for agility, service improvement and cost control.
And by squeezing more from your infrastructure before spending operating budget on external services, you can protect your investment while satisfying users’ insatiable appetite for more performance from the grid.
This guest post comes courtesy of Randy Clark, chief marketing officer at Platform Computing.