More and more organisations are moving their IT systems to the Cloud, and this is having a dramatic effect on the way datacentres are run. Traditional datacentres, where a facility manager controls a sea of servers, each responsible for a particular task (eg email), are becoming a thing of the past. In their place are Software-Defined Data Centres (SDDCs), in which the whole infrastructure is virtualised and delivered as a service, and control of the datacentre technology (eg servers) is entirely automated by software.
This automation is a necessity, given the speed at which virtualised systems run and the flexibility required for modern servers to pick up and drop workloads. Virtual servers can respond to spikes in service demand in the same way the National Grid accommodates the surge in electricity consumption as kettles are switched on at half time on FA Cup Final day.
However, SDDCs still rely on the physical infrastructure within which the virtual servers are housed. While virtual technology has evolved at a staggering pace, the management and cooling techniques to successfully maintain adequate datacentre conditions have in many cases fallen behind, relying largely on inefficient manual input and decision making.
The successful operation of a datacentre relies heavily on maintaining optimum temperature levels. But in a bright new virtualised world where the buzz words are ‘speed’ and ‘efficiency’, reflecting the pace at which a virtual server can pick up workloads automatically, cooling technology has been much slower to react. As a result, datacentres are prone to temperature fluctuations caused by the changeable workloads of virtual servers. Automating cooling strategies and devices, by contrast, will allow facility managers to maintain appropriate temperatures for a productive datacentre.
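To make the idea of automated cooling concrete, here is a minimal sketch of a cooling response that reacts to workload-driven temperature changes instead of waiting for manual intervention. The setpoint, gain and function names are illustrative assumptions for this article, not the interface of any real cooling system or DCIM product.

```python
# Hypothetical sketch: a simple proportional controller that raises
# cooling-unit fan output as rack inlet temperature climbs above a
# setpoint, and holds a baseline output otherwise. All values here
# (setpoint, gain, baseline) are assumptions for illustration.

SETPOINT_C = 24.0  # assumed target rack inlet temperature (deg C)
GAIN = 10.0        # assumed extra % fan output per degree above setpoint

def fan_output_percent(inlet_temp_c, baseline=40.0):
    """Return a fan output (baseline-100%) for a measured inlet temperature."""
    error = inlet_temp_c - SETPOINT_C
    output = baseline + GAIN * max(error, 0.0)  # react only above setpoint
    return min(output, 100.0)                   # clamp to full output
```

In this sketch an inlet reading at the setpoint leaves the fans at their baseline, a 3-degree excursion pushes output up proportionally, and a large excursion saturates at 100% – the same demand-following behaviour the article attributes to virtual servers, applied to the cooling plant.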
Moving one step further along the automated route, the software controlling the datacentre service will need to know, in real time, where physical loads on hosts can be increased or decreased – without affecting or putting at risk the service level agreement (SLA).
But the reality is that this auto-pilot approach will need to extend beyond the computer infrastructure and into the facility infrastructure. Traditionally, change in facility infrastructure has been measured in months and years, not milliseconds. Facility managers have worked hard to improve response times. However, without automatic capabilities they are faced with huge over-provisioning – taking them back to a highly sluggish, inefficient environment. Often, this leaves managers in a situation they sought to escape by moving to an SDDC in the first place.
Data centre infrastructure management (DCIM) can prevent this scenario. Going way beyond measuring power usage effectiveness (PUE) or daily temperature checks in data centre hot or cold aisles, DCIM gives facility managers the ability to monitor a virtual infrastructure. But up to now, take-up of DCIM has been slower than expected, largely because the packages on offer have not matched what facility managers actually need.
A fully integrated and automated DCIM package operating in real time is the solution. This should have the capacity to talk directly to the SDDC software, ensuring that turning up the wick in a particular area will not bring down the physical hardware in that space, or reduce operational efficiency as load is taken away.
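The kind of conversation such a package would have with the SDDC software can be sketched as a simple headroom check: before workload is shifted onto a host, the DCIM layer confirms the rack has both power and thermal capacity to spare. The `Rack` class, thresholds and function below are hypothetical illustrations, not a real DCIM product's API.

```python
# Illustrative sketch: before the SDDC software increases load on a host,
# a (hypothetical) DCIM layer approves or rejects the move based on the
# rack's measured power draw and inlet temperature. All names, limits and
# margins are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class Rack:
    power_draw_kw: float       # current measured draw
    power_capacity_kw: float   # PDU/breaker limit
    inlet_temp_c: float        # current rack inlet temperature

def can_accept_load(rack, extra_kw, max_inlet_c=27.0, power_margin=0.9):
    """Approve a load increase only if power and thermal limits hold."""
    within_power = rack.power_draw_kw + extra_kw <= power_margin * rack.power_capacity_kw
    within_thermal = rack.inlet_temp_c <= max_inlet_c
    return within_power and within_thermal
```

A move that would push a rack past 90% of its power capacity, or onto a rack already running hot, is refused – which is exactly the "turning up the wick will not bring down the physical hardware" guarantee described above, expressed as a real-time check rather than a manual judgement.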
We live in a world where organisational evolution is regular and fast-paced. To support this, IT will need to be delivered reliably as a utility, facing the same customer expectations as traditional utilities – such as being able to match demand and generation effectively and efficiently, without wasteful and expensive over-provisioning.
With the right tools, facility managers can provide a modern physical infrastructure that properly enables the optimum use of the virtual technology within – one that meets the utility-IT balancing challenge.