"It's no secret that computing is moving toward the cloud as companies look to tap into the environment's flexibility and scalability advantages. According to a recent Forrester Research study, the transition is happening even faster than anticipated, with the market projected to be about 20 percent bigger by 2020 than the research firm previously predicted. And that transition is set to change the way data center infrastructure is deployed as well.
Cloud-Scale Data Centers: Limitless Possibilities
""There's been a higher level of replacement than we had assumed,"" said Forrester analyst Andrew Bartels. ""A lot of companies are not building data centers. They're going to public providers.""
The shift to the cloud doesn't mean that data centers are going away, however. Instead, the data center is becoming an increasingly cloud-focused institution. According to the Forrester study, hardware used for cloud data centers will account for 19 percent of data center server and storage spending by 2020. A recent MarketsandMarkets study observed that the industry is poised for growth in the "mega data center" sector, as the economies of scale offered by cloud and colocation services push companies toward larger infrastructure projects.
The cloud may sound like a nebulous concept, but it's crucial to remember that it's tied to real-world infrastructure, cloud computing expert Bill Kleyman explained in a recent video for Network Computing. That infrastructure is expanding as the ways people use technology evolve.
""The future of the data center means the future of everything: cloud computing, big data, IT consumerization,"" he said. ""The home is located in the data center. Because let's remember: Although we're looking up at the cloud, all of that has to have a physical home. And that's in the data center.""
Optimizing The Data Center For Cloud Deployments
As the data center shifts to hosting the cloud, demands on the facility are growing, and the technology inside it has to change as well, with optimization needed to keep it operating at truly high performance levels. A recent ZDNet article highlighted some of the key techniques Emerson Network Power uses to improve availability, reduce costs and simplify management in the cloud-scale data center. Among these are using a high-density system configuration, optimizing the power architecture and deploying infrastructure management tools.
Advanced Hardware Solutions
Moving toward a denser configuration is a common approach for companies looking to optimize their existing data centers for the cloud, as it allows them to increase the facility's compute capacity and energy efficiency without adding space, Emerson's Wesley Lim told ZDNet. By combining virtualization with new blade server technology, it's possible to dramatically increase the amount of computing power and the number of workloads carried out in a single server rack.
""As blade server usage increases, high-density rack configurations have become a best practice for enterprise data centers considering a cloud computing architecture,"" Lim said.
One of the biggest reasons companies are moving to the cloud is to take advantage of new technologies like big data analytics. But with big data programs, storage volumes can grow by as much as 60 to 80 percent annually. As a result, companies are turning toward scale-up architectures like mainframes, as well as tools like solid-state drives, which are far faster than traditional spinning disks - an important advantage for tasks like real-time analytics. The data center of the big data era will necessarily have advanced storage tiering, with a range of hardware designed to meet evolving data needs.
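To get a feel for how quickly 60 to 80 percent annual growth compounds, here is a minimal sketch; the 100 TB starting capacity and five-year horizon are hypothetical.

```python
# Sketch of how 60 to 80 percent annual storage growth compounds.
# The 100 TB starting point and the five-year horizon are assumptions.

start_tb = 100
years = 5

for rate in (0.60, 0.80):
    capacity = start_tb
    for _ in range(years):
        capacity *= 1 + rate
    print(f"At {rate:.0%} annual growth: {capacity:,.0f} TB after {years} years")

# Roughly 1,049 TB at 60 percent growth and 1,890 TB at 80 percent -
# an order-of-magnitude jump that motivates tiered storage and SSDs.
```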
Enhanced Facilities Management
At the same time, these denser configurations can create new cooling challenges, and data center designers will want to implement rack- and row-based cooling solutions, according to Arunangshu Chattopadhyay, Emerson's director of power product marketing. Rather than relying on traditional perimeter cooling approaches, high-density environments need the source of cooling to be much closer to the actual servers, he told ZDNet. Taking this approach can substantially cut cooling costs. Also key to reducing cooling costs and improving overall performance is implementing comprehensive infrastructure management.
With detailed analytics data about all aspects of the data center - from temperature patterns to power usage to actual compute activity - companies can design their data centers to operate in smarter, more effective ways, Lim told ZDNet. The key challenge right now, particularly as companies move into the cloud, is bridging the gap between information about the data center facility and information about actual virtualized IT systems. With increasingly advanced data center infrastructure management tools, it's becoming possible to manage workloads and storage for maximum efficiency based on the physical performance of the server room.
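One way to picture how that bridge works is a simple placement routine that checks each rack's power headroom and inlet temperature before assigning a new workload. The sketch below is purely illustrative: the rack telemetry, thresholds and workload figures are assumptions, not the output of any particular DCIM product.

```python
# Minimal sketch: pick a rack for a new workload using facility telemetry.
# Rack data, thresholds and the workload's power draw are all assumptions.

racks = [
    {"name": "rack-01", "power_kw": 7.5, "capacity_kw": 10.0, "inlet_temp_c": 24.0},
    {"name": "rack-02", "power_kw": 4.2, "capacity_kw": 10.0, "inlet_temp_c": 21.5},
    {"name": "rack-03", "power_kw": 9.1, "capacity_kw": 10.0, "inlet_temp_c": 27.0},
]

MAX_INLET_TEMP_C = 25.0   # assumed cooling threshold

def place_workload(racks, workload_kw):
    """Return the coolest-running rack with enough power headroom."""
    candidates = [
        r for r in racks
        if r["capacity_kw"] - r["power_kw"] >= workload_kw
        and r["inlet_temp_c"] <= MAX_INLET_TEMP_C
    ]
    if not candidates:
        return None
    return min(candidates, key=lambda r: r["inlet_temp_c"])

chosen = place_workload(racks, workload_kw=1.5)
print(chosen["name"] if chosen else "No rack has safe headroom")
# With these numbers, rack-02 wins: it has the most power headroom and
# the lowest inlet temperature.
```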
Improved Power Availability
Another key aspect of improving performance is to overhaul the power architecture, particularly as higher-density computing puts more of a strain on the facility's power use, Chattopadhyay explained. For cloud-grade data centers, achieving five nines of availability - 99.999 percent uptime, or roughly five minutes of downtime a year - is increasingly common. But doing so requires data center managers to assess their facilities so they can identify and eliminate key failure points. One of the best places to start is the uninterruptible power supply, where establishing redundancy can go a long way. A parallel redundant, or N+1, system deploys enough UPS modules to power all the connected equipment (this is the N), plus one additional module for redundancy (this is the +1).
""For enterprises seeking to achieve scalability without impacting availability, N+1 redundancy remains the most cost-effective option for high availability data centers and is well-suited for high density cloud computing environments,"" Chattopadhyay told ZDNet. He added, ""When executed correctly, redundant on-line UPS architecture enables the enterprise data center to achieve high levels of efficiency without compromising the availability needed for business-critical applications.""
A Time Of Change
The cloud is forcing the data center to change and evolve into a more efficient, more effective operation. With advanced hardware solutions, smarter architectures and more in-depth management tools, companies can create a data center that is truly optimized for cloud-scale computing. As the cloud continues to change the way things are done, such approaches to the data center will become standard.