Enterprise IT Made Easy

The #1 platform for building your IT infrastructure. Connect directly with top data center, network, peering, and cloud providers. We're 100% vendor-neutral.

Explore Data Centers


Streamline Your Colocation RFPs With IGNITE Project Platform

The Datacenters.com IGNITE Project Platform makes it easy to configure your requirements for colocation services, select colocation providers and data center locations, and submit your digital Colocation RFP directly to colocation providers matching your requirements.


Build Enterprise Cloud RFPs With IGNITE Project Platform

Configure your cloud servers, cloud storage, DRaaS, SaaS, UCaaS, or CCaaS solutions on the IGNITE Project Platform. Pick a category, enter your requirements for cloud services, and submit your digital Cloud RFP. IGNITE intelligently matches you to the right cloud providers.


Search Data Center Locations and Cloud Providers

Find data center locations, colocation providers, and cloud service providers by searching our verified data center database, including top markets, facilities, and providers in the USA and internationally. Click "data centers near me" to view data centers within a specific radius.


Shop Colocation Pricing, Bare Metal Servers and Hosting Services

Get exclusive deals and discounts from top providers on colocation pricing, bare metal server pricing, dedicated servers and hosting services. Reserve your pricing online directly from the provider or customize your service using the Datacenters.com Project Platform.





Calling Data Center Consultants, Brokers and Telecom Agents

Chris Newell

Global Consultant

Leslie Bonsett

Global Consultant

Michael Kriech

Global Consultant

Join the Datacenters.com team as a data center consultant, real estate broker, telecom agent, VAR, or MSP. We're always looking for elite industry professionals with a strong background in data center consulting, cloud consulting services, managed services, and networking. Ask about our upcoming Datacenters.com Data Center Certification training program.

Data Center Industry Resources

Data Center Vendor List

The Datacenters.com Vendor Directory is a resource dedicated to data center owners and operators for sourcing vendors for critical infrastructure, including UPS, cooling, construction, security, modular, hardware, storage, networking, and more.

Visit Data Center Vendor List
Data Center Real Estate

Alongside manufacturing, data center real estate is one of the hottest markets in commercial real estate. Browse data centers for sale, lease and data center real estate investment opportunities on Datacenters.com. List your data center for sale privately.

View Data Center Real Estate

Trusted by Top Colocation Providers

What Our Providers and Customers Say About Us

Rackspace helps its customers accelerate the value of cloud at every phase of their digital transformation journey and Datacenters.com is a natural partner for anyone on this journey.
Vicki Patten
Our company prides itself on being the secure infrastructure company with a global platform of 50+ best-in-class data centers. We are happy to partner with Datacenters.com, whose forward-thinking, industry-changing, global user experience is a great fit for our products. We are excited to be pioneers in the marketization of colocation and to be a part of the Datacenters.com Marketplace.
Chad Markle

Latest Data Center News

Read the latest data center news about cloud computing, technology and more.

Visit Data Center Blog
2 Jun 2020
Data Center Real Estate, A Tale of Two Markets
One could argue that there are more colocation providers and data center facilities than at any other time in history. M&A is up, data center expansion is strong, construction is up, and land acquisition is strong. Investor appetite for data center real estate is near all-time highs. There is just one problem...

Supply & Demand for Data Centers

There is virtually no supply of available data centers that meet investor criteria. What is that criteria? Investors interested in data centers want a credit-worthy anchor tenant, or a large number of retail customers included in the sale. Right now, that supply simply does not exist. Think about it this way: why would a data center owner sell a cash-flowing property with creditworthy tenants and upside for continued growth? It would have to be a pretty big number if you ask me.

Do not get me wrong. There are plenty of data centers for sale. Some were built in the early 1990s, others in the 2000s. Some were built for retail colocation, others as enterprise data centers. In nearly all cases, the data centers for sale have 100% vacancy or a single tenant planning to exit and relocate to a colocation facility or the cloud. Some data centers for sale may have a handful of retail colocation customers. However, these are not the type of data centers that institutional investors, pension funds, or VCs want to purchase. They are looking for a turn-key operation that provides predictable cash flows. The so-called rent roll and the creditworthiness of the tenants are mission-critical.

Outdated Data Centers Are a Hard Sell

Another major challenge with these types of data centers is that they're outdated by today's standards. Less than 500kW of total power in a data center is not going to cut it in today's highly competitive colocation environment. Plus, antiquated HVAC will not handle the high-density requirements of today's colocation customers. Another factor is curb appeal. Yes, I just said curb appeal. Newly constructed data centers have all of the features and amenities that retail and wholesale colocation clients want: office space, conference rooms, break rooms, staging areas, high ceilings, storage areas, loading docks, and more. Not to mention advanced security systems, personnel, and monitoring. The Mission: Impossible presentation, if you will.

Cloud Adoption Creates Market Change

There is one other factor at play here: the cloud. IT workloads are moving to the cloud whether you like it or not. Take a second and think about how technology is changing. Do you have an Exchange server? What about a database server? What about servers that run your applications and host your website? It is true that some workloads run better on physical servers, and certain software vendors make it hard to license their software on virtual infrastructure. That will be the case for years to come.

When you look at who is buying colocation services and why, you see a clear picture of what is truly going on in the market. Business and enterprise clients are adopting the cloud in a big way. They are moving more and more IT workloads to the cloud. It could be public, private, or hybrid cloud. It could even be multi-cloud deployments. At the same time, they need to maintain on-premise or colocated physical servers for certain circumstances. The physical footprint for IT infrastructure is shrinking.

So, who are the future buyers of colocation services? Are you ready? It is the hyperscalers such as AWS, Microsoft, Google Cloud, IBM Cloud, Oracle, Alibaba Cloud, and others. It makes sense, right? If business and enterprise clients are moving IT workloads to the cloud, cloud service providers (CSPs) require more data center space to support those workloads. You are probably thinking that CSPs build their own private data centers. That is correct. However, they often leverage wholesale colocation until they reach a critical mass where it makes more economic sense to build a data center than to lease one. Every CSP is different when it comes to the threshold for building a data center.

Retail Colocation Continues Growth Path

Is there a play for retail colocation? Absolutely. Business and enterprise clients are building their own private clouds in colocation facilities. There are many instances where physical servers, private cloud servers, and storage make more economic sense than going with one of the big three cloud providers. Like I said before, there will always be a need for physical servers based on the workload type and software licensing.

I do offer this word of caution for those concerned about the cost of cloud services. The big hyperscalers have economies of scale, talent, and workforces that simply cannot be matched by a single business or enterprise. Look for this trend to continue. They are simply going to get larger and more powerful. They will find ways of increasing operational efficiencies to drive down the cost of cloud services. Think five to ten years from now. What will the data center industry look like?

Future Technologies Drive Colocation Demand

There is another future play for colocation. Drumroll, please. It is called the edge, or edge computing. It is not here yet, but it is coming along with a number of other technological innovations like 5G, the internet of things (IoT), autonomous vehicles, and artificial intelligence (AI). The edge essentially pushes IT workloads closer to the end-user, wherever they are located. The main concept behind the edge is that there will be regional and centralized data center locations. Smaller, regional data centers will act as a bridge in communicating between end-users and central data centers.

Coming around full circle: the market for smaller, older, outdated data centers is essentially non-existent today from a commercial real estate perspective. At best, these are powered shells in a strategic location that need to be acquired and built out by a colocation provider. The amount of capital required to retrofit and operate one of these data centers is not even a consideration for 99.999% of potential data center investors.

That is today. However, could there be a play for these data centers in the future with edge computing? Wide-ranging adoption of edge is still five to ten years out. The demand for edge computing and edge data centers is still a big unknown. If I were a betting man, I would place my chips on regional data centers in rural or remote locations with the potential to become an edge data center in the future. There is already enough competition for colocation data centers in large metros.
29 May 2020
How to Select the Right Colocation Provider and Data Center?
Whether driven by finances, personnel and resources, disaster recovery planning, or merely wanting to concentrate on core competencies, many businesses are choosing to relocate their IT infrastructure to third-party data centers operated by colocation providers. Colocation is an attractive alternative to hosting mission-critical servers and storage in on-premise telecom closets and data centers.

Why? Most businesses have discovered that in-house data centers cannot meet the availability, reliability, connectivity, and power requirements that are critical to both IT and business strategies. Furthermore, the benefits of relocating servers to a colocation facility include decreased capital costs, greater reliability, simpler management, and fewer resources dedicated to managing and troubleshooting common data center infrastructure tasks.

However, the decision to relocate from an internal data center to an offsite colocation data center is not always an easy one. The mere thought of relocating servers, storage, and networking gear requires substantial planning and careful consideration. The apprehension that comes with change is often the largest roadblock in relocating a data center.

Once a company decides to make the transition, it takes a clear strategy to determine the right colocation provider, data center facility, and services for supporting your hardware remotely. To determine the best approach for their business needs and goals, companies must take five key considerations into account.

1) Define Your Goals & Objectives

Technology supports business goals, and the data center supports the underlying technology. Therefore, businesses should clarify what their long-term goals are and what they represent for everyone involved with the technology. Various departments may have different goals and objectives that will impact the data center strategy.
Download this free Colocation Buyer's Guide from Datacenters.com.

The first step is to get everyone involved on the same page. That includes identifying all of the stakeholders. It's important to have a diverse cross-section of employees who can weigh in on the general strategy, goals, and objectives. This group should encompass senior leaders as well as subject matter experts such as SysOps, DevOps, SecOps, admins, and software developers. This ensures that all perspectives and data center use scenarios are analyzed. For instance, upper management may not be aware of problems technicians encounter, such as insufficient monitoring and alerting tools for the network.

Stakeholders' goals should be longer-term, high-level items they want to achieve, such as enhancing data center uptime or reducing time spent running a data center. Their objectives should be short-term milestones that support each goal, be measurable, and contain specific start-to-finish timelines. Each stakeholder must then rank these goals by importance in meeting the overall objectives.

Following the interviews, with goals and objectives identified, the next step is to bring stakeholders together and form a consensus on the most important data center related goals and objectives. From there, the group can brainstorm how these goals and objectives can be attained, setting the platform for a comprehensive data center strategy.

2) Research Colocation Providers & Data Centers

After goals and objectives are defined, discussions should shift to determining the process for selecting a colocation provider and data center facility. There are varying levels of support provided by colocation providers, ranging from the minimum of rack space, power, and connectivity to fully managed colocation services.

Another major consideration when selecting a colocation data center is location. What is the purpose of the data center relocation?
This ties back to the goals and objectives identified above. Is the goal of relocating to a colocation facility disaster recovery? If so, the data center may be located in another city or state. If the goal is reliability or performance, most businesses look to colocation providers and data centers located near their headquarters or a branch office. In this case, proximity to the IT personnel responsible for managing the hardware should be within a reasonable driving distance.

Not all data center facilities are created the same. There are many different types of facilities, from purpose-built to retrofit, single-story to multi-story. There are also different tier levels that signify the number of redundant systems built into the data center. The Uptime Institute rates data center facilities on a scale of 1-4, with a Tier I data center being the lowest in terms of uptime and redundancy and a Tier IV data center being the highest. Data center redundancy plays a major role in the amount of downtime expected each year. Scheduling a data center tour is the best way to compare facilities.

Businesses may also heavily weigh the connectivity options at the data center facilities they are interested in. This includes on-net telecom carriers, ISPs, and network providers that are already integrated within the network architecture. Further, direct connectivity to major cloud providers like AWS, Google Cloud, Microsoft Azure, and IBM Cloud will also be an important consideration in the decision-making process.

Last but not least, many businesses will take into consideration a general budget for how much they would like to spend on colocation services. It will also be important to consider the contract term. Is it a 1, 2, 3, or 5-year term? What are the long-term technology drivers?
Will the business focus on a hybrid infrastructure approach, or will it move 100% to the cloud by a certain date?

3) Define & Write Your Colocation RFP

Now that you have created a list of finalists for colocation providers and data centers, it is time to develop a request for proposal (RFP). The purpose of an RFP is to get standardized answers and responses from each of the providers. This allows for side-by-side comparisons and empowers your business and leaders to negotiate based on the responses. Building an RFP that addresses your specific business requirements is essential for locating the right colocation provider. Download this Colocation RFP Template from Datacenters.com.

What does a colocation RFP include? Beyond a company background, an RFP includes clear instructions on what information providers should include in their proposals. This may include mandatory requirements, an estimate of the initial number of cabinets, and the power necessary to support present and future growth requirements.

Additionally, it is key to give colocation providers the opportunity to describe all aspects of their offering, from electrical infrastructure to how they handle deliveries. The RFP should also request information relating to pricing and provisions, sample agreements, compliance and certifications, and more.

You can also ask when the data center last experienced downtime:

- Why did the data center go down?
- Was it a power outage or network failure?
- What was the cause?
- How did the redundant systems such as UPS and backup generators respond?
- How long did the data center downtime last?
- What is the history of downtime?

These questions will help you determine the risks of going with a particular colocation provider or data center facility.
It is important to realize that data centers are typically built for redundancy and offer significantly better reliability than office buildings.

4) Review RFP Responses & Proposals

Evaluating provider responses can take even more time than defining requirements and writing the RFP. Once all of the responses are received, it is important to make apples-to-apples comparisons to determine the ideal colocation provider for your business and technology requirements.

This can be a daunting task. How do you compare all of the providers? Which one is the best? One way is to create a weighted score matrix. Build a spreadsheet matrix of all colocation provider finalists and the questions from the RFP. Each colocation provider is scored based on their answers to the questions, and you can weight certain questions more heavily than others. The total overall score is tallied at the bottom of the spreadsheet for each provider.

Another way of comparing providers is to ask additional questions for clarification. Follow-up questions can help you understand outlier proposals, service offerings, and pricing. Some providers may offer additional services or recommendations. It is important to understand the differences.

Collaboration among stakeholders and decision makers in this part of the process is critical. Make sure to involve them as the proposals come in. Also, make sure to share the matrix or a summary of the RFP responses.

5) Finalist Selection & Negotiation

You are down to one or maybe two colocation provider finalists. What's next? Negotiation of pricing and terms should be next on your list. The colocation RFP process allows you to compare providers and services on an apples-to-apples basis. It also creates leverage to negotiate terms and pricing of those service offerings.

You can request contracts and finalized pricing at this point. Make sure to read through the contracts carefully and engage legal for review and redlines if possible.
Highlight any gotchas in the contracts, such as automatic renewals, SLAs, initial deposits, length of the contract, move-in dates, and other details. For the majority of colocation providers, contracts can be adjusted to meet the requirements of the client.

Final pricing can also be negotiated. Some providers are flexible on price while others are not. You also have to take into consideration the data center facilities and their differences, such as tier level and location.

We're Here to Help With Your Colocation RFP

Need help writing or managing your colocation RFP? At Datacenters.com, we offer a digital colocation RFP with an easy-to-use wizard that guides you through each step of the RFP process, as well as a traditional RFP template for colocation. Leverage our in-depth expertise and market knowledge to get the best colocation provider and data center for your project. It's a free service, and we connect you directly with all of the major colocation providers such as Equinix, Digital Realty, CoreSite, Cyxtera, Flexential, Cologix, QTS Data Centers, DataBank, Evoque, and more. Contact me, Bart Dorst, Global Technology Consultant, today to learn more.
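The weighted score matrix from step 4 can be sketched in a few lines of Python. Everything here is illustrative: the provider names, RFP questions, weights, and 1-5 scores are stand-ins for your own.

```python
# Minimal weighted score matrix for comparing colocation RFP responses.
# Providers, questions, weights, and scores below are illustrative only.

# Weight each RFP question by its importance to your goals (weights sum to 1).
weights = {
    "uptime_history": 0.30,
    "power_density": 0.25,
    "connectivity": 0.25,
    "pricing": 0.20,
}

# Score each provider's response to each question on a 1-5 scale.
responses = {
    "Provider A": {"uptime_history": 5, "power_density": 4, "connectivity": 3, "pricing": 4},
    "Provider B": {"uptime_history": 4, "power_density": 5, "connectivity": 3, "pricing": 3},
}

def weighted_score(scores: dict) -> float:
    """Tally the weighted total for one provider, as the spreadsheet would."""
    return sum(weights[question] * score for question, score in scores.items())

# Rank providers by total weighted score, highest first.
ranking = sorted(responses, key=lambda p: weighted_score(responses[p]), reverse=True)
for provider in ranking:
    print(f"{provider}: {weighted_score(responses[provider]):.2f}")
```

In a real evaluation the questions and weights come out of the stakeholder goals defined in step 1, which is what keeps the comparison apples-to-apples.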
26 May 2020
Top 20 Cloud Computing Terminology and Definitions You Need to Know
With the rapid rise in cloud adoption, it's important to know the key cloud computing terms and definitions. Many of us are still trying to wrap our heads around the cloud. The terminology featured in this list refers primarily to cloud infrastructure solutions, including public, private, hybrid, and multi-cloud servers, storage, and networking services.

Many of the cloud terms and definitions in this list apply to all cloud service providers (CSPs). It also applies to specific providers such as Amazon Web Services (AWS), Microsoft Azure, Google Cloud, and IBM Cloud.

1) Infrastructure as a Service (IaaS)

What is Infrastructure as a Service (IaaS)? It is a type of computing infrastructure that is provisioned and managed over the internet. With IaaS, you can quickly scale up and down with demand, and you pay only for what you use. It also avoids the capital outlay and complexity of buying and managing your own physical servers.

IaaS is offered by hundreds of cloud providers but is dominated by companies such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud. These providers manage the underlying infrastructure while you are responsible for the provisioning, installation, configuration, and management of your own software, including operating systems (OS), middleware, and applications.

2) Platform as a Service (PaaS)

What is Platform as a Service (PaaS)? PaaS is a type of cloud computing service where both development and deployment environments are in the cloud. This allows users and developers to quickly spin up resources without provisioning or configuring the underlying compute or storage infrastructure. With PaaS, you can deliver everything from simple cloud-based apps to enterprise applications.

PaaS differs from IaaS in several ways. With IaaS, the user or customer is responsible for the provisioning, installation, configuration, and management of software, OS, middleware, and applications. This is not the case with PaaS.
PaaS includes all of the infrastructure such as servers, storage, and networking. It also includes middleware, development tools, business intelligence (BI), database management systems, and more. With PaaS, you manage the applications you develop, and the cloud provider manages everything else.

3) Software as a Service (SaaS)

What is Software as a Service (SaaS)? SaaS allows customers and users to connect to and use cloud-based software applications over the internet. SaaS provides users with software that is purchased on a licensing or pay-as-you-go basis from a cloud provider, as opposed to software that is installed locally on a desktop or laptop. With SaaS, users connect to the software over the internet, usually with a web browser.

Examples of cloud SaaS applications include email, calendars, design programs, office suites, collaboration tools, and more. The most common examples of SaaS include Microsoft Office 365, Google G Suite, Gmail, Adobe Creative Cloud, Salesforce, HubSpot, and many others.

4) Public Cloud

What is public cloud? The public cloud can be defined as a multi-tenant computing service offered over the public internet. Public cloud is for users that want to access compute infrastructure on demand with increased scalability and elasticity, paying per use for CPU cycles, storage, and bandwidth consumed over a given time period, such as per minute, per hour, or per month.

The benefits of public cloud include shifting from CapEx to OpEx for compute and storage infrastructure. Public cloud also allows customers to right-size infrastructure to their IT workload, application, and data requirements. If implemented correctly with the right security measures, such as intrusion detection and prevention systems (IDPS), public cloud can be as secure as private cloud.

5) Private Cloud

What is private cloud?
Private cloud can be defined as a single-tenant computing service offered over the public internet, a dedicated internet connection, or an internal network. With private cloud, the underlying infrastructure is accessible only to select users instead of the general public.

Private clouds provide many of the same benefits as public cloud, such as self-service, scalability, and elasticity, plus additional control and customization features. Private clouds also offer additional layers of security and privacy, ensuring that mission-critical applications and sensitive data are not accessible by third parties.

6) Hybrid Cloud

What is hybrid cloud? Hybrid cloud, hybrid IT, and hybrid infrastructure are often confused. A hybrid cloud is a computing environment that includes a combination of both public and private cloud infrastructure. Certain data and applications may be better suited for private cloud, while others are fine operating in public cloud. Private and public clouds can work together to compute, process, and store diverse IT workloads.

Hybrid clouds offer the best of both worlds: the flexibility, scalability, and cost efficiencies of public cloud paired with the reduced security threats and data exposure of private cloud.

7) Managed Cloud

What is managed cloud? Managed cloud, also referred to as cloud managed services or cloud management, allows customers to deploy cloud-based services such as IaaS, PaaS, and SaaS without having the internal staff, resources, and technical expertise to install, monitor, and manage them on an ongoing basis.

Cloud managed services are available from either a third-party managed service provider (MSP) or the cloud service provider (CSP) providing the underlying infrastructure. This includes management and monitoring of compute, storage, networks, and operating systems.
Managed cloud also covers complex tools and applications that run on top of cloud infrastructure. The benefits of managed cloud include access to a team of specialists experienced in public, private, and hybrid cloud architectures. Many MSPs offer cloud management for multi-cloud infrastructure.

8) Multi-Cloud

What is multi-cloud? Multi-cloud is a cloud computing deployment approach that is made up of two or more cloud service providers (CSPs). This can include the use of a public cloud provider, a private cloud provider, or a cloud provider offering both public and private cloud.

Multi-cloud architecture is used when customers want to leverage one cloud provider for certain services, such as a public cloud server, and another cloud provider for others, such as cloud-based object storage. Reasons for deploying a multi-cloud approach include improving security and performance while potentially lowering costs across an expanded portfolio environment.

9) Hybrid IT

What is hybrid IT? Hybrid IT, also known as hybrid infrastructure, is different from hybrid cloud. A hybrid cloud consists of both public and private cloud infrastructure, such as public and private cloud server instances. Hybrid IT infrastructure, on the other hand, is an information technology environment that uses both physical and virtual infrastructure.

The most common use case for hybrid IT is physical servers located in a colocation data center combined with public or private servers in the cloud. Many customers use hybrid IT when there are issues with virtualizing legacy workloads, software licensing restrictions, regulatory or compliance requirements, or data security concerns.

10) Central Processing Unit (CPU)

What is a CPU? A Central Processing Unit (CPU), also referred to as the main processor, is an electronic chip comprised of circuitry within a computer that executes instructions from programs that run or perform tasks.
These tasks include the basic arithmetic, logic, control, and input/output operations specified by the instructions in the program.

In cloud computing, the CPU is commonly referred to as the vCPU, or virtual CPU. A vCPU represents a portion of a physical CPU that is assigned to a virtual machine (VM). A vCPU is also known as a virtual processor.

11) Cores

What are cores? In cloud computing, hypervisors control the physical CPU within the cloud server. CPUs are divided into what are known as CPU cores. Each core can technically support 8 virtual processors (vCPUs). However, a vCPU does not represent a 1:1 allocation; it represents time on the physical CPU resource pool. A virtual processor is best described as an amount of processing time spent on the CPU. Some people mistakenly think that 1 vCPU is equal to 1 core, but a one-to-one relationship between vCPU and core does not exist.

12) Graphics Processing Unit (GPU)

What is a GPU? A Graphics Processing Unit (GPU), also known as a video card or graphics card, is a specially designed computer chip that rapidly performs graphics and other highly parallel tasks, freeing up the CPU for other work. CPUs use a few cores focused on sequential serial processing. GPUs have thousands of smaller cores that are used for parallel multi-tasking.

There are two types of GPUs. Integrated GPUs are located on the same die as the CPU and share memory with it. Discrete GPUs have their own card and video memory (VRAM), which means the CPU does not have to share system RAM for graphics.

13) Random Access Memory (RAM)

What is RAM? Random Access Memory (RAM) is the short-term memory of a device. RAM temporarily stores everything that runs on your computer, including short-term data for software, documents, web browsers, files, and settings.

Data stored in RAM can be read from anywhere at almost the same speed.
This is opposed to having your CPU search your hard drive every time you open a new browser window or application. Traditional storage, including HDD and SSD, is still very slow compared to RAM.

An important thing to remember about RAM is that it is short-term rather than long-term memory. It is a volatile technology, which means that once it loses power, the data is lost. That is what traditional, long-term hard drives are for.

14) Input/Output Operations Per Second (IOPS)

What is IOPS? IOPS is the acronym for Input/Output Operations Per Second. It is a common performance measurement used in the benchmarking of computer storage devices such as hard disk drives (HDD), solid-state drives (SSD), and storage area networks (SAN).

It's important to note that the IOPS numbers published by manufacturers do not guarantee the same real-world application performance.

15) Object Storage

What is object storage? Object storage, also referred to as object-based storage, is a storage strategy that manages and manipulates data as distinct units called objects. These objects are kept in a single repository rather than nested as files inside folders.

Object storage adds metadata to each file, eliminating the tiered file structure used in file storage, and places everything into a flat address space called a storage pool. Object storage offers near-infinite scalability and is less costly than other storage types.

It is important to note that object storage uses versioning. Newly written objects offer read-after-write consistency; edited or deleted objects have eventual read consistency.

16) Block Storage

What is block storage? Block storage, also referred to as block-level storage, is a type of storage used within storage area networks (SANs) or cloud-based storage environments.
Block storage is used for computing where fast, efficient, and reliable data transport is required.

Block storage works by breaking data up into blocks and storing them as separate pieces. Each block has a unique identifier, and blocks are placed wherever it is most efficient. This means blocks can be stored across different systems and configured or partitioned to work with different operating systems (OS).

17) Volumes

What are volumes? In data storage, a volume is a single accessible storage area with a single file system, typically residing on a single partition of a hard disk. Although different from a physical disk drive, a volume can be accessed through an operating system's logical interface. It is important to note that volumes are different from partitions.

18) Snapshots

What are snapshots? Snapshots allow near-instantaneous copies of datasets to be taken in a live environment. A snapshot can be made available for recovery, or copied to other cloud servers or storage for performance or for test and development environments.

A major benefit of snapshots is speed: they provide immediacy of recovery and the ability to quickly restore data to an environment. It is important to note that there are costs associated with snapshots. They can consume cloud capacity in a single availability zone, in multiple availability zones, and/or across regions.

19) Replication

What is replication? Replication in cloud computing involves sharing information across redundant infrastructure, such as cloud servers or storage, to improve the reliability, availability, and fault tolerance of an IT workload - applications, data, databases, or systems.

20) High Availability Architecture

What is high availability architecture? High availability architecture involves the design and deployment of multiple components that work together to ensure uninterrupted service during a specified time period.
It also encompasses the response time and availability of user requests.

The key to highly available architecture is to plan for failure. Systems should be tested regularly under varying scenarios and conditions. This ensures that your IT workloads stay online and responsive even when components fail and during times of high stress on the system. High availability architectures include the following: hardware redundancy, software and application redundancy, data redundancy, and the elimination of single points of failure.
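The plan-for-failure principle above can be sketched as a minimal health-check-and-failover loop. This is only an illustration; the replica names and the health probe are hypothetical stand-ins for real infrastructure.

```python
# Hypothetical redundant replicas; in a real system these would be
# servers or availability zones behind a load balancer.
REPLICAS = ["app-primary", "app-standby-1", "app-standby-2"]

def is_healthy(replica: str) -> bool:
    # Stand-in for a real health probe (e.g., an HTTP /health check).
    # Here we simply simulate a failed primary.
    return replica != "app-primary"

def route_request(replicas):
    """Send traffic to the first healthy replica, failing over past dead ones."""
    for replica in replicas:
        if is_healthy(replica):
            return replica
    raise RuntimeError("all replicas down - a single point of failure remains")

print(route_request(REPLICAS))  # traffic fails over to "app-standby-1"
```

Because the standby replicas already exist, the failover is immediate; with no redundancy, the failed primary would have been a single point of failure.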
20 May 2020
Monetizing the Edge: Colocation On-Premise - Yes, You Can Have Your Data Center and Pay for It, Too
Often people are told everything is going colo or everything is going cloud. The truth is that most enterprises will use all of the available on-site, off-site, and cloud-based assets as tools in their toolkit. There is no one-size-fits-all, and there are myriad reasons to keep compute local. Monetizing that compute and having it behave as colo on-premise is a newer, attractive model as we begin to build out the edge.

Edge Data Centers

Edge data centers, and edge compute for that matter, remain the new hot topic in the data center industry. And like other buzzwords, the definition, use, and expectations are lumped into a broad category that encompasses probably more than it should. The Linux Foundation created an Open Glossary of Edge Computing to try to clarify and define the various types of compute and edge. According to the guide, an Edge Data Center is one that is "capable of being deployed as close as possible to the edge of the network, in comparison to traditional centralized data centers. Capable of performing the same functions as centralized data centers although at a smaller scale individually. Because of the unique constraints created by highly-distributed physical locations, edge data centers often adopt autonomic operation, multi-tenancy, distributed and local resiliency, and open standards. Edge refers to the location at which these data centers are typically deployed. Their scale can be defined as micro, ranging from 50 to 150 kW+ of capacity. Multiple edge data centers may interconnect to provide capacity enhancement, failure mitigation, and workload migration within the local area, operating as a virtual data center."

In short, edge data centers are defined, at least here, as small data centers closer to users - in this case, micro in terms of power. What is missing from this definition is the sheer volume of these small data centers needed for things like smart cities. And while the power numbers seem insignificant at first glance, the aggregate power is daunting.
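A back-of-the-envelope calculation shows how daunting the aggregate gets. The 50-150 kW per-site range comes from the Open Glossary definition above; the site count is a purely illustrative assumption.

```python
# Aggregate power for a hypothetical fleet of micro edge data centers.
sites = 500                  # illustrative smart-city deployment size
low_kw, high_kw = 50, 150    # micro edge data center capacity range (per site)

low_total_mw = sites * low_kw / 1000
high_total_mw = sites * high_kw / 1000
print(f"{sites} sites -> {low_total_mw:.0f} to {high_total_mw:.0f} MW aggregate")
# 500 sites -> 25 to 75 MW aggregate
```

Even at the bottom of the micro range, a few hundred "insignificant" sites add up to the power draw of a sizable conventional data center campus.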
That said, the distributed models allowed in edge compute lend themselves well to renewable and alternative energy - if not wholly, then as backup to the existing grid. But that doesn't mean power consumption can be ignored at any portion of the edge. This is true for edge devices, edge computing, and edge data centers, all of which are components of edge compute.

Edge devices can be stuck to poles, mounted on buildings, and placed in various other locations, but the devices themselves are hardened little machines designed with network connectivity (generally WiFi or cellular). As these are widely distributed, they are outside the scope of this piece, except to say that power will be consumed collectively by a massive number of small-usage edge devices, and they will likely report back to some other edge location for at least part of the data they serve.

Shifting to the edge data center - a building, not a device and not a shipping container - these smaller, perhaps modular, data centers will be needed literally all over. The idea is quite simple: move the compute closer to the user. Not all data needs to go back to a central repository. Machine-to-Machine (M2M), Vehicle-to-Infrastructure (V2I), and other close-proximity, low-latency communications don't need to travel halfway across the country to a larger data center. In fact, some of those communications don't need to go anywhere after processing. This is part of the reason that edge data center market revenue, according to Global Market Insights, is expected to grow to $16 billion US by 2025.

Power Consumption at the Edge

All paths do not lead to Virginia, or any other large data center city for that matter. The need to place compute closer to the end user is growing with advancements in IoT, autonomous vehicles, enhanced security, WiFi, 5G, smart cities, and the like.
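The low-latency argument can be made concrete with a rough propagation-delay estimate. The distances are illustrative; the only physics assumed is that light in fiber travels at roughly two-thirds of c.

```python
# Rough round-trip propagation delay over fiber; distances are illustrative.
C_KM_PER_MS = 300.0      # speed of light in vacuum: ~300 km per millisecond
FIBER_FACTOR = 0.67      # light in fiber travels at roughly 2/3 c

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay only; queuing and processing are extra."""
    return 2 * distance_km / (C_KM_PER_MS * FIBER_FACTOR)

print(f"edge hop (10 km):       {round_trip_ms(10):.2f} ms")
print(f"cross-country (2000 km): {round_trip_ms(2000):.1f} ms")
```

Propagation alone puts a cross-country round trip near 20 ms before any queuing or processing delay, while a 10 km edge hop is a rounding error - which is exactly why M2M and V2I traffic benefits from staying local.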
The distribution of compute will create the need to manage power not singularly, but across a variety of sites. Some estimates say that power consumed by the edge will reach 102 gigawatts. Hyperscalers and builders are increasingly looking toward renewables, as noted below:

Google Shifting Workloads to Become Carbon Aware
Microsoft Pledges to be Carbon Negative by 2035
Amazon Commits to Renewable Energy 100% by 2035

While that's great for them, it certainly doesn't address the smart city, or others that don't have the pockets for large-scale neutrality. For most data centers, the driving consolidation benefit is cost. But when that data center is broken into 50 smaller data centers, some intelligence is going to be needed to monitor, to orchestrate workloads based on energy costs and availability, and to flatten out demand - removing the need to overprovision for peaks that occur only occasionally and leave large amounts of power stranded.

Software-Defined Power (SDP) is the last pillar of the software-defined data center and brings AI into the mix. With software-defined power, appliances can be used to shave off the peak (peak shaving). For instance, assume that a data center is designed for a peak of 10 kW per cabinet but normal operating needs are closer to 6 kW per cabinet. Designing for the 10 kW peak means that roughly 4 kW per cabinet is provisioned on each of the primary and secondary feeds but remains wasted capacity - 8 kW per cabinet in total - for the vast majority of the time. Colocation providers struggle with stranded power, and the cost of that unused capacity is typically just passed through to occupants as power costs, even if it is rarely, if ever, used.

Another benefit of SDP is the ability to orchestrate IT loads to maximize power savings. Suppose that in a 30-cabinet data center there are several virtual machines that operate sporadically, creating inconsistent CPU loads.
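The stranded-capacity arithmetic above is worth checking quickly. The 10 kW / 6 kW figures and the dual-feed design come from the example in the text; the 30-cabinet count is used only to show how the waste scales.

```python
# Stranded power in a dual-feed (A/B) design, per the 10 kW / 6 kW example.
design_kw = 10    # per-cabinet design peak
typical_kw = 6    # per-cabinet normal operating load
feeds = 2         # primary + secondary (redundant) power feeds

stranded_per_feed = design_kw - typical_kw   # 4 kW sits idle on each feed
stranded_total = stranded_per_feed * feeds   # 8 kW per cabinet overall

cabinets = 30
print(f"stranded per cabinet: {stranded_total} kW")
print(f"across {cabinets} cabinets: {stranded_total * cabinets} kW provisioned but unused")
```

At 30 cabinets, that is 240 kW of capacity that is paid for, passed through to occupants, and almost never drawn - the gap that peak shaving is meant to close.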
If all of the servers are provisioned for peak operations, compute capacity, like power, is also wasted. AI is the solution. By bridging the gap between IT and facilities, the compute becomes fluid (orchestrated). For instance, perhaps there are times when workloads can be consolidated onto, say, 25 of those cabinets, allowing 5 full cabinets to be powered off as the loads are orchestrated to take advantage of maximum compute capacity. Node capping keeps compute below the thresholds set within the hardware, and the entire data center is optimized based on what the AI has learned. Let's face it: with multiple edge data centers, it simply isn't feasible to send someone around to each facility, or to have someone dial into each facility, to handle the monitoring and orchestration of workloads. AI and automation provide this fluidity in a meaningful, cost-saving manner currently not available in larger colocation facilities.

SDP also allows orchestration of workloads to run over an alternate power source as a means of shaving peak grid costs. One could operate fully on a renewable cell when the cost of grid power is high, and shift back to grid power when needed. Taking advantage of alternate power sources provides significant savings over being stuck on the grid at all times. Sources could include battery backup units, generators, fuel cells, or in-rack/in-row power storage. The SDP options and opportunities for savings are vast and immediate. SDP also enables disaster recovery of power based on AI: one full building in a smart-city arrangement, for instance, could move its compute in the event of a failure while the equipment provides standby power during that orchestration.

Smart Cities and the Edge

Smart cities bring a myriad of technologies to the table. People generally think of IoT as an enabler of autonomous vehicles, but that is just the tip of the iceberg.
Integrating the edge data center buildings together brings some of the efficiencies discussed above, and adds additional options for security when incorporated with SD-WAN, for instance. Meshing the disparate buildings together creates fault tolerance and opens a wealth of opportunities for applications and monetization. The buildings can become colo on-prem, if you will, encompassing the best of both worlds while supporting near-field communications data, on-ramps to cloud services, and locally hosted data.

The on-premise building can be converted to an OPEX model, enabling meshed cloud offerings to be used around the city, by the city, or by others. A fully meshed edge (facilities and compute) offers the ability to make the data center a service while moving compute closer to the consumer. Herein lies the advantage of monetizing the edge as a colocation on-premise data center. Edge compute companies can lease a server, a rack, or a full adjacent on-site building as needed. Edge-native applications - those not designed to function in a centralized setting - are growing in number and capability. Autonomous vehicles are simply one example.

Monetization of the Edge

One underlying problem with the growth of edge computing is the sheer volume of devices and small data centers needed. Thought of singularly, it is extremely difficult to monetize something small and distributed. However, as we examine the devices and the traffic that will be needed by multiple entities, the sharing principle begins to make a lot more sense. With all the hype around 5G, there is a misconception among many that it will mean ubiquitous fast speeds for everyone. The reality is that there will still need to be backhaul connections. From the central office to the outer tower, this means that significantly more fiber will be in the ground and available for use.
For more remote needs, there may be other technologies in play, including WiFi, line-of-sight connectivity via spectrum, LoRaWAN (Long Range Wide Area Network), and other protocols and devices to carry the communications data streams. Again, bear in mind that this data may remain remote, move to centralized storage, or any combination thereof.

One very underserved segment with respect to data centers is the companies, hospitals, and cities that wish to keep their compute equipment and data in house and/or on-premise but don't happen to be located near a large data center city. In light of recent events, we have seen some data center operators step up while others haven't performed as well. Intelligent hands on the other side of the country are all well and good until multiple organizations need those hands at the same time. There are also companies that simply prefer on-site resources, and of course, there are those with not-yet-fully-depreciated capital assets and no desire to retire them. That isn't to say those assets can't be used for a newer, more efficient, on-premise data center that can then be monetized for the benefit of the owner. This is where modular data centers fit in. Purpose-built buildings engineered for stability, with optimized environmental and power services and the ability to monetize the sure onslaught of edge compute, just make sense.

The Difference Between Modular Edge Compute Facilities and Containers

Many have heard the term "ghetto colo," which refers to containers being placed at the foot of cell towers for backhaul and edge compute. While it seems attractive to just build and ship, this type of data center isn't ideal in all climates and environments.
A modularly built data center building frees up a large portion of the existing building footprint for repurposing. It also allows the facility to be constructed to be Category 4 hurricane rated, F5 tornado rated, seismic rated to zone 4, and ballistics rated to UL 752 Level 4, and it can be outfitted as a SCIF/EMP installation. The esthetics of a modular building are much better than those of a shipping container. Speed to market is rapid once the permits are set, and the buildings can generally be constructed in a short time, as they are pre-engineered in pieces and simply assembled on site.

Once constructed, the building can be used by the core tenant (in a colo on-premise model) or a sole owner, with extra capacity made available for edge, on-ramps, outposts, and the like. This allows data to be near the user, not in the nearest NFL city where a company may not want to divert or employ/contract resources. The sheer volume of edge compute resources needed as we move forward with IoT, AI, and other compute needs will help companies monetize these esthetically pleasing buildings in a secure, cost-efficient manner.

Some ideal applications for modular edge buildings include:

Rural healthcare
Smart agriculture
Smart city distributed buildings
Cloud on-ramps and outposts for hybrid environments
Rural/urban carrier locations
Data hubs in campus environments
Small teaching data centers at colleges with IT/DC curricula
Pharmaceutical and other highly sensitive environments where data control is paramount

The important takeaway here is that sometimes a small, energy-efficient building can provide the same functionality as a large colo, but on-premise, in an OPEX model. Cost models make it attractive, and with the colo on-premise model, running the data center can still belong to someone other than core IT staff if that is the desire.
Upgrades to aging capital equipment can be done in a smaller, more cost-effective footprint, leaving original, not-yet-fully-depreciated assets in situ for use by other applications, or repurposed into a better building layout. Power is fully controlled, not absorbed as a pass-through cost. In short, you don't have to move to have what you want; sometimes all it takes is a little patch of land and vision. Just one more tool in the toolkit of data support.