Enterprise IT Made Easy

The #1 platform for building your IT infrastructure. Connect directly with top data center, network, peering and cloud providers. We're 100% vendor-neutral.


Streamline Your Colocation RFPs With Project Platform

The Datacenters.com Project Platform makes it easy to configure your requirements for colocation services, select colocation providers and data center locations, and submit your digital Colocation RFP directly to colocation providers matching your requirements.


Search Data Center Locations and Cloud Providers

Find data center locations, colocation providers and cloud service providers by searching our verified data center database, including top markets, facilities and providers in the USA and internationally. Click "data centers near me" to view data centers within a specific radius.


Shop Colocation Pricing, Bare Metal Servers and Hosting Services

Get exclusive deals and discounts from top providers on colocation pricing, bare metal server pricing, dedicated servers and hosting services. Reserve your pricing online directly from the provider or customize your service using the Datacenters.com Project Platform.

Datacenters.com Database

Browse our verified database of data centers and marketplace products.


Calling Data Center Consultants, Brokers and Telecom Agents

Chris Newell

Global Consultant

Leslie Bonsett

Global Consultant

Michael Kriech

Global Consultant

Join the Datacenters.com team as a data center consultant, real estate broker, telecom agent, VAR or MSP. We're always looking for elite industry professionals with a strong background in data center consulting, cloud consulting services, managed services and networking. Ask about our upcoming Datacenters.com Data Center Certification training program.

Data Center Industry Resources

Data Center Vendor List

The Datacenters.com Vendor Directory is dedicated to data center owners and operators as a resource for sourcing vendors for critical infrastructure, including UPS, cooling, construction, security, modular, hardware, storage, networking and more.

Visit Data Center Vendor List

Data Center Real Estate

Alongside manufacturing, data center real estate is one of the hottest markets in commercial real estate. Browse data centers for sale, lease and data center real estate investment opportunities on Datacenters.com. List your data center for sale privately.

View Data Center Real Estate

Trusted by Top Colocation Providers

What Our Providers and Customers Say About Us

Rackspace helps its customers accelerate the value of cloud at every phase of their digital transformation journey and Datacenters.com is a natural partner for anyone on this journey.
Vicki Patten
Our company prides itself on being the secure infrastructure company with a global platform of 50+ best-in-class data centers. We are happy to partner with Datacenters.com; its forward-thinking, industry-changing global user experience is a great fit for our products. We are excited to be pioneers in the marketization of colocation and to be part of the Datacenters.com Marketplace.
Chad Markle

Latest Data Center News

Read the latest data center news about cloud computing, technology and more.

Visit Data Center Blog
26 May 2020
Top 20 Cloud Computing Terminology and Definitions You Need to Know
With the rapid rise in cloud adoption, it's important to know the key cloud computing terms and definitions. Many of us are still trying to wrap our heads around cloud. The terminology featured in this list refers primarily to cloud infrastructure solutions, including public, private, hybrid, and multi-cloud servers, storage, and networking services. Many of the cloud terms and definitions in this list apply to all cloud service providers (CSPs). They also apply to specific providers such as Amazon Web Services (AWS), Microsoft Azure Cloud, Google Cloud, and IBM Cloud.

1) Infrastructure as a Service (IaaS)

What is Infrastructure as a Service (IaaS)? It is a type of computing infrastructure that is provisioned and managed over the internet. With IaaS, you can quickly scale up and down with demand, and you pay only for what you use. It also avoids the capital outlay and complexity of buying and managing your own physical servers.

IaaS is offered by hundreds of cloud providers but is dominated by companies such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud. These providers manage the underlying infrastructure while you are responsible for the provisioning, installation, configuration, and management of your own software, including operating systems (OS), middleware, and applications.

2) Platform as a Service (PaaS)

What is Platform as a Service (PaaS)? PaaS is a type of cloud computing service where both development and deployment environments are in the cloud. This allows users and developers to quickly spin up resources without provisioning or configuring the underlying compute or storage infrastructure. With PaaS, you can deliver everything from simple cloud-based apps to enterprise applications.

PaaS differs from IaaS in several ways. An IaaS user or customer is responsible for the provisioning, installation, configuration, and management of software, OS, middleware, and applications. This is not the case with PaaS. PaaS includes all of the infrastructure such as servers, storage, and networking. It also includes the middleware, development tools, business intelligence (BI), database management systems, and more. With PaaS, you manage the applications you develop, and the cloud provider manages everything else.

3) Software as a Service (SaaS)

What is Software as a Service (SaaS)? SaaS allows customers and users to connect to and use cloud-based software applications over the internet. SaaS provides users with software that is purchased on a licensing or pay-as-you-go basis from a cloud provider, as opposed to software that is installed locally on a desktop or laptop. With SaaS, users connect to the software over the internet, usually with a web browser.

Examples of cloud SaaS applications include email, calendars, design programs, office suites, collaboration tools, and more. The most common examples of SaaS include Microsoft Office 365, Google G Suite, Google Gmail, Adobe Creative Cloud, Salesforce, HubSpot, and many others.

4) Public Cloud

What is public cloud? The public cloud can be defined as a multi-tenant computing service offered over the public internet. Public cloud is for users that want to access compute infrastructure on demand with increased scalability and elasticity, on a pay-per-use basis for the CPU cycles, storage and bandwidth consumed over a given time period, such as per minute, per hour or per month.

The benefits of public cloud include shifting from CapEx to OpEx for compute and storage infrastructure. Public cloud also allows customers to right-size infrastructure to their IT workload, application and data requirements. If implemented correctly with the right security methods, such as intrusion detection and prevention systems (IDPS), public cloud can be as secure as private cloud.

5) Private Cloud

What is private cloud? Private cloud can be defined as a single-tenant computing service offered over the public internet, a dedicated internet connection or an internal network. With private cloud, the underlying infrastructure is accessible only to select users instead of the general public.

Private clouds provide many of the same benefits as public cloud, such as self-service, scalability, and elasticity, plus additional control and customization features. Private clouds also offer additional layers of security and privacy, ensuring that mission-critical applications and sensitive data are not accessible by third parties.

6) Hybrid Cloud

What is hybrid cloud? Hybrid cloud, hybrid IT, and hybrid infrastructure are often confused. A hybrid cloud is a computing environment that includes a combination of both public and private cloud infrastructure. Certain data and applications may be better suited for private cloud while others are fine operating in public cloud. Private and public clouds can work together to compute, process, and store diverse IT workloads in the cloud.

Hybrid clouds offer the best of both worlds: the flexibility, scalability, and cost efficiencies of public cloud paired with the reduced security threats and data exposure of private cloud.

7) Managed Cloud

What is managed cloud? Managed cloud, also referred to as cloud managed services or cloud management, allows customers to deploy cloud-based services such as IaaS, PaaS and SaaS without having the internal staff, resources and technical expertise to install, monitor, and manage them on an ongoing basis.

Cloud managed services are available from either a third-party managed service provider (MSP) or a cloud service provider (CSP) providing the underlying infrastructure solution. This includes management and monitoring of compute, storage, networks, and operating systems, as well as the complex tools and applications that run on top of cloud infrastructure. The benefits of managed cloud include access to a team of specialists experienced in public, private, and hybrid cloud architectures. Many MSPs offer cloud management for multi-cloud infrastructure.

8) Multi-Cloud

What is multi-cloud? Multi-cloud is a cloud computing deployment approach that is made up of more than one cloud service provider (CSP). This can include the use of a public cloud provider, a private cloud provider or a cloud provider offering both public and private cloud.

Multi-cloud architecture is used when customers want to leverage a certain cloud provider for certain services, such as one provider's public cloud servers and another provider's cloud-based object storage. Reasons for deploying a multi-cloud approach include improving security and performance while potentially lowering costs within an expanded portfolio environment.

9) Hybrid IT

What is hybrid IT? Hybrid IT, also known as hybrid infrastructure, is different than hybrid cloud. A hybrid cloud consists of both public and private cloud infrastructure, such as public cloud and private cloud server instances. Hybrid IT infrastructure, on the other hand, is the architecture of an information technology environment that uses both physical and virtual infrastructure.

The most common use case for hybrid IT is the use of physical servers located in a colocation data center alongside public or private servers in the cloud. Many customers use hybrid IT when there are issues with virtualizing deployments involving legacy workloads, software licensing issues, regulatory or compliance requirements, or data security.

10) Central Processing Unit (CPU)

What is a CPU? A Central Processing Unit (CPU), also referred to as the main processor, is an electronic chip comprised of circuitry within a computer that executes instructions from the programs it runs. Tasks include the basic arithmetic, logic, controlling, and input/output operations specified by the instructions in the program.

In cloud computing, the CPU is commonly referred to as the vCPU or virtual CPU. In this case, a vCPU (virtual CPU) represents a portion of a physical CPU that is assigned to a virtual machine (VM). A vCPU is also known as a virtual processor.

11) Cores

What are cores? In cloud computing, hypervisors control the physical CPU within the cloud server. CPUs are divided into what are known as CPU cores. Each core can technically support eight virtual processors (vCPUs). However, a vCPU does not represent a 1:1 allocation; it represents time on the physical CPU resource pool. A virtual processor is best described as the amount of processing time spent on the CPU. Some people mistakenly think that 1 vCPU is equal to 1 core, but a one-to-one relationship between vCPU and core does not exist.

12) Graphics Processing Unit (GPU)

What is a GPU? A Graphics Processing Unit (GPU), also known as a video card or graphics card, is a specially designed computer chip that performs graphics tasks quickly and frees up the CPU to perform other tasks. CPUs use a few cores focused on sequential serial processing. GPUs have thousands of smaller cores that are used for multi-tasking.

There are two types of GPUs. Integrated GPUs are located within the CPU and share memory with the CPU's processor. Discrete GPUs have their own card and video memory (VRAM), which means the CPU does not have to use RAM for graphics.

13) Random Access Memory (RAM)

What is RAM? Random Access Memory (RAM) is the short-term memory of a device. RAM is responsible for temporarily storing or remembering everything that runs on your computer. This includes short-term memory for software, documents, web browsers, files, and settings.

Data that is stored in RAM can be read from anywhere at almost the same speed. This is opposed to having your CPU search your hard drive every time you open a new browser window or application. Traditional storage, including HDD and SSD, is still very slow compared to RAM. An important thing to remember about RAM is that it is short-term rather than long-term. It is a volatile technology, which means that once it loses power, it no longer has access to the data. That is what traditional, long-term hard drives are for.

14) Input/Output Operations Per Second (IOPS)

What is IOPS? IOPS is the acronym for Input/Output Operations Per Second. It is a common performance measurement used in the benchmarking of computer storage devices such as hard disk drives (HDD), solid-state drives (SSD), and storage area networks (SAN). It's important to note that the IOPS numbers published by manufacturers do not guarantee the same real-world application performance.

15) Object Storage

What is object storage? Object storage, also referred to as object-based storage, is a type of storage strategy that manages and manipulates data storage as distinct units, called objects. These objects are kept in a single storehouse and are not nested in files inside other folders.

Object storage adds metadata to each file, eliminating the tiered file structure used in file storage. It places everything into a flat address space, called a storage pool. Object storage offers near-infinite scalability and is less costly than other storage types. It is important to note that object storage uses versioning. Newly written objects offer read-after-write consistency, while edited or deleted objects have eventual read consistency.

16) Block Storage

What is block storage? Block storage, also referred to as block-level storage, is a type of storage used within storage area networks (SANs) or cloud-based storage environments. Block storage is used for computing situations where fast, efficient, and reliable data transport is required.

Block storage works by breaking up data into blocks and then storing them as separate pieces. Each block has a unique identifier. The blocks are placed wherever it is most efficient, which means that blocks can be stored across different systems and configured or partitioned to work with different operating systems (OS).

17) Volumes

What are volumes? In data storage, a volume is a single accessible storage area with a single file system that typically resides on a single partition of a hard disk. Although different from a physical disk drive, a volume can be accessed with an OS's logical interface. It is important to note that volumes are different than partitions.

18) Snapshots

What are snapshots? Snapshots allow near-instantaneous copies of datasets to be taken in a live environment. This copy can be made available for recovery or copied to other cloud servers or storage for performance or test and development environments.

A major benefit of snapshots is speed. Snapshots provide immediacy of recovery and the ability to quickly restore data to an environment. It is important to note that there are costs associated with snapshots: they can consume cloud capacity in a single availability zone, multiple availability zones, and/or regions.

19) Replication

What is replication? Replication in cloud computing involves the sharing of information across redundant infrastructure, such as cloud servers or storage, to improve the reliability, availability, and fault tolerance of the IT workload - applications, data, databases, or systems.

20) High Availability Architecture

What is high availability architecture? High availability architecture involves the design and deployment of multiple components that work together to ensure uninterrupted service during a specific time period. It also encompasses the response time and availability for user requests.

The key to highly available architecture is to plan for failure. Systems should be tested on a regular basis under varying scenarios and conditions. This ensures that your IT workloads stay online and responsive even when components fail and during times of high stress on the system. High availability architectures include the following: hardware redundancy, software and application redundancy, data redundancy, and the elimination of single points of failure.
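The storage terms above are tied together by a simple relationship: throughput is roughly IOPS multiplied by I/O block size, which is one reason a published IOPS number alone does not predict real-world performance. A minimal sketch of that arithmetic (the function name and example figures are illustrative, not vendor benchmarks):

```python
def throughput_mbps(iops: float, block_size_kb: float) -> float:
    """Approximate throughput in MB/s for a given IOPS rate and block size in KB."""
    return iops * block_size_kb / 1024


# The same IOPS figure yields very different throughput at different block sizes.
print(round(throughput_mbps(10_000, 4), 1))    # 4 KB blocks   -> 39.1 MB/s
print(round(throughput_mbps(10_000, 128), 1))  # 128 KB blocks -> 1250.0 MB/s
```

This is also why storage benchmarks report IOPS at a stated block size and queue depth; without those, the number is not comparable across workloads.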
20 May 2020
Monetizing the Edge: Colocation On-Premise - Yes, You Can Have Your Data Center and Pay for It, Too
Often people are told everything is going colo or everything is going cloud. The truth is that most enterprises will use all the available on-site, offsite, and cloud-based assets as tools in their toolkit. There is no one-size-fits-all, and there is a myriad of reasons to keep compute local. Monetizing that compute and having it behave as colo on-premise is a newer, attractive model as we begin to build out the edge.

What Edge Data Centers Are

Edge data centers, and edge compute for that matter, remain the new hot topic in the data center industry. And like other buzzwords, the definition, use and expectations are lumped into a broad category that encompasses probably more than it should. The Linux Foundation created an Open Glossary of Edge Computing to try to clarify and define the various types of compute and edge. According to the guide, an Edge Data Center is one that is "capable of being deployed as close as possible to the edge of the network, in comparison to traditional centralized data centers. Capable of performing the same functions as centralized data centers although at a smaller scale individually. Because of the unique constraints created by highly-distributed physical locations, edge data centers often adopt autonomic operation, multi-tenancy, distributed and local resiliency, and open standards. Edge refers to the location at which these data centers are typically deployed. Their scale can be defined as micro, ranging from 50 to 150kW+ of capacity. Multiple edge data centers may interconnect to provide capacity enhancement, failure mitigation, and workload migration within the local area, operating as a virtual data center."

In short, edge data centers are defined, at least here, as small data centers closer to users - in this case, micro in terms of power. What is missing in this definition is the volume of these small data centers needed for things like smart cities. And while the power numbers seem insignificant at first glance, the aggregate power is daunting. That said, the distributed models allowed in edge compute lend themselves well to renewable and alternative energy, if not wholly, then as backup to the existing grid. But that doesn't mean that power consumption can be ignored at any portion of the edge. This is true for edge devices, edge computing and edge data centers, all of which are components of edge compute.

Edge devices can be stuck to poles, mounted on buildings, and placed in various other locations, but the devices themselves are hardened little machines designed with network connectivity (generally WiFi or cellular). As these are generally distributed, they are outside the scope of this doc, except to say there will be power consumed collectively by a massive number of smaller-usage edge devices, and they are likely going to report back to some other edge location for at least part of the data they serve.

Shifting to the edge data center as a building - not a device and not a shipping container - these smaller, perhaps modular, data centers will be needed literally all over. The idea is quite simple: move the compute closer to the user. Not all data needs to go back to a central repository. Machine to Machine (M2M), Vehicle to Infrastructure (V2I) and other close-proximity, low-latency communications don't need to travel halfway across the country to a larger data center. In fact, some of those communications don't need to go anywhere after processing. This is part of the reason that Edge Data Center Market revenue, according to Global Market Insights, is expected to grow to $16 billion US by 2025.

Power Consumption at the Edge

All paths do not lead to VA, or any other large data center city for that matter. The need to place compute closer to the end user is growing with the advancements in IoT, autonomous vehicle needs, enhanced security, WiFi, 5G, smart cities, and the like. The distribution of compute will create the need to manage power not singularly, but across a variety of sites. Some estimates say that power consumed by the edge will reach 102 gigawatts. Hyperscalers and builders are increasingly looking towards renewables, as noted below:

Google Shifting Workloads to Become Carbon Aware
Microsoft Pledges to be Carbon Negative by 2035
Amazon Commits to Renewable Energy 100% by 2035

While that's great for them, it certainly doesn't address the smart city, or others that don't have the pockets for large-scale neutrality. For most data centers, the driving consolidation benefit is cost. But when that data center is broken into 50 smaller data centers, some intelligence is going to be needed to monitor, orchestrate workloads based on energy costs and availability, and flatten out demand, removing the need to overprovision for peaks that occasionally occur and leave large amounts of power stranded.

Software-Defined Power (SDP) is the last pillar of the software-defined data center and brings AI into the mix. With software-defined power, appliances can be used to shave off the peak (peak shaving). For instance, assume that a data center is designed for a peak of 10kW per cabinet but normal operating needs are closer to 6kW per cabinet. Designing for the peak of 10kW means that roughly 4kW per cabinet is provisioned for each of the primary and secondary power feeds but remains wasted capacity - 8kW per cabinet total - for the vast majority of the time. Colocation providers struggle with stranded power, and the cost of that unused capacity is typically just passed through as power costs to the occupants even if it is rarely, if ever, used.

Another benefit of SDP is the ability to orchestrate IT loads to maximize power savings. Suppose that out of a 30-cabinet data center, there are several virtual machines that operate sporadically, creating inconsistent CPU loads. If all of the servers are provisioned for peak operations, compute capacity, like power, is also wasted. AI is the solution. By bridging the gap between IT and facilities, the compute becomes fluid (orchestrated). For instance, perhaps there are times when workloads can be consolidated to, say, 25 of those cabinets, allowing 5 full cabinets to be powered off as the loads are orchestrated to take advantage of maximum compute capacity. Node capping allows compute to remain below the thresholds set within the hardware. The entire data center is optimized based on what the AI learns. Let's face it: with multiple edge data centers, it simply isn't feasible to send someone around to each facility, or have someone dial into each facility, to handle the monitoring and orchestration of workloads. AI and automation provide this fluidity in a meaningful, cost-saving manner currently not available in larger colocation facilities.

SDP also allows orchestration of workloads to run over an alternate power source as a means of shaving peak grid costs. One could operate fully on a renewable cell when the costs of grid power are high, and shift back to grid power when needed. Taking advantage of alternate power sources provides significant savings over being stuck on the grid at all times. Sources could include battery backup units, generators, fuel cells, or in-rack/row power storage. The SDP options and opportunities for savings are vast and immediate. SDP enables DR of power based on AI. One full building in a smart city arrangement, for instance, could move its compute in the event of a failure while the equipment provides standby power during that orchestration.

Smart Cities and the Edge

Smart cities bring a myriad of technologies to the table. People generally think of IoT as an enabler of autonomous vehicles, but that is just the tip of the iceberg. Integrating the edge data center buildings together brings some of the efficiencies discussed above and adds additional options for security incorporated with SD-WAN, for instance. Meshing the disparate buildings together creates fault tolerance and opens a wealth of opportunities for applications and monetization. The buildings can become colo on-prem, if you will, encompassing the best of both worlds while supporting near-field communications data, onramps to cloud services, and locally hosted data.

The on-premise building can be converted to an OPEX model, enabling meshed cloud offerings to be used around the city, by the city, or by others. A fully meshed edge (facilities and compute) offers the ability to make the data center a service while moving compute closer to the consumer. Herein lies the advantage of monetizing the edge as a colocation on-premise data center. Edge compute companies can lease a server, a rack, or a full onsite adjacent building as needed. Edge-native applications are growing in number and capability. They are not designed to function in a centralized setting; autonomous vehicles are simply one example.

Monetization of the Edge

One underlying problem with the growth of edge computing is the sheer volume of devices and small data centers needed. Singularly thinking, it is extremely difficult to monetize something small and distributed. However, as we examine the devices and the traffic that will be needed by multiple entities, the sharing principle begins to make a lot more sense. With all the hype around 5G, there is a misconception among many that it will mean ubiquitous fast speeds for everyone. The reality is that there will still need to be backhaul connections. From the central office to the outer tower, this means that significantly more fiber will be in the ground and available for use. For the more remote needs, there may be other technologies in play, including WiFi, line-of-sight connectivity via spectrum, LoRaWAN (Long Range Wide Area Network) and other protocols/devices to carry the communications data streams. Again, bear in mind that these may remain remote, move to centralized storage, or any combination thereof.

One very underserved segment with respect to data centers is the companies, hospitals and cities that wish to have their compute equipment and data in house and/or on-premise but don't happen to be located near a large data center city. In light of recent events, we have seen some data center operators step up while others haven't performed as well. Intelligent hands on the other side of the country are all well and good until multiple organizations need those hands at the same time. There are also companies that just prefer on-site resources, and of course, there are those that have not fully depreciated capital assets and have no desire to retire them. That isn't to say that those assets can't be used for a newer, more efficient, on-premise data center that can then be monetized for the benefit of the owner. This is where modular data centers fit in. Purpose-built buildings engineered for stability, with optimized environmental and power services and the ability to monetize the sure onslaught of edge compute, just make sense.

The Difference Between Modular Edge Compute Facilities and Containers

Many have heard the term "ghetto colo," which refers to containers being placed at the foot of cell towers for backhaul and edge compute. While it seems attractive to just build and ship, this type of data center isn't ideal in all climates and environments. A modularly built data center building frees up a large portion of the existing building footprint for repurposing, and also allows the facility to be constructed to be category 4 hurricane, F5 tornado and zone 4 seismic rated and UL-752 level 4 ballistics rated, and it can also be outfitted as a SCIF/EMP installation. The aesthetics of a modular building are much better than a shipping container. Speed to market is rapid once the permits are set, and the buildings can generally be constructed in a short time, as they are pre-engineered pieces simply assembled on site. Once constructed, the building can be used by the core tenant (in a colo on-premise model) or sole owner, with extra capacity being available for edge, onramps, outposts and the like. This allows data to be near the user, not nearest the closest NFL city where a company may not want to divert or employ/contract resources.

The sheer volume of edge compute resources needed as we move forward with IoT, AI and other compute needs will help companies monetize these aesthetically pleasing buildings in a secure, cost-efficient manner. Some ideal applications for modular edge buildings include:

Rural healthcare
Smart agriculture
Smart city distributed buildings
Cloud onramps, outposts for hybrid environments
Rural/urban carrier locations
Data hubs in campus environments
Small teaching data centers at colleges with IT/DC curriculum
Pharmaceutical and highly sensitive environments where data control is paramount

The important takeaway here is that sometimes a small, energy-efficient building can provide the same functionality as a large colo, but on-premise in an OPEX model. Cost models make it attractive, and with the colo on-premise model, running the data center can still belong to someone other than core IT staff if that is the desire. Upgrades to aging capital equipment can be done in a smaller, more cost-effective footprint, leaving original, not-fully-depreciated assets in situ for use by other applications or repurposed to a better building layout/setup. Power is fully controlled, not absorbed as a pass-through cost. In short, you don't have to move to have what you want; sometimes all it takes is a little patch of land and vision. Just one more tool in the toolkit of data support.
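The stranded-power arithmetic in the peak-shaving example above (10kW peak vs. 6kW typical, across redundant primary and secondary feeds) can be sketched in a few lines. This is an illustrative model only; the function name and parameters are assumptions, not part of any SDP product:

```python
def stranded_kw(peak_kw: float, typical_kw: float, cabinets: int,
                redundant_feeds: int = 2) -> float:
    """Power provisioned for peak load but unused under typical load,
    counting each redundant feed (e.g. primary A and secondary B) separately."""
    return (peak_kw - typical_kw) * redundant_feeds * cabinets


# The article's example: 10 kW peak, 6 kW typical leaves 8 kW stranded per cabinet.
print(stranded_kw(10, 6, cabinets=1))   # 8.0
# Across a 30-cabinet site, that is 240 kW of capacity paid for but mostly idle.
print(stranded_kw(10, 6, cabinets=30))  # 240.0
```

Peak shaving aims to reclaim part of this margin by covering occasional peaks from batteries or other local sources instead of provisioning the grid feed for them.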
19 May 2020
Getting to Know: vXchnge, An Interview With George Pollock, Jr. SVP, CFO & Treasurer
I had the pleasure of interviewing George Pollock, Jr., Senior Vice President, Chief Financial Officer Treasurer of vXchnge. George brings more than 25 years of finance experience to vXchnge, where he handles all finance and human resources functions. He previously served as Senior Vice President, Chief Financial Officer Treasurer at Switch Data, where he oversaw six accretive acquisitions and the companys 2007 IPO. Before his tenure at Switch Data, he was Chief Financial Officer of the Merchant Banking Division of Communications Equity Associates, Inc. (CEA), where he was responsible for the financial and administrative functions for a $600 million portfolio of private equity funds.Who is vXchnge? Whats the background story of vXchnge?vXchnge is a colocation data center business that provides reliable infrastructure and connectivity options to organizations looking to move on from their outdated or inefficient on-prem IT solutions. We are a young company as far as the data center business goes, so that has allowed us to bring a fresh perspective to the colocation market. Most of our founding team has had years of experience working together and working at other data center providers. When we started vXchngein 2013, we wanted to give customers a better way to run their business within a colocation environment.Why vXchnge? What makes vXchnge different?From the very beginning, we have emphasized transparency and control. Historically, it is remarkable how little information most providers shared with customers about their deployments. How much power are they using at a given time? Where are their servers located right at this moment? Getting that information was such a hassle that a lot of companies were hesitant to consider colocation solutions. 
That is why we worked with our initial customers to develop the in\site platform, which gives them total visibility into their deployments and provides the control they need to make informed decisions that impact their technology stack. At vXchnge, we combine that capability with a deep commitment to good customer service. That is what sets our data centers apart.

How is vXchnge currently positioned in the industry?

We are growing our capabilities every day. Right now, we have a presence in many markets around the U.S. We are able to serve a wide range of industries, including network providers, financial services, retail, managed service providers, and so on. One of the things we frequently hear from our customers is that we have the flexibility to meet their specific needs. When someone comes to us for infrastructure services, we offer the flexibility to customize their environment. So instead of trying to fit a square peg into a round hole, we work with customers to assess their needs and then cut the hole to their unique dimensions.

Where does vXchnge see itself in 5 years? What does the industry look like?

The days of space, power, and cooling being differentiators are past. Those performance efficiencies are foundational colocation services you can get anywhere. What will set providers apart from the pack in the future is the quality of the tools they give customers to manage their business for efficiency, resiliency, risk mitigation, and compliance. That is why we are continuously working on adding capabilities to the in\site platform. Things like API integration, RFID tracking, and IT ticketing are all designed to put power and control directly into the hands of our customers. We want our data centers to feel indistinguishable from an on-premise data solution.
That commitment to transparency and access is where we see the industry headed, and we're trying to stay ahead of that curve.

Where is the demand coming from for colocation services? What are you seeing in the market? Who's buying?

The colocation industry is benefiting from a couple of factors. The in-house data center is not a good place to be: too expensive, aging infrastructure, lack of network options, limited risk mitigation services, and probably out of compliance. There are a lot of startups looking to get access to enterprise-level data center capabilities without incurring the massive capital expenses necessary to build their own data center. Colocation services are perfect for them because they can control their costs more easily while also gaining access to whatever cloud services they need now or may need in the future. At the same time, many existing organizations are getting tired of maintaining their aging on-premise infrastructure. A lot of companies that made those investments around the turn of the century are looking at what it will cost to upgrade and deciding that colocation provides much greater flexibility in terms of cost control and capabilities. They do not have to worry about right-sizing their infrastructure for the next few years.

In which markets are you seeing the most demand?

The Austin colocation market has experienced good demand. Disaster recovery planning has become a priority since Hurricane Harvey really exposed how vulnerable the Houston area is to flooding. While the data centers in the region held up well during the storm, many were inaccessible because the surrounding roads were flooded. Companies had to reassess their disaster recovery plans and look at Austin as a good location for backup and redundancy. The greater Austin metro area is also one of the fastest-growing cities in the country, and there is innovation happening there with the University of Texas and the tech companies moving in.
Overall, it is an exciting market that presents tremendous opportunities for us.

What about the cloud? How is vXchnge positioned?

The flexibility and cost savings are just too beneficial for most organizations to pass up moving certain apps to the cloud. Having said that, companies have become a lot smarter about how they utilize cloud services. They know what data and applications can be safely located in the cloud and what needs to be kept on their own private networks. That is where a colocation provider like vXchnge can provide value. We make it possible for them to build hybrid IT environments that let them manage multiple cloud services while still keeping their mission-critical assets secure within a private network.

What about connectivity? How is vXchnge positioned?

Building those hybrid IT environments is all about performance. You need to be able to offer direct on-ramps to the cloud that help reduce latency and avoid the security risks that come from exposing sensitive data to public internet connections. From our perspective, colocation is not just about where you put your servers; it is about what you can do with them once they are embedded within a connectivity-rich environment. Having access to multiple connectivity providers and cloud services helps you control costs and spin up new capabilities quickly to meet demand. We have worked hard to build relationships with those providers so that our customers have all the connectivity resources they need to respond to existing and future challenges.

What do you think about data center automation? The future of data centers?

One of the things we have learned from the COVID-19 outbreak is that existing automation trends are going to accelerate. The fact that our customers could still manage their assets remotely using in\site was a huge benefit for us when the pandemic hit. They did not have to worry about sending people to the data center and potentially exposing them to harm.
Our data centers are still staffed 24x7x365, so we put measures in place to address the health risks for our employees and customers while still maintaining the operating integrity and performance of the data centers. I think it is important for the colocation industry to retain that human element in order to provide great customer service. Automation is necessary, but there is no substitute for having a technician on-site who can physically inspect your assets whenever you need them to.

Future technologies? What about Artificial Intelligence (AI), IoT, and 5G? How is vXchnge positioned?

We monitor and research the latest trends. When we started vXchnge, we knew edge computing was going to be a consideration for our customers once they started rolling out IoT devices, providing cloud services, and delivering streaming media content. That is why we established a presence in markets that may not have seemed like high priorities at the time. Now we have the coverage and flexibility to help customers keep latency low and improve the reliability of their digital services. With respect to artificial intelligence and machine learning, we have been focused on connecting customers to the cloud computing resources they need to leverage those technologies. Most companies do not have the processing capabilities to handle big data analysis, but we can connect their systems to cloud providers who do. Again, it all comes back to flexibility. We are always on the lookout for ways to help our customers push the envelope and access the latest and greatest technologies without having to rebuild their IT stack from the ground up.

Any other services or solutions to highlight from vXchnge?

Adding new features to in\site is an ongoing effort. Last year, we rolled out the mobile app that allows customers to access in\site from anywhere to manage access, monitor power and bandwidth, and manage support tickets.
We added the API integration to make it easier for customers to leverage the massive amounts of data that in\site generates. We are also always working on reliability. Uptime is critically important to every business, and we take our 100% SLA commitment very seriously. vX\defend, our DDoS mitigation service, has been a big part of this effort, and we are working to expand our overall risk mitigation services to accommodate a greater range of business continuity and disaster recovery plans. Overall, it is an exciting time to be a data center provider.
18 May 2020
How to Perform a Successful Cloud Migration in 8 Steps
We can all agree that the cloud has been an unstoppable force, infiltrating nearly all aspects of IT, from cloud servers and storage to software, voice, and collaboration services. In the past five years, everything in the technology stack has been heading towards being cloud-based or software-defined. If it's not automated, subscription-based, or accessible remotely, it's probably on its way out.

Many businesses today find themselves rushing into digital transformation and cloud migration without fully understanding or planning for it. It's an important factor in staying relevant, innovative, and competitive in the market. Unfortunately, the cloud migration mistakes made today will have significant, long-lasting consequences in the future.

In this article, we will look at the positive and negative aspects of cloud migration and what to expect. We will also take a look at eight critical steps in planning out your migration to the cloud.

Does Migrating to the Cloud Make Sense?

Moving workloads to the cloud is not only a smart option but also pivotal to IT strategies and digital transformation initiatives. In 2006, Amazon Web Services (AWS) changed everything with the first retail option for public cloud servers. Before AWS, enterprise physical servers could take weeks or even months to procure and deploy in a colocation data center. Now, it takes only a few moments to spin up or down a cloud server anywhere on the globe.

The powerful combination of public, private, and hybrid cloud infrastructure can meet the needs of any workload, from hosting mission-critical enterprise applications and databases to High-Performance Computing (HPC) for scientific research. The cloud offers what its counterparts, physical servers and colocation, cannot: on-demand, flexible, scalable, highly available, and redundant infrastructure on a pay-as-you-go basis.

So, should you do it just because everyone else is?
There are still more than a handful of IT professionals who aren't sold on the cloud. They're skeptical of the cloud from a cost, security, compliance, and regulatory perspective, not to mention cloud compatibility and licensing issues. Simply put, some legacy applications run better on dedicated, physical infrastructure. Despite numerous use cases for the cloud, we will most likely always have hybrid infrastructure: the use of both physical and virtual servers and storage devices. There are also cases where the cloud does not make sense economically. In some instances, cloud servers and storage can be significantly more expensive than physical infrastructure hosted on-premise or in a colocation facility. Those who argue against migrating to the cloud also point to its role in promoting data sprawl, causing shadow IT, and creating network bottlenecks and bandwidth constraints.

Challenges Associated With Cloud Migration

Despite the many benefits of migrating to the cloud, there are several important considerations to ponder before deciding. Timing and preparation are everything.

Application and Data Disruptions

Think about minimizing disruptions first. It is important to consider all of the potential operational and user experience disruptions associated with migrating to the cloud. Given that downtime is a major disruptor, minimizing downtime of applications and data accessibility is critical. When users cannot properly access apps and data, your organization can incur financial losses and damaging hits to its reputation with users.

Security and Compliance Concerns

Another concern commonly associated with the cloud is security. The cloud is often perceived as being less secure and more susceptible to hackers and data breaches. Is it true? Most Cloud Service Providers (CSPs), like AWS, Microsoft, and Google Cloud, offer a shared responsibility model for security.
The CSP is responsible for the security of the cloud, while the customer is responsible for security in the cloud. The truth is that security holes and data breaches can occur on any infrastructure, whether physical or virtual. It is really up to the organization and department heads to make security a priority. With the cloud, nearly anyone with credentials can deploy an insecure server instance or storage bucket and expose critical business data, intellectual property, and customer information. Can you trust everyone in your organization to lock it down with the appropriate security roles and policies?

Budget vs. Cloud Cost Expectations

The cloud is great at offering on-demand, highly scalable, and configurable infrastructure. That's what it is known for. However, this can also be a bad thing if mismanaged. There are major differences in the types of cloud services and pricing tiers offered by CSPs. For many organizations, migrating to the cloud without properly assessing the underlying workloads, applications, and data can lead to significantly higher costs than physical infrastructure. A multi-tenant public cloud server is significantly less expensive than a dedicated, single-tenant cloud server. Paying on-demand by the hour is more expensive than reserving instances over a one-year, three-year, or five-year term. Using multiple cloud availability zones costs significantly more than a single availability zone. Cloud storage types and tiers, such as standard, infrequently accessed, archiving, and deep archiving, can have huge variances in price. Organizations see costs skyrocket when their workloads are not matched with the appropriate cloud infrastructure type, tier, and cost structure.

Internal Resources and Skills Gap

Many of the challenges that organizations face when migrating and managing their cloud infrastructure are directly the result of an internal skills gap and a lack of highly experienced, technical resources.
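The on-demand versus reserved pricing gap mentioned earlier is easy to quantify with a quick sketch. The hourly rates below are hypothetical placeholders, not any provider's actual prices; the point is the arithmetic, which you can rerun with real quotes:

```python
# Compare a hypothetical on-demand rate with a reserved (committed) rate
# for a single cloud server. Rates are illustrative placeholders only.

hours_per_month = 730          # roughly 24 * 365 / 12

on_demand_rate = 0.20          # $/hour, pay-as-you-go (hypothetical)
reserved_rate = 0.12           # $/hour effective with a term commitment (hypothetical)

on_demand_monthly = on_demand_rate * hours_per_month
reserved_monthly = reserved_rate * hours_per_month
savings_pct = 100 * (1 - reserved_rate / on_demand_rate)

print(f"On-demand: ${on_demand_monthly:,.2f}/month")
print(f"Reserved:  ${reserved_monthly:,.2f}/month ({savings_pct:.0f}% lower)")
```

A commitment discount of this size compounds quickly across dozens of servers, which is why matching each workload to the right pricing tier matters so much.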
Does your organization have sufficient resources and personnel capable of procuring and deploying public, private, hybrid, and even multi-cloud environments? The nature of the cloud lends itself to a self-service model. As a result, many CSPs do not offer managed services for their cloud infrastructure offerings.

Eight Steps for a Successful Cloud Migration

Finally, we are at the point in this article where we can highlight eight steps to ensure a successful and non-disruptive cloud migration. Make sure to contact me if you have any questions about cloud migration or would like help with a cloud readiness assessment.

1. Develop a Cloud Migration Plan

Do not just jump into the cloud and hope that it all gets sorted out along the way. Cloud migrations require planning, input, and strategy. Do you know which applications and data are cloud-ready? Do you want to start with the least mission-critical workloads? Do you have a new product or technology requirement that is forcing a migration to the cloud? Start with the business motives and use cases for your organization. Define this first. Build your requirements doc. Once you understand your workloads and their requirements, it is highly beneficial to create a cloud migration plan that breaks down the migration into different workload priorities and phases. You will also want to research potential CSPs that can meet your requirements from a service offering, cost, management level, location, and compliance and regulatory perspective. Will you be going with one of the big three CSPs or a smaller cloud provider? How do those providers interface with your existing technology stack: internet connectivity, network, disaster recovery, business continuity, and so on?

2. Create a Cloud Governance Framework

This is especially critical, and I cannot emphasize it enough. Security and compliance are important to all organizations regardless of the vertical or industry they are in.
This is only amplified when your organization is trusted with personally identifiable information (PII) such as names, emails, phone numbers, credit card information, social security numbers, tax information, and healthcare records. The creation and implementation of a cloud governance framework will help guide your entire organization with clear, policy-based principles to support safe cloud adoption. Input and feedback from teams spanning IT, DevOps, SysOps, and SecOps will be critical in constructing a solid cloud governance framework and plan. Cloud governance should be an extension of your IT governance. It takes into careful consideration the inherent dangers and threats posed by both internal and external resources. It defines the who, what, where, and why of cloud service deployment. It includes a wealth of information such as structures, roles, responsibilities, policies, plans, objectives, principles, measures, and a decision framework.

3. Define Network Bandwidth Requirements

Will the cloud slow the performance of your existing network? Will it create bottlenecks? Yes. It is the cloud, after all, accessible from anywhere with an internet connection and the right authorizations. Many organizations that we talk to experience challenges by not planning ahead for the increased strain on the network from cloud adoption. Imagine that your entire office creates files and stores them locally. Backups of those files occur once or twice a day, most likely during off-peak times. Bandwidth is primarily used for email, voice, conferencing, accessing applications, and the internet. Now imagine that your entire office creates files and stores them in the cloud. The files are being saved, synced, and uploaded to the cloud. They are also being downloaded from the cloud back to local machines. Your organization is still also using bandwidth for email, voice, conferencing, accessing applications, and the internet.
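To put rough numbers on that scenario, here is a back-of-the-envelope sketch. Every figure in it (headcount, file counts, file sizes, the sync factor) is a hypothetical illustration value, not a benchmark:

```python
# Back-of-the-envelope estimate of the extra WAN load from cloud file sync.
# All inputs are hypothetical illustration values, not measurements.

users = 100                   # office headcount
files_per_user_per_day = 40   # files created or modified per person per day
avg_file_mb = 5               # average file size, in megabytes
sync_factor = 2               # each file is uploaded once and re-downloaded once

work_seconds = 8 * 3600       # traffic spread over an 8-hour workday

total_mb_per_day = users * files_per_user_per_day * avg_file_mb * sync_factor
avg_mbps = total_mb_per_day * 8 / work_seconds   # megabytes -> megabits per second

print(f"Synced per day:     {total_mb_per_day / 1000:.1f} GB")
print(f"Average added load: {avg_mbps:.1f} Mbps")
```

Even this modest office adds double-digit megabits per second of average load, and sync traffic is bursty, so peak demand will be several times the average; that is the strain worth estimating before you migrate.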
Network performance will suffer if not addressed. Luckily, there are options. Nearly all large CSPs offer dedicated internet connections to their cloud infrastructure. Cloud direct connects from AWS, Microsoft, Google Cloud, and IBM establish a connection directly between your office, network, or colocation data center and the provider's infrastructure, bypassing the public internet.

4. Create Organizational User Trainings

Because most organizations have a skills gap and lack technical cloud experience, it makes sense to train staff and personnel on the cloud as soon as possible. Create a series of cloud trainings, starting with the cloud governance framework and working towards role-specific trainings on the cloud services users will interact with most often. Cross-training is important for understanding the different aspects of a cloud environment and the different user roles and responsibilities associated with each.

5. Determine Software Licensing Portability

Software licensing for the cloud is reminiscent of licensing music or audio files before streaming became the norm. Major roadblocks can arise when an organization does not evaluate or plan for software licensing issues. Do your existing licenses for on-premise software extend into the cloud? Some software vendors offer Bring Your Own Software and License (BYOSL) programs, which grant your organization express permission to migrate its applications to the cloud. Other vendors specify use rights per the number of concurrent users. It gets a little sticky when installing certain software licenses in a multi-tenant, public cloud environment. Avoid this by documenting all enterprise applications before your cloud migration. Find out whether the licenses are portable to the cloud. If uncertain, talk to the vendor to find out whether existing licenses you have purchased can be updated so the application can be used from the cloud.
Software Asset Management (SAM) tools can prove useful in reducing the risks, costs, and complexities associated with extending license management to the cloud.

6. Leverage Automation and Migration Tools

Downtime and service disruption are not things you want to have to explain to your boss. Luckily, all of the major CSPs, such as AWS, Microsoft, and Google Cloud, offer automation and migration tools and templates to assist with your cloud migration. Artificial Intelligence (AI) and Machine Learning (ML) are being used in many cloud services to automate, right-size, and deploy workloads to the cloud. There are hundreds, maybe thousands, of pre-defined server images, templates, and security policies that you can use and customize to create your cloud infrastructure and virtual private cloud environment.

7. Monitor Cloud Usage

Are your cloud costs skyrocketing out of control? Budgets out of whack? This should not be the case, but unfortunately it is for many organizations. All major CSPs offer cloud budgeting and monitoring services. One of the first things you should do when migrating to the cloud is to set up budgets and alarms based on the cost or usage of cloud infrastructure. We all know that the cloud is scalable, and this is usually a good thing. However, what if a malicious attack auto-scales your infrastructure? That could be very bad for your budget. Setting up usage notifications, access control lists, firewalls, and gateways is critical for your public, private, or hybrid cloud environment. Another benefit of the cloud is performance. It is also highly configurable, and you can leverage numerous cloud services to meet performance demands or cost targets. Leverage your existing usage data to determine a benchmark for the cloud, and adjust accordingly.

8. Managed Services Support

Will your cloud environment be managed or unmanaged? Will certain services be managed while others are unmanaged?
Will you purchase managed services for your cloud from a third party or rely on the support provided by your CSP? For most organizations, this is determined by internal resources and cloud expertise, as well as budget. One major complaint from many organizations during cloud migration and throughout the cloud adoption phase centers on the lack of support provided by cloud providers. Therefore, it is critical to determine your managed service requirements.

Conclusion: Cloud Migrations Require Planning

If there is just one takeaway from this article, it is that planning for your cloud migration is critical to your success. Don't rush it, and be conscious that the cloud may not be the answer for every application or workload. We've seen more than a handful of organizations enthusiastically go all-in on the cloud, only to rip it all out and go back to physical servers, storage devices, and colocation. Why? The main culprit has been out-of-control cloud costs that were unexpected and unmanageable.

Need help with your cloud readiness assessment or cloud migration? Contact me to learn more about how we can help build an inventory of your servers, storage devices, databases, software licenses, internet, network, colocation facilities, and other critical elements of your IT infrastructure.