Often people are told "everything is going colo" or "everything is going cloud." The truth is that most enterprises will use all the available on-site, offsite, and cloud-based assets as tools in their toolkit. There is no one size fits all. There are myriad reasons to keep compute local. Monetizing that local compute and having it behave as "colo on-premise" is a newer, attractive model as we begin to build out the edge.
Edge Data Centers Are…
Edge data centers, and edge compute for that matter, remain the new “hot topic” in the data center industry. And like other buzzwords, the definition, use and expectations are lumped into a broad category that encompasses probably more than it should. The Linux Foundation created an Open Glossary of Edge Computing to try to clarify and define the various types of compute and edge. According to the guide, an Edge Data Center is one that “is capable of being deployed as close as possible to the edge of the network, in comparison to traditional centralized data centers. Capable of performing the same functions as centralized data centers although at a smaller scale individually. Because of the unique constraints created by highly-distributed physical locations, edge data centers often adopt autonomic operation, multi-tenancy, distributed and local resiliency, and open standards. Edge refers to the location at which these data centers are typically deployed. Their scale can be defined as micro, ranging from 50 to 150kW+ of capacity. Multiple edge data centers may interconnect to provide capacity enhancement, failure mitigation, and workload migration within the local area, operating as a virtual data center.”
In short, they are defined, at least here, as small data centers closer to users, and in this definition, micro in terms of power. What is missing from this definition is the sheer volume of these small data centers needed for things like smart cities. And while the power numbers seem insignificant at first glance, the aggregate power is daunting. That said, the distributed models inherent in edge compute lend themselves well to renewable and alternative energy, if not as the sole source, then as backup to the existing grid.
But that doesn’t mean that power consumption can be ignored at any portion of the edge. This is true for edge devices, edge computing and edge data centers, all of which are components of edge compute. Edge devices can be mounted on poles, attached to buildings, and placed in various other locations, but the devices themselves are hardened little machines designed with network connectivity (generally WiFi or cellular). As these are widely distributed, they are outside the scope of this doc, except to say that power will be consumed collectively by a massive number of small edge devices, and they will likely report back to some other edge location for at least part of the data they serve.
Shifting to the Edge Data Center (a building, not a device and not a shipping container), these smaller, perhaps modular, data centers will be needed literally all over. The idea is quite simple: move the compute closer to the user. Not all data needs to go back to a central repository. Machine to Machine (M2M), Vehicle to Infrastructure (V2I) and other close-proximity, low-latency communications don’t need to travel halfway across the country to a larger data center. In fact, some of those communications don’t need to go anywhere after processing. This is part of the reason that Edge Data Center Market revenue, according to Global Market Insights, is expected to grow to $16 billion US by 2025.
Power Consumption at the Edge
All paths do not lead to VA, or any other large data center city for that matter. The need to place compute closer to the end user is growing with advancements in IoT, autonomous vehicles, enhanced security, WiFi, 5G, smart cities, and the like. The distribution of compute will create the need to manage power not singularly, but across a variety of sites. Some estimates say that power consumed by the edge will reach 102 gigawatts. Hyperscalers and builders are increasingly looking toward renewables.
While that’s great for them, it certainly doesn’t address the smart city, or others that don’t have the pockets for large-scale neutrality. For most data centers, the driving consolidation benefit is cost. But when that data center is broken into 50 smaller data centers, some intelligence is needed to monitor the sites, orchestrate workloads based on energy costs and availability, and flatten out demand, removing the need to overprovision for occasional peaks that leave large amounts of power stranded.
Software-Defined Power (SDP) is the last pillar of the software-defined data center and brings AI into the mix. With software-defined power, appliances can be used to shave off the peak (peak shaving). For instance, assume that a data center is designed for a peak of 10kW per cabinet but normal operating needs are closer to 6kW per cabinet. Designing for the 10kW peak means that roughly 4kW per cabinet is provisioned on each of the primary and secondary feeds, leaving 8kW per cabinet of wasted capacity the vast majority of the time. Colocation providers struggle with stranded power, and the cost of that unused capacity is typically just passed through as power costs to the occupants, even if it is rarely, if ever, used.
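The stranded-capacity arithmetic above can be sketched in a few lines. The figures (10kW design peak, 6kW typical draw, redundant primary/secondary feeds) come from the example; the variable names are purely illustrative:

```python
# Stranded-power sketch using the example figures from the text.
design_peak_kw = 10.0   # per-cabinet provisioned capacity on each feed
typical_kw = 6.0        # per-cabinet normal operating load
feeds = 2               # primary + secondary (2N) power paths

# Headroom provisioned but almost never used, per feed and in total.
headroom_per_feed_kw = design_peak_kw - typical_kw   # 4 kW per feed
stranded_total_kw = headroom_per_feed_kw * feeds     # 8 kW per cabinet

print(f"Stranded capacity: {stranded_total_kw:.0f} kW per cabinet")
```

Scale that 8kW across every cabinet in a facility and it is easy to see why stranded power dominates colocation cost discussions.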
Another benefit of SDP is the ability to orchestrate IT loads to maximize power savings. Suppose that in a 30-cabinet data center, several virtual machines operate sporadically, creating inconsistent CPU loads. If all of the servers are provisioned for peak operations, compute capacity, like power, is also wasted. AI is the solution. By bridging the gap between IT and facilities, the compute becomes fluid (orchestrated). For instance, perhaps there are times when workloads can be consolidated onto, say, 25 of those cabinets, allowing 5 full cabinets to be powered off as the loads are orchestrated to take advantage of maximum compute capacity. Node capping keeps compute below the thresholds set within the hardware. The entire data center is optimized based on what the AI has learned. Let’s face it, with multiple edge data centers, it simply isn’t feasible to send someone around to each facility, or have someone dial into each facility, to handle the monitoring and orchestration of workloads. AI and automation provide this fluidity in a meaningful, cost-saving manner currently not available in larger colocation facilities.
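As a rough illustration of the consolidation idea, a simple first-fit packing of sporadic loads shows how many cabinets could be powered off. The load values, the per-cabinet capacity, and the first-fit heuristic itself are hypothetical stand-ins for what a real SDP/AI engine would do:

```python
# Hypothetical consolidation sketch: pack VM loads (in watts) into as few
# cabinets as possible, first-fit decreasing, and power off the empties.
def consolidate(loads_w, cabinet_capacity_w, total_cabinets):
    cabinets = []  # running load of each cabinet left powered on
    for load in sorted(loads_w, reverse=True):
        for i, used in enumerate(cabinets):
            if used + load <= cabinet_capacity_w:
                cabinets[i] += load        # fits in an existing cabinet
                break
        else:
            cabinets.append(load)          # open another cabinet
    return total_cabinets - len(cabinets)  # cabinets that can be powered off

# Illustrative sporadic VM loads in a 30-cabinet facility, 6kW cabinets.
vm_loads = [1200, 800, 2500, 400, 3100, 1000, 600, 2000]
print(consolidate(vm_loads, cabinet_capacity_w=6000, total_cabinets=30))
```

In practice the orchestration engine would also respect node caps, redundancy requirements, and migration costs; the point is simply that idle capacity becomes something software can reclaim.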
SDP also allows orchestration of workloads to run on an alternate power source as a means of shaving peak grid costs. One could operate fully on a renewable cell when the cost of grid power is high, and shift back to grid power when needed. Taking advantage of alternate power sources provides significant savings over being stuck on the grid at all times. Sources could include battery backup units, generators, fuel cells, or in-rack/in-row power storage. The SDP options and opportunities for savings are vast and immediate. SDP also enables AI-driven disaster recovery (DR) of power. One full building in a smart city arrangement, for instance, could move its compute in the event of a failure while the equipment provides standby power during that orchestration.
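A minimal sketch of that price-driven switching, assuming a hypothetical per-kWh cost for the alternate source and simulated hourly grid prices (none of these numbers come from the text):

```python
# Price-driven source switching sketch: run on the alternate source
# (e.g. a fuel cell) whenever grid power costs more per kWh.
GRID = "grid"
ALTERNATE = "fuel_cell"

def choose_source(grid_price_kwh, alternate_cost_kwh, alternate_available):
    if alternate_available and grid_price_kwh > alternate_cost_kwh:
        return ALTERNATE   # shave the peak: grid is the pricier option
    return GRID            # otherwise default back to grid power

# Simulated grid prices across part of a day (USD/kWh, illustrative),
# with a hypothetical 0.15 USD/kWh cost to run the fuel cell.
hourly_prices = [0.08, 0.09, 0.22, 0.25, 0.10, 0.07]
schedule = [choose_source(p, 0.15, True) for p in hourly_prices]
print(schedule)
```

A real SDP controller would fold in state of charge, fuel levels, and forecasted demand, but the core decision is this comparison, made continuously and automatically.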
Smart Cities and the Edge
Smart cities bring a myriad of technologies to the table. People generally think of IoT as an enabler of autonomous vehicles, but that is just the tip of the iceberg. Integrating the edge data center buildings together brings some of the efficiencies discussed above, and adds options for security incorporated with SD-WAN, for instance. Meshing the disparate buildings together creates fault tolerance and opens a wealth of opportunities for applications and monetization. The buildings can become “colo on-prem,” if you will, encompassing the best of both worlds while supporting near-field communications data, onramps to cloud services, and locally hosted data.
The on-premise building can be converted to an OPEX model, enabling meshed cloud offerings to be used around the city, by the city, or by others. A fully meshed edge (facilities and compute) offers the ability to make the data center a service while moving compute closer to the consumer. Herein lies the advantage of monetizing the edge as a colocation on-premise data center. Edge compute companies can lease a server, a rack, or a full adjacent onsite building as needed. Edge-native applications, which are not designed to function in a centralized setting, are growing in number and capability. Autonomous vehicles are simply one example.
Monetization of the Edge
One underlying problem with the growth of edge computing is the sheer number of devices and small data centers needed. Viewed singularly, it is extremely difficult to monetize something small and distributed. However, as we examine the devices and the traffic that will be needed by multiple entities, the “share” principle begins to make a lot more sense. With all the hype around 5G, there is a misconception among many that it will mean ubiquitous fast speeds for everyone. The reality is that there will still need to be backhaul connections. From the central office to the outer tower, this means that significantly more fiber will be in the ground and available for use. For more remote needs, there may be other technologies in play, including WiFi, line-of-sight connectivity via spectrum, LoRaWAN (Long Range Wide Area Network) and other protocols/devices to carry the communications data streams. Again, bear in mind that this data may remain remote, move to centralized storage, or any combination thereof.
One very underserved segment with respect to data centers is the companies, hospitals and cities that wish to keep their compute equipment and data in house and/or on-premise but don't happen to be located near a large data center city. In light of recent events, we have seen some data center operators step up while others haven’t performed as well. Intelligent hands on the other side of the country are all well and good until multiple organizations need those hands at the same time. There are also companies that simply prefer on-site resources, and of course, there are those that have not fully depreciated capital assets and have no desire to retire them. That isn’t to say those assets can’t be used for a newer, more efficient, on-premise data center that can then be monetized for the benefit of the owner. This is where modular data centers fit in. Purpose-built buildings engineered for stability, with optimized environmental and power services and the ability to monetize the sure onslaught of edge compute, just make sense.
The Difference Between Modular Edge Compute Facilities and Containers
Many have heard the term “ghetto colo,” which refers to containers placed at the foot of cell towers for backhaul and edge compute. While it seems attractive to just build and ship, this type of data center isn’t ideal in all climates and environments. A modularly built data center building frees up a large portion of the existing building footprint for repurposing. It also allows the facility to be constructed to be rated for category 4 hurricanes, F5 tornadoes and seismic zone 4 earthquakes, with UL-752 level 4 ballistics protection, and it can also be outfitted as a SCIF/EMP installation. The esthetics of a modular building are much better than those of a shipping container. Speed to market is rapid once the permits are set, and the buildings can generally be constructed in a short time since they are pre-engineered pieces simply assembled on site. Once constructed, the building can be used by the core tenant (in a colo on-premise model) or a sole owner, with extra capacity available for edge, onramps, outposts and the like. This allows data to be near the user, not in the closest NFL city where a company may not want to divert or employ/contract resources.
The sheer volume of edge compute resources needed as we move forward with IoT, AI and other compute needs will help companies monetize these esthetically pleasing buildings in a secure, cost-efficient manner. Some ideal applications for modular edge buildings include:
- Rural healthcare
- Smart Agriculture
- Smart city distributed buildings
- Cloud onramps, outposts for hybrid environments
- Rural/Urban carrier locations
- Data Hubs in campus environments
- Small teaching data centers at colleges with IT/DC curriculum
- Pharmaceutical and highly sensitive environments where data control is paramount
The important takeaway here is that sometimes a small, energy-efficient building can provide the same functionality as a large colo, but on-premise in an OPEX model. The cost models make it attractive, and with the colo on-premise model, “running the data center” can still belong to someone other than core IT staff if that is the desire. Upgrades to aging capital equipment can be done in a smaller, more cost-effective footprint, leaving original, not fully depreciated assets in situ for use by other applications, or repurposed to a better building layout/setup. Power is fully controlled, not absorbed as a pass-through cost. In short, you don’t have to move to have what you want; sometimes all it takes is a little patch of land and vision. Just one more tool in the toolkit of data support.