The sixth installment in Bob Hult’s Technology Trends series examines the rise of data centers and the shift to cloud and edge computing.
Over the past decade, data center services have shifted from predominantly web-centric to cloud-centric. Today they are shifting once again, this time from the era of cloud computing to the intelligent era. The resulting flood of raw data must be filtered and automatically reorganized before it can be mined for useful insights, and this is where artificial intelligence (AI) comes in. Some 97% of large enterprises around the world intend to use AI by 2025, according to Huawei’s Global Industry Vision (GIV).
Cloudlets, or mini-clouds, are starting to roll out closer to the sources of data in an effort to reduce latency and improve overall processing performance. But as this approach gains steam, it is also creating new challenges involving data distribution, storage, and security.
The Asia Direct Cable (ADC) Consortium has chosen NEC to build its new high-performance submarine cable. The finished cable will feature multiple pairs of high-capacity optical fibers designed to carry more than 140 Tbps of traffic across the East and Southeast Asian regions. That capacity will allow the new submarine cable to support bandwidth-intensive applications driven by advances in 5G, the cloud, IoT, and AI.
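To put the 140 Tbps figure in perspective, a rough back-of-envelope calculation (assuming full utilization, and ignoring protocol overhead) shows how many full-rate Ethernet streams such a cable could carry:

```python
# Back-of-envelope only: how many full-rate Ethernet streams would fit in
# the cable's stated 140 Tbps aggregate capacity, assuming full utilization?
cable_gbps = 140 * 1000            # 140 Tbps expressed in Gbps
streams_400gbe = cable_gbps // 400  # concurrent 400GbE streams
streams_100gbe = cable_gbps // 100  # concurrent 100GbE streams
print(streams_400gbe, streams_100gbe)  # 350 1400
```

Even at the 400GbE rates discussed later in this article, that is hundreds of concurrent full-rate links across a single cable.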
The move to the edge cloud is producing a huge proliferation of local data centers. By moving processing power and services closer to the edge of the network, a wealth of new cloud-based applications that depend on low latency and highly reliable connections becomes possible. Like their centralized counterparts, edge data centers need high-capacity links akin to long-haul transport, but the networks they’re building are fundamentally different. Instead of connecting a few distant central data centers, cloud providers are connecting dozens of distributed data centers within a single city in order to meet the fast response times and low latencies required of new edge computing services.
At the risk of giving away the conclusion too early, there’s a clear place — not to mention, a need — for both application and infrastructure deployments in the cloud and on the edge. Centralizing data and processing in the cloud can be efficient and effective, but where latency can’t be tolerated, some amount of processing needs to be carried out at the edge. In fact, it’s often easier and more efficient to bring the processing to the data than to bring the data to the processing engine.
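The idea of bringing the processing to the data can be illustrated with a minimal sketch (the function name, data, and threshold here are hypothetical, not from any particular platform): an edge node reduces a batch of raw sensor readings to a compact summary locally, so that only the summary crosses the wide-area link to the central cloud.

```python
# Minimal sketch of edge-side pre-processing (hypothetical names/thresholds):
# rather than shipping every raw reading to the cloud, the edge node
# aggregates locally and forwards only a small summary.

def summarize_at_edge(readings, alert_threshold=90.0):
    """Reduce a batch of raw sensor readings to a small summary dict."""
    if not readings:
        return {"count": 0, "mean": None, "alerts": 0}
    alerts = sum(1 for r in readings if r > alert_threshold)
    return {
        "count": len(readings),
        "mean": sum(readings) / len(readings),
        "alerts": alerts,  # only anomalies need low-latency local handling
    }

# One summary dict crosses the WAN instead of thousands of raw samples.
summary = summarize_at_edge([71.2, 69.8, 93.5, 70.1])
print(summary)
```

The latency-sensitive decision (reacting to an alert) can be made at the edge, while the cloud receives only the aggregate it needs for long-term analysis.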
The high demand for faster fiber optic (FO) data transmission in hyperscale data centers has triggered a whole range of developments. Manufacturer consortia, known as multi-source agreements (MSAs), are working under high pressure on new specifications focused on the roadmap from 400 to 800 Gigabit Ethernet (800G). The cloud industry is waiting for new, faster optical connectivity; cloud companies are expected to need usable 800G modules by 2023-2024 to increase transmission performance in their data centers.
A smart building aspires to be agile, responsive, and adaptive to its users. Data generated by the building should continuously inform system operation, enabling the building to take proactive steps that anticipate user needs and optimize target outcomes. Smart buildings use converged networks during operation to connect a variety of subsystems that traditionally operate independently, so that these systems can share information to enhance total building performance.
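One way to picture subsystems sharing information over a converged network is a publish/subscribe pattern; the sketch below models it as a simple in-process bus (the class, topics, and setpoints are hypothetical illustrations, not a real building-automation API):

```python
# Hypothetical sketch: subsystems on a converged network sharing data,
# modeled as a simple in-process publish/subscribe bus.
from collections import defaultdict

class BuildingBus:
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subs[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self._subs[topic]:
            handler(payload)

bus = BuildingBus()
hvac_setpoints = {}

# HVAC reacts to occupancy data that traditionally lived in a separate,
# independent subsystem.
def adjust_hvac(event):
    hvac_setpoints[event["zone"]] = 21.0 if event["occupied"] else 17.0

bus.subscribe("occupancy", adjust_hvac)
bus.publish("occupancy", {"zone": "3F-east", "occupied": True})
print(hvac_setpoints)  # {'3F-east': 21.0}
```

The point of the sketch is the decoupling: the occupancy system publishes events without knowing which other subsystems (HVAC, lighting, security) consume them.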
Major cloud providers are having trouble sourcing basic components for new data centers and have put off some construction plans, but they already have enough surplus capacity to ride out the problem. Construction is constrained by scarcities of fiber optics, batteries, and racks.
Just as the stage is set for 400 Gigabit Ethernet (400GbE) to roll out in force later this year, mainly in hyperscale, telco, and large data center networks, there are calls to boost that speed to 800GbE or even higher in the coming years. The need for increased speed in data centers and cloud services is driven by many factors, including the continued growth of hyperscale networks from players like Google, Amazon, and Facebook, as well as the more distributed and mobile workloads modern networks support. The reality on the ground, however, is that much lower speeds remain in common use.