As 5G deployments continue toward the goal of ubiquitous coverage, two apparently conflicting trends are developing: C-RAN and Edge Computing. To understand why these seemingly opposed trends are happening in parallel, we need to look at the drivers of each, and when we do, we'll see there isn't necessarily an inevitable "collision." But Service Providers need to plan for both trends with a network that is expandable, flexible, and accessible.
Businesses gain higher speed, greater bandwidth, and more autonomy from dark fiber. Curious about what dark fiber is? Learn about the benefits of dark fiber!
The sixth installment in Bob Hult's Technology Trends series examines the rise of data centers and the shift to cloud and edge computing.
Cloudlets, or mini-clouds, are starting to roll out closer to the sources of data in an effort to reduce latency and improve overall processing performance. But as this approach gains steam, it is also creating some new challenges involving data distribution, storage, and security.
The Asia Direct Cable Consortium has chosen NEC to build its new high-performance submarine cable. The finished cable will feature multiple pairs of high-capacity optical fibers designed to carry more than 140 Tbps of traffic, enabling high-capacity data transmission across the East and Southeast Asian regions. That capacity will allow the new submarine cable to support bandwidth-intensive applications driven by technological advancements in 5G, the cloud, IoT, and AI.
The 5G Open Innovation Lab (5G OI Lab), founded by Intel, NASA and T-Mobile, has launched, providing selected start-ups with advanced access to platforms to develop, test and bring to market new use cases that unleash the potential of 5G networks both now and in the future.
At the risk of giving away the conclusion too early, there's a clear place, not to mention a need, for both application and infrastructure deployments in the cloud and at the edge. Centralizing data and processing in the cloud can be efficient and effective, but where latency can't be tolerated, some amount of processing needs to be carried out at the edge. In fact, it's often easier and more efficient to bring the processing to the data than it is to bring the data to the processing engine.
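The "bring the processing to the data" argument is easy to see with a back-of-envelope calculation. The sketch below uses hypothetical numbers (1 GB of sensor data, a 1 Gbps uplink, and an assumed edge-processing time) purely for illustration; none of these figures come from the article.

```python
# Illustrative comparison: shipping data to a cloud processor vs.
# processing it at the edge. All numbers are hypothetical.

def transfer_seconds(data_bytes: float, link_bps: float) -> float:
    """Time to move the data over the network (ignoring protocol overhead)."""
    return data_bytes * 8 / link_bps

# Moving 1 GB of raw sensor data over a 1 Gbps uplink takes 8 seconds
# before any processing even starts:
upload_time = transfer_seconds(1e9, 1e9)  # 8.0 seconds

# If an edge node can reduce that 1 GB to a 1 MB summary locally,
# only the summary needs the trip to the cloud:
summary_time = transfer_seconds(1e6, 1e9)  # 0.008 seconds
```

For latency-sensitive workloads, the second path (process locally, ship the result) wins by orders of magnitude, which is exactly the case for edge deployments the paragraph above makes.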
The high demand for faster fiber-optic (FO) data transmission technologies in hyperscale data centers has triggered a whole range of developments. Manufacturer consortia, known as MSAs (Multi-Source Agreements), are working under high pressure on new specifications that focus on the roadmap from 400 to 800 Gigabit Ethernet (800G). The cloud industry is waiting for new, faster optical connectivity: cloud companies are expected to need usable 800G modules by 2023-2024 in order to increase transmission performance in their data centers.
As lockdowns start to ease in many countries, the tentative return to work begins, leaving people understandably concerned about how safe an office will be in the middle of a global pandemic. There is likely to be an increase in the amount of technology used to monitor employees. From thermal cameras taking your temperature when you enter the building to apps or wearables that alert you if you get too close to colleagues, work could soon have the feel of the movie Minority Report.
Arguably the biggest challenge for hyperscalers is continuity and, by association, reliability. New findings generated by a survey from Uptime Institute revealed that over 10% of all respondents said their most recent reportable outage cost them more than $1m in direct and indirect costs. On March 13th, 2019, Facebook suffered its worst-ever outage, affecting the estimated 2.7 billion users of its core social network, Instagram and messaging applications, Facebook Messenger, and WhatsApp. By extrapolating the company’s 2018 revenue figures, CCN estimated that the blackout could have cost Facebook up to $90 million in lost revenue based on an income of $106,700 per minute. With so many businesses relying on hyperscale data centers to provide the IT backbone to their operations, any downtime can have a substantial impact and sometimes catastrophic ramifications.
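CCN's $90 million estimate can be sanity-checked with a quick extrapolation. The only input assumed here is the per-minute revenue figure cited above ($106,700, derived from Facebook's 2018 revenue); the duration is inferred from it, not reported in the article.

```python
# Back-of-envelope check of CCN's outage-cost extrapolation.
# Assumption (from the article): Facebook earned roughly $106,700
# per minute based on its 2018 revenue.
revenue_per_minute = 106_700  # USD per minute

# Annual revenue implied by that rate, roughly $56.1B, which is
# consistent with Facebook's reported 2018 revenue of about $55.8B:
implied_annual = revenue_per_minute * 60 * 24 * 365

# A $90M revenue loss at that rate implies the outage lasted roughly:
outage_minutes = 90_000_000 / revenue_per_minute  # about 843 minutes
outage_hours = outage_minutes / 60                # about 14 hours
```

An implied duration of roughly 14 hours is in line with how the March 2019 outage was widely reported, which suggests the per-minute extrapolation is internally consistent.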