Tag: Latency

Demystifying 5G

Do you know 5G’s three major benefits, the eight technical goals that deliver those benefits, and the four technology building blocks that meet those goals? This article discusses the technology’s enhanced mobile broadband (capacity enhancements), massive IoT (massive connectivity), and ultra-reliable low-latency communications. In the article’s benefit triangle, the proximity of each application to each benefit indicates that benefit’s influence on the application.

Cloud Versus Edge — Is There a Winner? Complementary or Competitive?

At the risk of giving away the conclusion too early, there’s a clear place — not to mention a need — for both application and infrastructure deployments in the cloud and on the edge. Centralizing data and its processing in the cloud can be efficient and effective, but where latency can’t be tolerated, some amount of processing needs to be carried out at the edge. In fact, it’s often easier and more efficient to bring the processing to the data than it is to bring the data to the processing engine.

Fiber Type Matters When Simulating Optical Links and Latency

Simulating real-world fiber optic links and time delays in a lab environment is a frequent and necessary task for engineers performing R&D and equipment-certification testing. With the evolution to more advanced network architectures, speeds increasing to 400G and beyond, and latency always a key element, replicating the field network as closely as possible in the lab is critical to ensuring systems perform as expected post-deployment.
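Why fiber type matters can be seen from the propagation delay alone: delay scales with the fiber’s group index, which differs slightly between fiber types. The sketch below estimates one-way delay per span; the group-index values are typical published figures for 1550 nm and are assumptions here, not measurements of any specific cable.

```python
# Estimate one-way propagation delay through an optical fiber span.
# Group-index values are illustrative assumptions; real values vary
# by fiber type, manufacturer, and wavelength.

C_VACUUM_KM_S = 299_792.458  # speed of light in vacuum, km/s

# Approximate group indices near 1550 nm (assumed, illustrative)
FIBER_GROUP_INDEX = {
    "standard-smf": 1.468,  # e.g. ITU-T G.652-class single-mode fiber
    "dsf": 1.471,           # dispersion-shifted fiber (assumed value)
}

def propagation_delay_us(length_km: float, fiber: str = "standard-smf") -> float:
    """One-way propagation delay in microseconds for a fiber span."""
    n = FIBER_GROUP_INDEX[fiber]
    return length_km * n / C_VACUUM_KM_S * 1e6

if __name__ == "__main__":
    # A 100 km standard single-mode span adds roughly 490 us one-way,
    # close to the common ~5 us/km rule of thumb.
    print(f"{propagation_delay_us(100):.1f} us one-way over 100 km")
```

This is why a lab simulator must match both the length and the fiber type of the field link: over long spans, even a small group-index difference shifts the replicated delay measurably.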

How IoT is reshaping network design

If the industry is to realize the promised benefits of IoT, we must increase the network’s ability to support machine-to-machine communications in near-real time, where latency requirements are on the order of a few milliseconds. Satisfying these requirements involves a radical rethink of how and where we deploy assets throughout the network. Link reliability will be every bit as critical as latency and will require multiple failover paths wherever that data is transported.

Will you be able to manage thousands of edge cloud infrastructures?

Edge provides a huge opportunity to host many use cases on one infrastructure, manageable from a single pane of glass. Getting close to end-users allows the operator not only to tap directly into new revenue streams from ultra-low-latency/ultra-reliable services, but also to offer “edge-as-a-service” and other infrastructure-as-a-service and hosting services to enterprises.

Researchers propose solutions for networking lag in massive IoT devices

The internet of things (IoT) spans widely, from smart speakers and Wi-Fi-connected home appliances to manufacturing machines that use connected sensors to time tasks on an assembly line, warehouses that rely on automation to manage inventory, and robots that let surgeons perform extremely precise operations. For these applications, timing is everything: a lagging connection could have disastrous consequences.

Five new ways to think about 5G: The speed trap

5G means that, for the first time, last-mile latency will often be less than backbone latency. If your data center is a long way from lots of your customers, your quality of service will be poorer (i.e. noticeably slower) than competitors with physically closer data centers. The potential answer to this problem has been around for a while – edge and fog computing. These may finally come into their own as last-mile latency drops and the sheer volume of data from the IoT skyrockets.
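The claim that data-center distance dominates quality of service can be sanity-checked from geometry alone. The sketch below gives a lower-bound round-trip time from fiber distance; the ~2/3 c signal speed and the 1.3x route-inflation factor are common rules of thumb, not measured values, and real RTTs also include queuing and processing delay.

```python
# Rough lower-bound RTT from fiber distance alone, ignoring queuing,
# serialization, and processing delays. The speed-in-fiber and
# route-inflation figures are assumed rules of thumb.

SPEED_IN_FIBER_KM_MS = 200.0  # ~2/3 of c, in km per millisecond
ROUTE_FACTOR = 1.3            # assumed fiber-route detour over straight-line distance

def min_rtt_ms(distance_km: float) -> float:
    """Lower-bound round-trip time to a server distance_km away."""
    return 2 * distance_km * ROUTE_FACTOR / SPEED_IN_FIBER_KM_MS

if __name__ == "__main__":
    # An edge site 50 km away vs. a data center 2000 km away:
    print(f"edge:   {min_rtt_ms(50):.2f} ms")   # well under 1 ms
    print(f"remote: {min_rtt_ms(2000):.1f} ms") # tens of milliseconds
```

Once last-mile latency drops to a millisecond or two, the propagation delay to a distant data center becomes the largest term in the budget, which is exactly the gap edge and fog computing aim to close.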

The Never-Ending Success Story: Dark Fiber

Given future 5G requirements in terms of latency, data volumes, and reliability, fiber optics is undoubtedly the most future-proof and scalable medium for data transfer. From a technological perspective, it is clear that fiber optics combined with the new 5G mobile networks offers the best foundation for high transmission rates, minimal latency, and thus the greatest possible speed.

RoCE or iWARP for Low Latency?

Enterprise customers will soon require the low-latency networking that RDMA offers so that they can address a variety of applications, such as Oracle and SAP, and also implement software-defined storage using Windows Storage Spaces Direct (S2D) or VMware vSAN. There are three transports that can be used in an RDMA deployment: InfiniBand, RDMA over Converged Ethernet (RoCE), and iWARP (RDMA over TCP/IP). Given that there are several possible routes to go down, how do you ensure you pick the right protocol for your specific tasks?