Low-voltage cabling infrastructures are becoming more complex than ever. LANs have more connected devices in more locations, and data centers are shifting to fully meshed leaf-spine architectures in which every leaf switch connects to every spine switch over redundant pathways. With this complexity comes a wider variety of copper and fiber cable and connectivity components, along with the associated racks, cabinets and cable management needed to build reliable, high-performance networks, and that means more extensive and diverse project bills of materials (BOMs).
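To see why a fully meshed leaf-spine fabric drives up the BOM, consider the link count: every leaf connects to every spine, so cabling grows multiplicatively. A minimal sketch, using hypothetical switch counts rather than figures from any real deployment:

```python
# Sketch: cable-count growth in a fully meshed leaf-spine fabric.
# All switch counts below are illustrative assumptions, not real-world figures.

def leaf_spine_links(leaves: int, spines: int, links_per_pair: int = 1) -> int:
    """Every leaf connects to every spine, optionally with redundant links."""
    return leaves * spines * links_per_pair

# A modest 16-leaf, 4-spine pod with 2 redundant links per leaf-spine pair:
print(leaf_spine_links(16, 4, 2))  # 128 inter-switch cables before a single server is patched
```

Doubling the leaf count doubles the cable count, and each link also implies transceivers, patch panels and cable-management hardware on the BOM.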
The global crisis created by COVID-19 will have a profound and long-lasting impact. Broadband has played a vital role during this crisis as people work, study and shop from home. These changes in digital behavior have had a seismic effect on our networks. Until now, broadband operators had been using growth models that predicted a gradual 30-40% increase in bandwidth demand over the next three to four years. COVID-19 generated that 30-40% growth overnight. We’ve seen huge spikes in usage across online gaming, VPN, streaming services, social media and video conferencing, to name a few.
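The scale of that shift is easier to appreciate as a rate: 30-40% growth spread over three to four years works out to a single-digit compound annual rate, which is what capacity planning was built around. A small sketch of that arithmetic (the 35% / 3.5-year midpoints are assumptions for illustration):

```python
# Sketch: what "30-40% over 3-4 years" implies as a compound annual rate,
# versus the same cumulative jump arriving overnight.

def annualized_rate(total_growth: float, years: float) -> float:
    """Convert cumulative growth (e.g. 0.35 = 35%) to a compound annual rate."""
    return (1 + total_growth) ** (1 / years) - 1

# Midpoint assumptions: 35% cumulative growth over 3.5 years.
print(annualized_rate(0.35, 3.5))  # a compound rate of roughly 9% per year
```

Absorbing several years' worth of that planned growth in weeks is what made the spike so disruptive to operators' models.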
The Massachusetts Green High Performance Computing Center (MGHPCC) brings together the major research computing deployments from five Boston-area universities into a single, massive data center in Holyoke, Massachusetts. The 15-MW, 780-rack data center is built to be an energy- and space-efficient hub of research computing, with a single computing floor shared by thousands of researchers from Boston University, Harvard University, Massachusetts Institute of Technology, Northeastern University, and the entire University of Massachusetts system. Because the data center is powered by hydroelectric and nuclear sources, it leaves almost no carbon footprint. By joining together in the Holyoke site, all of the member institutions gain the benefits of lower space and energy costs, as well as the significant intangible benefits of simplified collaboration across research teams and institutions.
By eliminating the need for splice trays, splice chips and cable slack, Siemon says the OptiFuse connectors reduce material requirements, conserve space within fiber enclosures, and deliver 30% faster installation compared to traditional fiber pigtails.
The SN connector is a new duplex optical fiber connector using LC-style 1.25-mm O.D. zirconia ferrules, designed for next-generation hyperscale data center interconnect (DCI). The connector was designed to provide individual, independent duplex fiber breakout at quad-style transceivers (QSFP, QSFP-DD and OSFP), an approach that Senko contends is not only more efficient, reliable and scalable than the MPO connector but also lower in cost to implement. SFP-DD has also adopted the SN as its independent duplex-style interface, mainly for wireless fronthaul applications.
The emergence of the QSFP form factor has brought economies of scale to 100G upgrades, putting 100G within the cost reach of both small and hyperscale data center operators. With a small profile and reduced power consumption, the QSFP form factor is the choice of switch manufacturers for 100G platforms. Despite the economies of scale at the switch level, the urgency to upgrade can inevitably lead to unforeseen compatibility and budget issues.
At the risk of giving away the conclusion too early, there’s a clear place — not to mention, a need — for both application and infrastructure deployments in the cloud and on the edge. Centralizing data, and the processing of it, in the cloud can be efficient and effective, but where latency can’t be tolerated, some amount of processing needs to be carried out at the edge. In fact, it’s often easier and more efficient to bring the processing to the data than it is to bring the data to the processing engine.
Mission Critical and Panduit commissioned Clear Seas Research to conduct a survey measuring industry awareness and usage of edge computing solutions. One hundred experts were asked how they would explain edge computing to someone new in the industry. Responses ranged from vague — “It’s modern and tech savvy,” to precise — “Putting the data near the user,” to eye-opening — “Not 100% sure myself.” Read the full report for more insight regarding the perceived challenges and benefits associated with edge computing, as well as who should be involved in the decision-making process when it comes to deploying edge infrastructure and selecting the right vendor.
For many in the data center sector, one of the most pressing concerns is that much of the world’s data center infrastructure operates in a manner that is financially suboptimal and environmentally unsustainable. If a data center is using only a fraction of its available power, then the capital investment tied up in inflexible power infrastructure sits idle. The question is: who is paying for that stranded capacity and unused space?
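The stranded-capacity problem the paragraph describes can be put in rough numbers. A minimal sketch, where the facility size, utilization and build cost are all illustrative assumptions rather than industry benchmarks:

```python
# Sketch: capital tied up in provisioned-but-unused power capacity.
# All figures are illustrative assumptions, not industry benchmarks.

def stranded_capex(built_kw: float, used_kw: float, capex_per_kw: float) -> float:
    """Capital invested in power capacity that sits unused."""
    return (built_kw - used_kw) * capex_per_kw

# A hypothetical 10-MW facility drawing only 6 MW,
# built at an assumed $10,000 of capital per kW of capacity:
print(stranded_capex(10_000, 6_000, 10_000))  # 40000000 -> $40M of idle investment
```

Under those assumptions, 40% unused capacity represents tens of millions of dollars that earns no return, which is why the question of who pays for it matters.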
As organizations pursue the idea of running containers in edge computing environments, they’ll look to extend their Kubernetes deployments outside the data center. Enterprises hold differing views of edge computing, but few rule out the possibility that they’ll deploy application components to the edge in the future, particularly for IoT and other low-latency applications. Many also see Kubernetes as the ideal mechanism to run containers in edge computing environments — particularly those who have already adopted the container orchestration system for their cloud and data center needs.