Factors to consider for Data Center Efficiency and Availability
The data center is the heart of every enterprise network, enabling the transmission, access and storage of all information. Here, cabling connects enterprise local area networks (LANs) to switches, servers, storage area networks (SANs), and other active equipment that supports all applications, transactions and communication. It is also where the LAN connects to service provider networks that provide access to the Internet and other networks outside of the facility.
As the amount of information and applications continues to grow, data centers are expanding their capacity to house increasing amounts of active equipment and more links than ever before while also needing to enable high-bandwidth, low-latency data transmission to and from equipment. Proper data center design involves maximizing space to allow for growth and scalability, making sure cabling pathways are manageable, improving efficiency and ensuring overall performance, reliability and resilience.
As businesses strive to compete in a data-driven world, cloud and colocation data centers are on the rise as they provide the means for deploying new systems and services faster and expanding capacity without the need to upgrade the data center. Many enterprise businesses are trending toward a hybrid IT approach where some IT resources remain in house, particularly where the business needs to maintain control of the data, while other resources reside in the cloud using services such as software-as-a-service (SaaS) or in large colocation data centers where infrastructure-as-a-service (IaaS) allows these businesses to respond quickly to changing needs.
Key Data Center Considerations and Challenges
Because the data center is essential to an enterprise’s operation and houses an ever-increasing amount of mission critical equipment, there are several key considerations and challenges when it comes to ensuring reliability and performance. Let’s take a look at a few of the more important ones.
Data Center Redundancy and Availability
Data center reliability is largely based on availability (i.e., the proportion of time the facility is operational) and the amount of redundancy (i.e., duplication). Data center redundancy involves having duplicate components (i.e., equipment, links, power and pathways) to ensure functionality in the event that any of these should fail. Redundancy is often defined using the “N” system, where “N” is the baseline number of components required for the data center to function. N+1 redundancy therefore means having one more component than is needed to function, 2N redundancy is double the number of components needed, and 2N+1 redundancy is double the number plus one. Both the Uptime Institute’s Tier levels and the BICSI 002 availability class system call out the “N” level required for the various levels of data center availability.
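The “N” arithmetic above is straightforward, and a short sketch can make it concrete. This is an illustration only; the function name and the UPS example are ours, not taken from the Uptime Institute or BICSI documents.

```python
# Illustrative sketch: total components to deploy under common
# data center redundancy schemes, where n is the baseline number
# of components (e.g., UPS units) needed to carry the load.

def required_components(n: int, scheme: str) -> int:
    """Return the total component count for a given redundancy scheme."""
    schemes = {
        "N": n,             # no redundancy: any failure causes an outage
        "N+1": n + 1,       # tolerates the failure of one component
        "2N": 2 * n,        # a full duplicate set of components
        "2N+1": 2 * n + 1,  # a full duplicate set plus one spare
    }
    return schemes[scheme]

# Example: a facility that needs 4 UPS units to carry its load
for scheme in ("N", "N+1", "2N", "2N+1"):
    print(scheme, required_components(4, scheme))
```

For a baseline of 4 units, the schemes yield 4, 5, 8 and 9 units respectively; the jump from N+1 to 2N is where redundancy costs roughly double.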
Data Center Power, Cooling and Efficiency
Energy consumption is a key consideration in the data center, given the cost and the increasing amount of power required for today’s advanced data center computing. Data center managers are therefore tasked with improving efficiency to reduce operational costs, and they often use the Green Grid’s power usage effectiveness (PUE) metric to verify that power coming into the data center is being used efficiently by IT equipment and not wasted.
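PUE is defined by the Green Grid as total facility power divided by the power consumed by IT equipment, so a value of 1.0 would mean every watt entering the facility reaches the computing load. A minimal sketch of the calculation (the kW figures in the example are hypothetical):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power.

    1.0 is the theoretical ideal (all power reaches IT equipment);
    real facilities run higher because cooling, lighting and power
    distribution losses consume part of the incoming power.
    """
    return total_facility_kw / it_equipment_kw

# Example: 1500 kW drawn by the facility, 1000 kW consumed by IT equipment
print(pue(1500, 1000))  # 1.5
```

A falling PUE over time indicates that efficiency measures such as hot/cold air separation are reducing the overhead share of facility power.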
Data center cooling also has a significant impact on energy consumption. Preventing the mixing of cold inlet air and hot exhaust air in the data center helps to raise return air temperatures, which improves the efficiency of data center cooling systems and prevents overprovisioning of power-consuming air conditioning units. Preventing the mixing of hot and cold air is also critical to ensuring reliability as hot spots can adversely impact equipment lifetime and reliability.
The use of a hot aisle/cold aisle configuration in the data center is one way that data centers prevent the mixing of hot and cold air. It involves lining up rows of cabinets in such a way that cold air intake from data center cooling systems is optimized at the front of the equipment and hot air exhaust from the back of the equipment is optimized to reach the cooling return system. Containment systems can also be used to completely isolate hot and cold aisles from each other, where roof panels isolate the cold aisle from the rest of the data center (i.e., cold aisle containment) or vertical panels isolate the hot aisle and direct the hot exhaust to the overhead return plenum (i.e., hot aisle containment).
Data center cooling can also be affected by the amount of cabling in pathways. When cabling is congested in underfloor pathways or at the front of the equipment, it can prevent proper movement of cold air to the equipment inlet or of hot air from the exhaust. Effective cable management and moving high-density cabling overhead are two strategies deployed to maintain proper airflow.
Fiber Loss Budgets
Insertion loss is the amount of energy that a signal loses as it travels along a cable link (i.e., attenuation) plus the loss caused by any connection points along the way (i.e., connectors and splices). While insertion loss is one performance parameter for copper cabling systems, it is the primary performance parameter for fiber systems. Industry standards specify the amount of insertion loss allowed for fiber applications to function properly, and higher-speed applications such as 40GBASE-SR4 and 100GBASE-SR4 have much more stringent insertion loss requirements. Data centers determine their fiber loss budgets based on the distances between functional areas and the number of connection points along the way to ensure that they stay within these requirements.
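A loss budget is built up from per-component allowances, and the calculation itself is simple. The sketch below uses illustrative values (TIA-568 allows up to 0.75 dB per mated connector pair and 0.3 dB per splice, and multimode attenuation at 850 nm is taken here as 3.0 dB/km); an actual budget should use the specifications of the installed cable plant and the limits of the target application.

```python
# Illustrative fiber loss-budget sketch. Default values are assumptions
# for the example (connector, splice and attenuation allowances as
# described in the lead-in); substitute your cable plant's actual specs.

def link_loss_budget(length_m: float, connectors: int, splices: int,
                     atten_db_per_km: float = 3.0,
                     connector_db: float = 0.75,
                     splice_db: float = 0.3) -> float:
    """Worst-case insertion loss for a fiber link, in dB."""
    fiber_loss = atten_db_per_km * (length_m / 1000.0)
    return fiber_loss + connectors * connector_db + splices * splice_db

# Example: a 100 m link with two mated connector pairs and no splices
budget = link_loss_budget(100, connectors=2, splices=0)
print(round(budget, 2))  # 1.8 dB
```

Comparing that worst-case figure against the channel insertion loss limit of the intended application shows immediately whether the design leaves any margin; adding a third connector pair to the example link would push it past a budget on the order of 2 dB.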
Basic fiber testing, known as Tier 1 certification, measures insertion loss of the entire fiber link in decibels (dB) using an optical loss test set (OLTS). Tier 1 certification is almost always required by cable manufacturers to acquire a system warranty, but some may also require Tier 2 certification using an optical time domain reflectometer (OTDR), which also provides insight into the loss of individual connection points and the length of the cable.
Staying within the insertion loss budget for fiber is also highly contingent on fiber end-face cleanliness, as contaminated end-faces remain the number one cause of fiber-related problems and test failures in data centers. Even the slightest particle on the core of a fiber can cause loss and reflections that degrade performance. Cleaning and inspection are therefore key steps in data center fiber terminations. To eliminate subjectivity in determining end-face cleanliness, it is recommended to follow the IEC 61300-3-35 Basic Test and Measurement Procedures standard, which contains specific cleanliness grading criteria for assessing pass or fail when inspecting a fiber end-face.