Google has recently joined the Open Compute Project (OCP) and has announced a rack power architecture that distributes 48V DC to the rack, which is then down-converted to 1V DC or less as close to the processor as possible. Google’s senior vice president of technology, Urs Hölzle, announced the 48V DC “shallow” data center rack architecture on the first day of the Open Compute Summit.
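To see why a higher distribution voltage helps, here is a minimal sketch (using hypothetical numbers, not Google's design data) of resistive conduction loss in a rack bus bar. Delivering the same power at 48V instead of 12V quarters the current, and since loss scales with the square of current, it falls by a factor of 16 for the same conductor resistance.

```python
# Illustrative only: I^2 * R conduction loss for a fixed load power,
# comparing a 12V bus against a 48V bus at the same resistance.

def conduction_loss_watts(power_w: float, bus_voltage_v: float,
                          resistance_ohm: float) -> float:
    """Resistive loss for delivering `power_w` at `bus_voltage_v`."""
    current = power_w / bus_voltage_v      # I = P / V
    return current ** 2 * resistance_ohm  # P_loss = I^2 * R

# Hypothetical rack: 10 kW load, 2 milliohm distribution resistance.
loss_12v = conduction_loss_watts(10_000, 12, 0.002)
loss_48v = conduction_loss_watts(10_000, 48, 0.002)
print(round(loss_12v / loss_48v))  # 16x lower loss at 48V
```

The same square-law argument is why the final conversion to 1V happens right next to the processor: the high-current, low-voltage path is kept as short as possible.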
Data centers have traditionally struggled to keep heat under control, both inside the building and inside the racks of servers and other equipment. Google appears to have mastered this with clever techniques that reduce "overhead" (non-computing) energy spent on cooling and power conversion. Power usage effectiveness (PUE) measures how efficiently a data center uses energy: the ratio of total facility energy to the energy consumed by the computing equipment itself, as opposed to cooling and other overhead. Google has achieved a comprehensive trailing twelve-month (TTM) PUE of 1.12 across all of its data centers, in all seasons and including all sources of overhead.
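The PUE ratio described above can be expressed in a few lines of code. This is an illustrative sketch with hypothetical monthly figures, not Google's actual measurement methodology:

```python
# PUE = total facility energy / IT equipment energy.
# A value near 1.0 means almost all power reaches the computing gear.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness for a given measurement period."""
    return total_facility_kwh / it_equipment_kwh

# Hypothetical month: 1,120 MWh total for 1,000 MWh of IT load.
# A PUE of 1.12 means only 12% overhead (cooling, conversion, lighting).
print(round(pue(1_120_000, 1_000_000), 2))  # 1.12
```

For context, an industry-average facility often runs well above this, so reporting a fleet-wide TTM value of 1.12 across all seasons is a strong claim.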
Now let’s take a look at what a data center looks like inside the building. All of these images are courtesy of Google, from their Inside our data centers website. Or you can take your own guided tour.
A data center is a centralized building housing servers for the storage, management, and distribution of information. Inside a typical campus network room, routers and switches enable many data centers to communicate with one another. Google uses state-of-the-art fiber optic networks to connect its myriad sites, since fiber enables speeds more than 200,000 times faster than a typical home Internet connection. The fiber cables can be seen running along the yellow cable trays near the ceiling in this image. (Image courtesy of Google)