Ahead of DCD>New York, Alfonso Ortega, James R. Birle Professor of Energy Technology, Villanova University, sat down for a Q&A with DCD’s Kisandka Moses, who is producing this year’s conference, to discuss the National Science Foundation Center for Energy Smart Electronic Systems (ES2) research into advanced methods for cooling electronic equipment and the pivotal role it's played in pioneering the transfer of research into technology.
Q: Can you explain the concept behind the Research Center for Energy-Smart Electronic Systems?
A: The NSF is the federal government agency that funds university research, and one of its programs is the Industry-University Cooperative Research Centers (IUCRC) program. The intent of that program is to accelerate the transfer of research into technology. This happens by providing funding to establish the basic center; member companies then, through their membership, support the research that is conducted in the center. As such, the research ends up being more applied and near-term in nature, which allows for a more rapid transfer of good ideas into usable technology.
I think that's a unique part of the work we're doing: it often addresses issues or problems that are no more than three to five years out, sometimes even nearer term than that. It's a unique way to operate a program, and I believe it's a great investment by the federal government to accelerate the transition of research into technology.
Q: What do you think has been your most important breakthrough to date?
A: Our core strength is in the area of thermal management. The infrastructure can be thought of as supplying power, cooling, and management of the IT workload within the data center. We've managed to bring in expertise from various disciplines and, as such, work on more holistic challenges. One of the things I'm very proud of is that we, as thermal engineers, are working very closely with computer engineers at Binghamton University to develop new methods for holistic control of power and cooling in response to IT demand. We're trying to integrate our solutions so that they're driven by the need, and the need starts with the IT load in a data center. Another area we have worked on is trade-off studies on using either AC power or DC power in a data center, driven by this concept of holistic IT load, power, and cooling, to identify the most energy-efficient way to provide power and cooling.
We view the entire data center as a very complex system that requires power and cooling to manage the IT load. What we ask ourselves is, “how do we operate that complex system in a sustainable way?” That involves everything from operating more efficiently to capturing the massive amounts of waste heat produced by most data centers and putting that heat to sustainable use.
Q: As an expert in thermal and fluid systems, what are your thoughts on liquid cooling in the computing environment? Are we likely to see the technology become ‘mainstream’ within the data center industry?
A: We do carefully constructed experimental work to evaluate and characterize liquid cooling systems for electronics, which encompasses everything from systems that use single-phase water as a coolant in the traditional way through to those using refrigerants that evaporate or boil as they pick up heat from the electronics.
The first thing that's obvious to me is the amount of fluid you need in order to remove heat from servers. Compared to air, liquid requires very low flow rates to dissipate the same heat, and the reason is that liquids have a much higher thermal capacitance. As such, a liquid is able to absorb heat much more readily and transport it out of the system. From the perspective of adoption and the injection of the technology into practice, anybody can see how superior liquid is at removing heat. In fact, I often tell my students that if you wanted to do a bad job of cooling, use air, because it is a lousy coolant. It has a very low capacitance and very low thermal conductivity, and all of its properties basically point in the direction of being very poor for cooling things. We use air because it's readily available to us and doesn't wet anything; it's a gas and therefore it's benign.
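The flow-rate gap Ortega describes can be sketched with a simple energy balance, Q = ṁ·cp·ΔT. The numbers below (a 10 kW heat load and a 10 K coolant temperature rise) are illustrative assumptions, not figures from the interview:

```python
# Mass flow needed to remove a given heat load: m_dot = Q / (c_p * dT).
# Heat load and temperature rise are assumed for illustration.
Q = 10_000.0   # server heat load, W
dT = 10.0      # coolant temperature rise, K

c_p_water = 4186.0  # specific heat of water, J/(kg*K)
c_p_air = 1005.0    # specific heat of air, J/(kg*K)

m_dot_water = Q / (c_p_water * dT)   # ~0.24 kg/s
m_dot_air = Q / (c_p_air * dT)       # ~1.0 kg/s

# The gap widens dramatically by volume, since air is ~800x less dense.
rho_water, rho_air = 997.0, 1.2      # densities at room conditions, kg/m^3
vol_water = m_dot_water / rho_water  # ~0.24 L/s
vol_air = m_dot_air / rho_air        # ~830 L/s

print(f"water: {m_dot_water:.2f} kg/s ({vol_water * 1000:.2f} L/s)")
print(f"air:   {m_dot_air:.2f} kg/s ({vol_air * 1000:.0f} L/s)")
```

On a volumetric basis the water stream here is roughly three thousand times smaller than the air stream, which is the intuition behind calling air "a lousy coolant."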
Having said all that, if an engineer were to look at the advantages of moving to a liquid cooling system purely from a thermal technology point of view, liquids are far superior, whether single-phase or two-phase. The barrier to adoption isn't thermal superiority or inferiority; it's the change in infrastructure that will be required to implement liquid cooling in data centers that traditionally have been air cooled. There would be expense in bringing plumbing systems, and with them water, into an environment that traditionally hasn't had liquid in it. There is also an expense tied to the switchover and to the way you control liquid cooled systems compared to air cooled systems.
I understand the reluctance to change and the economics: it may be prohibitive to abandon your air cooled data center even if it's getting increasingly challenging to cool it with air. I strongly suspect that air will remain the primary cooling mechanism for most data center systems. What we are seeing is water or refrigerant cooling being demonstrated in very dense systems in which there is no opportunity to spread out the servers, or in systems that will remain very dense from a volumetric perspective and therefore leave no choice but to cool them with liquids. As such, liquid cooling is being used in high performance computing systems, and we're also starting to see it in a lot of artificial intelligence applications powered by GPUs with very high power consumption. I've been involved in this industry for 30 years, but I've never seen a time when we had to take liquid cooling more seriously than now.
Q: Do you have any thoughts on the circular economy? Do you think it's something we should take seriously?
A: I have a huge amount of interest in this idea. We have a very successful program focused on sustainable engineering at Villanova; it's one of our most popular master's degree programs, because a lot of people view this as an absolute imperative in the way we design engineering systems. I think everybody understands that unless we pay attention to the overall cradle-to-grave engineering of computing systems, we are being short sighted. For example, if we think of a data center as a big box, we bring a massive amount of electrical power into that box, whether we're getting it off the grid or from a very large diesel generator set in the backyard that generates electrical power in emergencies. The electricity is used to power the facility and enable the IT operations of that data center.
I would say 98 percent of that electrical power eventually becomes heat. That is a thermodynamic principle, and it's always been a little astounding to recognize that almost 100 percent of the electrical power that comes into a data center is eventually dissipated as heat. If we make no attempt to recover that heat, we are contributing to environmental pollution and global warming.
Is it possible to reuse some of that waste heat? Is it possible to harvest or capture some of it and put it to good use? The conversion of waste heat is part of a project we are contributing to from our research center, and it's closely related to the idea of circularity. One of the unfortunate things we have found is that the heat we are capturing and harvesting is very low temperature: at most, the air leaving a data center after being heated by the electronic equipment is around 80 degrees Celsius.
This is a pretty low temperature for doing anything useful with it, such as generating electrical power or converting it to cooling, which are the two common paths you'd like to take. On the one hand, we have massive amounts of waste heat; on the other, that heat is relatively low quality and difficult to use.
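Why 80 °C counts as "low quality" heat can be quantified with the Carnot limit, 1 − Tcold/Thot: even an ideal heat engine rejecting to ambient could convert only a small fraction of this heat into work. The 25 °C ambient temperature below is an assumed illustrative value:

```python
# Carnot efficiency bound for converting 80 C exhaust heat to work,
# rejecting to an assumed 25 C ambient sink. Temperatures in kelvin.
T_hot = 80.0 + 273.15   # data center exhaust air, K
T_cold = 25.0 + 273.15  # ambient sink, K

eta_carnot = 1.0 - T_cold / T_hot  # ideal upper bound on conversion
print(f"Carnot limit at 80 C exhaust: {eta_carnot:.1%}")  # ~15.6%
```

Real low-grade heat engines achieve only a fraction of this already-small bound, which is why direct reuse of the heat itself, such as warming nearby buildings, is often the more practical path than converting it back to electricity.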
Nonetheless, if we're really serious about having a green data center in the truest sense, we have to find ways to reduce this thermal waste. One way is to co-locate your data centers in the vicinity of other buildings or structures that could use some of that heat to perform their functions. In the power industry, this concept is called co-generation, and I would propose that the data center industry adopt it. Can we develop an intermediate bio-chemical process that would store wasted energy and then reconvert it in a way that assists some primary technology? There are a lot of people working on energy storage, and almost everybody is discovering, of course, that the best path for energy storage isn't mechanical; it is usually chemical. It's the energy in chemical bonds, the ability to form and break chemical bonds. We need a fresh perspective to 'get over the hump'.
Alfonso will join us at DCD>New York on March 31 - April 1, to present on: “Will 'waste-to-energy' power the 2030 data center?”