19 December 2006 by DatacenterDynamics
The wave of digital technology has descended on the healthcare environment. The biggest issue has not been a lack of emerging technology but rather the support challenges that new technology creates. When new technology is deployed in today's healthcare environment, it forces convergence on many levels. The most obvious area of convergence is technical connectivity: for example, a patient physiological monitoring system interfaced to a wireless telephone through a third-party messaging engine, so that cardiac waveforms are automatically displayed on the handset in the event of an alarm. This technical equation is made up of both clinical and IT components, and the sum is a solution that benefits both staff and patient.
Less obvious but more costly are the ripple effects that new technology has on data center space, access and administration. These new technologies typically require centralized data processing in one form or another, such as an HL7 (Health Level Seven, the healthcare data-exchange standard) ADT (Admission, Discharge, Transfer) interface, management reporting, a web server, or data storage. Several systems commonly deployed today, such as patient entertainment servers, nurse call system servers, infant protection servers, security access control servers, and security video camera servers and storage, blur the traditional boundary of what was viewed as a data center application. Either the legacy system never required a server, or the server was a vendor-supplied workstation PC stuck in a telecom closet or under a desk. Either way, neither solution was a concern of IT or the data center.
Add to this the already data-intensive clinical applications such as PACS and point-of-care systems, plus the goal of moving completely to a paperless Electronic Medical Record, and the role of the current healthcare data center may be underestimated.
These applications, and the data needed to support clinical computing, must reside somewhere other than the traditional haphazard distribution of processing. That somewhere is a data center: a single location with mechanical and electrical systems designed to meet the critical uptime needs of the institution. The pendulum is swinging back to consolidated computing, as many institutions realize that supporting the mechanical and electrical infrastructure for distributed computing is cost prohibitive.
A well-planned data center needs to have the flexibility to adapt over time. In other words, data centers need to have scalability, the capability of being easily expanded or upgraded on demand. A well-designed facility must be scalable because of the functional nature of a data center.
For most hospitals, the data center sits at the crossroads of information technology, corporate real estate and facilities. Silos of spending in each of these three groups sometimes prevent the communication that is required to optimize solutions for data center operations. The IT group deploys the equipment to support a business mission. The real estate group provides the space, and the facilities group provides the mechanical and electrical infrastructure to keep this up and running. A successful project relies on the collaboration of these groups.
Data centers are not just about footprint. They succeed or fail based on the capacity and reliability of their electrical and mechanical systems. The electrical systems in a modern data center have a functional life of about 10 years, which is not a long time. For those who think that a facility must be fit to support the hospital's data center just because it has a UPS and generator, it's a disaster waiting to happen.
Above: Digital hospital equipment
Not all data centers are created equal. There are multiple levels of reliability, and the cost difference to design and construct the various levels is significant. The standard benchmark used throughout the industry is the tiered classification approach developed and defined by The Uptime Institute, Inc., a Santa Fe, New Mexico based research entity dedicated to providing information and improving data center management. Over the years, the institute has created a standardized system for measuring data center reliability based on a series of tiered benchmarks. This has evolved into the four-tier classification system used throughout the industry.
These four basic "tiers" define the level of reliability built into the systems, expressed in terms of "need," or "N." N represents the quantity of components necessary to support the mission. A good example that illustrates this terminology is the tires on your car. You "need" four tires, but with the spare tire you have five; therefore your car's tire system can be referred to as N+1. The tiers are broken down as follows:
Tier I: Single path for power and cooling distribution, no redundant components, all systems are "N". This consists of a single utility feed for power, a single uninterruptible power supply (UPS), and a single back-up generator. The mechanical systems have no redundant components, and maintenance of mechanical and electrical systems requires an outage. The result is 99.671% availability with an annual anticipated down time of 28.8 hours.
Tier II: Single path for power and cooling distribution, redundant components. A Tier II electrical system is similar to Tier I, with the addition of N+1 components for UPS and generators, and N+1 components for mechanical systems. Maintenance of mechanical and electrical systems still requires an outage. The result is 99.741% availability with an annual anticipated down time of 22.0 hours.
Tier III: Multiple power and cooling distribution paths, but only one active; redundant components; concurrently maintainable. This is similar to Tier II with the addition of a second path for power and cooling. For electrical distribution, this can translate into dual-corded (two power cords) electronic equipment connected to two separate UPS systems and two emergency generator sources. Mechanical systems would have two paths for chilled water. Maintenance of mechanical and electrical systems can be accomplished without an outage. The result is 99.982% availability with an annual anticipated down time of 1.6 hours.
Tier IV: Multiple active power and cooling distribution paths, redundant components, fault tolerant. This is similar to Tier III with the ability of the systems to sustain at least one worst-case unplanned failure or event and still maintain operation. This is accomplished by having 2(N+1) systems with two active paths. The result is 99.995% availability with an annual anticipated down time of 0.4 hours. It is statistically possible to calculate down time based on the design: an evaluation of the predictive failure rate of system components will yield the anticipated down time of a data center. The problem occurs when the data center is asked to be more reliable than it was originally designed to be.
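The tier availability percentages translate directly into annual down time: a year holds 8,760 hours, so expected down time is (1 − availability) × 8,760. A minimal sketch of that arithmetic, using the tier figures above (note that the published Tier II figure of 22.0 hours is rounded more aggressively than the raw calculation, which gives about 22.7):

```python
# Annual down time implied by an availability percentage.
HOURS_PER_YEAR = 8760  # 365 days x 24 hours

def annual_downtime_hours(availability_pct: float) -> float:
    """Convert an availability percentage into expected hours of down time per year."""
    return (1 - availability_pct / 100) * HOURS_PER_YEAR

tiers = {"Tier I": 99.671, "Tier II": 99.741, "Tier III": 99.982, "Tier IV": 99.995}
for tier, pct in tiers.items():
    print(f"{tier}: {pct}% availability -> {annual_downtime_hours(pct):.1f} hours down per year")
```

Running this reproduces the 28.8, 1.6 and 0.4 hour figures for Tiers I, III and IV.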
The Business-based analysis and design process
The answer to the question "How can we have a better data center for our hospital?" starts with a process called business-based analysis and design. A cornerstone of this problem-solving methodology is understanding scalability within the context of long-term planning: how can we satisfy the hospital's current mission without locking ourselves out of tomorrow's needs? Equally important is understanding the result of the ubiquitous use of technology. A business need is revealed, which leads to a technological response, which identifies a new need, which reveals a new business need, and so on; we've established a continuum of increasing reliance on technology, resulting in mission creep.
Solving this problem begins with the hospital's business mission validation. What are the current and proposed business missions supported by the data center? In most cases, a single data center is required to support multiple business missions, with their own unique requirements and associated impact on systems. Understanding and validating the business mission(s) will enable the design professional to evaluate systems that can be deployed to support the mission. What tier level of reliability do we need now, and what tier are we going to need in the future?
All hospital data centers were originally created to support non-clinical functions such as payroll, insurance processing, and eventually some basic patient information. Data centers with this business mission can be supported by a Tier I or II facility. Yes, there are costs associated with an outage, but the benefit gained by spending more to increase reliability did not justify the expense.
Once a hospital makes the leap into digital imaging (PACS), digital pharmacy, clinical communication systems, and digital medical records, it's a new ball game. The data center is now linked to the well-being of the patient, and a Tier III or IV facility is a must to mitigate the risk.
Another important element of a Tier III or IV facility is "concurrent maintenance": the ability to maintain any component in the mechanical and electrical distribution system without an outage. Once the data center goes clinical, it is linked to the patient 7 x 24. There are no nights and weekends for the maintenance staff to do repairs by shutting down the systems.
The second step is programming. This includes architecture and engineering. This process matches space and systems requirements with the business mission. A programming questionnaire is used, then reviewed and addressed in a series of interactive work sessions with the data center user and facilities staff. At the completion of programming, the design team can do space test-fits and develop systems options that support the program and business mission. This is where scalability comes in. Understanding current and future needs, and also understanding how mechanical and electrical systems can be designed with expansion and growth in mind allows for better cost control. Put in now what you need now, but allow space for growth.
The evaluation of current needs is the easy part. This consists of an evaluation of the technology currently deployed and documentation of the missions currently supported. An audit would also be conducted to determine current power and cooling requirements, and the level of redundancy.
Predicting the future is more complicated. Determining the required level of reliability, or tier, is primarily dictated by the deployment of clinical systems, as previously discussed. The next step is to determine a five- and ten-year growth profile based on industry trends. How many cabinets, and at what density, the data center will need to support determines the mechanical and electrical capacity requirements.
The last component is scalability. We can predict future power and cooling requirements, but most clients do not want to pay for them up front. A well-designed solution will be able to add the needed power and cooling over time in a planned, cost-effective manner.
The implications of choosing the wrong level of reliability can be significant both in patient care (read liability), and high costs to upgrade in the future if scalability is ignored.
Once systems and space options are selected, the basis of design can be established. This process documents the systems and design intent, a critical step in explaining to third parties why certain decisions were made.
The culmination is a schematic design. We call this the "C-level" report, a document that the hospital's CEO, CFO, and CIO can review in order to understand how the team arrived at its recommendations and why the plan needs to be implemented.

This process has proved successful in meeting the needs of corporations. Take Johns Hopkins Hospital, one of the world's largest health care institutions. RTKL was asked in 2003 to develop a new mission critical data center for Johns Hopkins Hospital to address the need for consolidation, the deployment of clinical computing systems, and compaction. The solution, based on the business mission validation process, determined a one- and three-year need for a Tier III facility, and a five- and ten-year need for a significant increase in floor space and electrical capacity. In addition, the existing on-campus data center is being upgraded to serve as a disaster recovery site.
Above: Aerial photograph - Johns Hopkins Medical Center
The programming phase revealed that, in addition to the need for more systems reliability, there is a fivefold increase in anticipated power and cooling requirements over the 10-year business plan. Enter the scalable solution. Johns Hopkins Hospital needs a Tier III data center in 2005 to support 100 cabinets at 2.5 kW per cabinet, scalable by 2015 to support 250 cabinets at 5 kW per cabinet. The basis of design is a mechanical and electrical solution that meets the 2005 requirements but is expandable, without an outage, to accommodate the future needs.
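The growth profile above can be sanity-checked with simple arithmetic: total critical IT load is cabinets multiplied by kW per cabinet, and the jump from the 2005 plan to the 2015 plan is indeed the fivefold increase cited. A quick sketch:

```python
# Total critical IT load for the two planning horizons described above.
def total_load_kw(cabinets: int, kw_per_cabinet: float) -> float:
    """Critical IT load in kW: cabinet count times per-cabinet density."""
    return cabinets * kw_per_cabinet

load_2005 = total_load_kw(100, 2.5)  # 100 cabinets at 2.5 kW -> 250 kW
load_2015 = total_load_kw(250, 5.0)  # 250 cabinets at 5 kW   -> 1250 kW
print(load_2015 / load_2005)         # 5.0, the fivefold increase in the 10-year plan
```

This is the arithmetic behind sizing the day-one mechanical and electrical plant while reserving space and pathways for the future load.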
Under the direction of Mary Hayes, Johns Hopkins Hospital Director of Data Center Services, the emphasis was placed on viewing data center requirements from a business mission perspective. This allowed the team to focus the design on critical elements over the lifetime of the facility and avoid costly and unneeded data center capabilities from day one. This also eliminated significant future retrofit expenses by including components that allow for a cost-effective future expansion.
The right solution relies on communication with IT, understanding what mission you need to support, finding the right real estate, determining the best place to house the systems and facilities, and determining what mechanical and electrical systems are needed to support the mission. The result will be a data center that predicts and addresses the hospital's needs, even before they arrive.
About the authors
Peter O'Connor, RCDD, is an associate with EQ International in Chicago. EQ International, a leading Medical Technology Planning and Consulting firm, has served domestic and international healthcare providers for nearly 20 years.
R. Stephen Spinazzola, P.E., is the vice president in charge of the Applied Technology Group at RTKL, one of the world's largest architecture and engineering firms. Based in the firm's Baltimore office, the Applied Technology Group provides the integrated architecture and engineering services demanded by mission critical, technology intensive operations on an international basis. Stephen recently spoke at DatacenterDynamics' Chicago conference.
Peter and Stephen spoke at DatacenterDynamics 2006 Chicago Conference.