Behind the Curtain

In just about every meeting or presentation we participate in, we're asked about the PINES/Evergreen servers: what they are, where they live, how many there are… basically, what is behind the green curtain? Some PINES librarians who attended the focus group meetings we held in February got a chance to tour the datacenter and see the PINES server cluster for themselves. The rest of you, unfortunately, have had to make do with whiteboard drawings and obtuse hand gestures.

The datacenter the PINES/Evergreen cluster is housed in is extremely secure. There is a very strict process we have to follow: cameras are not allowed without prior approval, a security escort is required while the pictures are being taken, and the pictures must be reviewed and released by datacenter security after the fact.

PINES/Evergreen is a medium-sized customer of the datacenter. The datacenter houses all sorts of companies and organizations, from small dot-coms, to financial institutions, to Google, which has a huge server farm housed there. Datacenter security wants to ensure that we do not take pictures of anyone else's equipment or any sensitive areas of the building.

Okay, enough jibber-jabber. Let's get to the pictures. We went through the camera/photo process earlier this week, and we have some photos to share here. GPLS currently has 3 server racks on the datacenter floor. 2 of those racks are for Evergreen/PINES, and the 3rd rack is for GPLS IT, supporting services such as email and websites.

The two racks shown in this picture are for PINES/Evergreen:


Starting from the far right, that is the "Database" cabinet. At the bottom of that cabinet is the storage array, where the PINES/Evergreen database resides. We have approximately 2 terabytes of raw disk space in the storage array. There are 4 shelves of Fibre Channel hard drives, and 2 storage controller units (they're the grey boxes with the little screens) in the middle.

Above that are 4 database servers that access the data on the storage array via fiber. Each database server is a quad-Opteron machine with 32 GB of RAM. It should go without saying that all Evergreen servers run Linux.

Above the 4 database servers is a SCSI disk array that stores various things for the development group, such as server images, software installation packages, etc.

Moving up, there are 2 “logger” servers. Logs from all of the servers in the cluster are sent and stored here. Central logging like this makes administration and troubleshooting of a large cluster easier.
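Central logging of this sort is usually done over syslog. As a sketch, here is how one application server might forward its logs to a logger box; the address is a placeholder (the post doesn't name the logger hosts or say which logging mechanism PINES uses):

```python
import logging
import logging.handlers

# Forward this server's application logs to the central logger box.
# "127.0.0.1" stands in for the real logger host, which the post
# doesn't name; 514/udp is the standard syslog port.
handler = logging.handlers.SysLogHandler(address=("127.0.0.1", 514))
handler.setFormatter(
    logging.Formatter("appserver01 %(name)s: %(levelname)s %(message)s"))

log = logging.getLogger("evergreen")
log.addHandler(handler)
log.setLevel(logging.INFO)

# This record now ends up on the central logger, not in a local file,
# so one "tail" on the logger shows the whole cluster at once.
log.info("checkout request handled")
```

With every box shipping its logs to the same pair of servers, troubleshooting means watching one place instead of logging into dozens of machines.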

Above the 2 logger servers is the storage array appliance server. Basically, all this server does is monitor and manage the storage array.

Above the storage appliance server are 2 fiber switches for the storage array. The database servers communicate with the storage array through these switches. Again, notice how everything is at least dual-redundant. If we lose a fiber switch for any reason, that switch's "twin" can handle the full load with no performance degradation. If we lose a logger server, the other can handle the full load. The same goes for the database servers and the array controllers. Further, each of these servers has redundancies within itself: they all have dual power supplies, and they all use RAID drive configurations.
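The "twin takes over" idea boils down to trying the other node when the first one fails. This is a hypothetical sketch, not how the failover here is actually wired (real Fibre Channel multipathing happens in drivers and firmware, not application code):

```python
def query_with_failover(nodes, run_query):
    """Try each redundant twin in turn; any one can carry the full load."""
    last_error = None
    for node in nodes:
        try:
            return run_query(node)
        except ConnectionError as err:
            last_error = err  # this twin is down; try the other
    raise RuntimeError("all redundant nodes failed") from last_error

# Demo with invented node names: pretend switch "fc-sw1" is down,
# so its twin "fc-sw2" answers instead.
def run_query(node):
    if node == "fc-sw1":
        raise ConnectionError(node)
    return f"result via {node}"

answer = query_with_failover(["fc-sw1", "fc-sw2"], run_query)
print(answer)  # result via fc-sw2
```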

An important point about the dual power supplies: each server is fed not only from two separate electrical circuits, but also from two separate electrical systems. At the datacenter, all electricity on the floor is produced onsite by 6 Hitec flywheel generators. I came across a good diagram and description of the generators online.

In normal operation, those flywheel generators are powered by external electrical power supplied by Georgia Power. If Georgia Power fails, the flywheels keep spinning on account of their inertia. Within 3 seconds, the datacenter's diesel engines spin up and keep the power flowing with no interruption. The datacenter has 78,000 gallons of diesel fuel stored on site, and at current load, that fuel will last the datacenter approximately 10 days.
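The quoted figures imply a burn rate, which is easy to sanity-check:

```python
# Back-of-the-envelope check on the figures quoted above.
fuel_gallons = 78_000    # diesel stored on site
runtime_days = 10        # approximate endurance at current load

gallons_per_day = fuel_gallons / runtime_days
gallons_per_hour = gallons_per_day / 24

print(gallons_per_day)          # 7800.0
print(round(gallons_per_hour))  # 325
```

So "10 days" works out to roughly 325 gallons of diesel per hour across the whole floor.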

Okay, back to the picture. I’ve talked so much, let me insert the image again so it is easier to reference:


The cabinet to the left of the database cabinet is the "application" cabinet. The largest residents of the cabinet are the application servers. There are 30 of them, starting at the bottom and going about three-quarters of the way up the cabinet. Each application server has dual Opteron processors and 4 GB of RAM. These servers are the worker bees of the cluster. They sit between the end user and the database, determining things like loan duration, whether a particular copy can be held by patron X, whether cataloger Y has permission to edit library Z's item, etc, etc. In more technical terms, this is where Jabber and Apache live.
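To give a flavor of the kind of policy decision an application server makes, here is a deliberately toy loan-duration check. None of these rules, names, or numbers are Evergreen's actual circulation logic; they're invented for illustration:

```python
# Hypothetical loan periods by item type, in days.
# These are NOT Evergreen's real rules; purely illustrative.
LOAN_DAYS = {"book": 21, "dvd": 7, "reference": 0}

def loan_duration(item_type, patron_in_good_standing):
    """Return how many days this patron may borrow this item."""
    if not patron_in_good_standing:
        return 0  # a blocked patron can't check anything out
    return LOAN_DAYS.get(item_type, 14)  # assumed default period

print(loan_duration("book", True))       # 21
print(loan_duration("reference", True))  # 0
print(loan_duration("dvd", False))       # 0
```

The real system makes decisions like this for every request, which is why there are 30 of these machines sharing the work.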

Above the application servers, you can make out the back of two ethernet switches. These switches are facing the back of the cabinet, and all of the application servers plug into them. (Again, note the dual redundancy).

Above that are 2 very special servers. One is named "t-bone", and the other is named "porkchop". These are basically the bouncers at the front door of the PINES system. In order to get into the Evergreen system from out in the world, you have to go through them. They're a combination of firewall and load balancer. For the more technical readers, I point you to the LVS (Linux Virtual Server) project.
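LVS does its balancing inside the kernel, but the round-robin idea it can use is easy to illustrate. This toy Python sketch (server names invented) just shows how incoming connections get spread across a pool:

```python
from itertools import cycle

# Toy stand-in for round-robin scheduling, one of the algorithms
# LVS offers. The app-server names are invented; a real LVS
# director dispatches packets in the kernel, not in Python.
pool = cycle(["app01", "app02", "app03"])

# Six incoming connections get spread evenly across the pool.
assignments = [next(pool) for _ in range(6)]
print(assignments)
# ['app01', 'app02', 'app03', 'app01', 'app02', 'app03']
```

Because either t-bone or porkchop can do this dispatching alone, losing one of them doesn't take the front door down.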

Above T-bone and Porkchop is a grey blank panel. That’s it. Nothing special.

Above the blank panel is a small server that provides services such as VPN and access to our desktop computers that live in our office space at the datacenter.

Above that is the monitoring server. All he does all day long is ask servers how they are feeling. If he detects a problem, he sends emails and pages to folks.
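That poll-and-alert loop can be sketched in a few lines. The host names, probe, and alert function below are all invented for illustration; the post doesn't say what monitoring software is actually used:

```python
def check_host(host, probe):
    """Ask one server how it is feeling; return (host, ok)."""
    try:
        return host, probe(host)
    except Exception:
        return host, False  # an unreachable host counts as sick

def sweep(hosts, probe, alert):
    """Poll every host; call alert() for each one that looks sick."""
    for host, ok in (check_host(h, probe) for h in hosts):
        if not ok:
            alert(f"{host} is not responding")

# Demo with a fake probe: pretend "db02" is down.
down = {"db02"}
alerts = []
sweep(["db01", "db02", "app01"],
      probe=lambda h: h not in down,
      alert=alerts.append)
print(alerts)  # ['db02 is not responding']
```

In production the `alert` callback would send the emails and pages mentioned above instead of appending to a list.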

And, above that, at the very top of the rack, is your run-of-the-mill 110V power strip.

Let’s look at another picture.


The application server cabinet (the one we just went through) is on the right of this image, and the GPLS IT rack is to the left. At the bottom left of the image, you can see a tile that has holes in it. The floor is raised 4 feet off the concrete foundation. Cold air is forced underneath the floor by air handlers and comes back up through the perforated tiles.

At the very bottom left corner of the image you can see the corner of a brown cardboard box. This is where we keep the secret Death Star plans.

Anyway, back to the GPLS IT cabinet. It houses various general-use servers. We host email services and web hosting for libraries here. This rack also houses the email list server, etc. Nothing too terribly exciting, so next image:


This is looking up, above the 3 cabinets. A couple of things to note here. The long yellow thing is a tray that the fiber lies in. The fiber connection comes out of the tray, runs down inside the yellow tube, and plugs into our networking equipment in the middle cabinet. The black tube is for normal copper ethernet connections. The two green wires are for grounding.

Next image.


Just a picture with the monitor-keyboard combo pulled out of the application cabinet. We can pull up any server via KVM on this monitor-keyboard.

So, that is about it for the 10-cent tour. Questions? Comments? Please ask below…

Update: it’s been pointed out to me that I should mention that Andrew Crane of GPLS took the photos.

One thought on "Behind the Curtain"

  • Ross

    OMG! Do U hav ne \/\/@r3Z on that!?!

    Seriously, though, it makes the frankencluster project that we’re working on at Tech (the intention is that it won’t cost anything to build — built completely from parts that were intended for the surplus warehouse) seem as slick and as powerful as a TRS-80 cluster.

Comments are closed.