Economies of Scale is a concept from economics describing a condition in which expanding an activity decreases the cost per unit at the margin. These savings come from efficiencies that are realized as the activity grows in size: in other words, doing or making more with fewer resources spent per unit. For instance, a company could expand production using a machine it has already purchased by adding a second shift and hiring a machine operator to keep churning out widgets.
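The arithmetic behind that widget example can be made concrete. The following sketch uses hypothetical numbers (a one-time fixed cost and a constant marginal cost per widget; these figures are illustrative, not drawn from any real library or vendor) to show how the average cost per unit falls as output grows:

```python
# Illustrative sketch of economies of scale with hypothetical numbers:
# a fixed cost (e.g. the machine) is spread over more units, so the
# average cost per unit falls even though each additional unit still
# carries the same marginal cost (labor, materials).

def average_cost_per_unit(units, fixed_cost=10_000.0, marginal_cost=2.0):
    """Total cost divided by the number of units produced."""
    return (fixed_cost + marginal_cost * units) / units

for units in (100, 1_000, 10_000):
    print(f"{units:>6} units -> ${average_cost_per_unit(units):.2f} per unit")
# 100 units cost $102.00 each; 10,000 units cost only $3.00 each.
```

The fixed cost dominates at small volumes and nearly vanishes per unit at large ones, which is the effect the rest of this post observes in Evergreen consortia.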
In this post we discuss the scale economies we are observing in the Evergreen community, but before we do, it’s important to note that some activities get less efficient as they get larger, resulting in Diseconomies of Scale instead. The Wikipedia article on Diseconomies of Scale summarizes several reasons costs can increase as an enterprise or activity grows. It is hard to read such a litany of causes without reflecting on the experience of vendors in the library world that have bought out or merged with competitors.
There are several contributors to Evergreen’s scale economies.
1. Design architecture. Mike’s Generations post discusses the various generations of ILS software. Each generation reflects the understanding of its day: how software is best developed, how databases should be designed, and what the underlying computer technology of the time made possible. As time passes, new design concepts emerge and computer equipment improves. In time, what was new becomes obsolete or encrusted with ad hoc patches that fix problems in old code.
Evergreen is based on the most current architecture, built at a time when databases are designed with an increasingly sophisticated understanding of how to create large databases that are robust and relatively inexpensive compared to earlier designs. We believe that the design architecture of Evergreen will scale to handle all the library catalogs and all the transactions of all the libraries in the U.S….at least. A bit of work would have to be done, but, as we said, we believe the foundational design is in place.
2. Consortial design. As Mike and I discussed in Consortial Library Systems, Evergreen was the first ILS designed to be consortial. It was an important breakthrough in library software.
Consortial design in library software has to solve more problems than handling a large online catalog with many transactions. The catalog must be usable, it must be capable of rapid updates, and the software must allow for flexibility in configurations to fit local conditions. With the consortial design, library consortia can scale in another sense: by adding more libraries. Library users like having access to the resulting longer tail of materials and in the Google era, libraries cannot afford the retro option of maintaining information silos.
3. Finances. Libraries in a number of the Evergreen consortia have found that running a large Evergreen consortium is less expensive than paying for separate, smaller implementations from proprietary vendors, oftentimes paying individual vendors separately for ILSs at different libraries. As Evergreen implementations increase in size, they become cheaper per unit.
This aspect of Evergreen is built on the two previous points: you have to be able to handle the larger databases and you have to have flexibility of configuration to scale in this financial sense. This flexibility of configuration is important. There are Evergreen consortia which have a public union catalog which allows users to borrow resources from other systems in the consortium without mediation from library staff. There are also consortia which have Evergreen configured so that library users cannot do unmediated borrowing from other members of the consortia. Configuration is a local option and we have an unadvertised special: the open source nature of Evergreen allows the users greater control of the library’s software if they choose to exercise that control.
The fact that there are cost savings in an Evergreen consortium can be a mixed blessing. The libraries in at least one Evergreen consortium saved a great deal of money by using Evergreen in their new consortium rather than paying their legacy vendors for separate pieces of proprietary software; however, the popularity of resource sharing caused the costs of transporting materials to waiting patrons to absorb the savings. Users of the consortium got better service and were happier with their libraries; in addition, library materials are being used more now than before.
There were no net savings in this Evergreen consortium but, in the end, for about the same dollars, service increased dramatically. There was more bang for the buck.
4. Development also scales. With open source development, the code is open for anyone to examine and tweak. As the Evergreen community has expanded, more people are developing code. Moreover, these are not just random people but people who work in libraries and who have practical ideas for improving the code. The number of such people potentially increases with every new library or consortium using open source software. Proprietary vendors have a harder time with software development, both in affording a comparable number of developers and in matching the practicality that comes from having developers who are library employees using the software in day-to-day library operations.
We also discussed a development multiplier in our post about the Evergreen Superconsortium, where we described a new type of development that is quickly becoming a mainstream activity within the Evergreen community: consortia cooperating in developing Evergreen capabilities. This kind of development not only increases Evergreen’s functionality but, because more than one consortium is involved, it is also cheaper, and the result is more robust because the specifications draw on a broader range of needs.
Open source development relies on scratching an itch. That is, if a function does not exist and a library wants it, that library will pay for the development of the function and give it to the community…just as the forerunners in the community have given other functions. Once a capability is developed, everyone can have it from that point on at no charge. The same functionality is not sold over and over again once it exists.
Thanks to Joe Betz for his clarifying suggestions on the draft’s language.