The Library of Alexandria and Disaster Recovery

Image: “The Burning of the Library at Alexandria in 391 AD” by Ambrose Dudley (fl. 1920s), The Bridgeman Art Library, Object 357910. Licensed under public domain via Wikimedia Commons.

The Royal Library of Alexandria in Egypt was one of the largest and most influential libraries of the ancient world. Dedicated to the nine Muses, goddesses of the arts, it functioned as a major center of scholarship from its construction in the 3rd century BC until the Roman conquest of Egypt in 30 BC. With collections of works, meeting rooms, lecture halls and gardens, this magnificent library was part of a larger research institution called the Musaeum of Alexandria, where many of the most famous thinkers of the ancient world studied.

Tragically, the library is perhaps best known for its destruction, which echoes through the timeline of human history as a symbol of the loss of cultural knowledge. Though accounts of the destruction of the Library of Alexandria differ, after the main library was destroyed, ancient scholars turned to the “daughter library” in a temple known as the Serapeum, in a different part of the city.

In a previous disaster recovery blog, Simon O’Sullivan highlighted that scholars at the University of California, Berkeley described a single gigabyte of data as equivalent to a pickup truck full of books, and that even a small IBM i site moves, on average, 3 gigabytes of journal transactions in any 24-hour period. If we doubled the number of scrolls estimated to have been housed in the great Library of Alexandria, we would have approximately the quantity of data a large Maxava customer shifts in a day. Those 200 gigabytes would require 200 trucks, or countless chariots and scribes, to transport and replicate.
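For the sake of illustration, here is that arithmetic as a few lines of Python. The one-truck-per-gigabyte figure comes from the analogy above, the daily volumes are the ones quoted in this post, and everything else is a rough assumption rather than a measured number.

```python
# Rough, illustrative arithmetic for the pickup-truck analogy.
GB_PER_TRUCK = 1             # the Berkeley analogy: ~1 GB per truckload of books
SMALL_SITE_GB_PER_DAY = 3    # a small IBM i site's daily journal traffic
LARGE_SITE_GB_PER_DAY = 200  # a large Maxava customer's daily volume

print(SMALL_SITE_GB_PER_DAY // GB_PER_TRUCK, "truckloads/day for a small site")
print(LARGE_SITE_GB_PER_DAY // GB_PER_TRUCK, "truckloads/day for a large site")
print(round(LARGE_SITE_GB_PER_DAY / 24, 1), "GB of changes to move every hour")
```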

Moving the Library Today

Credit: Flickr

Transferring the equivalent of 200 trucks full of books, and ensuring that they all arrive in the right order and undamaged, is no mean feat.

Imagine what it would take to replicate the Library of Alexandria from one site to an identical one. Even with today’s modern transport systems, it would be a major undertaking. Moving the library would require the perfect synchronization of personnel on the ground to locate the books, write a copy of each one, securely package the copies, load the trucks, and drive them from site A to site B without a single truck or book being lost on the way. On arrival, the receiving team of librarians would need to be extremely efficient, unpacking the books in the right order and moving each to its correct shelf relative to the other books being concurrently shelved around it.

To make things even worse for this team of librarians, the changes to their library are not limited to new books arriving or old books being thrown out; they also involve changes to the books themselves. These changes could be as significant as adding new chapters or as minute as correcting spelling mistakes between editions.
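On IBM i, each of these kinds of change arrives at the backup as a record-level journal entry. The sketch below is a loose, hypothetical mapping of the library analogy onto the standard record-level entry types (journal code “R”); it is an illustration, not Maxava’s actual data model.

```python
from enum import Enum

class LibraryChange(Enum):
    """Hypothetical mapping of library events onto IBM i
    record-level journal entry types (journal code 'R')."""
    NEW_BOOK = "PT"        # record added
    BOOK_DISCARDED = "DL"  # record deleted
    BOOK_EDITED = "UP"     # record updated: a new chapter or a fixed typo

for change in LibraryChange:
    print(change.name, "->", change.value)
```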

The data contained within the books is most at risk while in transit, which includes any time the books are not sitting in their correct place on the shelves.

You’re in charge

Take a moment to imagine you are the operations manager for the replication of the Library of Alexandria. How would you structure the process to ensure that it is carried out efficiently, so that books spend minimal time in transit, thereby reducing the risk of data loss?

A number of things assist this process, including an exclusive highway for the trucks, although if the drivers know where they are going this becomes less important. The key issue is how the data is managed by the teams at both the primary and backup ends of the pipeline. To deal with this workload, much as a real library has different sections and librarians, the backup system has different areas, with apply streams acting as the librarians tasked with updating the stored data.
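As a rough sketch of that idea, consider routing each incoming change to the apply stream that “owns” its object, just as a book is handed to the librarian responsible for its section. The hash-based routing and the names below are illustrative assumptions, not how Maxava or any other product necessarily implements it.

```python
import hashlib

def route_to_apply_stream(object_name: str, num_streams: int) -> int:
    """Pick the apply stream 'librarian' that owns this object.

    Routing by object name keeps every change to one file in a single
    stream, so its changes are applied in the order they were journaled.
    """
    digest = hashlib.sha256(object_name.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_streams

# Every change to MYLIB/ORDERS always lands on the same stream:
print(route_to_apply_stream("MYLIB/ORDERS", num_streams=12))
```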

In essence, this is what Maxava and other HA/DR solutions do with your primary data. They log changes to the primary library, copy those changes, and move them to the backup system, where they are used to update the books already sitting on the shelf, ready to be used should a disaster occur.
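Below is a deliberately simplified sketch of that log-ship-apply flow, using a queue to stand in for the journal. All names are invented for illustration; a real product reads journal receivers, batches entries, honours commitment control, and handles failures.

```python
import queue
import threading

journal = queue.Queue()  # stands in for the journal shipped off the primary
replica = {}             # the backup "shelves"

def capture(change):
    """Primary side: log the change and ship it to the backup."""
    journal.put(change)

def apply_worker():
    """Backup side: take each shipped change and update the replica."""
    while True:
        change = journal.get()
        replica[change["object"]] = change["data"]  # keep the shelf current
        journal.task_done()

threading.Thread(target=apply_worker, daemon=True).start()
capture({"object": "MYLIB/ORDERS", "data": "row 42 updated"})
journal.join()  # wait until the librarian has shelved everything
print(replica)
```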

The differences on the primary production side have been discussed in previous posts; how the data is managed on arrival at the backup system, however, also differs between solutions. Whereas traditional HA/DR solutions typically offer a maximum of six apply streams to manage the incoming data, Maxava allows a practically unlimited number, meaning it is not only highly flexible but can cater for even the largest IBM i environments, now and in the future.

Looking to the future

Maxava and IBM i didn’t exist in 30 BC, and it is apparent that the educated manpower required to replicate the library and create a backup was either not available or not prioritized by the leaders of the time. Considering the Royal Library of Alexandria and its lost treasures is a humbling and saddening experience, but it is possible to learn from it. Thanks to modern technology such as the IBM i platform and Maxava, even the largest volumes of data can be securely transferred and saved, minimizing the impact that a disaster may have upon your organization.

We couldn’t save the library, but if you would like to know more about real-time replication for IBM i and how Maxava can help defend your business data, please contact us and one of our expert team members will get back to you.



Hugh is a recent graduate of the University of Canterbury and holds a Master’s degree in Commerce, as well as undergraduate degrees in International Business, Marketing, Strategy and Entrepreneurship, and a degree in Performance Violin. Outside of his studies, Hugh has won both national business and sporting competitions while running his own start-up companies. For more information on Hugh, please check out his LinkedIn.
