
The importance of reliable and fast IFS replication for Disaster Recovery

Updated: Nov 22, 2021

IBM i users need replication solutions that evolve as the platform evolves. Originally, the main focus for High Availability and Disaster Recovery applications was the replication of physical data files and related objects. However, most vendor and in-house applications have been enhanced over time to use the power of the Integrated File System (IFS) alongside those traditional data files. But IFS objects cannot be replicated in the same way as traditional objects: the old techniques just don't work.

Added to this is the high volume of IFS processing seen on most IBM i installations. More objects are in use, they are continually getting larger, and they are updated more frequently. As a result, IFS replication speed has long been a weak spot in legacy real-time replication solutions.

If you are looking for a replication solution, or if you find that your current solution's IFS support is inadequate, it is important to consider the most modern and effective applications currently available. Poor and inefficient processing can cause a noticeable IFS replication slowdown that will affect users and leave you less likely to be role-swap ready.

As an aside, the importance of role-swap readiness is often overlooked by IBM i users. Having a replication solution does not by itself mean the organization is protected. If a Role Swap, or more importantly a Failover, cannot be performed at a moment's notice, with total confidence that there will be no data loss, then the solution is effectively pointless.

Your replication solution must be built on the most modern techniques to address all parts of the IBM i. For the IFS, and in particular the volume of IFS changes that must be replicated, you will need a replication tool like Maxava HA Enterprise+ that has been designed with the IFS in mind. For example, it must be fully multi-threaded, able to process many parallel IFS streams. Ideally those processes will be dedicated to the IFS, which has unique requirements, and will run alongside multi-streamed data and object replication.
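To make the multi-streaming idea concrete, here is a minimal sketch in Python (not IBM i code, and not Maxava's implementation) of how a set of changed IFS paths can be fanned out across parallel worker streams instead of a single serial queue. All names here are illustrative assumptions.

```python
# Hypothetical sketch of multi-streamed IFS replication: several changed
# objects are in flight at once, each handled by a worker thread.
from concurrent.futures import ThreadPoolExecutor

def replicate_path(path: str) -> str:
    # Placeholder for the real work: read the changed IFS object on the
    # source system and apply it on the target system.
    return f"replicated {path}"

def replicate_parallel(changed_paths, streams=4):
    # One pool of dedicated IFS workers; results come back in input order.
    with ThreadPoolExecutor(max_workers=streams) as pool:
        return list(pool.map(replicate_path, changed_paths))

results = replicate_parallel(["/home/app/a.json", "/home/app/b.pdf"])
```

The point of the sketch is the shape of the design: a queue of changed objects serviced by several dedicated workers, so one large or slow object does not stall everything behind it.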

The IFS will contain many items that do not need replicating. For this reason you will need a replication solution that provides a variety of ways to select or omit IFS objects, either specifically by name or generically with wildcards and patterns. Maxava HA Enterprise+ does this. On the subject of managing the environment, choose a solution that offers a range of interface styles: the administrators of IFS replication may not come from an IBM i 'Green Screen' background and may prefer browser or PC based interfaces.
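The select/omit idea can be sketched with simple wildcard matching. This is an illustrative Python example only; the rule syntax and names are assumptions, not Maxava's actual configuration format.

```python
# Illustrative include/omit filter for IFS paths using wildcard patterns.
from fnmatch import fnmatch

INCLUDE = ["/home/app/*"]        # trees selected for replication
OMIT = ["*.tmp", "*/cache/*"]    # exceptions that are never replicated

def should_replicate(path: str) -> bool:
    # Omit rules win over include rules, so temporary and cache files
    # are skipped even inside an included directory tree.
    if any(fnmatch(path, pat) for pat in OMIT):
        return False
    return any(fnmatch(path, pat) for pat in INCLUDE)
```

For example, `/home/app/orders.json` would be replicated, while `/home/app/work.tmp` would be skipped by the omit rule.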

Another characteristic of legacy replication solutions that should be avoided is impact on users. A multi-stream, multi-threaded solution will obviously need processing power to keep on top of a heavy workload. For this reason, the bulk of that workload should be performed on the Target server, away from most users. This is a unique feature of Maxava HA Enterprise+: no jobs run on the production server, so its resources are not impacted.

In summary, don't accept old processing methods for new object types in your applications. Look for a replication solution that has been written, and is regularly enhanced, to process all critical object types, including the IFS, in the most efficient way possible. Choose Maxava for more efficient, faster replication of data, objects and IFS, delivered with minimum impact to your production server.


This article is written by Martin Norman, Strategic Partner Development Manager at Maxava

