
IBM i (AS/400) HA/DR Evaluation Checklist

This checklist helps organisations running IBM i evaluate high availability (HA) and disaster recovery (DR) solutions in a consistent, evidence-based way. Each criterion is measurable and can be validated through testing, observation, or documentation.


Data Integrity and Consistency


  • Can the solution guarantee transactional consistency at recovery time?

  • Is replication aware of IBM i journals, commitment control, and object dependencies?

  • Can recovered systems be proven to represent a known, consistent point in time?

  • Are integrity checks continuous rather than periodic?


Evidence to request: Recovery test results, integrity validation reports, documented recovery points.
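
Where a vendor claims continuous integrity checking, it is reasonable to ask how equivalence between production and replica is actually established. The sketch below is illustrative only, not any vendor's method: it compares per-object content digests between the two systems. fetch_rows is a hypothetical accessor (in practice an ODBC/JDBC query against each system, frozen at the declared recovery point), and the system names are placeholders.

    import hashlib

    def table_digest(rows):
        """Order-independent digest of a table: hash each row, XOR the results.

        XOR-combining row hashes means the digest does not depend on the
        order in which rows are read, only on their content."""
        digest = 0
        for row in rows:
            h = hashlib.sha256(repr(row).encode()).digest()
            digest ^= int.from_bytes(h, "big")
        return digest

    def verify_replica(fetch_rows, tables):
        """Compare per-table digests between production and replica.

        fetch_rows(system, table) is a hypothetical callable returning the
        rows of `table` on `system` as an iterable of tuples."""
        mismatches = []
        for table in tables:
            prod = table_digest(fetch_rows("PROD", table))
            repl = table_digest(fetch_rows("HA", table))
            if prod != repl:
                mismatches.append(table)
        return mismatches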


Recovery Time and Recovery Point Objectives


  • Can RTO and RPO be defined in business terms, not just technical metrics?

  • Are RTO and RPO achievable during peak processing periods?

  • Does latency measurement reflect end-to-end recoverability, not just transport delay?

  • Can RPO be demonstrated rather than assumed?


Evidence to request: Latency trends, recovery test logs, peak load behavior analysis.
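
RPO can be demonstrated rather than assumed by sampling, at the same instant, the newest journal entry timestamp on production and the newest applied entry timestamp on the target: the gap is the data that would be lost if production failed at that moment. A minimal sketch of that measurement, assuming the timestamps are collected by some external probe:

    from datetime import datetime, timedelta

    def measured_rpo(source_latest: datetime, target_applied: datetime) -> timedelta:
        """Observed RPO at one sample point: how far the target lags the source.

        source_latest  -- timestamp of the newest journal entry on production
        target_applied -- timestamp of the newest entry applied on the target
        """
        return max(source_latest - target_applied, timedelta(0))

    def worst_case_rpo(samples):
        """Reduce a series of (source_latest, target_applied) samples, e.g.
        collected once a minute across a peak period, to the worst observed
        lag -- the number to compare against the stated RPO."""
        return max(measured_rpo(s, t) for s, t in samples)

Sampling across peak processing periods, not quiet ones, is what ties this back to the checklist item above: an RPO that holds only off-peak has not been demonstrated.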


Recoverability and Testing


  • Can recovery procedures be executed without disrupting production?

  • Are role swaps planned, repeatable, and reversible?

  • How often is recovery tested in real conditions?

  • Are recovery outcomes predictable and documented?


Evidence to request: Test schedules, test outcomes, documented procedures.
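
Repeatable, reversible role swaps are easier to evidence when every step in the runbook is paired with its inverse, so a failed drill unwinds cleanly and leaves systems as found. The sketch below shows the pattern only; the step callables are placeholders for real runbook actions, not any vendor's commands.

    def run_swap(steps):
        """Execute (name, do, undo) step triples in order; on failure,
        reverse the completed steps so the drill is non-destructive.

        steps -- list of (name, do_fn, undo_fn) tuples, where do_fn and
        undo_fn are callables wrapping the real runbook actions."""
        done = []
        try:
            for name, do, undo in steps:
                print(f"executing: {name}")
                do()
                done.append((name, undo))
        except Exception as exc:
            print(f"step failed ({exc}); reversing {len(done)} completed steps")
            for name, undo in reversed(done):
                print(f"reversing: {name}")
                undo()
            raise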


Scalability Under Load


  • Does replication keep pace as transaction volumes increase?

  • Can apply processing scale without becoming a bottleneck?

  • How does the system behave during spikes and backlog catch-up?

  • Does replication overhead remain stable on the production system as load grows?


Evidence to request: Receiver throughput data, peak period behavior, backlog recovery metrics.
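
Backlog catch-up behaviour reduces to simple arithmetic worth checking against the vendor's own figures: the replica only converges if its apply rate exceeds the arrival rate, and catch-up time is the backlog divided by that surplus. A small sketch:

    def catchup_time(backlog_entries: float, apply_rate: float, arrival_rate: float) -> float:
        """Seconds to clear a replication backlog.

        backlog_entries -- journal entries queued but not yet applied
        apply_rate      -- entries/second the target can apply
        arrival_rate    -- entries/second production keeps generating

        If the apply rate does not exceed the arrival rate the backlog
        never clears -- the scenario this checklist section is probing for.
        """
        surplus = apply_rate - arrival_rate
        if surplus <= 0:
            return float("inf")
        return backlog_entries / surplus

    # Placeholder figures: 2M entries behind, applying 10k/s while 7k/s
    # keep arriving -> 2_000_000 / 3_000 ≈ 667 s, roughly 11 minutes.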


IBM i Operational Manageability


  • Is replication health clearly visible without constant intervention?

  • Are alerts actionable rather than noisy?

  • Can IBM i teams manage HA without reliance on external infrastructure teams?

  • Does operational effort remain stable as environments grow?


Evidence to request: Operational dashboards, alerting examples, staffing impact over time.
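
"Actionable rather than noisy" usually means alerting on sustained breaches, not instantaneous blips. One common way to express that is a threshold that must hold across a full observation window before an alert fires; the sketch below illustrates the idea, with the threshold and window sizes as placeholder values.

    from collections import deque

    class SustainedLagAlert:
        """Fire only when replication lag stays above a threshold for a
        full observation window, suppressing one-sample spikes."""

        def __init__(self, threshold_s=300, window=5):
            self.threshold_s = threshold_s        # placeholder: 5-minute lag
            self.samples = deque(maxlen=window)   # placeholder: 5 consecutive samples

        def observe(self, lag_seconds: float) -> bool:
            """Record one lag sample; return True if an alert should fire."""
            self.samples.append(lag_seconds)
            return (len(self.samples) == self.samples.maxlen
                    and all(s > self.threshold_s for s in self.samples))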


Platform and Environment Flexibility


  • Does the solution support multiple IBM i releases concurrently?

  • Can environments mix releases during upgrades?

  • Are hardware refreshes treated as routine technical events?

  • Is the architecture resilient to environmental change?


Evidence to request: Supported release matrices, upgrade case studies.
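
During a rolling upgrade the practical question is whether every replication relationship stays inside the vendor's supported release span. A toy validation of that, with the matrix contents entirely hypothetical -- substitute the vendor's published support matrix before relying on it:

    # Hypothetical support matrix: each source release -> target releases
    # the replication product claims to support concurrently.
    SUPPORTED = {
        "7.3": {"7.3", "7.4"},
        "7.4": {"7.3", "7.4", "7.5"},
        "7.5": {"7.4", "7.5"},
    }

    def validate_pairs(pairs):
        """Return the (source, target) release pairs the matrix rejects.

        pairs -- iterable of (source_release, target_release) strings,
        one per replication relationship in the environment."""
        return [(s, t) for s, t in pairs if t not in SUPPORTED.get(s, set())]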


Commercial Transparency


  • Is pricing predictable over the life of the solution?

  • Do hardware upgrades trigger licence or maintenance increases?

  • Are processor-based pricing tiers clearly defined and stable?

  • Can long-term total cost of ownership (TCO) be modelled accurately?


Evidence to request: Pricing documentation, upgrade pricing policies.
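
The last two questions can be pressure-tested with a simple model: project licence, maintenance, and any upgrade-triggered uplifts over the evaluation horizon and see whether the vendor's stated terms produce a stable cost curve. All figures below are placeholders, not real pricing.

    def project_tco(years, licence, maint_rate, upgrade_years, upgrade_uplift):
        """Year-by-year cumulative cost under stated pricing terms.

        licence        -- initial licence cost (placeholder figure)
        maint_rate     -- annual maintenance as a fraction of licence
        upgrade_years  -- years in which a hardware refresh occurs
        upgrade_uplift -- fractional licence increase each refresh triggers
                          (0.0 if the vendor's terms say refreshes are free)
        """
        total = licence
        costs = []
        for year in range(1, years + 1):
            if year in upgrade_years and upgrade_uplift > 0:
                uplift_cost = licence * upgrade_uplift  # one-time tier re-rating
                licence += uplift_cost                  # maintenance now tracks the new base
                total += uplift_cost
            total += licence * maint_rate
            costs.append((year, round(total, 2)))
        return costs

    # Placeholder comparison: identical refresh plan, two pricing policies.
    stable = project_tco(7, 100_000, 0.20, {3, 6}, 0.0)
    tiered = project_tco(7, 100_000, 0.20, {3, 6}, 0.25)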

 
 