Redp 5688
Jon Herd
Daniel Danner
Axel Koester
Redpaper
IBM Redbooks
June 2023
REDP-5688-00
Note: Before using this information and the product it supports, read the information in “Notices” on page v.
This edition applies to Version 2, Release 5, Modification 2 of IBM Storage Fusion HCI System and IBM
Storage Fusion.
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .v
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vi
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Authors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Summary of changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
June 2023, First Edition (minor update) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Online resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Help from IBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
This information was developed for products and services offered in the US. This material might be available
from IBM in other languages. However, you may be required to own a copy of the product or product version in
that language in order to access it.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user’s responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, MD-NC119, Armonk, NY 10504-1785, US
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you provide in any way it believes appropriate without
incurring any obligation to you.
The performance data and client examples cited are presented for illustrative purposes only. Actual
performance results may vary depending on specific configurations and operating conditions.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
Statements regarding IBM’s future direction or intent are subject to change or withdrawal without notice, and
represent goals and objectives only.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to actual people or business enterprises is entirely
coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs. The sample programs are
provided “AS IS”, without warranty of any kind. IBM shall not be liable for any damages arising out of your use
of the sample programs.
The following terms are trademarks or registered trademarks of International Business Machines Corporation,
and might also be trademarks or registered trademarks in other countries.
IBM®
IBM Cloud®
IBM Cloud Pak®
IBM Cloud Satellite®
IBM Spectrum®
IBM Spectrum Fusion™
Passport Advantage®
Redbooks®
Redbooks (logo)®
Satellite™
Spectrum Fusion™
The registered trademark Linux® is used pursuant to a sublicense from the Linux Foundation, the exclusive
licensee of Linus Torvalds, owner of the mark on a worldwide basis.
Microsoft and the Windows logo are trademarks of Microsoft Corporation in the United States, other
countries, or both.
Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its
affiliates.
Red Hat, Ansible, Ceph, and OpenShift are trademarks or registered trademarks of Red Hat, Inc. or its
subsidiaries in the United States and other countries.
VMware and the VMware logo are registered trademarks or trademarks of VMware, Inc. or its subsidiaries in
the United States and/or other jurisdictions.
Other company, product, or service names may be trademarks or service marks of others.
This IBM® Redbooks® publication offers a short overview of IBM's integrated environment for
container workloads, IBM Storage Fusion. The product comes in two variants: IBM Storage
Fusion HCI System, which includes all required hardware, and IBM Storage Fusion SDS
(software-only) for deployment in public or private clouds.
Note: This Redpaper has been updated for the 2.5.1 release of IBM Storage Fusion
and the 2.5.2 release of IBM Storage Fusion HCI System.
IBM Storage Fusion 2.5.2 includes IBM Storage Fusion Data Foundation 4.12. Fusion Data
Foundation 4.12 and Red Hat OpenShift Data Foundation 4.12 both provide the same
data service, and OpenShift Data Foundation is the Red Hat brand of the IBM Storage
Fusion Data Foundation offering. Fusion Data Foundation and OpenShift Data Foundation
functions are the same: the only difference is the digital signature. References and links to
OpenShift Data Foundation remain valid. Fusion Data Foundation 4.12 documentation is
integrated within IBM Storage Fusion 2.5.2 documentation.
Note: IBM Spectrum® Fusion HCI and IBM Spectrum Fusion™ have become IBM Storage
Fusion HCI System and IBM Storage Fusion. This Redpaper edition uses the new IBM
Storage brand names introduced in IBM Storage Fusion 2.5.1.
IBM Spectrum Discover functionality is now available as an integrated service within IBM
Storage Fusion called Data Cataloging.
See Evolving the IBM Storage Portfolio Brand Identity and Strategy to learn more about
how IBM Storage Fusion HCI System and IBM Storage Fusion are key to the IBM Storage
Portfolio.
Chapter 1, “Why containers” on page 1 answers general WHY questions: Why are more
and more workloads deployed as containers? Why are containers beneficial for cyber
security? Why should I care about new development and deployment models involving fewer
staff? Why should I care about containers when VMs seem just fine?
Chapter 2, “Why IBM Storage Fusion” on page 11 focuses on WHY IBM Storage Fusion:
What are the benefits of running containers in a turnkey Red Hat OpenShift cluster within IBM
Storage Fusion? How can the integrated environment help reduce the effort required for
deployment, high availability design, backup implementation and cyber resilience planning?
Chapter 3, “Flavors of IBM Storage Fusion” on page 25 describes flavors of IBM Storage
Fusion implementations, major differences between HCI System (hardware) and SDS
(software) variants, and which choices to make to right-size the product for current needs
while retaining the ability to scale out at a future stage.
Jon Herd is a Senior Executive Advocate working for the IBM Technology Lifecycle Services (TLS)
EMEA Client Care team in Germany. He covers the United Kingdom, Ireland, and Europe,
advising customers on the portfolio of IBM storage products, including the IBM Storage
Fusion range. Jon has been with IBM for more than 45 years, and has held various technical
roles, including Europe, Middle East, and Africa (EMEA) level support on mainframe servers
and technical education development. He has written many IBM Redbooks publications about
IBM Storage and FlashSystem products and is an IBM Redbooks Platinum level author. He
holds IBM certifications in IBM Product Services L3 Thought Leader level, and IBM Technical
Specialist L1 experienced level. He is also a certified Chartered Member of the British
Computer Society (MBCS - CITP), a Certified Member of the Institution of Engineering and
Technology (MIET) and a Certified Technical Specialist of the Open Group.
Daniel Danner is the Lead Architect for IBM Storage Fusion in the D/A/CH region. Before he
joined IBM, he gathered 20 years of experience working in enterprise IT environments
dealing with mission-critical workloads for a large IBM account. Daniel is recognized for his
ability to combine hands-on experience with a customer perspective, and for explaining
complex circumstances in a consumable manner. Besides his work on IBM Storage Fusion, he leads
the IBM internal Ansible Automation Community.
Axel Koester, PhD, is Chief Technologist for the EMEA Storage Competence
Center (ESCC) in Frankfurt, Germany. Besides keeping up with the most recent storage
technologies, he frequently speaks at public conferences on technology outlook
topics and their future implications for IT businesses. Currently he also devotes time to
improving Storage performance models for simulation/prediction tools, including models for
IBM Storage Fusion HCI System. Axel is a current member of the board and former chairman
of the IBM TEC Think Tank in D/A/CH. Prior to joining IBM in 2000, Axel helped build the
Large Hadron Collider project at CERN, Geneva, and worked as researcher for the
Fraunhofer Institutes, after having worked as an object-oriented programming pioneer in the
privately held company Gesytec mbH. Axel holds a PhD in electrical engineering and
computer science from the RWTH Aachen University.
Larry Coyne
IBM Redbooks, Tucson Center
Josh Blumert
IBM Global Sales
Chrisilia Bristow, Matt Divito, Lee Helgeson, Khanh Ngo, Tom O’Brien, Fernando de la Tejera
Ruiz, David Wright
IBM Systems
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our papers to be as helpful as possible. Send us your comments about this paper or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an email to:
[email protected]
Mail your comments to:
IBM Corporation, IBM Redbooks
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Summary of changes
This section describes the technical changes made in this edition of the paper and in previous
editions. This edition might also include minor corrections and editorial changes that are not
identified.
Summary of Changes
for IBM Storage Fusion Product Guide
as created or updated on June 22, 2023.
Changed information
Updated branding information for IBM Storage Fusion Data Foundation (FDF) and Red
Hat OpenShift Data Foundation (ODF). See the IBM Storage Fusion Data Foundation
section in “Scaling Storage for Applications” on page 7 for additional information.
Updated storage product names in Figure 3-4 on page 30.
The metaphor of “pets vs. cattle” (commonly attributed to Bill Baker, then of Microsoft) is well
known in that context: To make automation easier and automatic procedures less diverse, do
not treat computers as individual devices, do not give them individual names, do not manage
their IP addresses, do not manually assign workloads, and do not make machines “opaque”
just to isolate dependencies.
Do you think that virtualization achieves that? Think again. Virtualization improves the
efficient use of resources, confines dependencies, and makes workloads easily movable. But:
Each VM is effectively another individual machine, and their count keeps growing. Each VM
demands (at least some) individual care and effort. Hordes of pets (Figure 1-1).
Can we skip the concept of machines in IT altogether? Yes: This is sometimes referred to
as serverless operations, even though there is still real hardware involved. Managing “cattle”
means more than giving these machines numbered names: The idea of containerization is to
get rid of the machine concept as individual entity. Individual worker nodes or containers do
not need personal attention. Their instantiation, their workload assignment, their termination,
their whole lifecycle is fully automated and driven by short-term demand.
Likewise, security in individual pods or containers won't need personal attention. This is
handled at a higher abstraction level. And nobody looks after disaster resilience in individual
containers. This, too, is usually handled at a higher level.
Eliminating the effort spent on monitoring and managing individual entities leaves us more
time to “sharpen the axe”. The goal of Red Hat OpenShift inside IBM Storage Fusion is not
merely to ease the management of individual containers, but to actually make us forget that
they exist. In contrast, hypervisor environments usually provide tools to manage many pets -
but each of them requires individual care. See Figure 1-2 on page 31.
So, are VMs totally eliminated in a container environment? No: VMs can be very useful to set
up prototypes, to run quick deployment tests, or to add flexibility to control plane nodes. But
for the workload-carrying production worker nodes, VM constructs are obsolete. IBM Storage
Fusion is not built on top of hypervisor technology, even though it can operate
VMs alongside containers. Refer to 3.4.7, “VMware versus bare metal” on page 35.
We are so used to seamless app updates on our mobile devices that we keep asking
ourselves: what is holding larger IT solutions back from doing the same? The answer is often “weak
dependency management” and “monolithic architecture”. Monoliths, usually large
conglomerates of compiled software, can only be rolled out in one piece. All-or-nothing.
Figure 1-3 Hardware estimation for a planned Cloud Pak workload (from ibmsalesconfigurator.com)
Details on configuring IBM Storage Fusion in the StorM tool can be found at 3.4.6,
“Configuring the IBM Storage Fusion HCI System with IBM Storage Modeller” on page 34.
The problem here was weak dependency management and slow update policies: In order to
know which systems/instances need the patch, their reliance on Log4j must be documented.
While easily discoverable in the Log4j case, dependencies will be harder to detect in others -
especially when documentation quality depends on human meticulousness. In a container
environment, this dependency is coded as part of the construction recipe of our application
suite and therefore always up to date.
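Because a container's dependencies are declared in its build recipe, checking exposure to a vulnerability like the Log4j one becomes a mechanical scan rather than a documentation exercise. A minimal sketch of that idea (the manifest format, image names, and version numbers here are hypothetical, for illustration only):

```python
def parse_version(v):
    """Turn a version string such as '2.14.1' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def find_vulnerable(manifests, package="log4j-core", fixed_in="2.17.1"):
    """Return the images whose declared dependencies pin an affected version.

    `manifests` maps an image name to its declared {package: version}
    dependencies, i.e. the information a container build recipe makes
    explicit and keeps up to date.
    """
    fixed = parse_version(fixed_in)
    return sorted(
        image
        for image, deps in manifests.items()
        if package in deps and parse_version(deps[package]) < fixed
    )
```

Running this over every build recipe in a registry immediately yields the list of images that need a rebuild, which is exactly the question that was so hard to answer for undocumented VM estates.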
Cyber resiliency, on the other hand, can be improved by automating a recovery process and
testing it regularly, for example, an application restart in a failover location that has access to a copy of
the most recent data. Testing it regularly is the limiting element here, comparable to sharpening
the axe when you should be chopping wood.
1.5 Zero-trust
On top of typical security procedures, zero-trust means that no “truth” is taken for granted
without proof, and no proof may exceed a certain age.
Since distributed applications can include many services, it makes sense to design a mutual
authentication scheme, especially so when some of these services are externalized. Between
authenticated services, communication can be encrypted in a way that only the intended
receiver can decode the content. See Figure 1-4.
Figure 1-4 Microservice mesh with direct P2P communication (left) and with “zero trust” mutual
authentication (right)
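Mutual authentication of this kind is usually delegated to a service mesh rather than coded by hand, but the underlying TLS requirement can be sketched with Python's standard library (a sketch only; a real mesh additionally issues and rotates short-lived certificates automatically, which is what keeps proofs from exceeding a certain age):

```python
import ssl

def mutual_tls_server_context(ca_file=None):
    """Build a server-side TLS context that refuses unauthenticated peers.

    Zero trust in practice: the service proves its own identity with its
    certificate AND demands a valid certificate from every caller.
    """
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.verify_mode = ssl.CERT_REQUIRED  # a client certificate is mandatory
    if ca_file:
        # Trust anchors for validating peer certificates, e.g. the mesh CA
        ctx.load_verify_locations(cafile=ca_file)
    return ctx
```

Traffic over such a context is encrypted so that only the intended receiver can decode it, as described above.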
In fact, this automation is not scripted as a sequence of actions, but as a series of outcomes:
An application architect could define that “no instance should continually consume more than
50% or less than 10% CPU” and the workload scheduler component underneath IBM Storage
Fusion (Red Hat OpenShift / Kubernetes) makes it happen, preferring the least utilized
hardware for spreading the workload. Additional rules for throughput or latency optimization
are possible. See Table 1-1.
Table 1-1 Manual VM management compared with automated container management

Manual VM management:
When an application exceeds the resources of a VM, manually increase the assigned
resources if appropriate.
When all VMs on a system exceed that system's capabilities, manually migrate the
least important VMs onto new hosts.

Automated container management:
When a container exceeds 50% CPU utilization (configurable), the platform starts an
additional instance of it.
When an instance consumes less than 10% (configurable), the platform destroys it unless
it is the last instance.
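The declared outcomes above can be reduced to a small reconciliation function that the platform runs continuously (the thresholds and the single-metric logic are illustrative, not actual Red Hat OpenShift defaults):

```python
def reconcile(cpu_utilization, instances, upper=0.50, lower=0.10):
    """Decide a scaling action from observed per-instance CPU utilization.

    Mirrors the declared outcome "no instance should continually consume
    more than 50% or less than 10% CPU": the platform converges toward
    that state instead of an operator resizing or migrating VMs by hand.
    """
    if cpu_utilization > upper:
        return instances + 1      # start an additional instance
    if cpu_utilization < lower and instances > 1:
        return instances - 1      # destroy one, but never the last instance
    return instances              # already within the declared bounds
```

The essential difference from the VM column is that nobody calls this function: the scheduler evaluates the declared state in a loop and acts on the least utilized hardware.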
Programmers of multi-instance services do not need to reinvent proprietary service APIs and
protocols susceptible to new security flaws: open-source solutions exist for every flavor of
workload distribution. Most popular are publisher/subscriber schemes like the one implemented by
Apache Kafka, a core component in many scalable apps.
IBM Storage Fusion Data Foundation: IBM Storage Fusion 2.5.2 includes IBM Storage
Fusion Data Foundation 4.12. Fusion Data Foundation 4.12 and Red Hat OpenShift Data
Foundation 4.12 both provide the same data service, and OpenShift Data Foundation is the
Red Hat brand of the IBM Storage Fusion Data Foundation offering. Fusion Data Foundation
and OpenShift Data Foundation functions are the same: the only difference is the digital
signature. References and links to OpenShift Data Foundation remain valid.
Fusion Data Foundation 4.12 documentation is integrated within IBM Storage Fusion 2.5.2
documentation. IBM Storage Fusion Data Foundation is based on the open-source projects
Ceph, Rook, and NooBaa, and provides a variety of access protocols while being able to use
a wide range of physical storage sources. Redundancy is achieved by storing three copies;
that is, the erasure coding feature of Ceph is not supported by IBM Storage Fusion Data
Foundation.
ECE: Erasure Coding Edition of IBM Storage Scale, delivered fully integrated with IBM
Storage Fusion HCI System hyperconverged infrastructure. Local node SSDs are combined
into a dispersed, “write-many” persistent storage layer. In addition, IBM Storage Scale ECE
supports multiple access protocols, Active File Management (AFM), rule-based
data placement and relocation, geographically dispersed caching, metro replication, and
much more. IBM Storage Scale aims at bottleneck-free parallel scalability by design. Data
redundancy is achieved by erasure coding with high 64% usable-to-raw storage capacity
efficiency when protecting against double failures. This is approximately twice the capacity
efficiency that can be achieved with three copies of the data (33%).
Scaling Backup
IBM Storage Fusion provides a platform-independent backup based on IBM Storage Protect
Plus, where data can be protected on air-gapped storage services and/or devices outside of
the production cluster. This makes sure that data which is being restored after a major
incident is not dependent on the original source of the data, as that may have been
compromised during the incident.
While all backup components inside IBM Storage Fusion scale on demand, externalized
components (like moving copies onto an air-gapped tape library or object storage) must be
managed and scaled individually, since Red Hat OpenShift is not in charge. On the other
hand, previously existing compatible backup environments and procedures can be re-used or
integrated.
Companies using IBM Storage Scale to its full extent typically refer to it as an enterprise data
highway, data bus, or hybrid data bridge. It shortens time-to-market by removing the need to
implement complex and custom data sharing mechanisms between new and existing data
sources. Instead, IBM Storage Scale creates a service bridge which easily and transparently
spans the distance using access credentials and existing storage protocols such as NFS and
S3 Object.
IBM Storage Fusion HCI System supports various interaction mechanisms with other
clusters, applications, or other data sources:
Cross-cluster mounts to other IBM Storage Fusion or IBM Storage Scale clusters
(roadmap)
Data source for distant caches (AFM - Active File Management source)
Data cache from a remote AFM source
Scheduled asynchronous replication, RSYNC
Synchronous replication, Metro-DR, supporting disaster recovery in a different site
Import from an IBM Cloud Object Storage system while sharing its namespace.
IBM Storage Fusion SDS takes this software stack to custom-built Red Hat OpenShift
clusters in the private and public cloud.
A 32-core worker node licensed with bare metal subscriptions reduces the Red Hat OpenShift
license cost by 81%. With a 64-core node, the savings grow to 91%. See
Figure 2-1.
This significantly lowers the TCO of the container platform. The street price of the smallest
HCI System configuration with three 32-core worker nodes (including all licenses) is lower
than the price of the software licenses alone for a hypervisor-based cluster, even before
factoring the hardware and hypervisor costs into the equation.
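The savings percentages quoted above are consistent with a simple pricing model in which one bare metal subscription covers the whole node and is priced like a small fixed number of core-pair subscriptions; the 3:1 price ratio below is an assumption chosen for illustration, not published IBM or Red Hat pricing:

```python
def bare_metal_savings(cores, bm_price_in_core_pairs=3.0):
    """Fraction saved versus licensing every 2-core pair individually.

    Assumes (hypothetically) that a single bare metal subscription for
    the node costs the equivalent of `bm_price_in_core_pairs` core-pair
    subscriptions, regardless of the node's core count.
    """
    core_pairs = cores // 2
    return 1 - bm_price_in_core_pairs / core_pairs

# Under this assumption: 32 cores -> ~81% savings, 64 cores -> ~91% savings
```

The model also makes the scaling behavior visible: the more cores per node, the larger the relative advantage of bare metal licensing.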
In addition, a hypervisor consumes CPU and memory resources on the host system that
cannot be used by Red Hat OpenShift or the containerized workloads.
The hypervisor also introduces higher latencies for network and disk I/O that passes through its
virtualization layer, which in some cases can impact application performance. See the
comparison in Figure 2-2.
They should be resilient to restarts, and they should be able to scale out by running multiple
instances. Web servers and the famous LAMP stack are considered low-hanging fruit
here: the migration is trivial.
But mission critical monolithic workloads like RDBMS databases can just as well benefit from
the container deployment and operating model. These workloads require the same level of
performance, security, resilience, and data protection as they were previously getting in VMs
or on bare metal servers. These requirements on the container platform necessitate a
well-thought-through architecture and operating model.
IBM Storage Fusion HCI System was built from the ground up for mission-critical applications
such as the IBM Cloud Paks. Its global data platform is based on a high-performance
parallel file system that has a dedicated 200 Gbit network connection per node in order to
avoid bottlenecks.
While the data is encrypted at rest and in flight, it's also resilient to double failures, which are
rectified by the erasure coding algorithm that builds parities spread across multiple nodes.
Big enterprises are facing different challenges, as multiple teams are involved. Infrastructure,
Storage, Backup, Monitoring, and Operations; each team has its own concept and favorites,
which brings multiple products and vendors to the table. The result is a heterogeneous, complex
software stack with interoperability dependencies and support matrices from
multiple vendors.
This software stack needs to be monitored and updated, which makes “day 2” operations
difficult. Finally, the deployment and configuration of the stack should be automated with
“recipe-style” infrastructure automation in order to avoid mistakes and improve repeatability.
Building a production-ready container platform this way results in a long-running project
that binds many resources.
IBM Storage Fusion HCI System is called a “turnkey appliance” because it delivers
a production-ready Red Hat OpenShift cluster in just one day. The appliance is delivered as a
rack with all components mounted, cabled, and tested. Once connected to power and
network, the fully automated installation starts.
Day 2 operations, including updates and support activities, are made easy because the whole
software stack is supported from a single source:
Scalability
A container platform is often built and sized for its initial use case. Once the container
adoption in an enterprise gains momentum, new applications with new requirements get
onboarded on the platform and an expansion is needed.
One of the big advantages of a container platform is that compute resources can be easily
scaled out by adding more worker nodes to the cluster.
Block and file storage are again divided into “container ready storage” and “container native
storage”. Each kind has its own advantages and disadvantages:
This usually limits the scalability to a “scale-up” approach, as the cost of accessing storage in
traditional arrays can vary from pod to pod depending on network and other limitations. Also,
the deployment of traditional storage expansions is usually a large investment that requires
specialized personnel rather than ongoing maintenance-type activities.
IBM Storage Fusion HCI System’s global data platform is “container native storage”. The
HCI System is built from 6 to 20 nodes. Each node can hold up to 10 NVMe drives.
The storage capacity can be increased by either adding NVMe drives to existing nodes or by
adding storage-rich nodes to the HCI cluster. All capacity increases are non-disruptive
operations that are started from the IBM Storage Fusion UI.
Resiliency
Traditional storage arrays provide redundancy by creating a Redundant Array of Independent
Drives (RAID) across their internal drives. Depending on the RAID level (RAID 5, 6, or 10), some
data is used to store the fault tolerance information called parity. But the RAID controller that
is calculating the parity is a single hardware entity that cannot run as distributed software on a
software-defined scale-out storage cluster.
Therefore, most software-defined storage products for containers rely on 2-way or 3-way
replication. This means the complete data is copied and written 2 or 3 times in different failure
zones.
2-way replication can only accept one failure and is therefore not suitable for most
production environments.
3-way replication can accept two failures but only provides 33% data efficiency.
Dispersed erasure coding is a more recent technology to make distributed storage highly
available while providing high data efficiency and short rebuild times. The erasure coding
algorithm slices data into a configurable number of parts, builds the parity information, and
distributes data and parity across the available nodes.
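The principle can be illustrated with the simplest possible code, a single XOR parity slice. IBM Storage Scale ECE uses stronger Reed-Solomon-style codes that tolerate two or three simultaneous failures, but the reconstruction idea is the same:

```python
def xor_parity(slices):
    """Compute a parity slice as the byte-wise XOR of all data slices."""
    parity = bytearray(len(slices[0]))
    for s in slices:
        for i, b in enumerate(s):
            parity[i] ^= b
    return bytes(parity)

def reconstruct(surviving_slices, parity):
    """Rebuild the single missing data slice from the survivors and parity.

    XOR is its own inverse, so XOR-ing the surviving slices with the
    parity cancels them out and leaves the lost slice.
    """
    return xor_parity(surviving_slices + [parity])
```

If slices and parity are spread across different nodes, the loss of any one node is repairable from the others, without storing full copies of the data.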
IBM Storage Fusion HCI System uses erasure coding and can be configured to use two
different algorithm variants resulting in different fault tolerances and data efficiencies.
4+2p: Requires 6 servers, exposes 67% of the raw capacity as usable storage, and can
withstand 2 simultaneous failures.
8+3p: Requires 11 servers, exposes approximately 73% of the raw capacity as usable storage, and can
withstand 3 simultaneous failures.
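The capacity efficiency of each variant follows directly from its data/parity split (data parts divided by total parts), and the same formula makes the comparison with plain replication explicit:

```python
def erasure_coding_efficiency(data_parts, parity_parts):
    """Usable-to-raw capacity ratio of a data+parity erasure code."""
    return data_parts / (data_parts + parity_parts)

def replication_efficiency(copies):
    """Usable-to-raw capacity ratio when every block is stored `copies` times."""
    return 1 / copies

# 4+2p: tolerates 2 failures at 4/6  ~ 67% usable capacity
# 8+3p: tolerates 3 failures at 8/11 ~ 73% usable capacity
# 3-way replication: tolerates 2 failures at only 1/3 ~ 33% usable capacity
```

Note that 4+2p and 3-way replication offer the same fault tolerance (two failures), but erasure coding roughly doubles the usable share of the raw capacity.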
This choice must be made with good foresight as it cannot be easily undone. Enterprises will
usually have policies of their own when it comes to error containment vs. efficiency.
Monolithic applications behave differently: they rely on single instances and the availability of
attached persistent storage, which is usually provided at the hardware level.
In case of a complete node, rack or availability zone failure, workloads can be restarted on the
remaining worker nodes in the Red Hat OpenShift cluster. This process is a lot easier than
restarting a monolithic application in a standby site.
Container platforms like Red Hat OpenShift are resilient by design. Nevertheless, tolerating
the failure of an entire Availability Zone requires careful planning. The Red Hat OpenShift
control plane is built from three master nodes that represent the quorum and tolerate a
maximum network latency of 5ms between nodes.
Due to the Red Hat OpenShift network latency requirement of max 5ms, the three AZs would
normally be located in the same campus or building. If a larger physical separation of two AZs
is required, stretching the Red Hat OpenShift cluster is not an option. Two independent Red
Hat OpenShift clusters need to be set up.
IBM Storage Fusion HCI System’s global data platform includes the “Metro-DR” feature,
which provides a synchronous storage replication between two AZs with a maximum network
latency of 40ms which is equivalent to ~150 km signal distance. See Figure 2-5.
Figure 2-5 IBM Storage Fusion HCI System’s global data platform includes Metro-DR
Two IBM Storage Fusion HCI Systems form a Metro-DR cluster with a tiebreaker in a third
location. The network connection to the tiebreaker VM in location C can tolerate higher network
latencies.
IBM Storage Fusion HCI System with Metro-DR protects monolithic applications that would
otherwise be forbidden in a containerized environment without AZ fault tolerance.
More details can be found in the IBM Redpaper publication IBM Spectrum Scale Security at:
https://www.redbooks.ibm.com/abstracts/redp5426.html
At its core, the Red Hat OpenShift Container Platform relies on the Kubernetes project to
provide the engine for orchestrating containers across many nodes in scalable data centers.
Red Hat OpenShift Container Platform is designed to lock down Kubernetes security and
integrate the platform with a variety of extended components.
Therefore, data protection is a fundamental part of IBM Storage Fusion. Containers are built
from container images that are stored in a container registry. It is important to have a reliable
backup of the container registry, but there is no need to back up running container images
(again).
What is important though is to back up the information that makes container instances
unique, like passwords, service names or the persistent data of the container. This
information is stored in resources like secrets, services, routes, and a lot more.
Figure 2-6 on page 19 gives an overview of namespace and cluster resources. IBM Storage
Fusion creates a backup of the cluster and namespace resources.
The persistent data of a container is stored in separate volumes called “persistent volume
claims” (PVCs), named for the way they are requested from the control plane (as a “claim”).
IBM Storage Fusion can take snapshots of PVCs for point-in-time backups. Snapshots are
always stored on the same file system as the primary PVC data. How often these snapshots
are taken and how long they remain available is defined in policies.
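The retention side of such a policy can be sketched as a small pruning function. The field names and day-based granularity are assumptions for illustration; the real IBM Storage Fusion policy engine also covers schedules and copy targets:

```python
from datetime import datetime, timedelta

def snapshots_to_prune(snapshot_times, retention_days, now):
    """Return snapshot timestamps older than the policy's retention window.

    A minimal sketch of policy-driven retention, assuming a simple
    age-based policy; not IBM Storage Fusion's actual policy logic."""
    cutoff = now - timedelta(days=retention_days)
    return [t for t in snapshot_times if t < cutoff]
```

With a 7-day retention policy evaluated on June 15, a snapshot from June 1 would be pruned while one from June 14 is kept.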
To protect snapshots from file system corruption, malicious destruction, or physical
damage, they can be copied periodically to an Object Storage service or to an external
backup server. Backup and restore are managed from within the IBM Storage Fusion user
interface. See Figure 2-7.
Figure 2-7 Backup and restore are managed within the IBM Storage Fusion user interface
Hardware monitoring
Today's servers and switches have built-in hardware redundancy, such as multiple fans and
power supplies. But even the best redundancy does not help if failed components are not
identified and replaced as soon as possible to minimize the risk of double failures.
Red Hat OpenShift does not include hardware monitoring of the underlying infrastructure
out of the box. In particular, when Red Hat OpenShift runs on top of a hypervisor, hardware
monitoring needs to be implemented at the hypervisor layer or below. If Red Hat OpenShift is
deployed on bare metal, infrastructure monitoring and alerting must be implemented by the
user with third-party tools.
IBM Storage Fusion HCI System includes full-stack hardware monitoring and call home
functionality. Events are raised so that identified failed components can be replaced as soon
as possible. This functionality is only available with the HCI version, not with IBM Storage
Fusion SDS.
The Infrastructure Dashboard provides an overview of the complete hardware and software
stack (see Figure 2-8). This holistic view eases troubleshooting and allows diving deeper into
each component with a single click.
IBM Storage Fusion SDS includes a UI with the same look and feel but without the
infrastructure dashboard.
The UI also contains an “Applications” section in which details about all Red Hat OpenShift
projects can be found. Backup policies can be easily assigned to applications and an
overview of the storage consumption and status is provided.
As illustrated in the architecture diagram shown in Figure 2-9, all software components are
fully container-native and run on top of Red Hat OpenShift.
Figure 2-9 IBM Storage Fusion HCI System Software Architecture Diagram
The IBM Storage Fusion software stack is deployed as a Red Hat OpenShift operator and
updated through the Red Hat OpenShift Operator Lifecycle Manager (OLM).
IBM Storage Fusion adds many custom resource definitions (CRDs) to the Red Hat
OpenShift cluster. This enables IBM Storage Fusion to store its configuration and current
states in custom resources (CRs) in the Red Hat OpenShift cluster configuration database
(etcd).
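What such a custom resource looks like can be sketched as the Python dict a Kubernetes client would serialize to YAML or JSON. The API group, kind, and spec fields below are hypothetical illustrations, not IBM Storage Fusion’s actual CRD schema:

```python
# Hypothetical custom resource (CR). The group "example.fusion.ibm.com",
# the kind, and all spec fields are invented for illustration only;
# they are not IBM Storage Fusion's real CRD schema.
backup_policy_cr = {
    "apiVersion": "example.fusion.ibm.com/v1",
    "kind": "BackupPolicy",
    "metadata": {"name": "daily-app-backup", "namespace": "my-app"},
    "spec": {"schedule": "0 2 * * *", "retentionDays": 30},
}

def has_cr_shape(cr: dict) -> bool:
    """Minimal structural check every CR shares before the API server
    stores it (and its state) in etcd."""
    return all(k in cr for k in ("apiVersion", "kind", "metadata", "spec"))
```

Because configuration and state live in CRs like this, they are stored, versioned, and watched through the same etcd-backed API as every other cluster resource.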
The IBM Storage Fusion HCI System software includes compute and network operators that
are not available for custom-built Red Hat OpenShift clusters on bare metal or a hypervisor.
These operators take care of the hardware management within the IBM Storage Fusion HCI
System and transmit details like firmware levels, temperatures, hardware component failures,
network VLANs, packet counters per network port, and more to Red Hat OpenShift.
This “infrastructure as code” can be used to ease troubleshooting and allow a DevOps
operating model.
The full software stack, including the firmware of servers and switches, the hypervisor
software, and the CSI drivers for storage systems, needs to be updated as well. See
Figure 2-10. Before the stack is updated, the interoperability of the new versions of the
components must be verified.
For each IBM Storage Fusion release, IBM delivers a supported and pre-tested software
bundle including:
IBM Storage Fusion operator
Data Protection software
Global Data Platform (SDS)
Firmware updates for servers and switches (HCI only)
Figure 2-10 IBM Storage Fusion Bare metal Red Hat OpenShift Stack
While Red Hat OpenShift and the IBM Storage Fusion operator are updated from the
underlying Red Hat OpenShift UI, all other components are updated from the IBM Storage
Fusion UI.
A custom-built Red Hat OpenShift stack on top of a Hypervisor includes more layers from
different vendors. The software needs to be obtained from many sources, each coming with
its own installation instructions and interoperability guidelines.
Figure 2-11 Custom-built Red Hat OpenShift stack on top of a Hypervisor includes more layers from
different vendors adding complexity
Remember: VMs are still machines, even though they are virtualized from the underlying
hardware. Therefore, they need to be managed as such. In contrast, container platforms do
not consider machines, but rather services, templates, recipes, and resource claims. These
resources can include instances of virtual machines.
The more we modernize our applications, the more we will be able to write “infrastructure as
code”: namely, recipes and services/resource claims. The classic concept of a virtual
machine that conceals all mutual dependencies and security aspects is not adequate for
modern “agile” or “cloud-native” programming and DevOps-style deployment.
Figure 3-1 shows the typical construction of a three tier architecture system.
Currently, one of the most common applications of HCI technology is Virtual Desktop
Infrastructure (VDI). IBM’s vision of HCI goes beyond simplified management and legacy
VDI workloads. A modern HCI should be optimized for container deployment in hybrid cloud
and edge environments. It should support enterprise features such as seamless backup and
restore, disaster recovery, enterprise security, and high availability. A modern HCI should
not be an infrastructure silo; it should be able to create a unified storage platform that allows
seamless application mobility and data sharing from clouds to the edge.
IBM Storage Fusion is built from proven, enterprise-hardened technologies like IBM Storage
Scale and IBM Spectrum Discover (now an integrated IBM Storage Fusion service called
Data Cataloging) and provides a bare metal foundation for the rapid deployment and easy
management of any RHOCP environment. We've brought the entirety of our rich heritage and
vast IP portfolio in supercomputing, AI/ML, and industrial-strength data resilience to bear on
the design of Fusion. The result is a performance- and cost-optimized platform that unlocks
the full potential of RHOCP - at any scale.
Figure 3-3 shows the IBM Storage Fusion software layout overview.
IBM Storage Fusion and the inherent multi-cloud deployment model of IBM Storage Fusion
Data Foundation, in conjunction with Red Hat OpenShift itself, are key to addressing and
enabling the agility, simplicity, and productivity improvements that create efficiencies for
customers.
The IBM Storage Fusion Data Foundation documentation is split into sections for each type of
environment describing how to deploy available storage in each one. At the edge, there is a
new option for very small deployments of Single Node Red Hat OpenShift (SNO) matched by
IBM Storage Fusion Data Foundation with a more specialized deployment of Local Volume
Manager (LVM) instead of Ceph, and MultiCloud Object Gateway.
In public cloud environments, there are managed service offerings for storage on Red Hat
OpenShift Service on AWS (ROSA) and Red Hat OpenShift Dedicated. In all environment
types, there are self-managed options in which the client can choose their preferred
infrastructure and storage model.
This gives the IBM Storage Fusion HCI System the following characteristics:
Integrated HCI appliance for both containers and VMs using Red Hat OpenShift
Highly scalable containerized file system with erasure coding using IBM Storage Scale ECE
Data resilience for local and remote backup and recovery using IBM Storage Protect Plus
Simple installation and maintenance of hardware and software through the custom-built UI
Global data platform stretching from public clouds to any on-prem or edge locations
IBM Cloud Satellite® and Red Hat Advanced Cluster Management (ACM) native integration
Ready for AI applications with optional GPUs and global data for better AI
Starts small with 6 servers and scales up to 20 (with HPC GPU enhanced options).
In summary, these are the highlights of the IBM Storage Fusion HCI System appliance
and the advantages of this hyper-converged approach using the outlined building blocks.
Figure 3-4 shows the IBM Storage Fusion HCI System software overview.
Notes: The server in rack position seven (RU7) is the provisioner node, connected to
the optional KVM feature. On older systems, this KVM was a standard feature, however
on current system orders, the KVM feature is optional.
This KVM is used by the IBM SSR to configure the initial system with networking and the
base Red Hat OS, before moving on to the OCP and IBM Storage Fusion software
installation. The servers in positions two, three, and four (RU2, RU3, and RU4) become the
Red Hat OpenShift control plane.
On systems without the KVM feature, the SSR will use a service laptop to connect into
the node in rack position seven (RU7) and then perform the initial install phase.
IBM Storage Fusion HCI System 2.5.2 also offers the following new enhancements:
Install IBM Storage Fusion HCI System components in your own rack
– IBM Storage Fusion HCI System components can be ordered for installation into
client-supplied racks. When ordering, clients have the option to order an IBM Storage
Fusion HCI System fully racked from the IBM factory or as components that can be
installed into a client-provided rack.
– For more information see here: Installing the HCI System
High Availability Cluster Support
– IBM Storage Fusion HCI Systems can be configured into a three-rack high availability
(HA) cluster. This configuration is commonly referred to as a 3 availability zone (3AZ)
deployment, a strategy useful for ensuring application availability.
– You can connect three IBM Storage Fusion HCI System racks to create a single large
Red Hat OpenShift Container Platform and IBM Spectrum Scale ECE cluster.
– The components in each of the three racks are the same as a single rack.
– For more information see here: High Availability Cluster
Expansion Rack Support
– IBM Storage Fusion HCI System can add new IBM racks to expand your Red Hat
OpenShift Container Platform cluster.
– For more information see here: Adding Racks
More detailed information is available here, where the full new feature set of the IBM Storage
Fusion HCI System is explained.
Note: Because of power requirements, only systems with 3-phase power or 60A
single-phase power are eligible for the 1024GB upgrade. Systems using 30A
single-phase power cannot support this memory upgrade.
3.4.6 Configuring the IBM Storage Fusion HCI System with IBM Storage
Modeller
The IBM Storage Fusion HCI System appliance is configured using a tool called IBM Storage
Modeller (StorM). IBM Storage Modeller is an IBM sales process tool that is designed to
model storage solutions for customer applications. This tool is for use by IBM and Business
model storage solutions for customer applications. This tool is for use by IBM and Business
Partner technical sellers.
Storage Modeller is developed by IBM, with functionality delivered over time. Storage
Modeller is designed to integrate with upstream and downstream sales tools, with a focus
on optimizing seller workflow and reducing the time from Opportunity ID to actual sale of a
solution. Storage Modeller is a cloud-based tool accessed through a web browser-based user
interface.
Figure 3-6 shows the Storage Modeller process for integration with IBM sales tools.
As the StorM tool is used and the configuration is created, the UI shows a graphical
build of the system. More information on the StorM tool and a virtual tour can be found here:
Figure 3-7 on page 35 shows the IBM StorM graphical output representation of the HCI
appliance configuration.
The left-hand section of the UI allows the user to select the number and type of servers, and
the storage per server: both the system memory and the number of NVMe drives (a minimum
of two per server up to a maximum of ten per server).
Note: The number of NVMe drives must be symmetrical across all six base servers
and any additional servers that are added. The StorM tool checks this rule.
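The symmetry rule can be expressed as a small validation function. The function name and its structure are ours for illustration; the bounds mirror the rule stated in the note above:

```python
def valid_nvme_layout(drives_per_server):
    """Sketch of the NVMe placement rule described above: at least the
    six base servers, every server carrying the same number of drives,
    between 2 and 10 per server. Illustrative only, not StorM's code."""
    counts = set(drives_per_server)
    return (len(drives_per_server) >= 6      # six base servers minimum
            and len(counts) == 1             # symmetrical across servers
            and 2 <= counts.pop() <= 10)     # per-server drive bounds
```

For instance, six servers with four drives each pass the check, while adding a seventh server with six drives would fail it until the other servers are brought to the same count.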
The right-hand section of the UI shows the license information, raw versus usable storage,
and the selected hardware resources with the relevant IBM feature codes.
For more information on the details of each type of installation available, refer to the IBM
Documentation pages for IBM Storage Fusion here:
Figure 3-8 shows the variations on the IBM Storage Fusion HCI System and Software only
versions and which platform each will run on.
The components within each IBM Storage Fusion offering are as follows:
In summary, IBM Storage Fusion software allows you to solve the same problems, but
meets you on your platform of choice.
One very common client scenario that we see today is IBM Storage Fusion Data Foundation
combined with Cloud Pak for Data, in particular IBM Storage Fusion providing the
backup/restore capability for a complex Cloud Pak installation.
There is also a section on what is included with the various releases of the IBM Storage
Fusion software.
Operators are pieces of software that ease the operational complexity of running another
piece of software. They act like an extension of the software vendor's engineering team,
watching over a Kubernetes environment (such as Red Hat OpenShift Container Platform)
and using its current state to make decisions in real time.
These operators allow for simple management of the IBM Storage Fusion HCI System
appliance and allow IBM to build the appliance management software using the latest
software development practices, including CI/CD pipelines and online software delivery
through entitled registries for simplified upgrades. IBM Storage Fusion HCI System also
supports a fully air-gapped or disconnected installation.
4.2 Upgrading
IBM Storage Fusion HCI System incorporates an upgrade operator that orchestrates the
upgrade of firmware on the hardware and of the application software: Red Hat OpenShift,
the Appliance Management Stack, IBM Storage Scale ECE, and IBM Storage Protect Plus.
4.3 Management UI
IBM Storage Fusion provides a management (appliance) UI for managing the HCI System
appliance. This UI can monitor and control every area of the appliance. The appliance UI
operator provides a single pane of glass to manage IBM Storage Fusion HCI System and is
integrated with the Single Sign-On provided by Red Hat OpenShift. The appliance UI
operators monitor the health of the system for problem resolution, provide performance
monitoring capabilities, and present capacity utilization so that you know when to add more
drives for storage capacity or more physical servers for compute capacity.
Note: IBM Spectrum Discover functionality is now available as an integrated service within
IBM Storage Fusion called Data Cataloging.
If you want to use Data Cataloging, you must take into consideration the following system
requirements: IBM Storage Fusion system requirements
After the system requirements for Data Cataloging are met, you can install the Data
Cataloging service.
Note: Data Cataloging is applicable only for IBM Storage Fusion and not for IBM Storage
Fusion HCI System. It is enabled starting with IBM Storage Fusion 2.5.1 software.
After Data Cataloging is installed you will need to check IBM Storage Software requirements
and then configure the data source connections to meet your needs such as the following
examples:
IBM Storage Scale data source connections
You can create an IBM Storage Scale data source connection, scan a data source,
and manually initiate a scan.
IBM Storage Archive data source connections
You can define tags and policies in Data Cataloging based on values that are
derived from IBM Storage Archive metadata to help in searching and categorizing
files.
IBM Cloud Object Storage data source connection
You can create the IBM Cloud Object Storage (COS) connection and initiate a
scan.
Data Cataloging and S3 object storage data source connections
Use this information to understand how Data Cataloging works with S3-compliant
object store.
Scanning an Elastic Storage Server data source connection
Use the Data Cataloging GUI to scan an Elastic Storage Server (ESS) data
connection.
Once the IBM Storage Fusion HCI System is configured as an IBM Cloud Satellite location,
from the IBM Cloud console you can then deploy IBM Cloud Services such as Red Hat
OpenShift clusters and other IBM Cloud Satellite enabled services. IBM Storage Fusion HCI
System in this case serves as a turnkey IBM Cloud Satellite location with integrated
infrastructure, capacity management, system monitoring, call home, upgrades, and more.
Note: Both IBM Cloud Satellite and Red Hat Advanced Cluster Management can be
used together to build a hybrid cloud solution with single pane of glass management.
IBM Cloud Satellite is used to deploy IBM Cloud services to on-prem and edge
locations with IBM Storage Fusion HCI System. Red Hat Advanced Cluster
Management is used to manage hybrid cloud resources and applications across
multiple IBM Storage Fusion HCI Systems, public clouds, and more.
IBM Storage Protect Plus can take snapshots of applications and save them locally or save
them off the rack to an external backup location, such as an IBM Storage Protect Plus vSnap
server, that is connected to on-prem or public cloud storage.
4.10 Call Home for IBM Storage Fusion HCI System
IBM Storage Fusion HCI System has a fully integrated Call Home function which covers both
the IBM Storage Fusion HCI System hardware infrastructure and software with automatic log
collection and upload for rapid problem resolution. The call home function will take any
information passed to it from the appliance UI that is deemed to be critical enough to require
further analysis by IBM Support.
This call home function connects securely to IBM and raises a Cognitive Support Platform
(CSP) ticket with IBM Support. The CSP ticket is then passed to the relevant IBM support
specialist to analyze and progress the call. Events are natively integrated with Red Hat
OpenShift and can call home to IBM in the event of a physical infrastructure or software
problem, with automatic log collection and upload.
IBM Storage Fusion HCI System supports Active File Management (AFM) through the use of
a pair of specialized AFM servers, ordered as a feature code.
The competitive differentiator is not just that IBM has fused a storage platform with storage
services; with the storage platform, we have leveraged years of investment in a global
parallel file system with global data access and Active File Management (AFM).
Applications simply access a local file or directory, while the data may actually be stored in a
public cloud bucket or be legacy data on a remote system thousands of miles away. Some of
the benefits of AFM are listed here:
You can use AFM to create associations between IBM Storage Fusion clusters.
With AFM, you can implement a single namespace view across sites around the world by
making your global namespace truly global.
By using AFM, you can build a common namespace across locations, and automate the
flow of file data. You can duplicate data for disaster recovery purposes without suffering
from wide area network (WAN) latencies.
Individual files in the AFM filesets can be compressed. Compressing files saves disk
space.
Namespace replication with AFM occurs asynchronously so that applications can operate
continuously on an AFM fileset without network bandwidth constraints.
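The asynchronous character of that last point can be illustrated with a toy model: local writes complete immediately and are queued for later transfer, so applications are never throttled by WAN latency. The class and its methods are our invention; AFM’s real queueing and fileset semantics are far more sophisticated:

```python
from collections import deque

class AsyncReplicator:
    """Toy model of asynchronous namespace replication. Local writes
    return at once; a background drain pushes queued changes to the
    remote site. Illustrative only, not AFM's actual implementation."""

    def __init__(self):
        self.local = {}
        self.remote = {}
        self.pending = deque()

    def write(self, path, data):
        self.local[path] = data    # completes without waiting on the WAN
        self.pending.append(path)

    def drain(self):
        while self.pending:        # background transfer to the remote site
            path = self.pending.popleft()
            self.remote[path] = self.local[path]
```

A write followed by a drain leaves both sites identical, while between the two calls the application has already moved on.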
All these capabilities put together provide not just resilient data, but also location-aware and
location-independent access to data across geographic distances, multiple clusters, and
multiple availability zones. Distance is very important and is a requirement for many
enterprise workloads. Data resiliency has also become increasingly important as cyber
resiliency is now part of board-level discussions.
Furthermore, there are lifecycles in which those models are assessed and improved on a
continuous basis, as follows:
The training datasets and production models all need easy-to-manage storage
provisioning to support the pipelines that are now prevalent in both DevOps and AI
continuous delivery environments.
Convergence of applications and infrastructure is threefold: computing power for AI/ML
with graphics processing unit (GPU) assist, and co-location of data for training and
high-performing production models.
IBM Storage Fusion HCI System can be used to optimize AI workflows with modern container
applications. It does this through the use of a pair of specialized GPU servers ordered as a
feature code.
If a customer is using NVIDIA for AI, IBM Storage Scale can preprocess or process data at
the edge with NVIDIA A100 GPUs and the NVIDIA Red Hat Operator and tools, and then
transparently leverage that data in the cloud or directly on a larger, more compute-intensive
NVIDIA DGX A100 system with the IBM ESS 3000 or ESS 3200 high-performance storage
system.
In summary, the optional pair of GPU-accelerated nodes for AI workloads leverages the
latest NVIDIA A100 GPU and can also leverage the latest Red Hat OpenShift Operator for
AI from NVIDIA.
This will make creating and running AI workloads on IBM Storage Fusion HCI System even
easier and shows how this can be integrated into an overall AI enhanced workflow.
These two independent clusters run active-active, so the container applications can be moved
in case of failure. As a Red Hat OpenShift admin, you can deploy IBM Storage Fusion HCI
System in data center site 1 and data center site 2, resulting in Red Hat OpenShift Container
Platform clusters that run at each site. Two clusters are configured such that they share a
synchronously replicated storage layer, allowing for data mirroring with zero Recovery Point
Objective (RPO) between the two clusters.
IBM Storage Fusion ensures data availability between the two clusters (both the data and the
PVs/PVCs). However, you are responsible for managing application deployments between
the clusters. Here, both Red Hat OpenShift Container Platform clusters host active workloads
at the same time, rather than requiring one Red Hat OpenShift Container Platform cluster to
be a cold standby.
The Metro-DR setup includes two sets of networks, namely internal storage network and
Red Hat OpenShift network. In Metro-DR, the internal storage network of one site needs to
connect to the other site and tiebreaker, that is, externalize your storage network. Similarly,
the Red Hat OpenShift network of one site needs to connect to the other site.
Here are the details of each release and links to the IBM Documentation Center for more
detailed information.
Note: The 2.5.1 release is only for IBM Storage Fusion and not for IBM Storage Fusion
HCI System. The IBM Storage Fusion HCI System features are picked up in release 2.5.2.
Listed here are some of the new features in IBM Storage Fusion 2.5.1.
Dynamic storage provisioning using IBM Storage Fusion Data Foundation
– You can configure dynamic storage devices from the IBM Storage Fusion UI for IBM
Cloud and VMware platforms. Support is also available to configure IBM Storage
Fusion Data Foundation encryption from the IBM Storage Fusion user interface.
New platform support for Global Data Platform service
– Global Data Platform service is now supported on the ROSA platform. You can connect
and add a remote file system from the IBM Storage Fusion user interface.
Data Cataloging as an IBM Storage Fusion service
– IBM Storage Fusion Data Cataloging service is available in IBM Storage Fusion. It is a
container native modern metadata management software that provides data insight for
exabyte-scale heterogeneous file, object, backup, and archive storage on premises
and in the cloud. The software easily connects to these data sources to rapidly ingest,
consolidate, and index metadata for billions of files and objects. For more information
about the service, see Data Cataloging.
New Backup and Restore IBM Storage Fusion service
– The new Backup & Restore service in IBM Storage Fusion allows you to back up and
restore applications. It includes the following additional benefits in comparison with
Backup & Restore (Legacy):
• Policy frequency can be realized in units of minutes or hours with the new service.
• Multiple policies can be assigned to a single application.
• There is more visibility of job progress and error information with the new service.
• Greater resiliency and scale-out capabilities.
– For more information about the service, see Backup and Restore.
Product versions in IBM Spectrum Fusion HCI and IBM Spectrum Fusion:
Red Hat OpenShift Data Foundation 4.10
IBM Spectrum Scale 5.1.6
IBM Spectrum Scale Container Native 5.1.6 for IBM Spectrum Fusion HCI
On-premises VMware supported version is vSphere 7.0
IBM Spectrum Protect Plus 10.1.13
IBM Spectrum Discover 2.0.6
For supported IBM Cloud Paks versions, see IBM Cloud Paks support for IBM Spectrum
Fusion.
The publications listed in this section are considered particularly suitable for a more detailed
discussion of the topics covered in this paper.
IBM Redbooks
The following IBM Redbooks publications provide additional information about the topic in this
document. Note that some publications referenced in this list might be available in softcopy
only.
IBM Storage Fusion HCI System Metro Sync DR Use Case, REDP-5708
IBM Storage Fusion Backup and Restore for Cloud Pak for Data, REDP-5706
IBM Spectrum Scale Security, REDP-5426
Multi-Factor Authentication Using IBM Security Verify for IBM Spectrum Fusion,
REDP-5662
You can search for, view, download or order these documents and other Redbooks,
Redpapers, Web Docs, draft and additional materials, at the following website:
ibm.com/redbooks
Online resources
These websites are also relevant as further information sources:
IBM Storage Fusion documentation
https://www.ibm.com/docs/en/storage-fusion
IBM Spectrum Fusion 2.4 Support Portal
https://www.ibm.com/support/pages/node/6842171
IBM Spectrum Fusion 2.3 Support Portal
https://www.ibm.com/support/pages/node/6826715
IBM Spectrum Fusion Passport Advantage
https://www.ibm.com/software/passportadvantage
IBM Storage Modeller
https://www.ibm.com/tools/storage-modeller/help/index.html#page/WWR2/5528_intro
.IBM_Storage_Modeller.html
Metro-DR
https://www.ibm.com/docs/en/storage-fusion/2.5?topic=prerequisites-metro-dr-dis
aster-recovery
Red Hat OpenShift Container Platform security and compliance
https://docs.openshift.com/container-platform/4.11/security/index.html
ISBN 0738460990