Getting Started: Journey to Modernization with IBM Z
Ravinder Akula
Matthew Cousens
Makenzie Manna
Pabitra Mukhopadhyay
Anand Shukla
Redpaper
IBM Redbooks
March 2021
REDP-5627-00
Note: Before using this information and the product it supports, read the information in “Notices” on page 9.
Contents

Notices  9
Trademarks  10
Preface  11
Authors  11
Now you can become a published author, too!  13
Comments welcome  13
Stay connected to IBM Redbooks  13
Related publications  73
IBM Redbooks  73
Other publications  74
Help from IBM  74
Notices
This information was developed for products and services offered in the US. This material might be available
from IBM in other languages. However, you may be required to own a copy of the product or product version in
that language in order to access it.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user’s responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, MD-NC119, Armonk, NY 10504-1785, US
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you provide in any way it believes appropriate without
incurring any obligation to you.
The performance data and client examples cited are presented for illustrative purposes only. Actual
performance results may vary depending on specific configurations and operating conditions.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
Statements regarding IBM’s future direction or intent are subject to change or withdrawal without notice, and
represent goals and objectives only.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to actual people or business enterprises is entirely
coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs. The sample programs are
provided “AS IS”, without warranty of any kind. IBM shall not be liable for any damages arising out of your use
of the sample programs.
The following terms are trademarks or registered trademarks of International Business Machines Corporation,
and might also be trademarks or registered trademarks in other countries.
CICS®, Concert®, Db2®, DB2®, FICON®, GDPS®, IBM®, IBM Cloud®, IBM Cloud Pak®, IBM Garage™, IBM Services®, IBM Watson®, IBM Z®, MQSeries®, Parallel Sysplex®, Rational®, Redbooks®, Redbooks (logo)®, UrbanCode®, WebSphere®, z/Architecture®, z/OS®, z/VM®, z/VSE®, z15™
The registered trademark Linux® is used pursuant to a sublicense from the Linux Foundation, the exclusive
licensee of Linus Torvalds, owner of the mark on a worldwide basis.
Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.
Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates.
Ansible, OpenShift, and Red Hat are trademarks or registered trademarks of Red Hat, Inc. or its subsidiaries in the United States and other countries.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Other company, product, or service names may be trademarks or service marks of others.
The term modernization often varies in meaning based on perspective. This IBM®
Redbooks® publication focuses on the technological advancements that unlock computing
environments that are hosted on IBM Z® to enable secure processing at the core of hybrid cloud.
This publication is intended for IT executives, IT managers, IT architects, system programmers, and application developers.
Authors
This paper was produced by a team of specialists from around the world.
Ravinder Akula is an Infrastructure Architect in IBM India. He is an IBM and Open Group
certified IT Specialist at Expert level. He has over 14 years of experience in the field of IBM Z.
He holds a B.Tech degree in Information Technology from Kakatiya Institute of Technology
and Science, Telangana, India. His areas of expertise include IBM z/OS®, IBM Z Hardware,
IBM Z modernization, zVM, Linux on Z, Disaster Recovery, Parallel Sysplex®, and IBM
GDPS®. He has been working with IBM® since 2006. He is the technical lead for the z/OS
platform support team, which handles multiple IBM Z client infrastructures across different
geographies. He is an active participant in IBM Academy of Technology and has published
several Tech docs. He is an IBM recognized speaker, presenter, and educator who has been teaching IBM Z classes to IBM Z system programmers since 2007.
Makenzie Manna is an IBM Redbooks Project Leader in the United States. She has four
years of experience in the Computer Science Software Development field and one year of
experience in developing technical content. She holds a Master of Science degree in
Computer Science Software Development from Marist College. Her areas of expertise include
mathematics, IBM Z, and cloud computing. She has written about topics regarding modernization with IBM Z and COBOL programming in a modern IDE.
Pabitra Mukhopadhyay is an IBM and Open Group certified Infrastructure Architect in IBM
Services®, India. He has over 14 years of experience in the field of IBM Z®. He is also an
IBM Certified Specialist for Z Systems Technical Support. He holds a B.Tech degree in
Electronics and Communication Engineering from West Bengal University of Technology, WB,
India. He has worked at IBM for 8 years. His areas of expertise include IBM z/OS, IBM Z
hardware, IBM Z modernization, Parallel Sysplex®, middleware, Disaster Recovery,
resiliency, availability, monitoring, and automation. He has participated in industry-level
events and written extensively about z/OS and middleware products in IBM Systems
magazine and IBM Developer.
Anand Shukla is an Infrastructure Architect in IBM India. He has over 14 years of experience
in IBM Z. He holds a M.Tech degree in Information Technology from Latrobe University,
Melbourne, Australia. In his 3 years with IBM, he has worked as a Technical lead for the z/OS
platform, handling multiple IBM Z client infrastructures across multiple geographies. As an IBM recognized educator and presenter, he has conducted various skills enhancement training sessions for the IBM India team and for the IBM Australia, Brazil, Thailand, and Singapore regions. His expertise includes IBM z/OS, zVM, Linux on Z, IBM Z Hardware, IBM Z modernization, Disaster
Recovery, Parallel Sysplex, GDPS, and IBM Z DC Migration. Prior to IBM, for over 11 years
he worked as a z/OS Technical Specialist with CSC (now DXC) Australia.
Thanks to the following people for their contributions to this project:
Mark Bylok
Jordan Cartwright
Jim Crowley
Daniel Jast
Alex Lieberman
Luisa Martinez
Ryan Parece
Torin Reilly
James Roca
Xiaoyu (Allen) Zhou
Stephen Warren
IBM
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our papers to be as helpful as possible. Send us your comments about this paper or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an email to:
[email protected]
Mail your comments to:
IBM Corporation, IBM Redbooks
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Chapter 1
Cloud computing shattered the notion of closed systems operating within the confines of a
data center. Today’s enterprise computing is hosted on-premises in traditional data centers,
and as-a-service on private and public cloud platforms. In fact, it is not unusual for an enterprise to assemble its offerings across multiple cloud platforms; thus the birth of hybrid multi-cloud. As you might imagine, this shift in computing paradigms is enabled by
significant improvements in technology.
In this chapter, we discuss why IBM Z is the proper environment for your modernization
journey, what is meant by modernization, why there is a need for modernization, and what
that journey can look like within an enterprise.
The latest addition to the IBM Z family is the z15™, which first became available in September 2019. This system is designed to host one trillion secure web transactions per day. To
accomplish this rate, the z15 boasts up to 40 TB of RAM in a fully redundant configuration known as a redundant array of independent memory (RAIM), which accommodates hardware failures in the memory modules. The sheer amount of RAM is put to use by the amount of data that is
hosted in databases and the volume of work that occurs simultaneously.
The compute power of z15 comes from over a hundred cores, which are configured by
system administrators as needed for their specific use case. Feeding the compute cores are
multiple levels of cache and specialty processors. Throughput is achieved by dedicating
hardware to common operations, which frees up the main processors for other work. For
example, I/O and cryptographic operations each feature dedicated processors in the z15. The
systems in the IBM Z family run various operating systems, including multiple Linux
distributions, z/OS, z/TPF, IBM z/VM®, and others. In fact, it is common for multiple operating
systems, and even multiple logical instances of the same operating system, to run on a single
system.
Table 1-1 lists some of the operating systems that run on IBM Z.

z/OS: Designed to keep applications and data available, system resources secure, and server utilization high. z/OS maintains compatibility for applications and runs Linux on IBM Z containers on-premises and in hybrid multi-cloud.

Linux on Z: Provides an enterprise Linux platform that benefits from IBM Z qualities and from proximity to other systems hosted on Z, such as z/OS.

z/Virtual Machine (z/VM): A hypervisor that can run thousands of Linux on Z virtual machines on one system and can also run z/OS, IBM z/VSE®, and z/TPF.
1 https://www.ibm.com/downloads/cas/67Q9DDOR
The amount of business intelligence that is coded in these core applications is vast, and the
value that is attached to these time-tested assets is huge. The value stems from more than
just the applications; it is also based on the business data that is on or originates on IBM Z.
IBM Z earns this trust through multiple avenues. For example, IBM Z hardware, firmware, and
operating systems always conform to a guiding set of architecture rules to ensure support of
current and future workloads and services. Whenever new capabilities and features are
implemented, the IBM z/Architecture® is extended rather than replaced, which enables
compatibility with an earlier version. Therefore, applications that are running on IBM Z
continue to function as capabilities are added. This feature is one way the platform helps
organizations protect their investment; that is, by ensuring applications continue working in
the future without requiring any application changes.
Many long-time users depend on the reliability and availability that IBM Z provides. By using
the technologies in IBM Z, these clients can achieve years or even decades of uninterrupted
service. Against the backdrop of laptops, smartphones, and even watches that must be restarted every so often, years without interruption is unthinkable. This inherent robustness
enables trust.
At one time, enterprise computing was performed by using closed systems that were housed
in data centers that were owned by the organization that was performing the computing. This
meant that Company A owned the end-to-end processing, the data that was created by that
processing, and they had physical custody of their data. Company A protected themselves by
securing physical access to their data centers and implementing authentication and
authorization best practices.
The era of cloud computing was born with the proliferation of inexpensive commodity
hardware and operating systems, and sufficiently fast internet connections. Company A
suddenly had many more options when it came to implementing their enterprise computing
environments. They embraced multiple public clouds as they built out their client-facing sites
and even some parts of their core applications. This move had some unintended
consequences for Company A’s Z-powerhouse, at the core of their business.
After a few years of focused efforts, Company A built a strong infrastructure across multiple
clouds. Because they built it from scratch, they used all of the latest techniques for application
design, build, deployment, and orchestration. These techniques were perfect for enabling
quick application changes, and Company A found it rolled out updates to its clients more
rapidly.
This brings us to the problem that Company A faces today. Although Company A’s cloud
systems are nimble, Company A never invested their resources to enable similar benefits with
their environments that are hosted on IBM Z. Therefore, when a change is made that
necessitates updates to their cloud systems and their Z systems, one side is often waiting for
the other because of the difference in how they are set up.
Does this mean that Company A’s Z system is old and outdated? Not at all. Company A’s
cloud systems use newer techniques and technologies because they were created more
recently. In fact, the cloud systems use many of the same techniques and technologies that a company new to IBM Z would start with if it created an environment today.
Because Company A has had environments on Z for many years and did not modernize that
environment with the addition of the cloud systems, they use older technology. All that they
need to do is invest in updating their environment to newer techniques, tools, and
technologies. This updating effort is what we mean by modernization.
Customer Experience Officer (CXO): Open up new experiences for clients in a way that fuels business growth, expands market share, adds value to their clients, and increases customer loyalty.

Chief Technology Officer (CTO): Respond to business requirements faster by delivering new features and incremental updates to existing features more quickly.

Chief Financial Officer (CFO): Maximize the return on investment (ROI) in the use of IBM Z and the surrounding system by increasing value without increasing expense.

Enterprise Architect: Integrate various technologies, platforms, and applications across the enterprise in a seamless and transparent manner.

Infrastructure Architect: Use new technologies and features to ensure the highest levels of reliability, availability, and simplification.

Application Architect: Make it easier to use products and services on the platforms of choice and simplify the applications’ designs.

Security Architect: Ensure the highest levels of security and data privacy by using new ways of protecting enterprise data and monitoring access.

Application Developer: Accelerate and grow quality in every step of the software delivery lifecycle by using modern tools and processes.

Business Analyst: Unleash the power of business data through increased insights that can be realized only by using the latest technologies with the most current data.
Every individual has their own perspective on this topic, but one thing they have in common is a
focus on the use of innovation to deliver better business value for their organization. No one
wants to change technology for the sake of being called modern. It is the value that stems
from modernization that makes the journey worthwhile.
Figure 1-1 shows how transformation is occurring in different areas of the IT system and what
these areas are transitioning toward, including cultural changes.
Note: The terms modernization and digital transformation are used interchangeably
throughout this publication. More specifically, modernization refers to the process or the
practice of upgrading the IT landscape of an organization to address its market needs, whereas digital transformation refers to the adoption and use of new features and digital
technology to deliver value. Digital transformation also involves a fundamental change in
the way people and processes work to enhance efficiency and increase the organization’s
competitiveness in the market.
In this section, we describe some of the key drivers of modernization today, including the
following examples:
“Business agility and speed-to-market”
“Integration and interoperability of enterprise systems”
“Security, trust, compliance, and regulatory requirements”
“Resource optimization ”
“Managing enormous amounts of data”
DevOps helps streamline infrastructure and accelerate the delivery of high-quality software,
simultaneously lowering overall cost through optimized development, testing, and
deployment. The IBM Z platform supports common tools for product development and
automation, so these approaches can be performed across platforms.
Application developers can concentrate on coding rather than worrying about where the
application is going to be deployed. As a result, minimum viable products can now be
released at a faster rate, which leads to more timely feedback from users. For more
information about DevOps, see Chapter 4, “Introduction to DevOps” on page 39.
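The fail-fast build-test-deploy flow described above can be sketched in a few lines; the stage names and bodies below are hypothetical placeholders for illustration, not an IBM-specific toolchain.

```python
# Minimal fail-fast pipeline sketch. Each stage is a function that returns
# True on success; the driver stops at the first failing stage.
# Stage names and bodies are illustrative placeholders only.

def build() -> bool:
    # For example: compile sources or bake a container image.
    return True

def test() -> bool:
    # For example: run unit and integration test suites.
    return True

def deploy() -> bool:
    # For example: push the verified artifact to the target environment.
    return True

def run_pipeline(stages):
    """Run stages in order; report the first failure, if any."""
    for stage in stages:
        if not stage():
            return f"FAILED at {stage.__name__}"
    return "SUCCESS"

print(run_pipeline([build, test, deploy]))  # -> SUCCESS
```

Real pipelines delegate each stage to dedicated tooling; the point here is only the ordering and fail-fast behavior that lets developers concentrate on coding rather than deployment mechanics.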
To gain more efficiency and increase reuse, business processes must be streamlined across
the organization. In addition, diverse systems and applications must be interconnected and
integrated so that they can work in tandem to achieve business goals.
The IBM Z platform offers highly secure ways to help deal with compliance and regulatory
security requirements. For example, multi-factor authentication, pervasive encryption,
centralized key management, and in-flight data protection allow you to implement security at
the speed of business.
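To make the multi-factor authentication idea concrete, the sketch below implements time-based one-time passwords (TOTP, RFC 6238) using only the Python standard library; it is a generic illustration of the second-factor concept, not IBM Z Multi-Factor Authentication itself.

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, for_time: float, step: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    counter = int(for_time // step)            # 30-second time window
    msg = struct.pack(">Q", counter)           # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                 # dynamic truncation (RFC 4226)
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890" at time 59 seconds.
print(totp(b"12345678901234567890", 59, digits=8))  # -> 94287082
```

In practice, a verifier compares `totp(secret, time.time())` against the code the user supplies as a second factor alongside a password.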
Organizations are also adopting open source software and standards to achieve higher levels
of cost-effectiveness, speed, and agility. Total cost of ownership (TCO) is one of the most
important factors for finalizing the platform for digital transformation. According to a study
conducted by IDC, the transformational capabilities of IBM Z paired with today’s tools are
helping clients realize significant business value. It is estimated that:
The benefits might outweigh the investment cost for this transformation by an estimated
factor of 6.2x.2
– Cost and operational efficiencies: Estimated 2.5x benefits to costs
– Efficiencies and higher revenue: Estimated 3.8x benefits to costs
– Efficiencies, higher revenue, and protected revenue: Estimated 6.2x benefits to costs
It is also estimated that because of this transformation, organizations now have 64% more
code releases and 44% less time per code release.2
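A back-of-the-envelope reading of those last two figures (our own arithmetic, not part of the IDC study) shows what they imply when combined:

```python
# Combine the two cited figures: 64% more code releases, each taking
# 44% less time per release. (Illustrative arithmetic only.)
releases_factor = 1.64          # 64% more releases
time_per_release_factor = 0.56  # 44% less time per release

total_release_time_factor = releases_factor * time_per_release_factor
print(f"Total time spent on releases: {total_release_time_factor:.2f}x baseline")
# -> about 0.92x: substantially more releases for slightly less total time
```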
Through an integrated and automated pipeline, the developer does not need to worry about
system intricacies. Recent additions to operating system and middleware support have expanded container support on IBM Z, and more is planned for the future.
2 IDC white paper, sponsored by IBM and Broadcom, The Business Value of the Transformative Mainframe, August 2019.
Data privacy is perhaps the most critically important issue facing digital business today. As
regulations and compliance requirements increase, it is vital that you ensure that your client
data remains encrypted, even as it moves off the system of record within your enterprise. IBM
Z has several technologies available to permit data privacy at-rest, in-flight, and even in-use.
3 https://www.ibm.com/it-infrastructure/z/technologies/pervasive-encryption
With each cloud choice an organization makes comes a set of assumptions and features. By
over-relying on one public cloud, companies can place themselves on a set path and find it
difficult to migrate to another cloud service provider in the future.
Organizations have unique needs around their data and workloads that the public cloud alone
might not satisfy. This issue is even more relevant for organizations operating in highly
regulated industries that must comply with various global and local regulations that are
related to data protection. As a result, hybrid multi-cloud as a solution, complemented by
open source software and open standards, is becoming more widely adopted. Figure 1-2
shows the journey from single vendor public cloud to hybrid multi-cloud.
These products help organizations minimize software cost and simplify license management
to a great extent without vendor lock-in restrictions. Because most of the product source code
is easily available, developers can customize the code to suit their requirements, fix any bugs,
or enhance product features. These reasons are some examples of why open source software has gained so much traction. Support for open source products and integration of open source systems with existing systems on IBM Z has enormous potential for clients.
In some organizations, DevOps teams work only on systems that are hosted in the cloud.
Some might think that IBM Z and DevOps cannot coexist, when in reality things are quite the
opposite. Organizations that embraced DevOps practices and open source software for agility
on IBM Z take advantage of years of trusted reliability, performance, availability, and security
capabilities, and reap the benefits of this digital transformation.
Some time ago, putting the words “cloud” and “on-premises systems” in the same sentence was considered implausible, or even impossible. IBM Z has evolved significantly, and using Z together with other cloud-oriented technologies is now common.
To summarize, modernization requires a cultural shift and a change in mindset to create value
by using new tools and processes. For more information, see Chapter 5, “Modernization best
practices” on page 53.
One of the foremost requirements for a successful digital transformation is ensuring that it is
done at a pace that you can control. A phased or iterative approach to the IT modernization journey offers more flexibility and is more practical for large, complex projects.
Today’s transformation initiatives are complex, and technology is evolving rapidly, so the
probability of running into something unknown is high. For this reason, you must consider a
phased approach to modernization. A preferred practice is to limit your plan to only what is
visible during the planning stage of a phase or sprint. This limit reduces the risk and gives you
better control in terms of managing the risk and steering the project through necessary
changes.
The core team must include subject matter experts from different areas of technology and job
roles, including architects, application developers, integration specialists, process specialists,
security experts, and domain experts. The leaders, with the help of this core team, must
ensure that the organization embraces change, which is critical to the success of
modernization initiatives.
This core team is typically responsible for ensuring that the rest of the organization is not only
on-board, but also suitably aligned with the organization’s vision and direction for
modernization. Consider putting the core team members through training that surrounds the
modernization journey on IBM Z to better prepare them for the tasks ahead.
In this phase, the core team must understand the concepts and aspects of modernization on
Z. They also must understand how new features of the hardware, operating system,
middleware, automation, monitoring, security, and other product suites can help your
organization digitally transform the IT landscape.
Understanding the different approaches and options for transforming applications from being cloud-ready, to cloud-enabled, to cloud-native on this journey to a hybrid multi-cloud environment is equally important. The core team must also understand how IT and business
analytics can help the organization get useful insight that can lead to further growth.
Parallel to understanding the new features and capabilities that are offered by the IBM Z
platform, the team must also study their current IT setup in depth to understand what is
working for them, what is not, and how much work is required to modernize the IT setup. They
are encouraged to identify key areas that require urgent attention and prioritization in the
modernization journey. They must also look at the current operational model to see whether it
is in line with future needs.
After you understand and quantify all the benefits of modernization and how it can transform your business and maximize value, you can more easily define the goals, justify the spending, and secure funding for modernization projects and initiatives.
The core team can consider formulating a broad framework and provide necessary guidelines
for enterprise modernization. The rest of the teams and groups within the organization can
work by conforming to this framework and following these guidelines to evaluate the feasibility
of the possible options for modernization.
As part of this example, consider devising options for how the different IBM Z technologies,
features, enhancements, DevOps solutions, products, and open-source software solutions (researched in the learn phase) can be tied together. By doing so, you can arrive at a unique
and customized modernization solution that is best-fit for your enterprise.
After the feasibility of the possible modernization solutions is evaluated, you can assess the
options that are available for implementation and further narrow them down. Consider
reviewing the current setup to determine the organization’s readiness in terms of
implementation of a modernization solution. You can prepare lists of various items and
activities that can help you drive the initiative effectively, as shown in the following examples:
Hardware, operating systems, and middleware that must be upgraded.
Products and software that must be installed.
Required security enhancements.
Components or applications that are available and can be reused.
Items that require simplification.
How the monitoring, alerting, and automation solutions must be configured for effective,
efficient, predictive, and insightful monitoring of the entire enterprise.
How to set up or modify DevOps processes and CI/CD pipeline to meet the requirements,
such as automated build, and automated and continuous testing.
How to define the systems and applications to enable seamless integration and migration
to a hybrid multi-cloud environment in the future.
How to encourage an agile way of working across the organization and roll out
collaboration tools that facilitate agile best practices, such as integration of agile tool sets
and feedback channels with a CI/CD pipeline.
Technology and technical best practices to be implemented when deploying the solution, and so on.
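A readiness checklist like the one above lends itself to lightweight tooling. The sketch below groups hypothetical items by status so gaps are visible at a glance; the item names are invented for illustration.

```python
from collections import defaultdict

# Hypothetical readiness items as (description, status) pairs; the
# descriptions are invented for illustration only.
checklist = [
    ("Upgrade middleware to a supported release", "pending"),
    ("Install pipeline orchestration tooling", "in-progress"),
    ("Enable required security enhancements", "pending"),
    ("Inventory reusable components and applications", "done"),
    ("Configure monitoring and alerting", "in-progress"),
]

def summarize(items):
    """Group checklist items by status for a quick readiness view."""
    groups = defaultdict(list)
    for description, status in items:
        groups[status].append(description)
    return dict(groups)

summary = summarize(checklist)
for status in ("done", "in-progress", "pending"):
    print(f"{status}: {len(summary.get(status, []))} item(s)")
```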
Consider setting up samples or prototypes for complex parts of the solution to get early
feedback. After you complete the deployment pre-work and testing, the next step is to deploy
the solution. In this phase, you also maintain the solution and provide support for future
enhancements. Suggestions, comments, and feedback that are gathered from users, support
personnel, and test teams must be fed into the iteration planning system for the inclusion of
defect fixes, and solution optimization in the subsequent iterations.
Consider conducting periodic reviews of the iterations to ensure that everything is on track.
Some of these considerations can include the following examples:
Features that were initially planned versus features finally deployed.
What went wrong, and what was done correctly?
Are there any new backlogs?
Lessons that are learned from the iteration are documented and are used as inputs for the
next iteration planning.
Did the team achieve all the Key Performance Indicators (KPIs) they set?
Do team members require more training and skills to support the modernization journey?
Is a skill road map available for all of the teams?
Do we need to revisit the modernization goals, targets, and road map?
Is the modernization journey on track across the enterprise, or does any team require a
course correction?
Such reviews must occur at every level to track progress and ensure that you meet targets and deadlines for every iteration. After you complete the reviews and gather all of the
necessary data points, you can start the next iteration, which begins with the learn phase as
shown in Figure 1-3 on page 12.
Returning to our example, Company A decides it must invest in modernizing the way they use
IBM Z so that it better fits into their hybrid multi-cloud strategy. Their vision is to release
application changes with the same ease and cadence, regardless of which platform the
applications are on.
In Chapter 2, “Modernization goals and approaches” on page 15, we look at how you can set
specific modernization goals and general approaches to get started, in addition to examples
of the specific goals Company A can set and the approaches they might use to get started.
This chapter discusses how to set modernization goals, different modernization approaches
by using IBM Z, and where to begin your modernization journey. It includes the following
topics:
2.1, “Setting modernization goals” on page 16
2.2, “Where to start” on page 21
For example, you can self-assess by using questions similar to the following examples:
What works for you today and will it remain strategic in the future?
What does not work for you?
Does your IT setup allow enough agility that new requirements can be met quickly?
Where is the industry heading in terms of the IT system?
Is your investment in IT modernization generating the kind of return on investment (ROI)
that you expected?
How does your IT setup fare in comparison to that of your competitors?
What are the technical skills in your IT organization today, and do the same skills continue
to be relevant in your wanted future state?
What is the extent of technical debt in your organization?
What kind of platforms are used today? Are these platforms suitable to enable your
modernization journey?
After you acquire the answers to these questions, prioritize the problems that you want
solved. With this information, you are in a much better position to make an informed decision
regarding the modernization journey of your IT setup.
Try not to jump from the problems to be solved directly to their solutions; that step in the
process comes later. (It is no coincidence that not a single product or offering has been
discussed thus far in this publication.)
Comprehensive modernization plans take each layer of the infrastructure into consideration.
For example, if you want to modernize applications and how applications are developed on
IBM Z, it is important that your middleware products and operating systems are also
modernized.
Similarly, to modernize the operating systems, you must have the proper features available
and enabled on your hardware. This availability ensures that you use all of the capabilities of
the IBM Z platform and applications that are running on it. Consider asking questions to
assess what works for your organization and what does not, including the following examples:
How often does your hardware team consider implementing new features?
What is your upgrade strategy for your IBM Z systems?
Is your IT infrastructure resilient to failures and cyberattacks?
Is your IT infrastructure designed for near-continuous availability?
Can your IT infrastructure continuously deliver optimal performance with varying
workloads?
Can your IT infrastructure handle an increased workload as a result of a planned outage,
and quickly catch up with the lost time after everything is back up and running?
Is your IT security team highly stressed and concerned about data breaches?
Are your data encryption algorithms and keys quantum-safe?
Also, it helps to avoid the need for complicated interfaces to access mainframe assets,
time-consuming development to allow integration, or multiple runtime environments.
This approach delivers a key capability for enabling digital transformation and hybrid cloud
solutions, which demand new ways of delivering services through new channels and
enhanced interactions with cloud-based solutions.
For more information about this approach, see 3.2, “Enabling data accessibility on IBM Z” on
page 30.
Cloud-native development, with its “develop once and deploy anywhere” philosophy, and the
myriad of open source tools that can be integrated to streamline an automated process,
revolutionized the DevOps strategy of developing and managing applications. For more
information about cloud-native development, see 4.5, “Code” on page 44.
To make matters more challenging, homegrown business logic is not always fully
documented. Although new requirements emerge for accessing data, it is a challenge to use
data from your traditional applications’ databases without developing complex application
logic as the data model grows along with the application. It is difficult to re-create any of this
logic without a deep understanding of the applications.
Because you must keep the business running, the transformation of core applications
requires careful, incremental approaches. An incremental approach addresses these concerns
by focusing on step-by-step transformation of mainframe applications through refactoring and
modern development approaches.
You might also consider refactoring the code incrementally, simplifying it without changing
its behavior, which improves the maintainability, flexibility, time-to-market, and above all,
the lifespan of these proven programs. Consider externalizing
business rules and policies that are embedded in mainframe assets so that these rules and
policies are easily accessible and usable across the organization over standard protocols.
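The idea of externalizing embedded business rules can be illustrated with a short sketch. The following Python example is illustrative only; the rule names and thresholds are invented. A rule that was once hard-coded inline becomes a named, data-driven entry that other services could evaluate or expose over a standard protocol:

```python
# Illustrative only: a discount rule that was embedded inline in application
# logic is externalized as named, data-driven entries, so that other services
# can evaluate it. Rule names and thresholds are hypothetical.

DISCOUNT_RULES = [
    # (rule name, minimum order total, discount rate)
    ("gold_tier", 1000.0, 0.10),
    ("standard", 100.0, 0.02),
]

def applicable_discount(order_total: float) -> float:
    """Evaluate the externalized rules; the first matching rule wins."""
    for _name, threshold, rate in DISCOUNT_RULES:
        if order_total >= threshold:
            return rate
    return 0.0

print(applicable_discount(1500.0))  # gold_tier rule applies
print(applicable_discount(50.0))    # no rule applies
```

Because the rules live in data rather than in program logic, changing a threshold no longer requires a code change, which is the benefit the externalization approach aims for.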
When refactoring, you might also consider supporting development in the latest programming
languages that complement programs that are written in other languages. This approach not
only helps improve agility, but also helps you extend existing applications, or develop new
ones, that manage systems of record data.
In addition to DevOps, modernizing other processes realizes many benefits. For
example, tracking transactions and monitoring the correlation of application, transaction, and
system resource performance data on IBM Z can provide valuable insights, predict failures,
and alert support staff for necessary corrective action. User-friendly monitoring products can
be customized to suit your requirements to empower the operations teams.
For more information about DevOps on IBM Z, see Chapter 4, “Introduction to DevOps” on
page 39.
Figure 2-3 shows some entry points into a modernization journey. Each of these entry points
can naturally pull in the adjacent areas as the journey progresses.
Where you start depends on the level of modernization that is required based on your needs
and priorities. In this section, we discuss approaches on how to decide where to start your
modernization journey based around the following topics:
Disruption to your business
Business risk
Specific drivers to modernize
Teams affected
Business continuity plan
Questions for IT team
Questions for application developers
Role of an IT leader in the journey to modernization
Because you must always maintain availability during modernization, the transformation of
core applications requires cautious, incremental approaches. Later in this section, we
discuss conversation starters that can aid in planning your modernization project.
In general, you want to start a modernization journey with a project that has a low likelihood of
causing major disruptions. After a few projects are successfully completed, you can apply the
lessons from those projects to a project with a higher risk of disruption.
Business risk
Part (but not all) of the risk that is assigned to a modernization project comes from the
potential disruption to the business. In the case of a modernization fallout, it is important to
consider specific questions, such as the following examples:
What are the worst- and best-case scenarios for this modernization project?
Is the amount of risk that is introduced by this modernization project acceptable?
What steps can be implemented to reduce the risk that is associated with the
modernization project, and how long does it take to implement these steps?
The suggested practice is to start a modernization journey with a project that has no more
than a moderate amount of business risk. You want to select something that demonstrates
considerable benefits with an acceptable amount of risk.
It is important to periodically reflect back on your modernization drivers to ensure that they
are being addressed during planning and execution. This review prevents the project from
drifting off course.
Teams affected
It is not only the IT department that feels the effects of modernization. It is crucial to
understand how all areas of your workforce might be affected, even if they are not working
with IT infrastructure directly.
IT departments are directly affected by modernization because they are tasked with the
implementation of the selected modernization solution. They are also expected to be well
versed with the modern technology and to confidently implement the modernization steps in a
near flawless manner. These staff are affected not only during implementation, but also as
they use the new solutions going forward.
Other departments might also feel the effects of modernization, even though they were not
tasked with the implementation. Therefore, it is important to consider how the modernization
affects other teams.
It also is imperative to the success of your modernization journey that you prepare for
extreme scenarios by performing a thorough risk assessment with all affected teams.
The suggested practice is to start your journey with simpler projects that affect only a small
group of people.
IT leaders play a key role in the digital transformation process, as leaders are responsible for
making informed business decisions to set their plans and strategies in relation to
modernization.
Now that we have reviewed the key technical and business considerations that are part of a
modernization strategy, we bring everything together and briefly define the strategy of
Company A in Chapter 3, “Modernization technologies on IBM Z” on page 25.
Also, International Data Corporation (IDC) conducted research that determined that
organizations can derive significant value from the platform’s ability to serve as part of a
hybrid cloud environment, present easy-to-view graphical interfaces, run open source
application development languages, and be highly analytics-driven and automated.2
Modernization initiatives, common to many clients, include using APIs to connect their IBM
Z environment to both internal and external networks, simplifying operations through
web-based interfaces, using Linux on the platform, integrating Z into DevOps deployment
pipelines, and supporting agile application development on the platform.
In this chapter, we focus on the technologies and solutions that are available on IBM Z to
accelerate your digital transformation, except for DevOps. For more information about
DevOps, see Chapter 4, “Introduction to DevOps” on page 39.
1
https://www.ensono.com/about-us/press-room/research-ensono-and-wipro-finds-50-businesses-are-looking-expand-mainframe
2 IDC white paper, sponsored by IBM and Broadcom, The Business Value of the Transformative Mainframe, August 2019.
Broadly speaking, two cloud service models are available: Infrastructure as a Service (IaaS)
and Platform as a Service (PaaS). Before finalizing the model you choose, you must
understand and decide on the roles and responsibilities that you envision the cloud vendor, or
your internal cloud support team, to manage versus the roles and responsibilities that you
envision your application team to manage.
Figure 3-1 shows the separation of duties in IaaS versus PaaS models.
In the IaaS model, the cloud vendor manages the underlying infrastructure and provides you
the capability to manage the virtualized infrastructure. The installation, patching, and
management of the operating system, middleware, DevOps products, delivery pipeline setup,
data management, and application remain the responsibility of the application delivery team.
In the PaaS model, the capabilities and management of the underlying layers of the
infrastructure are provided as a service by the cloud vendor. The application development and
testing tools are provided as services and the application delivery team is not responsible for
managing the development pipeline.
On top of the wanted operating system, the latest versions of standardized software stacks
are available to facilitate flexibility. IBM also supports independent software vendors (ISVs)
and other IBM software on a custom basis.
Figure 3-2 IBM-managed Mainframe IaaS Environment with LPARs and storage subsystems
IBM Z systems are housed in purpose-built data centers around the world. This LPAR-based
model is designed to offer high levels of security with the system achieving Evaluation
Assurance Level 5 (EAL5) accreditation. For more information about IBM Managed Extended
Cloud for IBM Z, see this web page.
From our ongoing example, the key to Company A’s modernization project is to bring their
IBM Z environment into their hybrid cloud. This scenario is common for many organizations
that require the freedom to securely deploy, run, and manage their data and applications on
the cloud platform of their choice without running the risk of vendor lock-in.
A hybrid multi-cloud approach brings the flexibility to host your own software one day, move
that same setup to a cloud provider the next, and still have the freedom to change cloud
providers in the future. You can run sensitive, highly regulated, mission-critical applications on
private cloud infrastructure.
You can run less sensitive or even temporary workloads, such as development and test
environments, on the public cloud. With the proper integration and orchestration between the
two, you can take advantage of both, when needed, for the same workload.
Common tools, such as Microsoft Visual Studio Code for code editing, and Git for source
control, work great with IBM Z. Developers can even self-provision middleware instances on
z/OS without needing system programmer skills.
For more information about modernizing the application development process, see Chapter 4,
“Introduction to DevOps” on page 39.
Starting with z/OS Version 2 Release 4, an even closer location became available with z/OS
Container Extensions (zCX). For Linux on Z components that directly support z/OS workloads,
zCX enables the deployment of a Linux on Z container in a z/OS system, where the container
runs as its own address space in z/OS. Applications resemble any other Docker application to
the developer.
Figure 3-5 shows how the zCX address space operates on z/OS.
Most applications that are available to run only on Linux can run on z/OS with zCX. zCX runs
Linux on Z applications on z/OS by using z/OS operations staff and the existing z/OS
environment. For more information about zCX, see IBM z/OS Container Extensions (zCX)
Use Cases, SG24-8471.
z/OS Connect also gives you the ability to easily use APIs from applications that are written
on z/OS. In many cases, these applications were written before REST was conceived. z/OS
Connect provides your COBOL application with, for example, a control block that contains the
results from a REST call so that the application can use and act on them.
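To make the control-block idea concrete, the following Python sketch shows the kind of JSON-to-fixed-layout mapping that z/OS Connect performs on behalf of a COBOL program. This is illustrative only; the copybook layout (ACCOUNT-ID PIC X(8), BALANCE PIC 9(7)V99) and the field names are hypothetical:

```python
import json

# Hypothetical copybook layout: ACCOUNT-ID PIC X(8), BALANCE PIC 9(7)V99.
# z/OS Connect performs this kind of JSON-to-control-block mapping for the
# COBOL program; this sketch only illustrates the idea.

def json_to_record(payload: str) -> str:
    data = json.loads(payload)
    account = data["accountId"].ljust(8)[:8]          # PIC X(8), blank-padded
    balance = f"{round(data['balance'] * 100):09d}"   # PIC 9(7)V99, implied decimal
    return account + balance

record = json_to_record('{"accountId": "A1234", "balance": 250.75}')
print(repr(record))  # 'A1234   000025075'
```

The COBOL program never parses JSON; it simply reads fields at fixed offsets in the control block it receives.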
When you include DVM as a part of your modernization strategy, you decrease the negative
impact of traditional data movement approaches. This opportunity to benefit from
data, where and when it is needed, is obtained through DVM’s ability to use these popular
APIs. API usage also helps with lowering the need for mainframe skills when modernizing
applications.
With DVM, enterprise applications can effectively and cost-efficiently access and update live
transactional IBM Z data that is stored in IMS, IDMS, ADABAS, and VSAM. It supports a
federation of IBM Z data with a myriad of structured and unstructured data sources, such as
Oracle, Apache Hadoop, MongoDB, and web services data.
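The following sketch illustrates the federation idea with an in-memory SQLite database standing in for DVM; the tables and columns are invented. DVM exposes IBM Z sources such as VSAM through standard SQL so that they can be joined with other data in place:

```python
import sqlite3

# DVM exposes VSAM, IMS, and other IBM Z sources through standard SQL and
# lets you join them with non-Z data where it lives. This sketch simulates
# the idea with in-memory SQLite; tables and columns are made up.

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE vsam_accounts (acct TEXT, balance REAL)")  # stand-in for a VSAM source
con.execute("CREATE TABLE crm_customers (acct TEXT, name TEXT)")     # stand-in for an off-Z source
con.execute("INSERT INTO vsam_accounts VALUES ('A1', 100.0), ('A2', 55.5)")
con.execute("INSERT INTO crm_customers VALUES ('A1', 'Ada'), ('A2', 'Grace')")

rows = con.execute(
    "SELECT c.name, v.balance FROM vsam_accounts v "
    "JOIN crm_customers c ON v.acct = c.acct ORDER BY c.name"
).fetchall()
print(rows)  # [('Ada', 100.0), ('Grace', 55.5)]
```

The application issues one SQL statement; the virtualization layer, not the application, handles where each source physically resides.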
By using enterprise data in-place where it originates, organizations can use a security-rich
environment, minimize latency, and improve the accuracy of data that is used in analytics.
The insights that are gained from the analytical capabilities of DVM are valuable in
recognizing compliance issues and system readiness, which prepares you to preemptively
address operational threats. For more information about DVM, see this web page.
IBM Z can modernize your security and data privacy approaches to fit the hybrid cloud
architecture. These approaches are designed to not only protect on-premises data, but also
provide data privacy and security by extending it across the hybrid multi-cloud. In this section,
we review some of the tools and features that are available for data privacy, safeguarding
hybrid cloud, and identification and prevention.
To help enterprises achieve this security approach, IBM offers key tools and technologies,
such as the following:
Pervasive encryption
IBM Data Privacy Passports
IBM Z Data Privacy for Diagnostics
IBM Fibre Channel Endpoint Security
IBM Enterprise Key Management Foundation
IBM Hyper Protect Virtual Servers and Hyper Protect Services
IBM Secure Execution for Linux
Pervasive encryption
Encrypting data has long been a solution to preventing unauthorized parties from reading
sensitive data. Traditionally, sensitive data must be identified manually and encrypted by
policy.
With z14, IBM introduced pervasive encryption, which allows all data to be encrypted, both
in-flight and at rest. This was a major advancement over traditional, selective encryption,
which was meant to protect only specific data within the IBM Z platform.
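The practical difference can be sketched in a few lines. This is a conceptual illustration only (the data set names are invented): under selective encryption, protection depends on what someone remembered to tag, whereas pervasive encryption covers everything by policy:

```python
# Conceptual contrast only: with selective encryption, data sets must be
# identified by hand; pervasive encryption applies the policy to everything.
# Data set names are hypothetical.

datasets = ["PAYROLL.MASTER", "HR.BENEFITS", "TEST.SCRATCH"]
manually_tagged = {"PAYROLL.MASTER"}  # what someone remembered to classify

def encrypted(selective: bool):
    return [ds for ds in datasets if (ds in manually_tagged) or not selective]

print(encrypted(selective=True))   # only the manually tagged data set
print(encrypted(selective=False))  # pervasive: everything is covered
```

The gap between the two lists is exactly the exposure that pervasive encryption eliminates.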
For more information about pervasive encryption on IBM Z, see this web page.
Data Privacy Passports protects your data by packaging the data into trusted data objects.
Access to the data can be revoked regardless of where the data is located, a capability that
can be applied to IBM Z data and data from other platforms.
The benefits of the use of Data Privacy Passports include safeguarding sensitive data,
simplifying how you maintain compliance under industry mandates and regulations, and
managing access to shared data on a need-to-know basis with user access policies.
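The trusted data object concept can be sketched as follows. This is not the Data Privacy Passports API; the class and names are invented to illustrate how data can travel with a policy and how access can be revoked centrally, regardless of where a copy lives:

```python
# Conceptual sketch of a "trusted data object": the data travels with a
# policy, and access can be revoked centrally for every copy. The class
# and its names are invented; this is not a product API.

class TrustedDataObject:
    _revoked = set()  # central revocation list, shared by all copies

    def __init__(self, object_id: str, payload: str):
        self.object_id = object_id
        self._payload = payload

    def read(self, user_cleared: bool) -> str:
        if self.object_id in TrustedDataObject._revoked or not user_cleared:
            return "***REDACTED***"
        return self._payload

obj = TrustedDataObject("doc-1", "account history ...")
print(obj.read(user_cleared=True))       # full data for an authorized user
TrustedDataObject._revoked.add("doc-1")  # revoke access everywhere
print(obj.read(user_cleared=True))       # the same copy is now redacted
```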
For more information about Data Privacy Passports, see this web page.
A common scenario is needing to share the dump with support for analysis and problem
resolution but being unable to do so because sending sensitive data is prohibited. As a
solution, IBM provides Data Privacy for Diagnostics, which allows enabled applications to tag
sensitive data as critical for problem diagnostics. In this case, when the memory dump is
captured, the sensitive attribute of the data is also captured in the dump. This capability protects
data by redacting anything that is tagged as sensitive and creating a second diagnostic
memory dump to be shared externally.
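The redaction idea can be illustrated with a short sketch. The tagging scheme here is invented; in practice, enabled applications tag the data and the platform performs the redaction when it creates the second, shareable dump:

```python
# Sketch of the redaction idea: fields tagged as sensitive are present in
# the primary dump but redacted from the copy shared externally. The field
# names and tagging scheme are invented for illustration.

dump = [
    {"field": "TRANSACTION-ID", "value": "T0042", "sensitive": False},
    {"field": "CUSTOMER-TAXID", "value": "000-00-0000", "sensitive": True},
]

def external_copy(dump):
    """Produce the second, shareable dump with sensitive values redacted."""
    return [
        {**entry, "value": "<redacted>"} if entry["sensitive"] else entry
        for entry in dump
    ]

for entry in external_copy(dump):
    print(entry["field"], entry["value"])  # diagnostic fields survive, sensitive ones do not
```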
For more information about Data Privacy for Diagnostics, see this IBM Solution Brief.
As you continue on your modernization journey, IBM Fibre Channel Endpoint Security can be
used as an end-to-end solution that ensures all data in-flight on Fibre Connection (IBM
FICON®) and Fibre Channel Protocol (FCP) links from IBM Z to DS8900F, or between IBM Z
platforms over FICON channel-to-channel connections, is encrypted and protected. This
feature enables encryption of all in-flight storage data, regardless of the operating system. For
more information about Fibre Channel Endpoint Security, see IBM Fibre Channel Endpoint
Security for IBM DS8900F and IBM Z, SG24-8455.
IBM Cloud administrators are responsible for running the systems that serve the services, but
do not have the key. Therefore, these administrators cannot read your data.
For more information about Secure Execution for Linux, see IBM Knowledge Center.
IBM offers various suites that run AI in your z/OS environments and are designed to help you
gain actionable insights by making use of open source machine learning. The following AI
tools that are available on IBM Z are discussed next:
IBM Watson® Machine Learning for z/OS
IBM Open Data Analytics for z/OS
IBM Db2 AI for z/OS
The use of IzODA in your journey to modernization on IBM Z includes the following key
features:
Its ability to drastically improve iterative process performance by caching intermediate
results in memory instead of writing them to disk.
The integration facilities that it delivers for both on- and off-platform data sources.
Its ability to serve as a comprehensive solution for integrating computation with your data.
Its support for open source run times and libraries.
It also can analyze data at its source. This feature removes the risk that is created when data
is replicated and moved and increases the value of insights that are produced by leveraging
the most current data available. With the power of Apache Spark, IzODA integrates key open
source analytic technologies with optimized data access and abstraction. For more
information about IzODA, see this web page.
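The performance benefit of caching intermediate results in memory, rather than recomputing them or rewriting them to disk on every pass of an iterative algorithm, can be illustrated with a small memoization sketch. The computation here is a stand-in, not an IzODA or Spark API:

```python
from functools import lru_cache

# IzODA's Spark engine speeds up iterative analytics by keeping intermediate
# results in memory. This sketch shows the same principle with memoization:
# the expensive step runs once, and later iterations reuse the cached result.

calls = {"count": 0}

@lru_cache(maxsize=None)
def expensive_aggregation(partition: int) -> int:
    calls["count"] += 1
    return sum(range(partition * 1000))  # stand-in for a heavy computation

for _ in range(5):                 # five passes of an iterative algorithm
    total = expensive_aggregation(3)

print(total, calls["count"])       # computed once, reused on the other four passes
```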
Db2ZAI uses machine learning to build its recommendations and is built on the services that
are provided by WMLz. Db2ZAI empowers the Db2 for z/OS optimizer to determine the
highest-performing query access paths that are based on workload characteristics.
Figure 3-6 shows the architecture of Db2ZAI. It is designed to reduce CPU consumption,
improve Db2 application performance, and enable rapid model learning that is specific to the
data and application behavior per subsystem, without requiring data science skills.
Big data can have high volume, velocity, or variety, and is beyond the ability of traditional
relational databases to capture, manage, and process with low latency. Analysis of big data
by using advanced analytics techniques, such as text analytics, machine learning, predictive
analytics, data mining, statistics, and natural language processing, allows for more informed
decision making at a faster rate to gain new insights from previously untapped data sources,
independently or together with existing enterprise data.
Figure 3-7 Data and its associated cost, latency, and risk
Security, data privacy, and data governance within this environment are inherently
challenging. Because of this inflexible and complex approach to data distribution, new, live
enterprise data that is created is not readily available for analysis. This configuration is unable
to support modern analytics requirements and real-time insight.
Enterprise data, both current and historical, continues to be the primary critical data source
for most analytics initiatives. If most of an organization’s enterprise data is flowing through
IBM Z, it has a distinct advantage in the pursuit of modern analytics. Organizations can
modernize their environment and practices to cost-effectively combine enterprise data and
analytics processing on a single platform, integrate that data with non-relational data sources,
and enable powerful, real-time analytics and cognitive insights. This on-premises approach
provides an analytics foundation that can then be extended to the cloud. Depending on what
best addresses business needs, organizations can implement a cloud-based approach to
analytics that is built on an on-premises private cloud, public cloud, or hybrid cloud
environment.
Organizations are looking to gain advantage by modernizing and incorporating live enterprise
data in analytics; that is, data within the course of a transaction’s execution. Modern analytics
applications must have access to current transactional data and the infrastructure must be
designed with the same level of security, availability, scalability, and performance as
transactional systems.
By putting rich enterprise data to work in analytics, insight can be derived from live
transactional data, in real time, and at the optimal moment. This approach transforms
traditional analytics tactics and provides an essential foundation for real-time analytics,
delivering high-speed processing for complex Db2 queries to support business-critical
reporting and analytic workloads.
For more information about Db2 Analytics Accelerator and HTAP, see this web page.
Perhaps most important for developers is process modernization because it affects every step
of the software lifecycle. For more information about DevOps and popular techniques for
process modernization, see Chapter 4, “Introduction to DevOps” on page 39.
DevOps emphasizes agility, meaning the ability to implement changes easily. DevOps teams
focus on standardizing development environments and automating delivery processes to
improve delivery predictability, efficiency, security, and maintainability. DevOps also
encourages empowering teams with the autonomy to build, validate, deliver, and support their
own applications.
This chapter discusses the phases of the DevOps cycle along with various solutions available
for each phase to adopt DevOps practices with IBM Z. It includes the following topics:
4.1, “DevOps culture” on page 40
4.2, “DevOps on IBM Z” on page 40
4.3, “Analyze and plan” on page 42
4.4, “Source code managers” on page 43
4.5, “Code” on page 44
4.6, “Build” on page 46
4.7, “Test” on page 47
4.8, “Provision, deploy, and release” on page 48
Figure 4-1 shows how developers and operations can come together by using DevOps
culture to fulfill business requirements.
While integrating IBM Z into your hybrid cloud, it is imperative that developers and IT
operations understand that the same agile processes can be performed on IBM Z, by using
the same DevOps tools and with the same DevOps experience as on other platforms. A
range of solutions helps integrate systems, empowering developers with an open and familiar
development environment with enterprise-wide, platform agnostic standardization, in turn
helping developers build, test, and deploy code faster.
Figure 4-2 shows the difference between continuous delivery and continuous deployment
pipelines.
However, the pipeline into a production environment might be gated by a manual approval,
so that portion is continuous delivery. Many benefits can be realized without the need to
enable automatic deployment into production.
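The distinction can be sketched as follows; the stage names and approval flag are illustrative. Continuous deployment promotes every passing build automatically, while continuous delivery stops at the manual gate:

```python
# Illustrative pipeline: continuous deployment promotes every passing build
# to production automatically; continuous delivery stops at a manual gate.
# Stage names and flags are invented for this sketch.

def run_pipeline(build_ok: bool, approved_for_prod: bool, continuous_deployment: bool):
    stages = []
    if not build_ok:
        return stages                      # a failing build goes nowhere
    stages += ["build", "test", "staging"]
    if continuous_deployment or approved_for_prod:
        stages.append("production")        # the gate is the only difference
    return stages

print(run_pipeline(True, approved_for_prod=False, continuous_deployment=True))
print(run_pipeline(True, approved_for_prod=False, continuous_deployment=False))  # stops at staging
```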
Fortunately, tools are available for each step of this process. We begin our review of these
tools with planning. Although we highlight their capabilities individually, many of these tools
are bundled to optimize their benefits.
For example, ADDI can break down a complex application that was written in COBOL into its
various source code modules, and the CICS transactions that start each module. Any data
files or database transactions also are shown, along with whether the operations that are
performed by a module are reads, writes, updates, or the like. Almost immediately, you can
correlate transactions to COBOL code, to reads of VSAM data or updates of databases.
SCMs also provide backup and version control of source code, which creates a safety net if
something goes wrong. Although some SCMs can work with binary files, most often SCMs
are designed to work with text files because source code is stored as text files. We discuss
two widely used SCMs next.
Git
The de facto standard for SCMs these days is Git; it is commonly used for cloud-native
development and can be used equally for developing code on IBM Z. Git is an open source
code manager with several popular client/server implementations, such as GitHub, Bitbucket,
and GitLab.
For on-premises configurations, the Git server runs on Linux, which makes it perfect for Linux
on Z, or on z/OS through z/OS Container Extensions as described in 3.1.4, “Colocation with
IBM Z”. A Git client is available for z/OS from Rocket Software1, which makes it easy to
include Git-based CI/CD pipelines on z/OS. The real power lies in the mass integration of Git
clients with seemingly every integrated development environment (IDE) you want to use. For
more information about IDEs, see 4.5.2, “Integrated development environments ”.
1 https://www.rocketsoftware.com/product-categories/mainframe/git-for-zos
Cloud-native Applications developed from the outset to operate in the cloud and take
advantage of the characteristics of cloud architecture, or an application that was
refactored and reconfigured to do so. Developers design cloud-native
applications to be scalable, platform agnostic, and consisting of microservices.
Cloud-based A service or application being delivered over the internet “in the cloud”. It is a
general term that is applied liberally to any number of cloud offerings.
4.5.1 Cloud-native
By using cloud-native applications, you can create an open ecosystem on IBM Z for access
and use by administrators, developers, and architects, with no special skills required. With an
open and connected environment, developers and administrators can more seamlessly build
today’s business applications. These cloud-native applications can integrate with data and are
optimally deployed across and managed within the hybrid multi-cloud.
Based on individual needs per workload (resources, time, cost, and so on), you have the
choice to develop cloud-native applications in the private and public cloud, or a combination
of both. Cloud-native development features the following attributes:
Architecture: The architecture is microservice-based and the microservices run in
dedicated containers.
Automation: Everything is automated, including CI/CD, APIs, and configuration
management.
DevOps: Applications are driven by DevOps practices. The individuals that build the
applications also run them; therefore, there is less of “throwing applications over the wall.”
It provides support for IBM Enterprise COBOL, PL/I, HLASM, and JCL, including syntax
highlighting, real-time error checking, code completion, and more features. The Zowe
Explorer extension can also be added to VS Code to enable interaction with z/OS datasets,
UNIX files, JES job output, and even MVS Commands.
This option is excellent for “hybrid” developers who are working across platforms and are
familiar with VS Code from their projects on other platforms. VS Code is also widely used in
school settings, which serves as a great way for an early professional who is new to IBM Z to
quickly become productive.
Although you can obtain the Z Open Editor extension for VS Code for no cost, Wazi Code
extends those capabilities to include debugging by using the IBM z/OS Debugger. This
feature allows developers to perform visual debugging with variable inspection, breakpoints,
and the like, in the same VS Code experience where they might choose to write their code.
This entire experience can be hosted in a Red Hat CodeReady Workspace, which provides
the experience in-browser rather than locally installed. For more information about the Wazi
Sandbox, see 4.7, “Test” on page 47.
Automated build is a repeatable build process that can be performed at any time and requires
no human intervention. The build utility tools that are described in this section are designed to
work with IBM Z.
With DBB, engineers can use Groovy scripts that were written for other platforms in the z/OS
pipeline. DBB is ideal for compiling and link-editing programs. For this purpose, DBB provides
a dependency scanner to analyze the relationship between source files, and a web
application to store the dependency information and build reports.
DBB can also be used to add automated testing to the pipeline, or any other system
administration task that you might write JCL for. DBB also works well with Git and Jenkins, as
an example of an SCM and pipeline automation tool.
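The value of a dependency scanner can be sketched with a topological sort. The module and copybook names below are invented; the point is that recorded dependencies let a build compile members in the correct order and rebuild only what a change affects:

```python
from graphlib import TopologicalSorter

# DBB's dependency scanner records which source members depend on which
# (for example, programs on copybooks) so that a build can process them in
# the right order. The member names here are invented for illustration.

deps = {
    "PAYPGM": {"PAYCOPY", "DATECOPY"},   # program includes two copybooks
    "RPTPGM": {"DATECOPY"},
    "PAYCOPY": set(),
    "DATECOPY": set(),
}

order = list(TopologicalSorter(deps).static_order())
print(order)  # copybooks come before the programs that include them
```

This ordering is also what lets an impact analysis identify that a change to DATECOPY requires rebuilding both programs, while a change to PAYCOPY affects only PAYPGM.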
Figure 4-4 shows an example toolchain that Company A (from our example) is considering
for implementing DevOps principles on z/OS.
Here, the application source code is stored in Git. DBB extracts the code from Git before it
handles the compilation and generation of deployable artifacts, while Jenkins handles the
pipeline and controls when DBB is run.
4.7 Test
Users who are new to DevOps sometimes think DevOps is about doing less testing because
code moves more quickly through the development phases and into production. The reality is
that DevOps might mean conducting more testing because every code change is tested.
The key is that this testing is automated. Performing every test manually on every code
change is impossible; therefore, we rely on test automation to build quality into code
delivery.
In an agile environment, a significant need exists to move testing earlier in the development
lifecycle, which often is referred to as a “shift left.” This shift drives a need for isolated test
environments, which might conflict with the availability of development and test systems.
Moving dev/test to x86 frees up capacity for production workloads while simultaneously
allowing greater flexibility and availability for dev/test. ZD&T provides a dev/test platform for
IBM z/OS middleware, such as CICS, IMS, Db2, and other z/OS software, to run on
Intel-compatible platforms without the need for IBM Z hardware. Because the environment
that is provided by ZD&T is emulated, performance is acceptable only for test and not
production workloads.
As you might expect from a cloud-based solution, a dashboard provides your developers the
ability to provision and deprovision their individual sandboxes as needed. Similar to ZD&T,
performance for Wazi Sandbox is acceptable only for test and not production workloads.
Deployment tools do more than perform orchestrated deployments; they also track which
version is deployed at any stage of the build and delivery pipeline. They can also manage the
configurations of the environments of all the stages to which the application components must
be deployed.
Cloud provisioning capability is available through IBM z/OS Management Facility (z/OSMF)
for tasks that fall under the cloud provisioning category, such as resource management and
software services, through Representational State Transfer (REST) APIs. Applications can
use these public APIs to work with system resources and extract data.
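As a hedged sketch of driving one of these public REST APIs from a script, the following Python fragment queries the z/OSMF jobs interface by using only the standard library. The host name, credentials, and job owner are placeholders for your environment:

```python
import base64
import json
import urllib.request

def zosmf_jobs_url(host: str, owner: str, port: int = 443) -> str:
    """Build the z/OSMF REST jobs query URL for a given job owner."""
    return f"https://{host}:{port}/zosmf/restjobs/jobs?owner={owner}"

def query_jobs(host: str, owner: str, user: str, password: str) -> list:
    """Query job status through the z/OSMF REST jobs API."""
    credentials = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(
        zosmf_jobs_url(host, owner),
        headers={
            "Authorization": f"Basic {credentials}",
            # z/OSMF rejects REST requests that lack this CSRF header
            "X-CSRF-ZOSMF-HEADER": "",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # a list of job status documents
```

In production, you also configure certificate verification for the z/OSMF TLS endpoint rather than trusting the connection implicitly.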
One of the many benefits of UrbanCode Deploy is its ability to help eliminate manual
processes that are subject to human error, which in turn, leads to the enablement of
continuous delivery for any combination of on-premises, cloud, and mainframe applications.
The need for custom scripting is removed with UrbanCode Deploy by using tested
integrations with many tools and technologies, such as Jenkins (which we discuss in the next
section), Jira, Kubernetes, Microsoft, ServiceNow, and IBM WebSphere.
Quality checks are performed against every application before deployment. These checks are
referred to as deployment approval gates. This process provides greater visibility and
transparency into deployments for audit trails.
With UrbanCode® Deploy, you can set specific, required conditions that must be met before
an application is promoted to an environment by establishing gates. These gates are defined
at the environment level and each environment can have a solitary gate or conditional gates.
UrbanCode Deploy also aids in version control, which makes it easy to release only tested
versions. It manages such control by providing application models and snapshots, where
snapshots are manifests of versioned components and configuration and can be promoted as
single items versus multiple components.
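Conceptually, a snapshot is a manifest that pins each component to exactly one version so that the whole set promotes as a unit. The following Python sketch illustrates the idea only; it is not the UrbanCode Deploy data model:

```python
def take_snapshot(environment: dict) -> dict:
    """Freeze the component-to-version mapping currently in an environment."""
    return dict(environment)  # the copied mapping is the manifest

def promote(snapshot: dict, target: dict) -> dict:
    """Deploy every pinned component version to the target as one unit."""
    target.update(snapshot)
    return target
```

Promoting the manifest, rather than individual components, is what guarantees that only tested version combinations reach the next environment.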
If you decide to use the IBM UrbanCode Deploy product suite, you receive access to
Blueprint Designer. This component provides services, such as cloud orchestration, a
graphical editor, IaC, and Cloud Automation Manager. These services in Blueprint Designer
collectively help to establish a CI/CD pipeline to generate and destroy short-term test
environments to swiftly test application changes.
Among the most critical benefits of UrbanCode Deploy is its ability to automate and
increase the velocity of software deployment through different environments. It is designed to
support the DevOps approach (a critical component of modernization), which enables the
rapid release of incremental changes reliably and repeatedly.
It also includes build and test tools that are used to automate application deployment to
mainframe production environments. Figure 4-6 shows the IBM UrbanCode for deployment
automation flow.
After running a build in your existing Jenkins CI infrastructure, the UrbanCode Deploy/Jenkins
plug-in enables you to publish the build result to UrbanCode Deploy and trigger an application
process for deployment.
Another aspect of deployment that is covered here is the deployment of IT Infra services. Red
Hat Ansible Engine is the component within Ansible Automation Platform that uses hundreds
of modules to automate all aspects of IT environments and processes. It helps developers
and IT operations teams to quickly deploy IT services, applications, and environments to
automate routine activities.
An initial collection of Ansible playbooks is designed to handle many tasks, such as working
with datasets, retrieving job output, and submitting jobs on the system. More collections that
are related to various use cases are being added to z/OS and the IBM Z broader community.
Red Hat Ansible Certified Content for IBM Z helps you connect IBM Z to your wider enterprise
automation strategy through the Ansible Automation Platform ecosystem. The IBM z/OS core
collection is part of this certified content that focuses on z/OS infrastructure deployment and
management. This automation content enables you to start using Ansible Automation
Platform with z/OS to unite workflow orchestration in one easy-to-use platform with
configuration management, provisioning, and application deployment.
The following z/OS core modules are used in the IBM z/OS core collection:
zos_job_submit
zos_job_query
zos_job_output
zos_data_set
For more information about how to use the playbook, see this web page.
In this chapter, we discuss best practices for infrastructure, application, and process
modernization with IBM Z.
This issue becomes clear with teams who use library managers to promote their applications
across environments. Library managers work well for writing applications in a single stream
because their stages are tied to each environment between development and production.
However, they do not handle multi-stream development well.
Instead of having a code “branch” representing a particular environment, modern SCMs use
each branch for a specific feature or deliverable. Therefore, you can merge the branches
together in different ways according to which features you need in a specific environment.
Meanwhile, each branch in the SCM can be updated independently of the others.
Figure 5-1 shows these differences and emphasizes the increase in productivity that can be
achieved by using a modern SCM that enables multi-stream development.
One important detail to remember is that source code is not limited to application code only.
Benefits can be realized by storing your test cases in an SCM, and especially in colocating
those tests alongside the application code that they test.
This process also can be extended to include automation scripts and any other text files that
are part of coding, building, or deploying an application. Having everything that you need in
the same SCM enables CI/CD pipelines to include automated building, deploying, and testing
for application changes.
Consistent tools across environments means that your teams can use shared skills and
expand the number of people who might work in an environment. Historically, teams that work
in one environment are unfamiliar with the tools that are used in other environments and vice
versa.
By using universal tools regardless of hardware platform, all teams become familiar with the
same tools and might work in any environment. If the tools you use are common and used in
a school curriculum, you are positioned for an even better advantage because of the
immediate increase in productivity from new hires that join your ranks. All of these factors are
beneficial to your team’s velocity.
Common tools also allow for sharing digital artifacts across environments. For example, a
common DevOps toolchain allows for the definition of one pipeline that performs some
operations on IBM Z to deploy the back end, and some operations on public cloud to deploy
the front end, with a centralized dashboard for reporting. In a general sense, scripts or
common routines might be shared across environments, which increases shared skills and
knowledge among teams and in effect boosts productivity.
It is common for developers to use a GUI-based editor when working on their code. Thus, it
makes sense that we want to enable their use for application code that runs on IBM Z. This
enablement might be a change for experienced programmers, and some tasks might be
easier to complete without the GUI. However, the focus must be on enabling as many people
as possible to increase productivity.
A similar story is true of operations. Experienced system programmers likely have their own
customized sets of jobs, scripts, and so on, to perform daily tasks. Because they understand
the details of what they are doing, they can quickly respond to requests. Those programmers
with less experience and smaller libraries of scripts might be more productive if they can use
a standard workflow to accomplish their task requests.
Modernization means doing things differently. The emphasis must be on doing things that
enable future productivity, which means offering IDEs for coding and cloud provisioning and
management for system programmer actions.
For more information, see 4.8, “Provision, deploy, and release” on page 48.
Multiple phases of testing are one of the components that contributes to this stability because
nothing is promoted into production before it is thoroughly vetted for quality. The test
environments that are used for this vetting must still be controlled by skilled system
programmers, but an opportunity might exist to allow greater flexibility because the effect of
downtime is not as critical as it is in production. Empowering developers by allowing them to
self-provision temporary resources for their application testing can greatly speed up
development.
For example, one of Company A’s COBOL developers is changing an application that is
hosted in CICS on IBM Z. In past configurations, one shared CICS environment was used for
multiple testers, which meant that testers had to coordinate who can use it at a specific time
so that no one affected anyone else’s testing.
Through modernization with CP&M, Company A enabled developers who are not skilled as
system programmers to easily provision a private, temporary CICS region to use for their
testing and then deprovision it when they are done.
Because developers get their own emulated z/OS environment, they have much more
flexibility for customization as needed for their testing. Better yet, each developer can have
their own environment, which means that developers do not affect each other with their
customizations or if their testing causes failures. This model gives the benefits of each
developer having their own z/OS system without affecting IBM Z hardware usage.
At first, Company A relied on a shared z/OS LPAR for their developers to unit test their
application changes. Eventually, they outgrew this model and had many developers working
on changes that had to be tested independently of one another.
At the time, commodity Linux servers were sitting unused in their data center, so they installed
IBM Z Development and Test (ZD&T) and turned their developers loose to work in parallel,
which increased productivity. Company A recently chose Red Hat OpenShift as their
container platform, and they are considering moving these emulated environments to IBM
Wazi for CodeReady Workspaces.
For more information about ZD&T and Wazi, see 4.7, “Test” on page 47.
Traditionally, this update meant recompiling application source code with an updated
compiler; the newer compiler knows about the new instructions, so it uses them when it
translates source code to machine instructions. For COBOL applications, another alternative
is available: IBM Automatic Binary Optimizer for z/OS (ABO).
IBM ABO takes the executables that you use and optimizes them with the latest machine
instructions. Even if you lack the source code for some of your applications, you can still
optimize them with ABO. Although it is still necessary to test the optimized executables before
deploying to production, you can be assured that no source code was changed and less
required testing is needed than you typically associate with application source changes.
COBOL applications that are changed often are recompiled with each change, so they are
more likely to benefit from compiler improvements. ABO is a great approach for those
applications that are not changed frequently or cannot be changed frequently because of lost
source code. These applications are no longer relegated to old machine instructions.
For more information about IBM Automatic Binary Optimizer for z/OS, see this web page.
A best practice is to add an assessment step to each update that focuses on new capabilities.
These capabilities can be features of new IBM Z hardware, such as compression cards, or
they can be functions that come with upgrading a middleware subsystem, such as a new
REST-enablement capability.
It is important for the assessment to identify any new capabilities and which teams must be
contacted for awareness. One all-too-common scenario occurs when system programmers are
aware of new hardware features, but database administrators are not. If the database
administrators knew about the new hardware, they could update their subsystems to use it.
When running production across hybrid platforms, the latency between the platforms must be
a point of concern and planning. Even for (or perhaps especially for) batch transactions that
occur after the close of business hours, adding a fraction of a second can mean that work is
not completed before the start of the next business day. This issue is seen even for
on-premises computing that starts on IBM Z, involves calculations on a server rack in the
same data center, and then ultimately finishes back on Z. Cloud deployments increase this
latency because of longer network paths between systems.
For more information about the benefits of colocating work, see 3.1.4, “Colocation with IBM Z”
on page 30. Consider the latency introduced between platforms as you plan modernization.
The IBM Academic Initiative fosters and advances education that is related to enterprise
computing. Participating schools span the high school and university levels, with offerings
that range from course curricula that include IBM Z to computer science societies and other
clubs, and everything in between. IBM often partners with clients to create an event with students from
their geographic area. This endeavor is a worthwhile event for all who participate because
students need jobs, clients need new hires, and IBM wants both to succeed.
Another opportunity to find enterprise computing talent is through a program that is called
Master the Mainframe. It is a coding competition that teaches enterprise computing concepts
with the look and feel with which students are familiar.
For more information about Master the Mainframe, see this website.
You also can contact the IBM Z Ecosystem team at: [email protected].
IBM Garage
IBM uses the garage method for collaborating with clients to examine a challenge, design a
solution, and foster it into production. The process starts with a Design Thinking session to
disassemble the problem, brainstorm ideas, and test concepts.
Afterward, IBM Garage™ experts work to create a minimum viable product (MVP) with their
counterparts on the client’s team. The process can help accelerate value and reduce risk, in
addition to providing guidance to deliver immediate business value.
The IBM Garage experience blends business strategy, design, and technology into a single
end-to-end journey. The IBM Garage methodology is built around co-creating, co-executing,
and co-operating a solution by applying technology to a business problem.
For more information about IBM Garage, see this web page.
Developer Advocates
If you want to hear technical information about a technology from experienced technical
users, IBM Developer advocate teams might be what you are looking for.
Developer Advocates present on technical topics that are based on personal experience,
without any sales pressure. If technically feasible, Developer Advocates can even hold
hands-on workshops so you can get practical experience without affecting your own systems.
For more information about offerings from the IBM Developer Advocacy teams for IBM Z or
IBM Cloud, contact your IBM account team.
IBM Developer
IBM Developer is a community where you can find a plethora of technical learning materials
and engage with other professionals through blogs and events. Great technical content is
available on IBM Developer, including step-by-step tutorials and code patterns that include
some sample code that is published under open source guidelines to get you started.
Podcasts where SMEs discuss technical topics in detail and videos that include demos also
are available.
IBM Developer is a great resource for technical topics about application modernization and
modern tools for operations. For more information about IBM Developer, see this web page.
In this chapter, we explored some of the ways to get started and discussed some of the
technologies and products that can help you modernize.
This showcase is built on actual use cases and is flexible so that it can be customized to
better represent more real-world systems.
For more information about engaging with the technical showcase team, see this web page.
Figure 6-1 shows a visualization of the topology of this showcase, which is a stock trading
application.
The portion of our solution that is running on Linux is shown on the left side of Figure 6-1,
where containers are running on Red Hat OpenShift. It is in this area that we use the open
source container-orchestration system, Kubernetes. Kubernetes provides many useful
functions, such as load-balancing, high availability, and scalability.
Within this environment, we deploy Jenkins as our orchestrator, through which we organize
and schedule work to be performed. This deployment allows us to use the flexibility and high
availability of OpenShift for our applications and our DevOps tools.
Jenkins uses the Ansible plug-in to run Ansible playbooks that we wrote to deploy our
applications and middleware, primarily making use of APIs to perform actions in z/OS.
A portion of our private cloud is shown in the center of Figure 6-1 on page 62, where we have
our z/OS system. Our middleware stack on Z consists of Customer Information Control
System (CICS), Db2, IBM MQ, and z/OS Connect, all of which are deployed dynamically from
templates that are defined in Cloud Provisioning and Management (CP&M). We also have
Zowe, Node.js, and our COBOL application in our z/OS system.
Lastly, in the third environment that is shown on the far right side of Figure 6-1 on page 62, we
have some public API services that are hosted in IBM Cloud. These services work in tandem
with the stock trading application in our private cloud to provide certain functions.
In our case, we have calls from the OpenShift environment going to our z/OS environment,
ultimately reaching ISTPOMA through z/OS Connect in CICS.
A user can interact with the stock trader application in two ways: directly through REST APIs,
or indirectly through the web-based UI.
Figure 6-2 shows the APIs that are exposed with z/OS Connect and then redirected to
ISTPOMA on z/OS. The web-based UI also uses these APIs.
/queryPortfolio: Query the list of all stock trading portfolios that were added by
users, to view all loyalty and stock information.
/viewPortfolio/{owner}: View the stock trading portfolio of the specified owner, to
view loyalty and stock information.
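Calling these APIs from any REST client is straightforward. As a minimal illustration, the following Python helpers build the two request URLs; the base URL is a placeholder for your z/OS Connect endpoint:

```python
def query_portfolio_url(base: str) -> str:
    """URL for listing every portfolio with its loyalty and stock data."""
    return f"{base}/queryPortfolio"

def view_portfolio_url(base: str, owner: str) -> str:
    """URL for one owner's portfolio; the owner name fills the path segment."""
    return f"{base}/viewPortfolio/{owner}"
```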
Microservices are a method for development teams to modernize their COBOL programs and
applications. By creating microservices, development teams can split an application into a
collection of API-enabled logical parts.
These microservices still drive the same core business logic through their calls through z/OS
Connect while enabling more features and logic necessary for complex front-end applications
without changing our COBOL code.
We created the following microservices out of our ISTPOMA COBOL application, which we
chose to deploy within Docker containers:
Stock-Quote-Python: A microservice that uses iexFinance to get current stock information.
Loyalty: A microservice that receives notifications about new loyalty level updates and
posts them to IBM MQ on z/OS. Loyalty level updates are determined by the value of the
stock that the user adds to their portfolio. The following thresholds for the loyalty levels are
used:
– Loyalty level default: “Basic”
– Loyalty level “Bronze”: If total is greater than $10,000.00
– Loyalty level “Silver”: If total is greater than $50,000.00
– Loyalty level “Gold”: If total is greater than $100,000.00
– Loyalty level “Platinum”: If total is greater than $1,000,000.00
Notification: A microservice that watches for new messages on IBM MQ and sends
information to IBM Cloud Functions to perform more notification processing, such as
updating Slack or sending SMS messages. This service is optional; no other
microservices rely on it to function.
Trader: A front-end microservice that bridges together all other services.
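The loyalty thresholds listed above are simple enough to express directly in code. The following Python function is our illustration of that threshold logic, not the Loyalty microservice's actual implementation:

```python
# Thresholds from highest to lowest; the first one exceeded wins.
LOYALTY_LEVELS = [
    (1_000_000.00, "Platinum"),
    (100_000.00, "Gold"),
    (50_000.00, "Silver"),
    (10_000.00, "Bronze"),
]

def loyalty_level(total: float) -> str:
    """Return the loyalty level for a portfolio's total stock value."""
    for threshold, level in LOYALTY_LEVELS:
        if total > threshold:
            return level
    return "Basic"  # default level below all thresholds
```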
Because most of our automation logic lives in Ansible playbooks, it is easy to switch to
another orchestration platform if necessary.
In this technical showcase, Jenkins is running in a Kubernetes container within our Red Hat
OpenShift environment. Jenkins performs tasks that are found in what are known as
pipelines. In most cases, the tasks that are defined in our Jenkins pipelines specify different
Ansible playbooks to run. These Ansible playbooks contain the logic to provision and
configure the different parts of our application.
When a Jenkins build is triggered, the playbooks run and report their status back to Jenkins,
which indicates the success or failure of the playbook.
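The flow described here, pipeline stages that each run a playbook and report status back, can be sketched in Python. This is a conceptual illustration only; the real pipeline is defined in Jenkins (a Groovy Jenkinsfile) using the Ansible plug-in, and `run_stage` stands in for invoking `ansible-playbook`:

```python
def run_pipeline(playbooks: list, run_stage) -> dict:
    """Run each playbook in order, recording success or failure per stage.

    run_stage(playbook) returns an integer return code; 0 means success.
    Stops at the first failure, as a Jenkins pipeline typically would,
    because later stages depend on earlier ones.
    """
    results = {}
    for playbook in playbooks:
        rc = run_stage(playbook)
        results[playbook] = rc == 0
        if rc != 0:
            break  # fail fast; do not run dependent stages
    return results
```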
For more information about Jenkins, see the Jenkins User Documentation web page.
6.5 Ansible
Ansible is an automation engine that automates provisioning, configuration management,
application deployment, intra-service orchestration, and many other IT requirements.
Designed for multitier deployments, Ansible models your IT infrastructure by describing how
all your systems inter-relate, rather than managing only one system at a time. Because it uses
no agents and no other custom security infrastructure, it is easy to deploy, and most
importantly, it uses a simple language (YAML) in the form of files that are called Ansible
playbooks. These playbooks allow you to describe your automation jobs in a way that
resembles plain English.
Ansible is extensible by using modules. Modules are discrete units of code that accomplish a
specific task. Some common modules include the ability to perform regular expression
queries or to update a system’s packages. During the automated configuration and
deconfiguration of CICS, IBM DB2®, IBM MQ, and z/OS Connect, we make heavy use of the
URI module, which calls REST APIs.
We use Jenkins to orchestrate the Ansible playbooks in this showcase. Jenkins allows us to
trigger our automation in various ways, ranging from a change in a GitHub repository to a
message that is sent to a Slack bot.
When a build is started in Jenkins, the pipelines run a series of Ansible playbooks, which use
APIs that are running in our z/OS system to configure our environment. These playbooks
start by provisioning our middleware, a process that is started through a z/OSMF CP&M API.
Subsequent playbooks call middleware APIs directly, such as the CICS CMCI APIs, which we
use to dynamically define CICS constructs, such as Db2 connections and programs (for
example, our COBOL program). Driving these APIs through Ansible is advantageous
because it provides a common framework across our z/Architecture.
For more information about the use of Ansible with IBM Z, see this web page.
By streamlining some traditional tasks and automating others, z/OSMF can help to simplify
some areas of z/OS system management. It also provides a framework for managing various
aspects of a z/OS system through a web browser interface. This feature in turn allows you to
access and manage your z/OS system from anywhere.
Multiple users can log in to z/OSMF by using different computers or browsers, or multiple
instances of the same browser. All of the templates for z/OSMF CP&M are loaded into
z/OSMF, and then API calls are made to z/OSMF to start the workflows and the templates.
z/OSMF provides you with a single point of control for the following tasks:
Viewing, defining, and updating policies that affect system behavior.
Monitoring the performance of the systems in your enterprise.
Managing software that runs on z/OS.
Performing problem data management tasks.
Consolidating your z/OS management tools.
Note: z/OSMF is the only piece that must be configured on z/OS before any automation is
run because of the necessity for specific APIs and CP&M of Software Services.
To provision the middleware, you define templates in Software Services. Templates are
provided by IBM for all the z/OS middleware we provision in the showcase, including CICS,
Db2, IBM MQ, and z/OS Connect. After these templates are defined, you can provision
unique instances of the middleware automatically, which creates all necessary data sets,
startup procs, and file systems to run the middleware.
With the addition of the Network Configuration Assistant, all network resources that are
required for the middleware instance also are allocated dynamically.
Typically, a CP&M template includes at least a workflow for provisioning and one for
deprovisioning. Each step of a workflow can be browsed as it runs, which allows you to see
the output of JCL, shell scripts, and REST responses. By using the Workflow Editor plug-in,
workflows can easily be edited if they must be tailored to your environment.
After our templates are defined and tested, we use the CP&M APIs to drive provisioning from
Ansible by using the URI module. While the middleware instance provisions, we query the
instance for status until the provisioning is complete. This process allows us to start a Db2,
CICS, IBM MQ, or z/OS Connect instance through a simple API call.
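The status polling described above can be sketched as follows. The state strings and the stubbed `get_state` function are assumptions that stand in for GET requests against the CP&M instance API:

```python
import time

def wait_for_provisioning(get_state, timeout_s: int = 1800, interval_s: int = 5) -> str:
    """Poll the instance state until it is provisioned or the timeout expires.

    get_state() stands in for querying the CP&M instance API and returns a
    state string; the values used here are our assumption for illustration.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        state = get_state()
        if state == "provisioned":
            return state
        if state == "failed":
            raise RuntimeError("provisioning failed")
        time.sleep(interval_s)  # wait before querying again
    raise TimeoutError("provisioning did not complete in time")
```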
6.7 Zowe
Zowe is an open source project that was created to host technologies that benefit the Z
platform from all members of the Z community (independent software vendors, system
integrators, and z/OS consumers). Zowe comes with a set of APIs and operating system
capabilities that applications build on, and includes some ready-for-use applications.
Zowe offers interfaces to interact with z/OS in a way that resembles what you expect of other
cloud platforms. You can use these interfaces as delivered or through plug-ins and extensions
that are created by clients or third-party vendors.
For this showcase, Zowe hosts APIs behind a single port through the API Mediation Layer
gateway, which allows for easier interaction with our APIs on the z/OS system.
By using Zowe CLI, we can take Zowe commands, translate them into API calls against our
z/OS system, and plug them into bash, Python, or Java scripts to complete some of the
complex functions on z/OS. Examples of these complex functions are creating or
manipulating a file, submitting a job, and API calls against plug-ins for middleware.
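As a sketch of plugging Zowe CLI into a Python script, the fragment below builds and runs a job-submission command. Treat the exact command shape as an assumption; it depends on your Zowe CLI version and assumes a default z/OSMF profile is already configured:

```python
import subprocess

def submit_job_command(dataset_member: str) -> list:
    """Build a Zowe CLI command that submits JCL from a data set member.

    Assumes the `zowe zos-jobs submit data-set` command shape and an
    existing default z/OSMF profile.
    """
    return ["zowe", "zos-jobs", "submit", "data-set", dataset_member]

def run(cmd: list) -> str:
    """Run a CLI command and return its stdout, raising on a nonzero exit."""
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout
```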
6.8 Db2
IBM Db2 is a family of hybrid data management products that offers a complete suite of
AI-empowered capabilities to help you manage structured and unstructured data that is
on-premises and in private and public cloud environments. Db2 is built on an intelligent
common SQL engine that is designed for scalability and flexibility. It drives high-impact data
insights, seamless business continuity, and real business transformation.
Db2 for z/OS is an industrial-strength database system that is known for its reliability, security,
performance, and recoverability. Key components for recoverability of Db2 and its user
databases are dual BSDS, active log, and archive log datasets, along with a powerful suite of
program utilities to support performance, backup, and recovery.
Db2 supports many languages and remote application access. Db2’s key to availability is its
data sharing technology, which allows 24x7 access while system and database maintenance
is performed.
CICS manages shared resources and the integrity of data, and prioritizes execution to deliver
fast response times. CICS authorizes users, allocates resources (real storage and cycles), and passes
on database requests by the application to the suitable database manager, such as Db2. We
might say that CICS acts like and performs many of the same functions as the z/OS operating
system.
To interact with this application, our front-end stock trading application uses APIs that are
provided by z/OS Connect that make these services available. When z/OS Connect receives
these requests, it converts our JSON payload to a COBOL copybook and passes the
transaction to CICS to be processed. When that process is successful, the transaction is
hardened in our Db2 database.
CICS processes the incoming transactions and forwards the payload to our COBOL
application in copybook form. Our COBOL application processes the copybook and makes
updates to, or retrieves the relevant data from, our Db2 database.
A payload is then passed back to z/OS Connect and converted back into JSON before being
returned to our application. z/OS Connect also allows us to omit or transform fields before
they are returned to our front-end application, letting us remove fields that might not be
relevant to the application. This capability is powerful because it allows us to give our
developers a way to interact with our traditional application in a way that is familiar to them,
regardless of their experience with z/OS.
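Conceptually, the mapping that z/OS Connect performs is a conversion between a JSON document and a fixed-length record described by a copybook. The following Python sketch is a rough, hypothetical illustration of that idea; in reality the mapping is generated from the copybook, not hand-written, and the field layout below is invented for the example:

```python
# Hypothetical copybook layout: field name -> (offset, length), all PIC X.
LAYOUT = {"OWNER": (0, 8), "SYMBOL": (8, 8), "SHARES": (16, 6)}
RECORD_LEN = 22

def json_to_record(payload: dict) -> str:
    """Render a JSON payload as a fixed-length record matching the layout."""
    record = [" "] * RECORD_LEN
    for field, (offset, length) in LAYOUT.items():
        # Pad or truncate each value to its fixed field width.
        value = str(payload.get(field.lower(), "")).ljust(length)[:length]
        record[offset:offset + length] = value
    return "".join(record)

def record_to_json(record: str) -> dict:
    """Extract the layout's fields from a fixed-length record back into JSON."""
    return {
        field.lower(): record[offset:offset + length].strip()
        for field, (offset, length) in LAYOUT.items()
    }
```

The reverse direction, dropping or transforming fields before they are returned, corresponds to the field omission capability described above.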
For more information about z/OS Connect Enterprise Edition, see this web page.
For more information about IBM MQ, see IBM Knowledge Center.
6.12 Summary
Created by the IBM Garage for Systems team in IBM Poughkeepsie, NY, US, this technical
showcase was a culmination of several different technologies to demonstrate one method to
automate deployments across the z/Architecture.
We used Ansible for most of our automation to deploy our entire application stack. This
process included a few different steps in multiple parts of the Z architecture.
First, we needed to look at automation that focused on z/OS to deploy and configure our
middleware. After that process completed, we then focused on compiling our COBOL code to
install to our recently created CICS region.
Finally, we used Red Hat OpenShift to build and run our microservices to drive our business
logic on z/OS.
Along this journey, we constantly passed dynamically created information to get the solution
interconnected. The use of Ansible was ideal because we configured all of our systems with a
single language, despite having large architectural differences. This common framework
made it easy to tie everything together to create our fully automated solution.
This showcase demonstrated how IBM Z can play a key role in your hybrid cloud.
To learn more about how the technical showcase works, see the zModernization Technical
Showcase Presentation YouTube video.
The publications that are listed in this section are considered particularly suitable for a more
detailed discussion of the topics that are covered in this paper.
IBM Redbooks
The following IBM Redbooks publications provide more information about the topics in this
document. Note that some publications that are referenced in this list might be available in
softcopy only.
Modernizing Applications with IBM CICS, REDP-5628
Accelerating Modernization with Agile Integration, SG24-8452
IBM Z Integration Guide for Hybrid Cloud, REDP-5319
Getting Started with z/OS Container Extensions and Docker, SG24-8457
Mainframe Modernization and Skills: The Myth and the Reality, REDP-5115
IBM z/OS Container Extensions (zCX) Use Cases, SG24-8471
Cloud Workloads on the Mainframe, REDP-5108
Batch Modernization on z/OS, SG24-7779
Red Hat OpenShift on IBM Z Installation Guide, REDP-5605
IBM Storage for Red Hat OpenShift Blueprint, REDP-5565
You can search for, view, download, or order these documents and other Redbooks,
Redpapers, web docs, drafts, and additional materials at the following website:
ibm.com/redbooks
REDP-5627-00
ISBN 0738459534
Printed in U.S.A.