
Front cover

In-memory Computing
with SAP HANA on IBM
eX5 Systems
IBM Systems Solution
for SAP HANA
SAP HANA overview
and use cases
Basic in-memory
computing principles

Gereon Vey
Tomas Krojzl
Ilya Krutov

ibm.com/redbooks

International Technical Support Organization


In-memory Computing with SAP HANA on IBM eX5
Systems
January 2013

SG24-8086-00

Note: Before using this information and the product it supports, read the information in
"Notices" on page vii.

First Edition (January 2013)


This edition applies to IBM System Solution for SAP HANA, an appliance based on IBM System
eX5 servers and the SAP HANA offering.
Copyright International Business Machines Corporation 2013. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP
Schedule Contract with IBM Corp.

Contents
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
The team who wrote this book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . x
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . xii
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Chapter 1. History of in-memory computing at SAP . . . . . . . . . . . . . . . . . . 1
1.1 SAP Search and Classification (TREX). . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 SAP liveCache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 SAP NetWeaver Business Warehouse Accelerator . . . . . . . . . . . . . . . . . . 3
1.3.1 SAP BusinessObjects Explorer Accelerated . . . . . . . . . . . . . . . . . . . . 5
1.3.2 SAP BusinessObjects Accelerator . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.4 SAP HANA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Chapter 2. Basic concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.1 Keeping data in-memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.1.1 Using main memory as the data store . . . . . . . . . . . . . . . . . . . . . . . 10
2.1.2 Data persistence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2 Minimizing data movement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.2.1 Compression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.2.2 Columnar storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.2.3 Pushing application logic to the database . . . . . . . . . . . . . . . . . . . . . 19
2.3 Divide and conquer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.3.1 Parallelization on multi-core systems . . . . . . . . . . . . . . . . . . . . . . . . 20
2.3.2 Data partitioning and scale-out . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Chapter 3. SAP HANA overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.1 SAP HANA overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.1.1 SAP HANA architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.1.2 SAP HANA appliance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.2 SAP HANA delivery model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.3 Sizing SAP HANA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.3.1 The concept of T-shirt sizes for SAP HANA . . . . . . . . . . . . . . . . . . . 26
3.3.2 Sizing approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.4 SAP HANA software licensing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33


Chapter 4. Software components and replication methods . . . . . . . . . . . 35


4.1 SAP HANA software components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
4.1.1 SAP HANA database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
4.1.2 SAP HANA client. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
4.1.3 SAP HANA studio . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.1.4 SAP HANA studio repository. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.1.5 SAP HANA landscape management structure . . . . . . . . . . . . . . . . . 47
4.1.6 SAP host agent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.1.7 Software Update Manager for SAP HANA . . . . . . . . . . . . . . . . . . . . 48
4.1.8 SAP HANA Unified Installer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.2 Data replication methods for SAP HANA . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.2.1 Trigger-based replication with SAP Landscape Transformation . . . . 52
4.2.2 ETL-based replication with SAP BusinessObjects Data Services . . 53
4.2.3 Extractor-based replication with Direct Extractor Connection . . . . . . 54
4.2.4 Log-based replication with Sybase Replication Server . . . . . . . . . . . 55
4.2.5 Comparing the replication methods . . . . . . . . . . . . . . . . . . . . . . . . . 56
Chapter 5. SAP HANA use cases and integration scenarios . . . . . . . . . . 59
5.1 Basic use case scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
5.2 SAP HANA as a technology platform . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
5.2.1 SAP HANA data acquisition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
5.2.2 SAP HANA as a source for other applications . . . . . . . . . . . . . . . . . 65
5.3 SAP HANA for operational reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
5.4 SAP HANA as an accelerator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
5.5 SAP products running on SAP HANA . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
5.5.1 SAP NetWeaver BW running on SAP HANA . . . . . . . . . . . . . . . . . . 75
5.5.2 Migrating SAP NetWeaver BW to SAP HANA . . . . . . . . . . . . . . . . . 80
5.6 Programming techniques using SAP HANA . . . . . . . . . . . . . . . . . . . . . . . 85
Chapter 6. The IBM Systems Solution for SAP HANA . . . . . . . . . . . . . . . . 87
6.1 IBM eX5 Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
6.1.1 IBM System x3850 X5 and x3950 X5 . . . . . . . . . . . . . . . . . . . . . . . . 88
6.1.2 IBM System x3690 X5. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
6.1.3 Intel Xeon processor E7 family . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
6.1.4 Memory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
6.1.5 Flash technology storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
6.2 IBM General Parallel File System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
6.3 Custom server models for SAP HANA . . . . . . . . . . . . . . . . . . . . . . . . . . 104
6.3.1 IBM System x workload-optimized models for SAP HANA . . . . . . . 104
6.3.2 SAP HANA T-shirt sizes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
6.3.3 Scale-up . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
6.4 Scale-out solution for SAP HANA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
6.4.1 Scale-out solution without high-availability capabilities . . . . . . . . . . 112


6.4.2 Scale-out solution with high-availability capabilities . . . . . . . . . . . . 114


6.4.3 Networking architecture for the scale-out solution . . . . . . . . . . . . . 119
6.4.4 Hardware and software additions required for scale-out. . . . . . . . . 121
6.5 Installation services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
6.6 Interoperability with other platforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
6.7 Support process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
6.7.1 IBM SAP integrated support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
6.7.2 The IBM SAP International Competence Center InfoService . . . . . 125
6.8 IBM Systems Solution with SAP Discovery System . . . . . . . . . . . . . . . . 125
Chapter 7. SAP HANA operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
7.1 Backing up and restoring data for SAP HANA . . . . . . . . . . . . . . . . . . . . 130
7.1.1 Basic Backup and Recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
7.1.2 IBM Tivoli Storage Manager for ERP . . . . . . . . . . . . . . . . . . . . . . . 132
7.2 Disaster Recovery for SAP HANA. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
7.2.1 Using backup and restore as a disaster recovery solution . . . . . . . 138
7.2.2 Disaster recovery by using replication . . . . . . . . . . . . . . . . . . . . . . 139
7.3 Monitoring SAP HANA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
7.3.1 Monitoring with SAP HANA Studio . . . . . . . . . . . . . . . . . . . . . . . . . 142
7.3.2 Monitoring SAP HANA with Tivoli . . . . . . . . . . . . . . . . . . . . . . . . . . 143
7.4 Sharing an SAP HANA system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
7.5 Installing additional agents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
7.6 Software and firmware levels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
Chapter 8. Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
8.1 Benefits of in-memory computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
8.2 SAP HANA: An innovative analytic appliance . . . . . . . . . . . . . . . . . . . . . 150
8.3 IBM Systems Solution for SAP HANA. . . . . . . . . . . . . . . . . . . . . . . . . . . 151
8.3.1 Workload Optimized Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
8.3.2 Leading performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
8.3.3 IBM GPFS enhancing performance, scalability, and reliability . . . . 153
8.3.4 Scalability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
8.3.5 Services to speed deployment . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
8.4 Going beyond infrastructure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
8.4.1 A trusted service partner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
8.4.2 IBM and SAP team for long-term business innovation . . . . . . . . . . 157
Appendix A. Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
GPFS license information. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
Abbreviations and acronyms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165


Other publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165


Online resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
Help from IBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166


Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult your
local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not infringe
any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and
verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made to the
information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the materials
for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any
obligation to you.
Any performance data contained herein was determined in a controlled environment. Therefore, the results
obtained in other operating environments may vary significantly. Some measurements may have been made on
development-level systems and there is no guarantee that these measurements will be the same on generally
available systems. Furthermore, some measurements may have been estimated through extrapolation. Actual
results may vary. Users of this document should verify the applicable data for their specific environment.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them as
completely as possible, the examples include the names of individuals, companies, brands, and products. All of
these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is
entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any
form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs
conforming to the application programming interface for the operating platform for which the sample programs are
written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or
imply reliability, serviceability, or function of these programs. You may copy, modify, and distribute these sample
programs in any form without payment to IBM for the purposes of developing, using, marketing, or distributing
application programs conforming to IBM's application programming interfaces.


Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business
Machines Corporation in the United States, other countries, or both. These and other IBM trademarked
terms are marked on their first occurrence in this information with the appropriate symbol (® or ™),
indicating US registered or common law trademarks owned by IBM at the time this information was
published. Such trademarks may also be registered or common law trademarks in other countries. A current
list of IBM trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
AIX
BladeCenter
DB2
Global Business Services
Global Technology Services
GPFS
IBM
Intelligent Cluster
Passport Advantage
POWER
PureFlex
RackSwitch
Redbooks
Redpaper
Redbooks (logo)
System x
System z
Tivoli
z/OS

The following terms are trademarks of other companies:

Intel Xeon, Intel, Itanium, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or
registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United
States, other countries, or both.

Java, and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle
and/or its affiliates.

Other company, product, or service names may be trademarks or service marks of others.


Preface
This IBM Redbooks publication describes in-memory computing appliances
from IBM and SAP that are based on IBM eX5 flagship systems and SAP HANA.
We first discuss the history and basic principles of in-memory computing, then
we describe the SAP HANA offering, its architecture, sizing methodology,
licensing policy, and software components. We also review the IBM eX5 hardware
offerings. Then we describe the architecture and components of the
IBM Systems solution for SAP HANA and its delivery, operational, and support
aspects. Finally, we discuss the advantages of using IBM infrastructure platforms
for running the SAP HANA solution.
The following topics are covered:

The history of in-memory computing
The basic principles of in-memory computing
The SAP HANA overview
Software components and replication methods
SAP HANA use cases and integration scenarios
The IBM Systems solution for SAP HANA
SAP HANA operations
Benefits of using the IBM infrastructure for SAP HANA

This book is intended for SAP administrators and technical solution architects. It
is also for IBM Business Partners and IBM employees who want to know more
about the SAP HANA offering and other available IBM solutions for SAP
customers.


The team who wrote this book


This book was produced by a team of specialists from around the world working
at the International Technical Support Organization, Raleigh Center.

Gereon Vey has been a member of the IBM System x
Team at the IBM SAP International Competence
Center (ISICC) in Walldorf, Germany since 2004. He is
the Global Subject Matter Expert for the SAP
Appliances, such as SAP NetWeaver BW Accelerator
and SAP HANA, at the ISICC, and is part of the team
developing the IBM Systems Solution for SAP HANA.
His other activities include maintaining sizing
guidelines and capacity data for System x servers and
pre-sales support for IBM worldwide. He has worked in
the IT industry since 1992. He graduated with a degree
in computer science from the University of Applied
Sciences in Worms, Germany in 1999.
Tomas Krojzl is The Open Group Master Certified IT
Specialist (Packaged Application Implementations) at
the IBM Delivery Center Central Europe (Brno site). He
has practical working experience with the SAP HANA
database almost since its release to the market. In
April 2012, his contributions in the SAP HANA area
helped him to be recognized as an SAP Mentor, and in
October 2012 he became one of the first SAP HANA
Distinguished Engineers. He started in the IT industry
in January 2000 as a developer and joined IBM in July
2005. He graduated from the Tomas Bata University in
Zlin and achieved a Masters degree in Information
Technology in 2006.


Ilya Krutov is a Project Leader at the ITSO Center in
Raleigh and has been with IBM since 1998. Before
joining the ITSO, Ilya served in IBM as a Run Rate
Team Leader, Portfolio Manager, Brand Manager,
Technical Sales Specialist, and Certified Instructor. Ilya
has expertise in IBM System x, BladeCenter and
PureFlex System products, server operating
systems, and networking solutions. He has authored
over 100 books, papers, and Product Guides. He has a
Bachelor degree in Computer Engineering from the
Moscow Engineering and Physics Institute.
Thanks to the following people for their contributions to this project:
From the International Technical Support Organization, Raleigh Center:

Kevin Barnes
Tamikia Barrow
Mary Comianos
Shari Deiana
Cheryl Gera
Linda Robinson
David Watts
Erica Wazewski
KaTrina Love

From IBM:

Guillermo B. Vazquez
Irene Hopf
Dr. Oliver Rettig
Sasanka Vemuri
Tag Robertson
Thomas Prause
Volker Fischer

This book is based on SAP In-Memory Computing on IBM eX5 Systems,
REDP-4814. Thanks to the authors:
Gereon Vey
Ilya Krutov


Now you can become a published author, too!


Here's an opportunity to spotlight your skills, grow your career, and become a
published author, all at the same time! Join an ITSO residency project and help
write a book in your area of expertise, while honing your experience using
leading-edge technologies. Your efforts will help to increase product acceptance
and customer satisfaction, as you expand your network of technical contacts and
relationships. Residencies run from two to six weeks in length, and you can
participate either in person or as a remote resident working from your home
base.
Find out more about the residency program, browse the residency index, and
apply online at:
ibm.com/redbooks/residencies.html

Comments welcome
Your comments are important to us!
We want our books to be as helpful as possible. Send us your comments about
this book or other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an email to:
[email protected]
Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400


Stay connected to IBM Redbooks


Find us on Facebook:
http://www.facebook.com/IBMRedbooks
Follow us on Twitter:
http://twitter.com/ibmredbooks
Look for us on LinkedIn:
http://www.linkedin.com/groups?home=&gid=2130806
Explore new Redbooks publications, residencies, and workshops with the
IBM Redbooks weekly newsletter:
https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm
Stay current on recent Redbooks publications with RSS Feeds:
http://www.redbooks.ibm.com/rss.html


Chapter 1. History of in-memory computing at SAP
In-memory computing has a long history at SAP. This chapter provides a short
overview of the history of SAP in-memory computing. It describes the evolution
of SAP in-memory computing and gives an overview of SAP products involved in
this process:

1.1, "SAP Search and Classification (TREX)" on page 2
1.2, "SAP liveCache" on page 2
1.3, "SAP NetWeaver Business Warehouse Accelerator" on page 3
1.4, "SAP HANA" on page 6


1.1 SAP Search and Classification (TREX)


SAP first made SAP In-Memory Computing available in product form with the
introduction of SAP Search and Classification, better known as Text Retrieval
and Information Extraction (TREX). TREX is a search engine for both structured
and unstructured data. It provides SAP applications with numerous services for
searching and classifying large collections of documents (unstructured data) and
for searching and aggregating business data (structured data).
TREX offers a flexible architecture that enables a distributed installation, which
can be modified to match various requirements. A minimal system consists of a
single host that provides all TREX functions. Starting with a single-host system,
you can extend TREX to be a distributed system and thus increase its capacity.
TREX stores its data, usually referred to as indexes, not in the way traditional
databases do, but merely as flat files in a file system. For a distributed system,
the file system must be a clustered or shared file system, which presents all files
to all nodes of the distributed system.
For performance reasons, TREX indexes are loaded into working memory.
Indexes for structured data are implemented compactly using data compression,
and the data can be aggregated in linear time to enable large volumes of data to
be processed entirely in memory.
Earlier TREX releases (TREX 7.0 and earlier) are supported on a variety of
platforms (such as IBM AIX, HP-UX, SOLARIS, Linux, and Windows). To
optimize the performance of the search and indexing functions provided by the
TREX engine, SAP decided to concentrate on the Intel platform to optimally
utilize the CPU architecture. Therefore, the newest version of TREX (Version
7.10) is only available on Windows and Linux 64-bit operating systems.
TREX as a search engine component is used as an integral part of various SAP
software offerings, such as SAP NetWeaver Enterprise Search. TREX as an SAP
NetWeaver stand-alone engine is a significant part of most search features in
SAP applications.

1.2 SAP liveCache


SAP liveCache technology can be characterized as a hybrid main-memory
database with intensive use of database procedures. It is based on MaxDB,
which is a relational database owned by SAP, introducing a combination of
in-memory data storage with special object-oriented database technologies
supporting the application logic. This hybrid database system can process
enormous volumes of information, such as planning data. It significantly
increases the speed of the algorithmically complex, data-intensive, and
runtime-intensive functions of various SAP applications, especially within SAP
Supply Chain Management (SAP SCM) and SAP Advanced Planning and
Optimization (SAP APO). The SAP APO/liveCache architecture consists of these
major components:
ABAP code in SAP APO, which deals with SAP APO functionality
Application functions providing extended database functionality to manipulate
business objects
SAP liveCache's special SAP MaxDB implementation, providing a memory
resident database for fast data processing
From the view of the SAP APO application servers, the SAP liveCache database
appears as a second database connection. SAP liveCache provides a native
SQL interface, which also allows the application servers to trigger object-oriented
functions at the database level. These functions are provided by means of C++
code running on the SAP liveCache server with extremely fast access to the
objects in its memory. This is the functionality that allows processing load
to be passed from the application server to the SAP liveCache server, rather
than just accessing database data. This functionality, referred to as the
COM-Modules or SAP liveCache Applications, supports the manipulation of
memory resident objects and datacubes and significantly increases the speed of
the algorithmically complex, data-intensive, and runtime-intensive functions.
SAP APO transfers performance-critical application logic to the SAP liveCache.
Data needed for these operations is sent to SAP liveCache and kept in-memory.
This ensures that the processing happens where the data is, to deliver the
highest possible performance. The object-oriented nature of the application
functions enables parallel processing so that modern multi-core architectures can
be leveraged.

1.3 SAP NetWeaver Business Warehouse Accelerator


The two primary drivers of the demand for business analytics solutions are
increasing data volumes and user populations. These drivers place new
performance requirements on existing analytic platforms. To address these
requirements, SAP introduced SAP NetWeaver Business Warehouse
Accelerator1 (SAP NetWeaver BW Accelerator) in 2006, deployed as an
integrated solution combining software and hardware to increase the
performance characteristics of SAP NetWeaver Business Warehouse
deployments.

1 Formerly named SAP NetWeaver Business Intelligence Accelerator. SAP changed the software
solution name in 2009 to SAP NetWeaver Business Warehouse Accelerator. The solution functions
remain the same.

The SAP NetWeaver BW Accelerator is based on TREX technology. SAP used
this existing technology and extended it with more functionality to efficiently
support the querying of massive amounts of data and to perform simple
operations on the data frequently used in a business analytics environment.
The software's engine decomposes table data vertically into columns that are
stored separately. This makes more efficient use of memory space than
row-based storage because the engine needs to load only the data for relevant
attributes or characteristics into memory. In general, this is a good idea for
analytics, where most users want to see only a selection of data. We discuss the
technology and advantages of column-based storage in Chapter 2, "Basic
concepts" on page 9, along with other basic in-memory computing principles
employed by SAP NetWeaver BW Accelerator.
SAP NetWeaver BW Accelerator is built for a special use case, speeding up
queries and reports in SAP NetWeaver BW. In a nutshell, after connecting the
SAP NetWeaver BW Accelerator to the BW system, InfoCubes can be marked to
be indexed in SAP NetWeaver BW Accelerator, and subsequently all
database-bound queries (or even parts of queries) that operate on the indexed
InfoCubes actually get executed in-memory by the SAP NetWeaver BW
Accelerator.
Because of this tight integration with SAP NetWeaver BW and the appliance-like
delivery model, SAP NetWeaver BW Accelerator requires minimal configuration
and setup. Intel helped develop this solution with SAP, so it is optimized for, and
only available on, Intel Xeon processor-based technology. SAP partners with
several hardware vendors to supply the infrastructure for the SAP NetWeaver
BW Accelerator software. Customers acquire the SAP software license from
SAP, and the hardware partner delivers a pre-configured and pre-installed
solution.
The IBM Systems solution for SAP NetWeaver BW Accelerator helps provide
near real-time business intelligence to companies that need timely answers to
vital business questions. It allows customers to perform
queries in seconds rather than tens of minutes and gives them better visibility
into their business.
IBM has significant competitive advantages with our IBM BladeCenter-based
implementation:
Better density
More reliable cooling
Fibre storage switching
Fully redundant enterprise class chassis
Systems management
SAP NetWeaver BW Accelerator plugs into existing SAP NetWeaver Business
Warehouse environments regardless of the server platform used in that
environment.
The IBM solution consists of these components:
IBM BladeCenter chassis with HS23 blade servers with Intel Xeon
processors, available in standard configurations scaling from 2 to 28 blades
and custom configurations up to 140 blades
IBM DS3524 with scalable disk
SUSE Linux Enterprise Server as the operating system
IBM General Parallel File System (GPFS)
IBM Services including Lab Services, IBM Global Business Services (GBS),
IBM Global Technology Services (GTS) offerings, and IBM Intelligent
Cluster enablement team services
This intelligent scalable design is based around the IBM General Parallel File
System, exclusive from IBM. GPFS is a highly scalable, high-performance
shared disk file system, powering many of the world's largest supercomputing
clusters. Its advanced high-availability and data replication features are a key
differentiator for the IBM offering. GPFS not only provides scalability, but also
offers exclusive levels of availability that are easy to implement with no manual
intervention or scripting.
IBM has shown linear scalability for the SAP NetWeaver BW Accelerator through
140 blades2. Unlike all other SAP NetWeaver BW Accelerator providers, the IBM
solution provides a seamless growth path for customers from two blades to 140
blades with no significant changes in the hardware or software infrastructure.

1.3.1 SAP BusinessObjects Explorer Accelerated


To extend the functionality of SAP NetWeaver BW Accelerator, SAP created a
special version of the SAP BusinessObjects Explorer, which can connect directly
to the SAP NetWeaver BW Accelerator using its proprietary communication
protocol. SAP BusinessObjects Explorer, accelerated version, provides an
alternative front end to navigate through the data contained in SAP NetWeaver
BW Accelerator, with a much simpler, web-based user interface than the SAP
NetWeaver BW front ends can provide. This broadens the user base towards the
less experienced BI users.

2 WinterCorp white paper: "Large-Scale Testing of the SAP NetWeaver BW Accelerator on an IBM
Platform," available at
ftp://ftp.software.ibm.com/common/ssi/sa/wh/n/spw03004usen/SPW03004USEN.PDF

1.3.2 SAP BusinessObjects Accelerator


SAP enabled the combination of SAP NetWeaver BW Accelerator and SAP
BusinessObjects Data Services to load data into SAP NetWeaver BW
Accelerator from virtually any data source, both SAP and non-SAP.
In combination with BO Explorer as an independent front end, the addition of
SAP BusinessObjects Data Services created a solution that is independent of
SAP NetWeaver BW.
This combination of SAP NetWeaver BW Accelerator, SAP BusinessObjects
Explorer Accelerated, and SAP BusinessObjects Data Services is often referred
to as the SAP BusinessObjects Accelerator or SAP BusinessObjects Explorer
Accelerated Wave 2. Additional blades are added to the SAP NetWeaver BW
Accelerator configuration to support the BusinessObjects Explorer Accelerated
workload, enabling it to be delivered as part of the SAP NetWeaver BW
Accelerator solution.

1.4 SAP HANA


SAP HANA is the next logical step in SAP in-memory computing. By combining
earlier developed or acquired technologies, such as the SAP NetWeaver BW
Accelerator (including TREX technology), SAP MaxDB with its in-memory
capabilities originating in SAP liveCache, or P*Time (acquired by SAP in 2005),
with recent research results from the Hasso Plattner Institute for Software
Systems Engineering3 (HPI), SAP created an in-memory database appliance for
a wide range of applications.

3 Founded in 1998 by Hasso Plattner, one of the founders of SAP AG, chairman of the board until
2003, and currently chairman of the supervisory board of SAP AG.


Figure 1-1 shows the evolution of SAP in-memory computing.

[Figure 1-1 Evolution of SAP in-memory computing, showing TREX, BWA 7.0, BWA 7.20,
BO Explorer, BOE accelerated, BO Data Services, BOA, MaxDB, P*Time, and liveCache
converging into SAP HANA 1.0]

Although SAP HANA was initially targeted at analytical workloads, Hasso Plattner
presented (during the announcement of SAP HANA at SapphireNOW 2010) his
vision of SAP HANA becoming a database suitable as a base for SAP's entire
enterprise software
portfolio. He confirmed this vision during his keynote at SapphireNOW 2012 by
highlighting how SAP HANA is on the path to becoming the unified foundation for
all types of enterprise workloads, not only online analytical processing (OLAP),
but also online transaction processing (OLTP) and text.
Just as with SAP NetWeaver BW Accelerator, SAP decided to deploy SAP HANA
in an appliance-like delivery model. IBM was one of the first hardware partners to
work with SAP on an infrastructure solution for SAP HANA.
This IBM Redbooks publication focuses on SAP HANA and the IBM solution for
SAP HANA.


Chapter 2. Basic concepts
In-memory computing is a technology that allows the processing of massive
quantities of data in main memory to provide immediate results from analysis and
transactions. The data to be processed is ideally real-time data (that is, data that
is available for processing or analysis immediately after it is created).
To achieve the desired performance, in-memory computing follows these
basic concepts:
Keep data in main memory to speed up data access.
Minimize data movement by leveraging the columnar storage concept,
compression, and performing calculations at the database level.
Divide and conquer. Leverage the multi-core architecture of modern
processors and multi-processor servers, or even scale out into a distributed
landscape, to be able to grow beyond what can be supplied by a single server.
In this chapter, we describe those basic concepts with the help of a few
examples. We do not describe the full set of technologies employed with
in-memory databases, such as SAP HANA, but we do provide an overview of
how in-memory computing is different from traditional concepts.


2.1 Keeping data in-memory


Today, a single enterprise class server can hold several terabytes of main
memory. At the same time, prices for server main memory dramatically dropped
over the last few decades. This increase in capacity and reduction in cost makes
it a viable approach to keep huge amounts of business data in memory. This
section discusses the benefits and challenges.

2.1.1 Using main memory as the data store


The most obvious reason to use main memory (RAM) as the data store for a
database is because accessing data in main memory is much faster than
accessing data on disk. Figure 2-1 compares the access times for data in several
locations.
[Figure 2-1 Data access times of various storage types, relative to RAM (logarithmic scale):
CPU register, CPU cache, and RAM are volatile; SSD/flash and hard disk are non-volatile]


The main memory is the fastest storage type that can hold a significant amount
of data. While CPU registers and CPU cache are faster to access, their usage is
limited to the actual processing of data. Data in main memory can be accessed
more than a hundred thousand times faster than data on a spinning hard disk,
and even flash technology storage is about a thousand times slower than main
memory. Main memory is connected directly to the processors through a
high-speed bus, whereas hard disks are connected through a chain of buses
(QPI, PCIe, SAN) and controllers (I/O hub, RAID controller or SAN adapter, and
storage controller).
Compared with keeping data on disk, keeping the data in main memory can
dramatically improve database performance just by the advantage in access
time.

2.1.2 Data persistence


Keeping data in main memory brings up the question of what will happen in case
of a loss of power.
In database technology, atomicity, consistency, isolation, and durability (ACID) is
a set of requirements that guarantees that database transactions are processed
reliably:
A transaction must be atomic. That is, if part of a transaction fails, the entire
transaction has to fail and leave the database state unchanged.
The consistency of a database must be preserved by the transactions that it
performs.
Isolation ensures that no transaction interferes with another transaction.
Durability means that after a transaction is committed, it will remain
committed.
While the first three requirements are not affected by the in-memory concept,
durability is a requirement that cannot be met by storing data in main memory
alone. Main memory is volatile storage. That is, it loses its content when it is out
of electrical power. To make data persistent, it must reside on non-volatile
storage, such as hard drives, SSD, or Flash devices.
The storage used by a database to store data (in this case, main memory) is
divided into pages. When a transaction changes data, the corresponding pages
are marked and written to non-volatile storage at regular intervals. In addition, a
database log captures all changes made by transactions. Each committed
transaction generates a log entry that is written to non-volatile storage. This
ensures that all transactions are permanent. Figure 2-2 on page 12 illustrates
this using the example of SAP HANA. SAP HANA stores changed pages in
savepoints, which are asynchronously written to persistent storage at regular
intervals (by default every five minutes). The log is written synchronously. A
transaction does not return before the corresponding log entry is written to
persistent storage, to meet the durability requirement, as previously described.

[Figure 2-2 Savepoints and logs in SAP HANA: along a time axis, data savepoints are written to
persistent storage at intervals, the log (committed transactions) is written to persistent storage, and a
power failure can occur at any point]

After a power failure, the database can be restarted like a disk-based database.
The database pages are restored from the savepoints and then the database
logs are applied (rolled forward) to restore the changes that were not captured in
the savepoints. This ensures that the database can be restored in memory to
exactly the same state as before the power failure.
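To make the interplay of savepoints and logs more concrete, the following minimal Python sketch (illustrative only, not SAP HANA code; the file names, key-value layout, and sample records are made up) keeps a table in memory, flushes a log entry to disk before each commit returns, takes savepoints, and recovers the in-memory state after a simulated power failure by loading the last savepoint and rolling the log forward. It writes two small files into the current directory:

import json, os

class ToyInMemoryStore:
    """Illustrative only: an in-memory table made durable by a savepoint plus a log."""

    def __init__(self, savepoint_file="savepoint.json", log_file="redo.log"):
        self.savepoint_file = savepoint_file
        self.log_file = log_file
        self.table = {}                      # all data lives in main memory

    def commit(self, key, value):
        # The log entry is flushed to disk before the commit returns;
        # this is what makes the change durable.
        with open(self.log_file, "a") as log:
            log.write(json.dumps({"key": key, "value": value}) + "\n")
            log.flush()
            os.fsync(log.fileno())
        self.table[key] = value

    def savepoint(self):
        # Asynchronous in a real system; here we simply dump the current state
        # and truncate the log, because its entries are now covered by the savepoint.
        with open(self.savepoint_file, "w") as sp:
            json.dump(self.table, sp)
        open(self.log_file, "w").close()

    def recover(self):
        # Restore the last savepoint, then roll the log forward.
        self.table = {}
        if os.path.exists(self.savepoint_file):
            with open(self.savepoint_file) as sp:
                self.table = json.load(sp)
        if os.path.exists(self.log_file):
            with open(self.log_file) as log:
                for line in log:
                    entry = json.loads(line)
                    self.table[entry["key"]] = entry["value"]

store = ToyInMemoryStore()
store.commit("4711", "Radio")
store.savepoint()
store.commit("4712", "Laptop")     # committed after the last savepoint
store = ToyInMemoryStore()         # simulate a power failure and a restart
store.recover()
print(store.table)                 # both records are back: savepoint plus log replay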

2.2 Minimizing data movement


The second key to improving data processing performance is to minimize the
movement of data within the database and between the database and the
application. This section describes measures to achieve this.

2.2.1 Compression
Even though today's memory capacities allow keeping enormous amounts of
data in-memory, compressing the data in-memory is still desirable. The goal is to
compress data in a way that does not use up the performance gained, while still
minimizing data movement from RAM to the processor.
By working with dictionaries to be able to represent text as integer numbers, the
database can compress data significantly and thus reduce data movement, while
not imposing additional CPU load for decompression, but even adding to the
performance1. Figure 2-3 on page 13 illustrates this with a simplified example.

1 See the example in Figure 2-5 on page 16.

[Figure 2-3 Illustration of dictionary compression: a table with Row ID, Date/Time, Material, Customer
Name, and Quantity columns is shown next to dictionaries for Customers (Chevrier, Di Dio, Dubois,
Miller, Newman) and Materials (MP3 Player, Radio, Refrigerator, Stove, Laptop); in the compressed
table, text values are replaced by their integer positions in the dictionaries, and date/time values such
as 14:05 to 15:01 by integers such as 845 to 901]

On the left side of Figure 2-3, the original table is shown containing text attributes
(that is, material and customer name) in their original representation. The text
attribute values are stored in a dictionary (upper right), assigning an integer value
to each distinct attribute value. In the table, the text is replaced by the
corresponding integer value, as defined in the dictionary. The date and time
attribute was also converted to an integer representation. Using dictionaries for
text attributes reduces the size of the table because each distinct attribute value
has only to be stored once, in the dictionary; therefore, each additional
occurrence in the table just needs to be referred to with the corresponding
integer value.
The compression factor achieved by this method is highly dependent on data
being compressed. Attributes with few distinct values compress well; whereas,
attributes with many distinct values do not benefit as much.
While there are other, more effective compression methods that can be
employed with in-memory computing, to be useful they must strike the correct
balance between compression effectiveness (which gives you more data in your
memory, or less data movement, that is, higher performance), the resources
needed for decompression, and data accessibility (that is, how much unrelated
data has to be decompressed to get to the data that you need). As discussed here,
dictionary compression combines good compression effectiveness with low
decompression resources and high data access flexibility.
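The principle of dictionary compression can be sketched in a few lines of Python (an illustration of the concept only, with made-up sample values; it is not the SAP HANA implementation): each distinct value of a text column is stored once in a dictionary, and the column itself becomes a sequence of small integers.

def dictionary_encode(values):
    """Return (dictionary, encoded column) for a list of text values."""
    dictionary = sorted(set(values))                    # each distinct value is stored once
    value_to_id = {value: i for i, value in enumerate(dictionary)}
    encoded = [value_to_id[value] for value in values]  # text replaced by small integers
    return dictionary, encoded

customers = ["Dubois", "Di Dio", "Miller", "Newman", "Dubois", "Miller", "Chevrier"]
dictionary, encoded = dictionary_encode(customers)
print(dictionary)   # ['Chevrier', 'Di Dio', 'Dubois', 'Miller', 'Newman']
print(encoded)      # [2, 1, 3, 4, 2, 3, 0]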

2.2.2 Columnar storage


Relational databases organize data in tables that contain the data records. The
difference between row-based and columnar (or column-based) storage is the
way in which the table is stored:
Row-based storage stores a table in a sequence of rows.
Column-based storage stores a table in a sequence of columns.
Figure 2-4 illustrates the row-based and column-based models.

[Figure 2-4 Row-based and column-based storage models: the same table (Row ID, Date/Time,
Material, Customer Name, Quantity) is stored either as a sequence of rows in the row-based store or
as a sequence of columns in the column-based store]

Both storage models have benefits and drawbacks, which are listed in Table 2-1.
Table 2-1 Benefits and drawbacks of row-based and column-based storage

Row-based storage
Benefits: Record data is stored together. Easy to insert/update.
Drawbacks: All data must be read during selection, even if only a few columns are involved in the
selection process.

Column-based storage
Benefits: Only affected columns have to be read during the selection process of a query. Efficient
projections (a). Any column can serve as an index.
Drawbacks: After selection, selected rows must be reconstructed from columns. No easy
insert/update.

a. Projection: View on the table with a subset of columns

The drawbacks of column-based storage are not as grave as they seem. In most
cases, not all attributes (that is, column values) of a row are needed for
processing, especially in analytic queries. Also, inserts or updates to the data are
less frequent in an analytical environment2. SAP HANA implements both a
row-based storage and a column-based storage; however, its performance
originates in the use of column-based storage in memory. The following sections
describe how column-based storage is beneficial to query performance and how
SAP HANA handles the drawbacks of column-based storage.

2 An exception is bulk loads (for example, when replicating data in the in-memory database), which
can be handled differently.

Efficient query execution


To show the benefits of dictionary compression combined with columnar storage,
Figure 2-5 shows an example of how a query is executed. Figure 2-5 refers to the
table shown in Figure 2-3 on page 13.

[Figure 2-5 Example of a query executed on a table in columnar storage: for the query "Get all records
with Customer Name Miller and Material Refrigerator", the strings are looked up in the dictionaries only
once, only the columns that are part of the query condition are read, integer comparison operations
produce one bitmap per column, the bitmaps are combined with a bit-wise AND into the result set, and
the resulting records can be assembled quickly from the column stores because the positions are
known (here, the 6th position in every column)]

The query asks to get all records with Miller as the customer name and
Refrigerator as the material.
First, the strings in the query condition are looked up in the dictionary. Miller is
represented as the number 4 in the customer name column. Refrigerator is
represented as the number 3 in the material column. Note that this lookup has to
be done only once. Subsequent comparison with the values in the table are
based on integer comparisons, which are less resource intensive than string
comparisons.
In a second step, the columns that are part of the query condition (that is, the
Customer and Material columns) are read. The other columns of the table are not
needed for the selection process. The columns are then scanned for values
matching the query condition. That is, in the Customer column all occurrences of
4 are marked as selected, and in the Material column all occurrences of 3 are
marked.
These selection marks can be represented as bitmaps, a data structure that
allows efficient Boolean operations on them, which is used to combine the
bitmaps of the individual columns into a bitmap representing the selection of
records matching the entire query condition. In our example, record number 6
is the only matching record. Depending on the columns selected for the result,
additional columns must now be read to compile the entire record to return.
But because the position within the column is known (record number 6), only the
parts of the columns that contain the data for this record have to be read.
This example shows how compression can not only limit the amount of data that
needs to be read for the selection process, but even simplify the selection itself,
while the columnar storage model further reduces the amount of data needed for
the selection process. Although the example is simplified, it illustrates the
benefits of dictionary compression and columnar storage.
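The same query flow can be sketched in Python (again purely illustrative, using made-up dictionaries and encoded columns rather than real SAP HANA structures): the query strings are looked up once, each relevant column is scanned with integer comparisons to produce a bitmap, and the bitmaps are combined with a bit-wise AND to find the matching positions.

customer_dict = ["Chevrier", "Di Dio", "Dubois", "Miller", "Newman"]
material_dict = ["MP3 Player", "Radio", "Refrigerator", "Stove", "Laptop"]

# Dictionary-encoded column values for seven records (positions 0 to 6).
customer_col = [2, 1, 3, 4, 2, 3, 0]
material_col = [1, 4, 3, 0, 1, 2, 3]

def scan(column, wanted_id):
    """Scan one column and return a bitmap of the matching positions."""
    return [1 if value == wanted_id else 0 for value in column]

# The query strings are looked up only once; all further comparisons use integers.
wanted_customer = customer_dict.index("Miller")
wanted_material = material_dict.index("Refrigerator")

customer_bitmap = scan(customer_col, wanted_customer)
material_bitmap = scan(material_col, wanted_material)

# Combine the per-column bitmaps with a bit-wise AND to get the result set.
result_bitmap = [c & m for c, m in zip(customer_bitmap, material_bitmap)]
matching_rows = [pos for pos, hit in enumerate(result_bitmap) if hit]
print(matching_rows)   # [5] - only the sixth record satisfies both conditions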

Delta-merge and bulk inserts


To overcome the drawback of inserts or updates having an impact on the performance
of the column-based storage, SAP plans to implement a lifecycle management
for database records3.

3 See "Efficient Transaction Processing in SAP HANA Database - The End of a Column Store Myth"
by Sikka, Färber, Lehner, Cha, Peh, Bornhövd, available at
http://dl.acm.org/citation.cfm?id=2213946

Figure 2-6 illustrates the lifecycle management for database records in the
column-store.
[Figure 2-6 Lifetime management of a data record in the SAP HANA column-store: updates, inserts,
and deletes go to the L1 Delta store, bulk inserts go to the L2 Delta store, records are merged from L1
Delta to L2 Delta and from L2 Delta to the Main store, and reads see the unified table]

There are three different types of storage for a table:


L1 Delta Storage is optimized for fast write operations. The update is
performed by inserting a new entry into the delta storage. The data is stored
in records, like in a traditional row-based approach. This ensures high
performance for write, update, and delete operations on records stored in the
L1 Delta Storage.
L2 Delta Storage is an intermediate step. While organized in columns, the
dictionary is not as optimized as in the main storage and appends new
dictionary entries to the end of the dictionary. This results in easier inserts, but
has drawbacks with regards to search operations on the dictionary because it
is not sorted.
Main Storage contains the compressed data for fast read with a search
optimized dictionary.
All write operations on a table work on the L1 Delta storage. Bulk inserts bypass
L1 Delta storage and write directly into L2 Delta storage. Read operations on a
table always read from all storages for that table, merging the result set to
provide a unified view on all data records in the table.
During the lifecycle of a record, it is moved from L1 Delta storage to L2 Delta
storage and finally to the Main storage. The process of moving changes to a
table from one storage to the next one is called Delta Merge, and is an
asynchronous process. During the merge operations, the columnar table is still
available for read and write operations.
Moving records from L1 Delta storage to L2 Delta storage involves reorganizing
the record in a columnar fashion and compressing it, as illustrated in Figure 2-3
on page 13. If a value is not yet in the dictionary, a new entry is appended to the
dictionary. Appending to the dictionary is faster than inserting, but results in an
unsorted dictionary, which impacts the data retrieval performance.
Eventually, the data in the L2 Delta storage must be moved to the Main storage.
To accomplish that, the L2 Delta storage must be locked, and a new L2 Delta
storage must be opened to accept further additions. Then a new Main storage is
created from the old Main storage and the locked L2 Delta storage. This is a
resource-intensive task and has to be scheduled carefully.
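The following Python sketch illustrates the idea of a write-optimized delta store in front of a read-optimized main store (a simplification with a single delta stage and made-up values; it is not SAP HANA code): inserts go only to the delta, reads combine main and delta, and a merge rebuilds the dictionary-encoded main store and empties the delta.

class ToyColumnTable:
    """Illustrative delta/main split for a single text column."""

    def __init__(self):
        self.main_dictionary = []   # sorted, search-optimized dictionary
        self.main_column = []       # dictionary-encoded main storage
        self.delta_rows = []        # write-optimized delta storage (plain values)

    def insert(self, value):
        # Writes touch only the delta storage, so they stay cheap.
        self.delta_rows.append(value)

    def read_all(self):
        # Reads always combine main storage and delta storage.
        decoded_main = [self.main_dictionary[i] for i in self.main_column]
        return decoded_main + self.delta_rows

    def merge(self):
        # Rebuild the dictionary and the main column from old main plus delta,
        # then start with an empty delta again (asynchronous in a real system).
        values = self.read_all()
        self.main_dictionary = sorted(set(values))
        ids = {v: i for i, v in enumerate(self.main_dictionary)}
        self.main_column = [ids[v] for v in values]
        self.delta_rows = []

table = ToyColumnTable()
for value in ["Radio", "Laptop", "Stove"]:
    table.insert(value)
table.merge()                # delta is folded into the compressed main store
table.insert("Radio")        # this record lands in the delta store again
print(table.read_all())      # ['Radio', 'Laptop', 'Stove', 'Radio'] - main plus delta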

2.2.3 Pushing application logic to the database


Whereas the concepts described above speed up processing within the
database, there is still one factor that can significantly slow down the processing
of data. An application executing the application logic on the data has to get the
data from the database, process it, and possibly send it back to the database to
store the results. Sending data back and forth between the database and the
application usually involves communication over a network, which introduces
communication overhead and latency and is limited by the speed and throughput
of the network between the database and the application itself.
To eliminate this factor and increase overall performance, it is beneficial to
process the data where it is, at the database. If the database can perform
calculations and apply application logic, less data needs to be sent back to the
application, and the need for the exchange of intermediate results between the
database and the application might even be eliminated. This minimizes the amount
of data transferred, and the communication between database and application
contributes a less significant amount of time to the overall processing time.
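The effect of pushing logic down can be illustrated with a small Python example (the table, column names, and values are made up, and SQLite from the standard library is used only as a stand-in for any SQL database): instead of fetching every row and aggregating in the application, a single aggregating statement is sent to the database, so only the small result set travels back.

import sqlite3

# Stand-in database; the table, columns, and values are made up for the example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("EMEA", 120.0), ("EMEA", 80.0), ("AMER", 200.0), ("APJ", 50.0)])

# Variant 1: application-side logic - every row crosses the database boundary.
totals = {}
for region, amount in conn.execute("SELECT region, amount FROM sales"):
    totals[region] = totals.get(region, 0.0) + amount

# Variant 2: logic pushed down to the database - only the aggregate travels back.
pushed_down = dict(conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region"))

print(totals == pushed_down)   # True: same result, far less data movement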

2.3 Divide and conquer


The phrase divide and conquer (derived from the Latin saying divide et impera)
is typically used when a big problem is divided into a number of smaller,
easier-to-solve problems. With regard to performance, processing huge amounts
of data is a big problem that can be solved by splitting it up into smaller chunks of
data, which can be processed in parallel.


2.3.1 Parallelization on multi-core systems


When chip manufacturers reached the physical limits of semiconductor-based
microelectronics with their single-core processor designs, they started to
increase processor performance by increasing the number of cores, or
processing units, within a single processor. This performance gain can only be
leveraged through parallel processing because the performance of a single core
remained unchanged.
The rows of a table in a relational database are independent of each other, which
allows parallel processing. For example, when scanning a database table for
attribute values matching a query condition, the table, or the set of attributes
(columns) relevant to the query condition, can be divided into subsets and
spread across the cores available to parallelize the processing of the query.
Compared with processing the query on a single core, this basically reduces the
time needed for processing by a factor equivalent to the number of cores working
on the query (for example, on a 10-core processor the time needed is one-tenth
of the time that a single core would need).
The same principle applies for multi-processor systems. A system with eight
10-core processors can be regarded as an 80-core system that can divide the
processing into 80 subsets processed in parallel.
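
The following minimal Java sketch is a generic illustration of this principle and is not SAP HANA code; the column contents and the query condition (value == 42) are hypothetical. It scans an in-memory integer column in parallel across all available processor cores and counts the matching rows:

import java.util.concurrent.ThreadLocalRandom;
import java.util.stream.IntStream;

public class ParallelColumnScan {
    public static void main(String[] args) {
        // A single column of 10 million integer values held in memory
        int[] column = new int[10_000_000];
        ThreadLocalRandom rnd = ThreadLocalRandom.current();
        for (int i = 0; i < column.length; i++) {
            column[i] = rnd.nextInt(1_000);
        }

        // The scan is divided into subsets that are processed on all available cores;
        // each core evaluates the query condition (value == 42) on its subset
        long matches = IntStream.range(0, column.length)
                                .parallel()
                                .filter(i -> column[i] == 42)
                                .count();

        System.out.println("Matching rows: " + matches);
    }
}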

2.3.2 Data partitioning and scale-out


Even though servers available today can hold terabytes of data in memory and
provide up to eight processors per server with up to 10 cores per processor, the
amount of data to be stored in an in-memory database or the computing power
needed to process such quantities of data might exceed the capacity of a single
server. To accommodate the memory and computing power requirements that go
beyond the limits of a single server, data can be divided into subsets and placed
across a cluster of servers, forming a distributed database (scale-out approach).
The individual database tables can be placed on different servers within the
cluster, or tables bigger than what a single server can hold can be split into
several partitions, either horizontally (a group of rows per partition) or vertically (a
group of columns per partition), with each partition residing on a separate server
within the cluster.


Chapter 3. SAP HANA overview


In this chapter, we describe the SAP HANA offering, its architecture and
components, use cases, delivery model, and sizing and licensing aspects.
This chapter contains the following sections:

SAP HANA overview
SAP HANA delivery model
Sizing SAP HANA
SAP HANA software licensing


3.1 SAP HANA overview


This section gives an overview of SAP HANA. When talking about SAP HANA,
these terms are used:
SAP HANA database
The SAP HANA database (also referred to as the SAP in-memory database)
is a hybrid in-memory database that combines row-based, column-based,
and object-based database technology, optimized to exploit the parallel
processing capabilities of current hardware. It is the heart of SAP offerings,
such as SAP HANA.
SAP HANA Appliance (SAP HANA)
SAP HANA is a flexible, data source agnostic appliance that allows you to
analyze large volumes of data in real time without the need to materialize
aggregations. It is a combination of hardware and software, and it is delivered
as an optimized appliance in cooperation with SAP's hardware partners for
SAP HANA.
For the sake of simplicity, we use the terms SAP HANA, SAP in-memory
database, SAP HANA database, and SAP HANA appliance synonymously in this
paper. We only cover the SAP in-memory database as part of the SAP HANA
appliance. Where required, we make sure that the context makes it clear which
part we are talking about.


3.1.1 SAP HANA architecture


Figure 3-1 shows the high-level architecture of the SAP HANA appliance.
Section 4.1, SAP HANA software components on page 36 explains the most
important software components around the SAP HANA database.

Figure 3-1 SAP HANA architecture

The SAP HANA database


The heart of the SAP HANA database is the relational database engines. There
are two engines within the SAP HANA database:
The column-based store: Stores relational data in columns, optimized for
holding tables with huge amounts of data, which can be aggregated in real time
and used in analytical operations.
The row-based store: Stores relational data in rows, as traditional database
systems do. The row store is optimized for row operations, such as frequent
inserts and updates. It has a lower compression rate, and query performance is
much lower compared with the column-based store.


The engine used to store data can be selected on a per-table basis at the time a
table is created. An existing table can also be converted from one type to the
other. Tables in the row store are loaded into memory at startup time, whereas
tables in the column store can be loaded either at startup or on demand, during
normal operation of the SAP HANA database.
Both engines share a common persistency layer, which provides data
persistency consistent across both engines. There is page management and
logging, much like in traditional databases. Changes to in-memory database
pages are persisted through savepoints written to the data volumes on persistent
storage, which are usually hard drives. Every transaction committed in the SAP
HANA database is persisted by the logger of the persistency layer in a log entry
written to the log volumes on persistent storage. The log volumes use flash
technology storage for high I/O performance and low latency.
The relational engines can be accessed through a variety of interfaces. The SAP
HANA database supports SQL (JDBC/ODBC), MDX (ODBO), and BICS (SQLDBC).
The calculation engine allows calculations to be performed in the database
without moving the data into the application layer. It also includes a business
function library that can be called by applications to perform business
calculations close to the data. The SAP HANA-specific SQLScript language is an
extension to SQL that can be used to push down data-intensive application logic
into the SAP HANA database.

3.1.2 SAP HANA appliance


The SAP HANA appliance consists of the SAP HANA database and additional
components needed to work with, administer, and operate the database. It
contains the repository files for the SAP HANA studio, which is an Eclipse-based
administration and data-modeling tool for SAP HANA, in addition to the SAP
HANA client, which is a set of libraries required for applications to be able to
connect to the SAP HANA database. Both the SAP HANA studio and the client
libraries are usually installed on a client PC or server.
The Software Update Manager (SUM) for SAP HANA is the framework that allows
the automatic download and installation of SAP HANA updates from the SAP
Service Marketplace and other sources using a host agent. It also allows
distribution of the SAP HANA studio repository to the users.
The Lifecycle Management (LM) Structure for SAP HANA is a description of the
current installation and is, for example, used by SUM to perform automatic
updates.
More details about the individual software components are in section 4.1, SAP HANA
software components on page 36.


3.2 SAP HANA delivery model


SAP decided to deploy SAP HANA as an integrated solution combining software
and hardware, frequently referred to as the SAP HANA appliance. As with SAP
NetWeaver BW Accelerator, SAP partners with several hardware vendors to
provide the infrastructure needed to run the SAP HANA software. IBM was
among the first hardware vendors to partner with SAP to provide an integrated
solution.
Infrastructure for SAP HANA must run through a certification process to ensure
that certain performance requirements are met. Only certified configurations are
supported by SAP and the respective hardware partner. These configurations
must adhere to certain requirements and restrictions to provide a common
platform across all hardware providers:
Only certain Intel Xeon processors can be used. For the currently available
Intel Xeon processor E7 family, the allowed processor models are E7-2870,
E7-4870, and E7-8870. The previous CPU generation was limited to the Intel
Xeon processor X7560.
All configurations must provide a certain main memory per core ratio, which is
defined by SAP to balance CPU processing power and the amount of data
being processed.
All configurations must meet minimum performance requirements for various
load profiles. SAP tests for these requirements as part of the certification
process.
The capacity of the storage devices used in the configurations must meet the
sizing rules (see 3.3, Sizing SAP HANA on page 25).
The networking capabilities of the configurations must include 10 Gb Ethernet
for the SAP HANA software.
By imposing these requirements, SAP can rely on the availability of certain
features and ensure a well-performing hardware platform for their SAP HANA
software. These requirements give the hardware partners enough room to
develop an infrastructure architecture for SAP HANA, which adds differentiating
features to the solution. The benefits of the IBM solution are described in
Chapter 6, The IBM Systems Solution for SAP HANA on page 87.

3.3 Sizing SAP HANA


This section introduces the concept of T-shirt sizes for SAP HANA and gives a
short overview of how to size an SAP HANA system.


3.3.1 The concept of T-shirt sizes for SAP HANA


SAP defined so-called T-shirt sizes for SAP HANA to both simplify the sizing and
to limit the number of hardware configurations to support, thus reducing
complexity. The SAP hardware partners provide configurations for SAP HANA
according to one or more of these T-shirt sizes. Table 3-1 lists the T-shirt sizes for
SAP HANA.
Table 3-1 SAP HANA T-shirt sizes
SAP T-shirt size              XS        S and S+    M and M+    L
Compressed data in memory     64 GB     128 GB      256 GB      512 GB
Server main memory            128 GB    256 GB      512 GB      1024 GB
Number of CPUs
In addition to the T-shirt sizes listed in Table 3-1, you might come across the
T-shirt size XL, which denotes a scale-out configuration for SAP HANA.
The T-shirt sizes S+ and M+ denote upgradable versions of the S and M sizes:
S+ delivers capacity equivalent to S, but the hardware is upgradable to an M
size.
M+ delivers capacity equivalent to M, but the hardware is upgradable to an L
size.
These T-shirt sizes are used when relevant growth of the data size is expected.
For more information about T-shirt size mappings to IBM System Solution
building blocks, refer to section 6.3.2, SAP HANA T-shirt sizes on page 108.

3.3.2 Sizing approach


The sizing of SAP HANA depends on the scenario in which SAP HANA is used.
We discuss these scenarios here:
SAP HANA as a stand-alone database
SAP HANA as the database for an SAP NetWeaver BW


The sizing methodology for SAP HANA is described in detail in the following SAP
Notes1 and attached presentations:
Note 1514966 - SAP HANA 1.0: Sizing SAP In-Memory Database
Note 1637145 - SAP NetWeaver BW on HANA: Sizing SAP In-Memory
Database
The following sections provide a brief overview of sizing for SAP HANA.

SAP HANA as a stand-alone database


This section covers sizing of SAP HANA as a stand-alone database, used, for
example, in the technology platform, operational reporting, or accelerator use case
scenarios described in Chapter 5, SAP HANA use cases and integration
scenarios on page 59.
The sizing methodology for this scenario is described in detail in SAP Note
1514966 and the attached presentation.

Sizing the RAM needed


Sizing an SAP HANA system is mainly based on the amount of data to be loaded
into the SAP HANA database because this determines the amount of main
memory (or RAM) needed in an SAP HANA system. To size the RAM, perform
the following steps:
1. Determine the volume of data that is expected to be transferred to the SAP
HANA database. Note that customers typically select only a subset of data
from their ERP or CRM databases, so this must be done at the table level.
The information required for this step can be acquired with database tools.
SAP Note 1514966 contains a script supporting this process for SAP
NetWeaver-based systems, for example, IBM DB2 LUW and Oracle. If data
comes from non-SAP NetWeaver systems, the data volume must be determined
manually using SQL statements.
The sizing methodology is based on the uncompressed source data size, so if
compression is used in the source database, this must be taken into
account as well. The script automatically adjusts table sizes only for DB2
LUW databases because information about the compression ratio is available in
the data dictionary.
For other database systems, the compression factor must be estimated. Note that
real compression factors can differ because compression depends on the
actual data.
If the source database is non-unicode, multiply the volume of data by an
overhead factor for unicode conversion (assume 50% overhead).
1. SAP Notes can be accessed at http://service.sap.com/notes. An SAP S-user ID is required.


The uncompressed total size of all the tables (without DB indexes) storing the
required information in the source database is denoted as A.
2. Although the compression ratio achieved by SAP HANA can vary depending
on the data distribution, a working assumption is that, in general, a
compression factor of 7 can be achieved:
B = ( A / 7 )
B is the amount of RAM required to store the data in the SAP HANA
database.
3. Use only 50% of the total RAM for the in-memory database. The other 50% is
needed for temporary objects (for example, intermediate results), the
operating system, and the application code:
C = B * 2
C is the total amount of RAM required.
Round the total amount of RAM up to the next T-shirt configuration size, as
described in 3.3.1, The concept of T-shirt sizes for SAP HANA on page 26, to
get the correct T-shirt size needed.
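As a purely hypothetical example (the figures are illustrative only and are not a
recommendation), assume that 2,100 GB of uncompressed, unicode source data is
selected for transfer:
B = ( 2100 / 7 ) = 300 GB
C = 300 * 2 = 600 GB
Rounded up to the next T-shirt configuration size, a requirement of 600 GB of RAM
corresponds to the T-shirt size L with 1024 GB of server main memory.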

Sizing the disks


The capacity of the disks is based on the total amount of RAM.
As described in 2.1.2, Data persistence on page 11, there are two types of
storage in SAP HANA:
Diskpersistence
The persistence layer writes snapshots of the database in HANA to disk in
regular intervals. These are usually written to an array of SAS drives2. The
capacity for this storage is calculated based on the total amount of RAM:
Diskpersistence = 4 * C
Note that backup data must not be permanently stored in this storage. After
backup is finished, it needs to be moved to external storage media.
Disklog
This contains the database logs, written to flash technology storage devices,
that is, SSDs or PCIe Flash adapters. The capacity for this storage is
calculated based on the total amount of RAM:
Disklog = 1 * C

2. The SSD building block, as described in 6.3, Custom server models for SAP HANA on page 104,
combines Diskpersistence and Disklog on a single SSD array with sufficient capacity.

The certified hardware configurations already take these rules into account, so
there is no need to perform this disk sizing. However, we still include it here for
your understanding.
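Continuing the hypothetical example from Sizing the RAM needed (C = 600 GB), the
resulting capacities would be:
Diskpersistence = 4 * 600 GB = 2400 GB
Disklog = 1 * 600 GB = 600 GB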

Sizing the CPUs


A CPU sizing can be performed if an unusually high number of
concurrently active users executing complex queries is expected. Use the T-shirt
configuration size that satisfies both the memory and the CPU requirements.
The CPU sizing is user-based. The SAP HANA system must support 300 SAPS
for each concurrently active user. The servers used for the IBM Systems Solution
for SAP HANA support about 60 to 65 concurrently active users per CPU,
depending on the server model.
SAP recommends that the CPU load not exceed 65%. Therefore, size servers to
support no more than 40 to 42 concurrently active users per CPU for a standard
workload.
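For example, to support a hypothetical 160 concurrently active users, the system
must deliver 160 * 300 = 48,000 SAPS. At 40 users per CPU, this translates into at
least 160 / 40 = 4 processors, in addition to satisfying the memory sizing. These
figures are illustrative only.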

SAP HANA as the database for an SAP NetWeaver BW


This section covers sizing of SAP HANA as the database for an SAP NetWeaver
BW, as described in section 5.5.1, SAP NetWeaver BW running on SAP HANA
on page 75.
The sizing methodology for this scenario is described in detail in SAP Note
1637145 and attached presentations.

Sizing the RAM needed


Similar to the previous scenario, it is important to estimate the volume of
uncompressed data that will be stored in the SAP HANA database. The main
difference is that the SAP NetWeaver BW system uses column-based tables
only for tables generated by BW. All other tables are stored as row-based tables.
The compression factor is different for each type of storage; therefore, the
calculation formula is slightly different.
To size the RAM, perform the following steps:
1. The amount of data that will be stored in the SAP HANA database can be
estimated using scripts attached to SAP Note 1637145. They determine the
volume of row-based and column-based tables separately.
Because the size of certain system tables can grow over time and because
the row store compression factors are not as high as for the column store, it is
recommended to clean up unnecessary data. In a cleansed SAP NetWeaver
BW system, the volume of row-based data is around 60 GB.


Just as in the previous case, only the size of tables is relevant. All associated
indexes can be ignored.
In case the data in the source system is compressed, the calculated volume
needs to be adjusted by an estimated compression factor for the given
database. Only for DB2 databases, which contain the actual compression
rates in the data dictionary, does the script calculate the required corrections
automatically.
In case the source system is a non-unicode system, a unicode conversion will
be part of the migration scenario. In this case, the volume of data needs to be
adjusted, assuming a 10% overhead because the majority of data is expected
to be numerical values.
Alternatively, an ABAP report can be used to estimate the table sizes. SAP
Note 1736976 has a report attached that calculates the sizes based on the
data present in an existing SAP NetWeaver BW system.
The uncompressed total size of all the column tables (without DB indexes) is
denoted as Acolumn . The uncompressed total size of all the row tables
(without DB indexes) is referred to as Arow.
2. The average compression factor is approximately 4 for column-based data
and around 1.5 for row-based data.
Additionally, an SAP NetWeaver BW system requires about 40 GB of RAM for
additional caches and about 10 GB of RAM for SAP HANA components.
Bcolumn = ( Acolumn / 4 )
Brow = ( Arow / 1.5 )
Bother = 50
For a fully cleansed SAP NetWeaver BW system having 60 GB of row store
data, we can therefore assume a requirement of about 40 GB of RAM for
row-based data.
Brow = 40
B is the amount of RAM required to store the data in the SAP HANA database
for a given type of data.
3. Additional RAM is required for objects that are populated with new data and
for queries. This requirement is valid for column-based tables.
C = Bcolumn * 2 + Brow + Bother
For fully cleansed BW systems, this formula can be simplified:
C = Bcolumn * 2 + 90
C is the total amount of RAM required.


The total amount of RAM must be rounded up to the next T-shirt configuration
size, as described in 3.3.1, The concept of T-shirt sizes for SAP HANA on
page 26, to get the correct T-shirt size needed.
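As a purely hypothetical example, consider a fully cleansed SAP NetWeaver BW
system with 800 GB of uncompressed column-store data (Acolumn = 800):
Bcolumn = ( 800 / 4 ) = 200 GB
C = 200 * 2 + 90 = 490 GB
Rounded up to the next T-shirt configuration size, a requirement of 490 GB of RAM
corresponds to the T-shirt size M with 512 GB of server main memory. These figures
are illustrative only.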

Sizing the disks


The capacity of the disks is based on the total amount of RAM and follows the
same rules as in the previous scenario. For more details, see Sizing the disks
on page 28.
Diskpersistence = 4 * C
Disklog = 1 * C
As in the previous case, disk sizing is not required because certified hardware
configurations already take these rules into account.

Special considerations for scale-out systems


If the memory requirements exceed the capacity of a single-node appliance, a
scale-out configuration needs to be deployed.
In this case, it is important to understand how the data is distributed during the
import operation. For optimal performance, different types of workload must be
separated from each other.
The master node holds all row-based tables, which are mostly system tables,
and is responsible for SAP NetWeaver-related workloads. Also, additional SAP
HANA database components are hosted on the master node, such as a name
server or statistics server.
The additional slave nodes hold the master data and transactional data.
Transactional tables are partitioned and distributed across all existing slave
nodes to achieve optimal parallel processing.
This logic must be taken into account when planning a scale-out configuration for
an SAP NetWeaver BW system. For more information, review the following SAP
notes and attached presentations:
Note 1637145 - SAP NetWeaver BW on SAP HANA: Sizing SAP In-Memory
Database
Note 1702409 - SAP HANA DB: Optimal number of scale out nodes for SAP
NetWeaver BW on SAP HANA
Note 1736976 - Sizing Report for BW on HANA


Selecting a T-shirt size


According to the sizing results, select an SAP HANA T-shirt size that satisfies the
sizing requirements in terms of main memory, and possibly CPU capabilities. For
example, a sizing result of 400 GB for the main memory (C) suggests a T-shirt
size of M.
The sizing methodology previously described is valid at the time of writing this
publication and applies only to the use case scenarios previously mentioned. Other use
cases might require another sizing methodology. Also, SAP HANA is constantly
being optimized, which might affect the sizing methodology. Consult SAP
documentation regarding other use cases and up-to-date sizing information.
Note: The sizing approach described here is simplified and can only provide a
rough idea of the actual sizing process for SAP HANA. Consult
the SAP sizing documentation for SAP HANA when performing an actual
sizing. It is also a best practice to involve SAP for a detailed sizing because
the result of the sizing does not only affect the hardware infrastructure, but it
also affects the SAP HANA licensing.
In addition to the sizing methodologies described in SAP Notes, SAP provides
sizing support for SAP HANA in the SAP Quick Sizer. The SAP Quick Sizer is an
online sizing tool that supports most of the SAP solutions available. For SAP
HANA, it supports sizing for:
Stand-alone SAP HANA system, implementing the sizing algorithms
described in SAP Note 1514966 (which we described above)
SAP HANA as the database for an SAP NetWeaver BW system,
implementing the sizing algorithms described in SAP Note 1637145
Special sizing support for the SAP HANA rapid-deployment solutions
The SAP Quick Sizer is accessible online at (an SAP S-user ID is required):
http://service.sap.com/quicksizer

3.4 SAP HANA software licensing


As described in 3.2, SAP HANA delivery model on page 25, SAP HANA has an
appliance-like delivery model. However, while the hardware partners deliver the
infrastructure, including operating system and middleware, the license for the
SAP HANA software must be obtained directly from SAP.
The SAP HANA software is available in these editions:
SAP HANA platform edition
This is the basic edition containing the software stack needed to use SAP
HANA as a database, including the SAP HANA database, SAP HANA Studio
for data modeling and administration, the SAP HANA clients, and software
infrastructure components. The software stack comes with the hardware
provided by the hardware partners; whereas, the license has to be obtained
from SAP.
SAP HANA enterprise edition
The SAP HANA enterprise edition extends the SAP HANA platform edition
with the software licenses needed for SAP Landscape Transformation
replication, ETL-based replication using SAP BusinessObjects Data Services,
and Extractor-based replication with Direct Extractor Connection.
SAP HANA extended enterprise edition
SAP HANA extended enterprise edition extends the SAP HANA platform
edition with the software licenses needed for log-based replication with the
Sybase Replication server.
SAP HANA database edition for SAP NetWeaver BW
This edition is restricted to be used as the primary database for an SAP
NetWeaver BW system. Any access to the SAP HANA database must take
place through the BW system.
Additional information about available replication technologies is in section 4.2,
Data replication methods for SAP HANA on page 51.
The SAP HANA licenses are based on the amount of main memory for SAP
HANA. The smallest licensable memory size is 64 GB, increasing in steps of 64
GB. The hardware might provide up to double the amount of main memory that is
licensed, as illustrated in Table 3-2.
Table 3-2 Licensable memory per T-shirt size
T-shirt size    Server main memory      Licensable memory (a)
XS              128 GB                  64 - 128 GB
S and S+        256 GB                  128 - 256 GB
M and M+        512 GB                  256 - 512 GB
L               1024 GB (= 1 TB)        512 - 1024 GB
a. In steps of 64 GB

As you can see from Table 3-2 on page 33, the licensing model allows you to
have a matching T-shirt size for any licensable memory size between 64 GB and
1024 GB.


Chapter 4. Software components and replication methods
This chapter explains the purpose of the individual software components of the SAP
HANA solution and introduces the available replication technologies.
This chapter contains the following sections:
4.1, SAP HANA software components on page 36
4.2, Data replication methods for SAP HANA on page 51


4.1 SAP HANA software components


The SAP HANA solution is composed of the following main software
components, which we describe in the following sections:

4.1.1, SAP HANA database on page 37
4.1.2, SAP HANA client on page 37
4.1.3, SAP HANA studio on page 38
4.1.4, SAP HANA studio repository on page 46
4.1.5, SAP HANA landscape management structure on page 47
4.1.6, SAP host agent on page 47
4.1.7, Software Update Manager for SAP HANA on page 48
4.1.8, SAP HANA Unified Installer on page 51

Figure 4-1 illustrates the possible locations of these components.

Figure 4-1 Distribution of software components related to SAP HANA

Components related to replication using Sybase Replication Server are not
covered in this publication.


4.1.1 SAP HANA database


The SAP HANA database is the heart of the SAP HANA offering and the most
important software component running on the SAP HANA appliance.
SAP HANA is an in-memory database that combines row-based and
column-based database technology. All standard features available in other
relational databases are supported (for example, tables, views, indexes, triggers,
SQL interface, and so on).
On top of these standard functions, the SAP HANA database also offers
modeling capabilities that allow you to define in-memory transformations of
relational tables into analytic views. These views are not materialized; therefore,
all queries provide real-time results based on the content of the underlying
tables.
Another feature extending the capabilities of the SAP HANA database is the
SQLScript programming language, which allows you to capture transformations
that might not be easy to define using simple modeling.
The SAP HANA database can also be integrated with external applications, such
as an R software environment. Using these possibilities, customers can
extend their models by implementing existing statistical and analytical functions
developed in the R programming language.
The internal structures of the SAP HANA database are explained in detail in
Chapter 3, SAP HANA overview on page 21.

4.1.2 SAP HANA client


The SAP HANA client is a set of libraries that are used by external applications to
connect to the SAP HANA database.
The following interfaces are available after installing the SAP HANA client
libraries:
SQLDBC
An SAP native database SDK that can be used to develop new custom
applications working with the SAP HANA database.
OLE DB for OLAP (ODBO) (available only on Windows)
ODBO is a Microsoft driven industry standard for multi-dimensional data
processing. The query language used in conjunction with ODBO is the
Multidimensional Expressions (MDX) language.


Open Database Connectivity (ODBC)
The ODBC interface is a standard for accessing database systems, which was
originally developed by Microsoft.
Java Database Connectivity (JDBC)
JDBC is a Java-based interface for accessing database systems.
The SAP HANA client libraries are delivered in 32-bit and 64-bit editions. It is
important to always use the correct edition based on the architecture of the
application that will use this client. 32-bit applications cannot use 64-bit client
libraries and vice versa.
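To illustrate the JDBC interface, the following minimal Java sketch connects to an
SAP HANA database and runs a simple query. It assumes that the SAP HANA JDBC
driver (ngdbc.jar, delivered with the SAP HANA client) is on the class path; the host
name, port, user, and password are hypothetical placeholders that must be replaced
with actual values:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HanaJdbcExample {
    public static void main(String[] args) throws Exception {
        // Register the SAP HANA JDBC driver delivered with the SAP HANA client
        Class.forName("com.sap.db.jdbc.Driver");

        // Placeholder connection data: host "hanahost", SQL port 30015 (instance 00)
        String url = "jdbc:sap://hanahost:30015/";
        try (Connection con = DriverManager.getConnection(url, "MYUSER", "MyPassword1");
             Statement stmt = con.createStatement();
             // DUMMY is a standard single-row system table, useful for connectivity tests
             ResultSet rs = stmt.executeQuery("SELECT * FROM DUMMY")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}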
To access the SAP HANA database from Microsoft Excel you can also use a
special 32-bit edition of the SAP HANA client called SAP HANA client package
for Microsoft Excel.
The SAP HANA client is backwards compatible, meaning that the revision of the
client must be the same or higher than the revision of the SAP HANA database.
The SAP HANA client libraries must be installed on every machine where
connectivity to the SAP HANA database is required. This includes not only all
servers but also user workstations that are hosting applications that are directly
connecting to the SAP HANA database (for example SAP BusinessObjects
Client Tools or Microsoft Excel).
It is important to keep in mind that whenever the SAP HANA database is updated
to a more recent revision, all clients associated with this database must also be
upgraded.
For more information about how to install the SAP HANA client, see the official
SAP guide SAP HANA Database - Client Installation Guide, which is available for
download at:
http://help.sap.com/hana_appliance

4.1.3 SAP HANA studio


The SAP HANA studio is a graphical user interface that is required to work with
local or remote SAP HANA database installations. It is a multipurpose tool that
covers all of the main aspects of working with the SAP HANA database. Because
of that, the user interface is slightly different for each function.
Note that the SAP HANA studio is not dependent on the SAP HANA client.

38

In-memory Computing with SAP HANA on IBM eX5 Systems

The following main function areas are provided by the SAP HANA studio (each
function area is also illustrated by a corresponding figure of the user interface):
Database administration
The key functions are stopping and starting the SAP HANA databases, status
overview, monitoring, performance analysis, parameter configuration, tracing,
and log analysis.
Figure 4-2 shows the SAP HANA studio user interface for database
administration.

Figure 4-2 SAP HANA studio: Administration console (overview)


Security management
This provides tools that are required to create users, to define and assign
roles, and to grant database privileges.
Figure 4-3 shows an example of the user definition dialog.

Figure 4-3 SAP HANA studio: User definition dialog


Data management
Functions to create, change, or delete database objects (such as tables, indexes,
and views), and commands to manipulate data (for example, insert, update, delete,
bulk load, and so on).
Figure 4-4 shows an example of the table definition dialog.

Figure 4-4 SAP HANA studio: Table definition dialog


Modeling
This is the user interface to work with models (metadata descriptions of how
source data is transformed into the resulting views), including the possibility to
define new custom models and to adjust or delete existing models.
Figure 4-5 shows a simple analytic model.

Figure 4-5 SAP HANA studio: Modeling interface (analytic view)


Content management
Functions offering the possibility to organize models in packages, to define
delivery units for transport into a subsequent SAP HANA system, or to export
and import individual models or whole packages.
Content management functions are accessible from the main window in the
modeler perspective, as shown in Figure 4-6.

Figure 4-6 SAP HANA studio: Content functions on the main panel of modeler perspective


Replication management
Data replication into the SAP HANA database is controlled from the Data
Provisioning dialog in the SAP HANA studio, where new tables can be
scheduled for replication, and where replication for a particular table can be
suspended or interrupted.
Figure 4-7 shows an example of a data provisioning dialog.

Figure 4-7 SAP HANA studio: Data provisioning dialog


Software Lifecycle Management


The SAP HANA solution offers the possibility to automatically download and
install updates to SAP HANA software components. This function is
controlled from the Software Lifecycle Management dialog in the SAP HANA
studio. Figure 4-8 shows an example of such a dialog.

Figure 4-8 SAP HANA studio: Software lifecycle dialog

The SAP HANA database queries are consumed indirectly using front-end
components, such as SAP BusinessObjects BI 4.0 clients. Therefore the SAP
HANA studio is required only for administration or development and is not
needed for end users.
The SAP HANA studio runs on the Eclipse platform; therefore, every user must
have Java Runtime Environment (JRE) 1.6 or 1.7 installed, with the same
architecture (the 64-bit SAP HANA studio requires a 64-bit JRE).
Currently supported platforms are Windows 32-bit, Windows 64-bit, and Linux
64-bit.
Just like the SAP HANA client, the SAP HANA studio is also backwards
compatible, meaning that the revision level of the SAP HANA studio must be the
same or higher revision level than the revision level of the SAP HANA database.


However, based on practical experience, the best approach is to keep the SAP
HANA studio on the same revision level as the SAP HANA database whenever
possible. Installation and parallel use of multiple revisions of the SAP HANA studio
on one workstation is possible. When using one SAP HANA studio instance for
multiple SAP HANA databases, the revision level of the SAP HANA studio must
be the same or higher than the highest revision level of the SAP
HANA databases being connected to.
SAP HANA studio must be updated to a more recent version on all workstations
whenever the SAP HANA database is updated. This can be automated using
Software Update Manager (SUM) for SAP HANA. We provide more details about
this in 4.1.4, SAP HANA studio repository on page 46 and 4.1.7, Software
Update Manager for SAP HANA on page 48.
For more information about how to install the SAP HANA studio, see the official
SAP guide, SAP HANA Database - Studio Installation Guide, which is available
for download at:
http://help.sap.com/hana_appliance

4.1.4 SAP HANA studio repository


Because the SAP HANA studio is an Eclipse-based product, it can benefit from all
standard features offered by this platform. One of these features is the ability to
automatically update the product from a central repository located on the SAP
HANA server.
The SAP HANA studio repository is initially installed by the SAP HANA Unified
Installer and must be manually updated at the same time that the SAP HANA
database is updated (more details about version compatibility are in section
4.1.3, SAP HANA studio on page 38). This repository can then be used by all
SAP HANA studio installations to automatically download and install new
versions of code.
Using this feature is probably the most reliable way to keep all installations of
SAP HANA studio in sync with the SAP HANA database. However, note that a
one-time configuration effort is required on each workstation (for more details, see
4.1.7, Software Update Manager for SAP HANA on page 48).
For more information about how to install the SAP HANA studio repository, see
the official SAP guide, SAP HANA Database - Studio Installation Guide, which is
available for download at:
http://help.sap.com/hana_appliance


4.1.5 SAP HANA landscape management structure


The SAP HANA landscape management (LM) structure (lm_structure) is an XML
file that describes the software components installed on a server. The
information in this file contains:
SID of SAP HANA system and host name
Stack description including the edition (depending on the license schema)
Information about the SAP HANA database, including installation directory
Information about the SAP HANA studio repository, including location
Information about the SAP HANA client, including location
In the case of the SAP HANA enterprise extended edition: information about
SAP HANA load controller (which is part of the Sybase Replication Server
based replication)
Information about host controller
The LM structure description also contains revisions of individual components
and therefore needs to be upgraded when the SAP HANA database is upgraded.
Information contained in this file is used by the System Landscape Directory
(SLD) data suppliers and by the Software Update Manager (SUM) for SAP
HANA.
More information about how to configure the SLD connection is provided in the
official SAP guide, SAP HANA Installation Guide with Unified Installer, which is
available for download at:
http://help.sap.com/hana_appliance

4.1.6 SAP host agent


The SAP host agent is a standard part of every SAP installation. In an SAP
HANA environment, it is important in the following situations:
Automatic update using Software Update Manager (SUM) for SAP HANA
(more information is in documents SAP HANA Automated Update Guide and
SAP HANA Installation Guide with Unified Installer)
Replication using the Sybase Replication Server where the host agent is
handling login authentication between source and target servers (explained in
the document SAP HANA Installation and Configuration Guide - Log-Based
Replication)


4.1.7 Software Update Manager for SAP HANA


The Software Update Manager (SUM) for SAP HANA is a tool that belongs to the
SAP Software Logistics (SL) Toolset. This tool offers two main functions:
Automated update of the SAP HANA server components to the latest
revision, downloaded from SAP Service Marketplace
Enablement of automated update of remote SAP HANA studio installations
against the studio repository installed on SAP HANA server
Both functions are discussed in the subsequent sections.

Automated update of SAP HANA server components


The SAP Software Update Manager is a separate software component that must
be started on the SAP HANA server. A good practice is to install this component
as a service.
Tip:
The Software Update Manager can be configured as a Linux service by
running the following commands:
export JAVA_HOME=/usr/sap/<SID>/SUM/jvm/jre
/usr/sap/<SID>/SUM/daemon.sh install
The service can be started using the following command:
/etc/init.d/sum_daemon start
The SAP Software Update Manager does not have a user interface. It is
controlled remotely from the SAP HANA studio.


Figure 4-9 illustrates the interaction of SUM with other components.

Figure 4-9 Interaction of Software Update Manager (SUM) for SAP HANA with other software components

The Software Update Manager can download support package stack information
and other required files directly from the SAP Service Marketplace (SMP).
If a direct connection from the server to the SAP Service Marketplace is not
available, the support package stack definition and installation packages must be
downloaded manually and then uploaded to the SAP HANA server. In this case,
the stack generator at the SAP Service Marketplace can be used to identify
required packages and to generate the stack.xml definition file (a link to the stack
generator is located in the download section, subsection Support packages in
the SAP HANA area).
The SUM update file (SUMHANA*.SAR archive) is not part of the stack definition
and needs to be downloaded separately.
The Software Update Manager will first perform a self-update as soon as the
Lifecycle Management perspective is opened in the SAP HANA studio.
After the update is started, all SAP HANA software components are updated to
their target revisions, as defined by the support package stack definition file. This
operation needs downtime; therefore, a maintenance window is required, and the
database must be backed up before this operation.


This scenario is preconfigured during installation using the Unified Installer (see
the document SAP HANA Installation Guide with Unified Installer - section SUM
for SAP HANA Default Configuration for more details). If both the SAP HANA
studio and the Software Update Manager for SAP HANA are running on SPS04,
no further steps are required.
Otherwise a last configuration step, installing the server certificate inside the
Java keystore, needs to be performed on a remote workstation where the SAP
HANA studio is located.
For more information about installation, configuration, and troubleshooting of
SUM updates, see the guides:
SAP HANA Installation Guide with Unified Installer
SAP HANA Automated Update Guide
The most common problem during configuration of automatic updates using
SUM is a host name mismatch between the server installation (the fully qualified host
name that was used during installation of SAP HANA using the Unified Installer) and
the host name used in the SAP HANA studio. For more details, see the
troubleshooting section in SAP HANA Automated Update Guide.

Automated update of SAP HANA studio


The second function of the Software Update Manager for SAP HANA is to act as
an update server for remote SAP HANA studio installations.
Figure 4-10 illustrates the interaction of SUM with other components for this
scenario.

Figure 4-10 Interaction of the Software Update Manager (SUM) for SAP HANA with other
software components during update of remote SAP HANA studio


If the Unified Installer was used to install SAP HANA software components, no
actions need to be performed on the server.
The only configuration step needed is to adjust the SAP HANA studio
preferences to enable updates and to define the location of the update server.

4.1.8 SAP HANA Unified Installer


The SAP HANA Unified Installer is a tool intended to be used by SAP HANA
hardware partners. It installs all required software components on the SAP
HANA appliance according to SAP requirements and specifications.
Installation parameters, such as the system ID, the system number, and the locations
of required directories, are provided through a configuration file.
The tool then automatically deploys all required software components in
predefined locations and performs all mandatory steps to configure the Software
Update Manager (SUM) for SAP HANA.
See the SAP HANA Installation Guide with Unified Installer for more details.

4.2 Data replication methods for SAP HANA


Data can be written to the SAP HANA database either directly by a source
application, or it can be replicated using replication technologies.
The following replication methods are available for use with the SAP HANA
database:
Trigger-based replication
This method is based on database triggers created in the source system to
record all changes to monitored tables. These changes are then replicated to
the SAP HANA database using the SAP Landscape Transformation system.
ETL-based replication
This method employs an Extract, Transform, and Load (ETL) process to
extract data from the data source, transform it to meet the business or
technical needs, and load it into the SAP HANA database. The SAP
BusinessObjects Data Services application is used as part of this replication
scenario.
Extractor-based replication
This approach uses the embedded SAP NetWeaver BW that is available on
every SAP NetWeaver-based system to start an extraction process using
available extractors and then redirecting the write operation to the SAP HANA
database instead of the local Persistent staging Area (PSA).
Log-based replication
This method is based on reading the transaction logs from the source
database and re-applying them to the SAP HANA database.
Figure 4-11 illustrates these replication methods.

Figure 4-11 Available replication methods for SAP HANA

The following sections discuss these replication methods for SAP HANA in more
detail.

4.2.1 Trigger-based replication with SAP Landscape Transformation


SAP Landscape Transformation replication is based on tracking database
changes using database triggers. All modifications are stored in logging tables in
the source database, which ensures that every change is captured even when
the SAP Landscape Transformation system is not available.
The SAP Landscape Transformation system reads changes from source systems
and updates the SAP HANA database accordingly. The replication process can
be configured as real-time (continuous replication) or scheduled replication in
predefined intervals.


The SAP Landscape Transformation operates on the application level; therefore,
the trigger-based replication method benefits from the database abstraction
provided by the SAP software stack, which makes it database independent. It
also has extended source system release coverage, where supported releases
start from SAP R/3 4.6C up to the newest SAP Business Suite releases.
The SAP Landscape Transformation also supports direct replication from
database systems supported by the SAP NetWeaver platform. In this case, the
database must be connected to the SAP Landscape Transformation system
directly (as an additional database), and the SAP Landscape Transformation system
plays the role of the source system.
The replication process can be customized by creating ABAP routines and
configuring their execution during the replication process. This feature allows the
SAP Landscape Transformation system to replicate additional calculated
columns, to scramble existing data, or to filter replicated data based on defined
criteria.
The SAP Landscape Transformation replication leverages proven System
Landscape Optimization (SLO) technologies (such as Near Zero Downtime, Test
Data Migration Server (TDMS), and SAP Landscape Transformation) and can
handle both unicode and non-unicode source databases. The SAP Landscape
Transformation replication provides a flexible and reliable replication process,
fully integrates with SAP HANA Studio, and is simple and fast to set up.
The SAP Landscape Transformation Replication Server does not have to be a
separate SAP system. It can run on any SAP system with the SAP NetWeaver
7.02 ABAP stack (Kernel 7.20EXT). However, it is recommended to install the
SAP Landscape Transformation Replication Server on a separate system to
avoid a high replication load causing a performance impact on the base system.
The SAP Landscape Transformation Replication Server is the ideal solution for
all SAP HANA customers who need real-time (or scheduled) data replication
from SAP NetWeaver-based systems or databases supported with SAP
NetWeaver.

4.2.2 ETL-based replication with SAP BusinessObjects Data Services


An ETL-based replication for SAP HANA can be set up using SAP
BusinessObjects Data Services, which is a full-featured ETL tool that gives
customers maximum flexibility with regard to the source database system:
Customers can specify and load the relevant business data in defined periods
of time from an SAP ERP system into the SAP HANA database.


SAP ERP application logic can be reused by reading extractors or utilizing SAP
function modules.
It offers options for the integration of third-party data providers and supports
replication from virtually any data source.
Data transfers are done in batch mode, which limits the real-time capabilities of
this replication method.
SAP BusinessObjects Data Services provides several kinds of data quality and
data transformation functionality. Due to the rich feature set available,
implementation time for the ETL-based replication is longer than for the other
replication methods. SAP BusinessObjects Data Services offers integration with
SAP HANA. SAP HANA is available as a predefined data target for the load
process.
The ETL-based replication server is the ideal solution for all SAP HANA
customers who need data replication from non-SAP data sources.

4.2.3 Extractor-based replication with Direct Extractor Connection


Extractor-based replication for SAP HANA is based on already existing
application logic available in every SAP NetWeaver system. The SAP NetWeaver
BW package that is a standard part of the SAP NetWeaver platform can be used
to run an extraction process and store the extracted data in the SAP HANA
database.
This functionality requires some corrections and configuration changes to both
the SAP HANA database (import of a delivery unit and parameterization) and to the
SAP NetWeaver BW system as part of the SAP NetWeaver platform
(implementing corrections using an SAP Note or installing a support package, and
parameterization). Corrections in the SAP NetWeaver BW system ensure that
extracted data is not stored in the local Persistent Staging Area (PSA) but diverted to
the external SAP HANA database.
Use of native extractors instead of replication of underlying tables can bring
certain benefits. Extractors offer the same transformations that are used by SAP
NetWeaver BW systems. This can significantly decrease the complexity of
modeling tasks in the SAP HANA database.
This type of replication is not real time, and only the features and
transformation capabilities provided by a given extractor can be used.


Replication using Direct Extractor Connection (DXC) can be realized in the
following basic scenarios:
Using the embedded SAP NetWeaver BW functionality in the source system
SAP NetWeaver BW functions in the source system are usually not used.
After implementation of the required corrections, the source system calls its
own extractors and pushes data into the external SAP HANA database.
The source system must be based on SAP NetWeaver 7.0 or higher. Since
the function of a given extractor is diverted into SAP HANA database, this
extractor must not be in use by the embedded SAP NetWeaver BW
component for any other purpose.
Using an existing SAP NetWeaver BW to drive replication
An existing SAP NetWeaver BW can be used to extract data from the source
system and to write the result to the SAP HANA system.
The release of the SAP NetWeaver BW system that is used must be at least
SAP NetWeaver 7.0, and the given extractor must not be in use for this
particular source system.
Using a dedicated SAP NetWeaver BW to drive replication
The last option is to install a dedicated SAP NetWeaver system to extract data
from the source system and store the result in the SAP HANA database. This
option has minimal impact on existing functionality because no existing
system is changed in any way. However, a new system is required for this
purpose.
The current implementation of this replication technology allows for only one
database schema in the SAP HANA database. Using one system to control
replication of multiple source systems can lead to collisions because all source
systems use the same database schema in the SAP HANA database.

4.2.4 Log-based replication with Sybase Replication Server


The log-based replication for SAP HANA is realized with the Sybase Replication
Server. It captures table changes from low-level database log files and
transforms them into SQL statements that are in turn executed on the SAP
HANA database. This is similar to what is known as log shipping between two
database instances.
Replication with the Sybase Replication Server is fast and consumes little
processing power due to its closeness to the database system. However, this
mode of operation makes this replication method highly database dependent,
and the source database system coverage is limited1. It also limits the conversion
capabilities; therefore, replication with the Sybase Replication Server only
supports Unicode source databases. The Sybase Replication Server cannot
convert between code pages, and because SAP HANA works with unicode
encoding internally, the source database has to use unicode encoding as well.
Also, certain table types used in SAP systems are unsupported.
To set up replication with the Sybase Replication Server, the definition and
content of tables chosen to be replicated must initially be copied from the source
database to the SAP HANA database. This initial load is done with the R3Load
program, which is also used for database imports and exports. Changes in tables
during initial copy operation are captured by the Sybase Replication Server;
therefore, no system downtime is required.
This replication method is only recommended for SAP customers who were
invited to use it during the ramp-up of SAP HANA 1.0.
SAP recommends instead using trigger-based data replication with the SAP
Landscape Transformation Replication Server, which is described earlier in this chapter.

4.2.5 Comparing the replication methods


Each of the described data replication methods for SAP HANA has its benefits
and weaknesses:
The trigger-based replication method with the SAP Landscape
Transformation system provides real-time replication while supporting a wide
range of source database systems. It can handle both unicode and
non-unicode databases and makes use of proven data migration technology.
It leverages the SAP application layer, which limits it to SAP source systems.
Compared to the log-based replication method, it offers broader support of
source systems while providing nearly the same real-time capabilities, and for
that reason it is the recommended method for replication from SAP source systems.
The ETL-based replication method is the most flexible of all, paying the price
for flexibility with only near real-time capabilities. With its variety of possible
data sources, advanced transformation, and data quality functionality, it is the
ideal choice for replication from non-SAP data sources.
The extractor-based replication method offers reuse of existing transformation
capabilities that are available in every SAP NetWeaver based system. This
can significantly decrease the required implementation effort. However, this
type of replication is not real time and is limited to the capabilities provided by the
available extractors in the source system.
The log-based replication method with the Sybase Replication Server
provides the fastest replication from the source database to SAP HANA.
However, it is limited to Unicode-encoded source databases, and it does not
support all table types used in SAP applications. It provides no transformation
functionality, and the source database system support is limited.
Figure 4-12 shows these replication methods in comparison.

Figure 4-12 Comparison of the replication methods for SAP HANA

The replication method that you choose depends on the requirements. When
real-time replication is needed to provide benefit to the business, and the
replication source is an SAP system, then the trigger-based replication is the
best choice. Extractor-based replication might keep project cost down by reusing
existing transformations. ETL-based replication provides the most flexibility
regarding data source, data transformation, and data cleansing options, but it does
not provide real-time replication.


Chapter 5. SAP HANA use cases and integration scenarios
In this chapter, we outline the different ways that SAP HANA can be implemented
in existing customer landscapes and highlight various aspects of such an
integration. Whenever possible, we mention real-world examples and related
offerings.
This chapter is divided into several sections based on the role of SAP HANA and
the way it interacts with other software components:

5.1, Basic use case scenarios on page 60
5.2, SAP HANA as a technology platform on page 61
5.3, SAP HANA for operational reporting on page 66
5.4, SAP HANA as an accelerator on page 72
5.5, SAP products running on SAP HANA on page 74
5.6, Programming techniques using SAP HANA on page 85


5.1 Basic use case scenarios


The following classification of use cases was presented during the SAP TechEd
2011 event in session EIM205 Applications powered by SAP HANA. SAP defined
the following five use case scenarios:

Technology platform
Operational reporting
Accelerator
In-Memory products
Next generation applications

Figure 5-1 illustrates these use case scenarios.

Figure 5-1 Basic use case scenarios defined by SAP in session EIM205

These five basic use case scenarios describe the elementary ways SAP HANA
can be integrated. We cover each of these use case scenarios in a dedicated
section within this chapter.
SAP maintains an SAP HANA Use Case Repository with specific examples of
how SAP HANA can be integrated. This repository is online at the following web
address:
http://www.experiencesaphana.com/community/resources/use-cases


The use cases in this repository are divided into categories based on their
relevance to a specific industry sector. It is a good idea to review this repository
to find inspiration about how SAP HANA can be leveraged in various scenarios.

5.2 SAP HANA as a technology platform


SAP HANA can be used even in non-SAP environments. Customers can bring
structured and unstructured data derived from non-SAP application systems into
SAP HANA to take advantage of its power. SAP HANA can be used to accelerate
existing functionality or to provide new functionality that was, until now, not
realistic.
Figure 5-2 presents SAP HANA as a technology platform.

Figure 5-2 SAP HANA as technology platform

SAP HANA is not technologically dependent on other SAP products and can be
used independently as the only SAP component in the customer's Information
Technology (IT) landscape. On the other hand, SAP HANA can be easily
integrated with other SAP products, such as the SAP BusinessObjects BI
platform for reporting or SAP BusinessObjects Data Services for ETL replication,
which gives customers the possibility to use only the components that are
needed.
There are many ways that SAP HANA can be integrated into a customer
landscape, and it is not possible to describe all combinations. Software
components around the SAP HANA offering can be seen as building blocks, and
every solution must be assembled from the blocks that are needed in a particular
situation.
This approach is extremely versatile, and the number of possible combinations
keeps growing because SAP constantly adds new components to its SAP
HANA-related portfolio.


IBM offers consulting services that help customers to choose the correct solution
for their business needs. For more information, see section 8.4.1, A trusted
service partner on page 154.

5.2.1 SAP HANA data acquisition


There are multiple ways that data can flow into SAP HANA. In this section, we
describe the various options that are available. Figure 5-3 gives an overview.

Figure 5-3 Examples of SAP HANA deployment options in regards to data acquisition

The initial situation is schematically displayed in the upper-left corner of
Figure 5-3. In this example, a customer-specific non-SAP application writes data
to a custom database that is slow and does not meet customer needs.
The other three examples in Figure 5-3 show how SAP HANA can be deployed in
such a scenario. They show that there is no single solution that is best for every
customer and that each situation must be considered independently.


Each of these three solutions has advantages and disadvantages, which we
highlight to show aspects of a given solution that might need more detailed
consideration:
Replacing the existing database with SAP HANA
The advantage of this solution is that the overall architecture is not going to be
significantly changed. The solution will remain simple without the need to
include additional components. Customers might also save on license costs
for the original database.
A disadvantage of this solution is that the custom application must be
adjusted to work with the SAP HANA database. If ODBC or JDBC is used for
database access, this is not a big problem (a minimal connection sketch
follows this list of options). Also, the whole setup must be tested properly.
Because the original database is being replaced, a certain amount of
downtime is inevitable.
Customers considering this approach must be familiar with the features and
characteristics of SAP HANA, especially when certain requirements must be
met by the database that is used (for example in case of special purpose
databases).
Populating SAP HANA with data replicated from the existing database
The second option is to integrate SAP HANA as a side-car database to the
primary database and to replicate required data using one of the available
replication techniques.
An advantage of this approach is that the original solution is not touched and
therefore no downtime is required. Also only the required subset of data has
to be replicated from the source database, which might allow customers to
minimize acquisition costs because SAP HANA acquisition costs are directly
linked to the volume of stored data.
The need for implementing replication technology can be seen as the only
disadvantage of this solution. Because data is only delivered into SAP HANA
through replication, this component is a vital part of the whole solution.
Customers considering this approach must be familiar with various replication
technologies, including their advantages and disadvantages, as outlined in
section 4.2, Data replication methods for SAP HANA on page 51.
Customers must also be aware that replication might cause additional load on
the existing database because modified records must be extracted and then
transported to the SAP HANA database. This aspect is highly dependent on
the specific situation and can be addressed by choosing the proper replication
technology.


Adding SAP HANA as a second database in parallel to the existing one


This third option keeps the existing database in place while adding SAP
HANA as a secondary database. The custom application then stores data in
both the original database and in the SAP HANA database.
This option balances advantages and disadvantages of the previous two
options. A main prerequisite is the ability of the source application to work
with multiple databases and the ability to control where data is stored. This
can be easily achieved if the source application was developed by the
customer and can be changed, or if the source application is going to be
developed as part of this solution. If this prerequisite cannot be met, this
option is not viable.
An advantage of this approach is that no replication is required because data
is directly stored in SAP HANA as required. Customers can also decide to
store some of the records in both databases.
If the data stored in the original database is not going to be changed and the
SAP HANA-relevant data will be stored in both databases simultaneously,
customers might achieve only minimal disruption to the existing solution.
A main disadvantage is the prerequisite that the application must be able to
work with multiple databases and that it must be able to store data according
to the customer's expectations.
Customers considering this option must be aware of the capabilities provided
by the application delivering data into the existing database. Also, disaster
recovery plans must be carefully adjusted, especially when consistency
between both databases is seen as a critical requirement.
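As mentioned for the first option, a custom application typically reaches SAP HANA through standard ODBC or JDBC interfaces. The following minimal sketch shows how such an access path could look from a Python program using pyodbc; the driver name (HDBODBC), host, port, credentials, and the SALES_ORDERS table are illustrative assumptions and not part of any specific customer landscape.

import pyodbc

# Connection string keywords follow the SAP HANA ODBC client conventions;
# host, port, user, password, and table are placeholders.
connection = pyodbc.connect(
    "DRIVER={HDBODBC};"
    "SERVERNODE=hanahost:30015;"
    "UID=APP_USER;"
    "PWD=secret"
)

cursor = connection.cursor()
# The application keeps issuing the same kind of SQL it sent to the old custom
# database; only the connection definition changes.
cursor.execute("SELECT ORDER_ID, AMOUNT FROM SALES_ORDERS WHERE AMOUNT > ?", 1000)
for order_id, amount in cursor.fetchall():
    print(order_id, amount)

cursor.close()
connection.close()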
These examples must not be seen as an exhaustive list of integration options for
an SAP HANA implementation, but rather as a demonstration of how to develop
a solution that matches customer needs.
It is of course possible to populate the SAP HANA database with data coming
from multiple different sources, such as SAP or non-SAP applications, custom
databases, and so on.
These sources can feed data into SAP HANA independently, each using a
different approach, or in a synchronized manner using SAP BusinessObjects
Data Services, which can replicate data from several different sources
simultaneously.


5.2.2 SAP HANA as a source for other applications


The second part of integrating SAP HANA is to connect existing or new
applications to run queries against the SAP HANA database. Figure 5-4
illustrates an example of such an integration.

Figure 5-4 An example of SAP HANA as a source for other applications

The initial situation is schematically visualized in the left part of Figure 5-4. A
customer-specific application runs queries against a custom database; this
functionality must be preserved.
A potential solution is shown in the right part of Figure 5-4. The customer-specific
application runs the problematic queries against the SAP HANA database. If the
existing database is still part of the solution, queries that do not need
acceleration can still be executed against the original database.
Specialized analytic tools, such as the SAP BusinessObjects Predictive Analysis,
can be used to run statistical analysis on data that is stored in the SAP HANA
database. This tool can run analysis directly inside the SAP HANA database,
which helps to avoid expensive transfers of massive volumes of data between the
application and the database. The result of this analysis can be stored in SAP
HANA, and the custom application can use these results for further processing,
for example, to facilitate decision making.
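The benefit of running the analysis inside the database can be illustrated with a small, hedged sketch. The connection parameters, table, and column names below are hypothetical; the point is simply that the aggregation is expressed in SQL and executed by the SAP HANA column store, so only a small result set is transferred to the application instead of millions of detail rows.

import pyodbc

connection = pyodbc.connect("DRIVER={HDBODBC};SERVERNODE=hanahost:30015;UID=APP_USER;PWD=secret")
cursor = connection.cursor()

# Variant to avoid: pull all detail rows out of the database and aggregate them
# in the application.
# cursor.execute("SELECT REGION, AMOUNT FROM SALES_ORDERS")
# totals = {}
# for region, amount in cursor.fetchall():
#     totals[region] = totals.get(region, 0) + amount

# Preferred with SAP HANA: let the in-memory column store do the aggregation
# and transfer only the small result set to the application.
cursor.execute(
    "SELECT REGION, SUM(AMOUNT) AS TOTAL "
    "FROM SALES_ORDERS "
    "GROUP BY REGION "
    "ORDER BY TOTAL DESC"
)
for region, total in cursor.fetchall():
    print(region, total)

cursor.close()
connection.close()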
SAP HANA can be easily integrated with products from the SAP
BusinessObjects family; therefore, these products can be part of the solution,
responsible for reporting, monitoring critical KPIs using dashboards, or for data
analysis.


These tools can also be used without SAP HANA; however, SAP HANA enables
these tools to process much bigger volumes of data and still provide results in a
reasonable time.

5.3 SAP HANA for operational reporting


Operational reporting plays an increasingly important role. In today's
economic environment, companies must understand how various events in our
globally integrated world impact their business to be able to make proper
adjustments to counter the effects of these events. Therefore, the pressure to
minimize the delay in reporting keeps increasing. An ideal situation is to have a
real-time snapshot of the current situation within seconds of requesting it.
At the same time, the amount of data that is being captured grows every year.
Additional information is collected and stored at more detailed levels. All of this
makes operational reporting more challenging because huge amounts of data
need to be processed quickly to produce the desired result.
SAP HANA is a perfect fit for this task. Required information can be replicated
from existing transactional systems into the SAP HANA database and then
processed significantly faster than directly on the source systems.
The following use case is often referred to as a data mart or side-car approach
because SAP HANA sits by the operational system and receives the operational
data (usually only an excerpt) from this system by means of replication.
In a typical SAP-based application landscape today, you will find a number of
systems, such as SAP ERP, SAP CRM, SAP SCM, and other, possibly non-SAP,
applications. All of these systems contain loads of operational data, which can be
used to improve business decision making using business intelligence
technology. Data used for business intelligence purposes can be gathered either
on a business unit level using data marts or on an enterprise level with an
enterprise data warehouse, such as the SAP NetWeaver Business Warehouse
(SAP NetWeaver BW). ETL processes feed the data from the operational
systems into the data marts and the enterprise data warehouse.


Figure 5-5 illustrates such a typical landscape.

Figure 5-5 Typical view of an SAP-based application landscape today

With the huge amounts of data collected in an enterprise data warehouse,
response times of queries for reports or navigation through data can become an
issue, generating new performance requirements for such an environment.
To address these requirements, SAP introduced the SAP NetWeaver Business
Warehouse Accelerator, which is built for this use case by speeding up queries
and reports in the SAP NetWeaver BW by leveraging in-memory technology.
While being a perfect fit for an enterprise data warehouse holding huge amounts
of data, the combination of SAP NetWeaver BW and SAP NetWeaver BW
Accelerator is not always a viable solution for the relatively small data marts.


With the introduction of SAP HANA 1.0, SAP provided an in-memory technology
aiming to support business intelligence at a business unit level. SAP HANA
combined with business intelligence tools, such as the SAP BusinessObjects
tools and data replication mechanisms feeding data from the operational system
into SAP HANA in real-time, brought in-memory computing to the business unit
level. Figure 5-6 shows such a landscape with the local data marts replaced by
SAP HANA.

Figure 5-6 SAP vision after the introduction of SAP HANA 1.0

Business intelligence functionality is provided by an SAP BusinessObjects BI
tool, such as SAP BusinessObjects Explorer, communicating with the SAP
HANA database through the BI Consumer Services (BICS) interface.
This use case scenario is oriented mainly toward existing products from the SAP
Business Suite, where SAP HANA acts as a foundation for reporting on big
volumes of data.


Figure 5-7 illustrates the role of SAP HANA in an operational reporting use case
scenario.

Figure 5-7 SAP HANA for operational reporting

Usually, the first step is the replication of data into the SAP HANA database,
typically originating from the SAP Business Suite. However, some solution
packages are also built for non-SAP data sources.
Sometimes source systems need to be adjusted by implementing modifications
or by performing specific configuration changes.
Data is typically replicated using the SAP Landscape Transformation replication;
however, other options, such as replication using SAP BusinessObjects Data
Services or SAP HANA Direct Extractor Connection (DXC), are also possible.
The replication technology is usually chosen as part of the package design and
cannot be changed easily during implementation.
A list of tables to replicate (for SAP Landscape Transformation replication) or
transformation models (for replication using Data Services) is part of the
package.
SAP HANA is loaded with models (views) that are either static (designed by SAP
and packaged) or automatically generated based on customized criteria. These
models describe the transformation of source data into the resulting column
views. These views are then consumed by SAP BusinessObjects BI 4.0 reports
or dashboards that are either delivered as final products or as pre-made templates
that can be finished as part of the implementation process.
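To give a rough idea of how such delivered models are consumed, the following hedged sketch queries an activated view through plain SQL. Activated SAP HANA models are exposed as column views in the _SYS_BIC schema; the connection details, the package name (my.package), the view name (AN_SALES), and the columns are purely hypothetical and would differ in every solution package.

import pyodbc

connection = pyodbc.connect("DRIVER={HDBODBC};SERVERNODE=hanahost:30015;UID=REPORT_USER;PWD=secret")
cursor = connection.cursor()

# Activated models appear as column views in the _SYS_BIC schema; the package
# ("my.package") and view ("AN_SALES") names here are purely hypothetical.
cursor.execute(
    'SELECT "CUSTOMER_COUNTRY", SUM("NET_VALUE") AS NET_VALUE '
    'FROM "_SYS_BIC"."my.package/AN_SALES" '
    'GROUP BY "CUSTOMER_COUNTRY"'
)
for country, net_value in cursor.fetchall():
    print(country, net_value)

cursor.close()
connection.close()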
Some solution packages are based on additional components (for example, SAP
BusinessObjects Event Insight). If required, additional content that is specific to
these components can also be part of the solution package.
Individual use cases, required software components, prerequisites, configuration
changes, including overall implementation processes, are properly documented
and attached as part of the delivery.


Solution packages can contain:


SAP BusinessObjects Data Services Content (data transformation models)
SAP HANA Content (exported models - attribute views, analytic views)
SAP BusinessObjects BI Content (prepared reports, dashboards)
Transports, ABAP reports (adjusted code to be implemented in source
system)
Content for other software components, such as SAP BusinessObjects Event
Insight, Sybase Unwired Platform, and so on.
Documentation
Packaged solutions like these are delivered by SAP under the name SAP
Rapid Deployment Solutions (RDS) for SAP HANA, or by other system
integrators, such as IBM.
Available offerings contain everything that customers need to implement the
requested function. Associated services, including implementation, can also be
part of the delivery.
While SAP HANA as a technology platform can be seen as an open field where
every customer can build their own solution using available building blocks, the
SAP HANA for operational reporting scenarios are well prepared packaged
scenarios that can easily and quickly be deployed on existing landscapes.
A list of SAP Rapid Deployment Solution (RDS) offerings is available at the following web
address:
http://www.sap.com/solutions/rapid-deployment/solutions-by-business.epx?Filter1=SOLA000069
Alternatively, you can use the following quick link and then open
Technology → SAP HANA:
http://service.sap.com/solutionpackages
There are SAP notes containing Quick Guide documents that are critical for
understanding how a given solution is designed. These documents contain
information about required software components that might have to be obtained,
about version requirements for existing components, and about implementation
sequences.


The currently available Rapid Deployment Solutions, SAP Notes with more
information, and links to related web content (where available) are:
SAP Bank Analyzer Rapid-Deployment Solution for Financial Reporting with
SAP HANA (see SAP Note 1626729)
http://service.sap.com/rds-hana-finrep
SAP CRM rapid-deployment solution for analytics with SAP HANA (see SAP
Note 1680801)
http://service.sap.com/rds-crm-bwhana
SAP Customer Usage Analysis rapid deployment solution (see SAP Note
1729467)
http://service.sap.com/rds-haha-cua
SAP rapid-deployment solution for implementation of data services, BI
platform, and rapid marts to SAP HANA (see SAP Note 1678910)
http://service.sap.com/rds-eim
SAP Grid Infrastructure Analytics rapid-deployment solution (see SAP Note
1703517)
http://service.sap.com/rds-grid-ana
SAP rapid-deployment solution for sales pipeline analysis with SAP HANA
(see SAP Note 1637113)
http://service.sap.com/rds-crm-pipeline
SAP ERP rapid-deployment solution for profitability analysis with SAP HANA
(see SAP Note 1632506)
http://service.sap.com/rds-hana-copa
SAP ERP rapid-deployment solution for accelerated finance and controlling
with SAP HANA (see SAP Note 1656499)
http://service.sap.com/rds-hana-fin
SAP ERP rapid-deployment solution for operational reporting with SAP HANA
(see SAP Note 1647614 for SAP HANA SP03, or SAP Note 1739432 for SAP
HANA SP04)
http://service.sap.com/rds-hana-erp
SAP Global Trade Services rapid-deployment solution for sanctioned-party
list screening with SAP HANA (see SAP Note 1689708)
http://service.sap.com/rds-gts
SAP Situational Awareness rapid-deployment solution for public sector with
SAP HANA (see SAP Note 1681090)
http://service.sap.com/rds-saps


Program Performance Analysis rapid-deployment solution for Aerospace and
Defense (see SAP Note 1696052)
http://service.sap.com/rds-aad
SAP rapid-deployment solution for Sentiment Intelligence with SAP HANA
(see SAP Note 1710619)
http://service.sap.com/rds-sent-int
SAP rapid-deployment solution for Shopper Insight (see SAP Note 1735653)
http://service.sap.com/rds-shop
SAP Deposits Management rapid-deployment solution for transaction history
analysis with SAP HANA (see SAP Note 1626730)
http://service.sap.com/rds-hana-transhis

5.4 SAP HANA as an accelerator


SAP HANA in a side-car approach as an accelerator is similar to a side-car
approach for reporting purposes. The difference is that the consumer of the data
replicated to SAP HANA is not a business intelligence tool but the source system
itself. The source system can use the in-memory capabilities of the SAP HANA
database to run analytical queries on the replicated data. This helps applications
performing queries on huge amounts of data to run simulations, pattern
recognition, planning runs, and so on.
SAP HANA can also be used to accelerate existing processes in SAP Business
Suite systems, even for those systems that are not yet released to be directly
running on the SAP HANA database.
Some SAP systems process large numbers of records that need to be
filtered or aggregated based on specific criteria. The results are then used as
inputs for all dependent activities in a given system.
With really big data volumes, the execution time can become unacceptable.
Such workloads can easily run for several hours, which can cause
unnecessary delays. Currently, these tasks are typically processed
overnight as batch jobs.
SAP HANA as an accelerator can help to significantly decrease this execution
time.


Figure 5-8 illustrates this use case scenario.

Figure 5-8 SAP HANA as an accelerator

The accelerated SAP system must meet specific prerequisites. Before this
solution can be implemented, installation of specific support packages or
implementation of SAP Notes might be required. This introduces the necessary
code changes in the source system.
The SAP HANA client must be installed on a given server, and the SAP kernel
must be adjusted to support direct connectivity to the SAP HANA database.
As a next step, replication of data from the source system is configured. Each
specific use case has a defined replication method and a list of tables that must
be replicated. Most common is the SAP Landscape Transformation replication;
however, some solutions offer alternatives. For example, for the SAP CO-PA
Accelerator, replication can also be performed by an SAP CO-PA
Accelerator-specific ABAP report in the source system.
The source system is configured to have direct connectivity to SAP HANA as
a secondary database. The required scenario is configured according to the
specifications and then activated. During activation, the source system
automatically deploys the required column views into SAP HANA and activates
the new ABAP code that was installed in the source system as a solution
prerequisite. This new code runs the time-consuming queries against the SAP
HANA database, which leads to significantly shorter execution times.
Because SAP HANA is populated with valuable data, it is easy to extend the
accelerator use case by adding operational reporting functions. Additional
(usually optional) content is delivered for SAP HANA and for SAP
BusinessObjects BI 4.0 client tools, such as reports or dashboards.


The SAP HANA as an accelerator and SAP HANA for operational reporting use case
scenarios can be nicely combined in a single package. Here is a list of SAP
Rapid Deployment Solutions (RDS) that implement SAP HANA as an accelerator:
SAP Bank Analyzer Rapid-Deployment Solution for Financial Reporting with
SAP HANA (see SAP Note 1626729)
http://service.sap.com/rds-hana-finrep
SAP rapid-deployment solution for customer segmentation with SAP HANA
(see SAP Note 1637115)
http://service.sap.com/rds-cust-seg
SAP ERP rapid-deployment solution for profitability analysis with SAP HANA
(see SAP Note 1632506)
http://service.sap.com/rds-hana-copa
SAP ERP rapid-deployment solution for accelerated finance and controlling
with SAP HANA (see SAP Note 1656499)
http://service.sap.com/rds-hana-fin
SAP Global Trade Services rapid-deployment solution for sanctioned-party
list screening with SAP HANA (see SAP Note 1689708)
http://service.sap.com/rds-gts

5.5 SAP products running on SAP HANA


Another way that SAP HANA can be deployed is to use SAP HANA as the
primary database for selected products.
SAP NetWeaver Business Warehouse (BW) running on SAP HANA has been generally
available since April 2012. SAP ERP Central Component (SAP ECC)
running on SAP HANA was announced to be available by the end of 2012, and other
products from the SAP Business Suite family are expected to follow.
One big advantage of running existing products on SAP HANA as the primary
database is the minimal disruption to the existing system. Almost all functions,
customizations, and, in the case of SAP NetWeaver BW, also customer-specific
modeling are preserved because the application logic written in ABAP is not
changed. From a technical perspective, the SAP HANA conversion is similar to
any other database migration.


Figure 5-9 illustrates SAP NetWeaver BW running on SAP HANA.

SAP BW on SAP HANA

SAP ECC on SAP HANA

(available)

(planned)

Traditional
extraction

SAP
SAP
Business
Business
Suite
Suite
RDBMS
RDBMS

SAP
SAP BW
BW

SAP
SAP ECC
ECC

Column
Column Store
Store

Column
Column Store
Store

Row
Row Store
Store

Row
Row Store
Store

SAP
SAP HANA
HANA

SAP
SAP HANA
HANA

Figure 5-9 SAP products running on SAP HANA: SAP Business Warehouse (SAP
NetWeaver BW) and SAP ERP Central Component (SAP ECC)

5.5.1 SAP NetWeaver BW running on SAP HANA


SAP HANA can be used as the database for an SAP NetWeaver Business
Warehouse (SAP NetWeaver BW) installation. In this scenario, SAP HANA
replaces the traditional database server of an SAP NetWeaver BW installation.
The application servers stay the same.


The in-memory performance of SAP HANA dramatically improves query
performance and eliminates the need for manual optimization through materialized
aggregates in SAP NetWeaver BW. Figure 5-10 shows SAP HANA as the
database for the SAP NetWeaver Business Warehouse.

Figure 5-10 SAP HANA as the database for SAP NetWeaver Business Warehouse

In contrast to an SAP NetWeaver BW system accelerated by the in-memory
capabilities of SAP NetWeaver BW Accelerator, an SAP NetWeaver BW system
with SAP HANA as the database keeps all data in memory. With SAP
NetWeaver BW Accelerator, the customer chooses the data to be accelerated,
which is then copied to the SAP NetWeaver BW Accelerator. Here, the traditional
database server (for example, IBM DB2 or Oracle) still acts as the primary
datastore.
SAP NetWeaver BW on SAP HANA is probably the most popular SAP HANA use
case because it achieves big performance improvements with relatively small effort.
The underlying database is replaced by the SAP HANA database, which
significantly improves both data loading times and query execution times.
Because the application logic written in ABAP is not impacted by this change, all
investments in developing BW models are preserved. The transition to SAP
HANA is a transparent process that requires minimal effort to adjust existing
modeling.


In-memory optimized InfoCubes


InfoCubes in SAP NetWeaver BW running on a traditional database use the
so-called Enhanced Star Schema. This schema was designed to optimize
different performance aspects of working with multidimensional models on
existing database systems.
Figure 5-11 illustrates the Enhanced Star Schema in BW with an example.
Figure 5-11 Enhanced Star Schema in SAP NetWeaver Business Warehouse

The core part of every InfoCube is the fact table. This table contains dimension
identifiers (IDs) and corresponding key figures (measures). This table is
surrounded by dimension tables that are linked to fact tables using the dimension
IDs.
Dimension tables are usually small tables that group logically connected
combinations of characteristics, usually representing master data. Logically


connected means that the characteristics are highly related to each other, for
example, company code and plant. Combining unrelated characteristics leads to
a large number of possible combinations, which can have a negative impact on
performance.
Because master data records are located in separate tables outside of the
InfoCube, an additional table is required to connect these master data records to
dimensions. These additional tables contain a mapping of auto-generated
Surrogate IDs (SIDs) to the real master data.
This complex structure is required on classical databases; however, with SAP
HANA this requirement is obsolete. SAP therefore introduced the SAP HANA
Optimized Star Schema, illustrated in Figure 5-12.

Figure 5-12 SAP HANA Optimized Star Schema in SAP NetWeaver BW system


The content of all dimensions (except for the Data Package dimension) is
incorporated into the fact table. This modification brings several advantages:
Simplified modeling
Poorly designed dimensions (wrong combinations of characteristics) cannot
affect performance anymore. Moving characteristics from one dimension to
another is not a physical operation anymore; instead, it is just a metadata
update.
Faster loading
Because dimension tables do not exist, all overhead workload related to
identification of existing combinations or creating new combinations in the
dimension tables is not required anymore. Instead, the required SID values
are directly inserted into the fact table.
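To make the difference between the two schemas concrete, the following simplified sketch contrasts the joins a query needs in each case. This is not the SQL that SAP NetWeaver BW actually generates; the table and column names are taken from the example in Figures 5-11 and 5-12, and the queries are reduced to a single characteristic and key figure.

# Enhanced Star Schema: the fact table joins to a dimension table, which in
# turn joins to the SID table before the company code can be resolved.
CLASSIC_STAR_QUERY = """
SELECT s.COMP_CODE, SUM(f.AMOUNTFX)
  FROM "/BI0/F0COPC_C08"  AS f
  JOIN "/BI0/D0COPC_C081" AS d ON d.DIMID = f.KEY_0COPC_C081
  JOIN "/BI0/SCOMP_CODE"  AS s ON s.SID   = d.SID_0COMP_CODE
 GROUP BY s.COMP_CODE
"""

# SAP HANA Optimized Star Schema: the SID column is stored in the fact table
# itself, so the dimension table and one join per dimension disappear.
OPTIMIZED_STAR_QUERY = """
SELECT s.COMP_CODE, SUM(f.AMOUNTFX)
  FROM "/BI0/F0COPC_C08" AS f
  JOIN "/BI0/SCOMP_CODE" AS s ON s.SID = f.SID_0COMP_CODE
 GROUP BY s.COMP_CODE
"""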
The SAP HANA Optimized Star Schema is automatically used for all newly
created InfoCubes on the SAP NetWeaver BW system running on the SAP
HANA database.
Existing InfoCubes are not automatically converted to this new schema during
the SAP HANA conversion of the SAP NetWeaver BW system. The conversion of
standard InfoCubes to in-memory optimized InfoCubes must be done manually
as a follow-up task after the database migration.

SAP HANA acceleration areas


The SAP HANA database can bring significant performance benefits; however, it
is important to set the expectations correctly. SAP HANA can improve loading
and query times, but there are certain limits that cannot be overcome.
Migration of SAP NetWeaver BW to run on SAP HANA will certainly not improve
extraction processes because extraction happens in the source system.
Therefore it is important to understand how much of the overall load time is taken
by extraction from the source system. This information is needed to properly
estimate the potential performance improvement for the load process.
Other parts of the load process are improved. The new star schema removes
unnecessary activities from the loading process.
Some of the calculations and application logic can be pushed to the SAP HANA
database. This ensures that data intensive activities are being done on the SAP
HANA database level instead of on the application level. This increases the
performance because the amount and volume of data exchanged between the
database and the application are significantly reduced.


SAP HANA can calculate all aggregations in real time. Therefore, aggregates are
no longer required, and the roll-up activity related to aggregate updates is obsolete.
This also reduces the overall execution time of update operations.
If SAP NetWeaver BW Accelerator was used, the update of its indexes is also no
longer needed. Because SAP HANA is based on technology similar to SAP
NetWeaver BW Accelerator, all queries are accelerated. Query performance with
SAP HANA can be compared to the situation in which all cubes are indexed by the
SAP NetWeaver BW Accelerator. In reality, query performance can be even
faster than with SAP NetWeaver BW Accelerator because additional features are
available for SAP NetWeaver BW running on SAP HANA, for example, the
possibility to remove an InfoCube and instead run reports against in-memory
optimized DataStore Objects (DSOs).

5.5.2 Migrating SAP NetWeaver BW to SAP HANA


There are multiple ways that an existing SAP NetWeaver Business Warehouse
system can be moved to an SAP HANA database. It is important to distinguish
between building a proof of concept (POC) demo system and a productive
migration.
The available options are divided into two main groups:
SAP NetWeaver BW database migration on page 80
Transporting the content to the SAP NetWeaver BW system on page 83
These two groups are just the main driving ideas behind the move from a traditional
database to SAP HANA. Within each group, there are still many possibilities
for how a project plan can be orchestrated.
In the following sections, we explain these two approaches in more detail.

SAP NetWeaver BW database migration


The following software levels are prerequisites for SAP NetWeaver BW running
on SAP HANA (see SAP Note 1600929 for the latest information):
SAP NetWeaver BW 7.30 SP5 or SAP NetWeaver BW 7.31 SP4 (as per SAP Note 1600929,
SP07 or higher must be imported for your SAP NetWeaver BW installation (ABAP)
before migration and after installation)
SAP HANA 1.0 SPS03 (the latest available revision is recommended)


It is important to be aware that not all SAP NetWeaver BW add-ons are
supported to run on the SAP HANA-based system. For the latest information, see
the following SAP Notes:
Note 1600929 - SAP NetWeaver BW powered by SAP HANA DB: Information
Note 1532805 - Add-On Compatibility of SAP NetWeaver 7.3
Note 1652738 - Add-on compatibility for SAP NetWeaver EHP 1 for NW 7.30
Unless your system already meets the minimal release requirements, the first
step before converting SAP NetWeaver BW is to upgrade the system to the latest
available release and to the latest available support package level.
A database upgrade might be required as part of the release upgrade or as a
prerequisite before database migration to SAP HANA. For a list of supported
databases, see SAP note 1600929.
Table 5-1 lists the databases that were approved as source databases for the
migration to SAP HANA at the time of writing.
Table 5-1 Supported source databases for a migration to the SAP HANA database

Database              SAP NetWeaver BW 7.30    SAP NetWeaver BW 7.31
Oracle                11.2                     11.2
MaxDB                 7.8                      7.9
MS SQL Server         2008                     2008
IBM DB2 LUW           9.7                      9.7
IBM DB2 for i         6.1, 7.1                 6.1, 7.1
IBM DB2 for z/OS      V9, V10                  V9, V10
Sybase ASE            n/a                      15.7

SAP HANA is currently not a supported database for any SAP NetWeaver Java
stack; therefore, dual-stack installations (ABAP+Java) must be separated into
two individual stacks using the Dual-Stack Split Tool from SAP.
Because some existing installations are still non-Unicode installations, another
important prerequisite step might be a conversion of the database to Unicode
encoding. This Unicode conversion can be done as a separate step or as part of
the conversion to the SAP HANA database.
All InfoCubes with data persistency in the SAP NetWeaver Business Warehouse
Accelerator are set as inactive during conversion, and their content in SAP
NetWeaver BW Accelerator is deleted. These InfoCubes must be reloaded again


from the original primary persistence; therefore, required steps must be
incorporated into the project plan.
A migration to the SAP HANA database follows the exact same process as any
other database migration. All activity in the SAP NetWeaver BW system is
suspended after all preparation activities are finished. A special report is
executed to generate database-specific statements for the target database that is
used during import. Next, the content of the SAP system is exported to a
platform-independent format and stored in files on disk.
These files are then transported to the primary application server of the SAP
NetWeaver BW system. Note that the application part of SAP NetWeaver BW is not
allowed to run on the SAP HANA appliance. Therefore, a minimal installation
needs to have two servers:
SAP HANA appliance hosting the SAP HANA database
The SAP HANA appliance is delivered by IBM with the SAP HANA database
preloaded. However, the database will be empty.
Primary application server hosting the ABAP instance of SAP NetWeaver BW
There are minimal restrictions with regard to the operating system of the
primary application server. See the Product Availability Matrix (PAM) for
available combinations (search for SAP NetWeaver 7.3 and download the
overview presentation):
http://service.sap.com/pam


At the time of writing this book, the operating systems listed in Table 5-2 were
available to host the ABAP part of the SAP NetWeaver BW system.

Table 5-2 Supported operating systems for primary application server

Operating system      Platform                          SAP NetWeaver BW 7.30    SAP NetWeaver BW 7.31
Windows Server 2008   x86_64 (64-bit) (including R2)    yes                      yes
AIX 6.1, 7.1          Power (64-bit)                    yes                      yes
HP-UX 11.31           IA64 (64-bit)                     yes                      yes
Solaris 10            SPARC (64-bit)                    yes                      yes
Solaris 10            x86_64 (64-bit)                   yes                      yes
Linux SLES 11 SP1     x86_64 (64-bit)                   yes                      yes
Linux RHEL 5          x86_64 (64-bit)                   yes                      no
Linux RHEL 6          x86_64 (64-bit)                   yes                      yes
IBM i 7.1             Power (64-bit)                    no                       yes

The next step is the database import. It comprises the installation of SAP
NetWeaver BW on the primary application server and the import of data into the
SAP HANA database. The import occurs remotely from the primary application
server as part of the installation process.
Parallel export/import using a socket connection, and the FTP and NFS exchange
modes, are not supported. Currently, only the asynchronous file-based
export/import method is available.
After the mandatory post-activities, the conversion of InfoCubes and DataStore
objects to their in-memory optimized form must be initiated to gain all of the
benefits that the SAP HANA database can offer. This can be done either manually
for each object or as a mass operation using a special report.
Customers must plan a sufficient amount of time to perform this conversion. This
step can be time-consuming because the content of all InfoCubes must be
copied into temporary tables that have the new structure.
After all post-activities are finished, the system is ready to be tested.

Transporting the content to the SAP NetWeaver BW system


Unlike with a database migration, this approach is based on performing
transports of activated objects (Business Content) from the existing SAP


NetWeaver BW system into a newly installed SAP NetWeaver BW system with
SAP HANA as a primary database.
The advantage of this approach is that content can be transported across
releases, as explained in the following SAP Notes:

Note 1090842 - Composite note: Transports across several releases


Note 454321 - Transports between Basis Release 6.* and 7.0
Note 1273566 - Transports between Basis Release 700/701 and 702/73*
Note 323323 - Transport of all activated objects of a system

The possibility to transport content across different releases can significantly
reduce the amount of effort that is required to build a proof of concept (POC)
system because most of the prerequisite activities, such as the release upgrade,
database upgrade, dual-stack split, and so on, are not needed.
After transporting the available objects (metadata definitions), their content must
also be transported from the source to the target system. The SAP NetWeaver
BW consultant must assess which of the available options are most suitable for this
purpose.
Note that this approach is not recommended for a production migration, where a
conventional database migration is used. Therefore, additional effort invested in
building a POC system in the same way that the production system will be treated
is a valuable test. This kind of test can help customers create a realistic effort
estimation for the project, estimate the required runtimes, and define detailed
planning of all actions that are required. All involved project team members
become familiar with the system and can solve and document all specific
problems.

Parallel approach to SAP HANA conversion


The recommended approach to convert an SAP NetWeaver BW system to use
the SAP HANA database is a parallel approach, meaning that the new SAP
NetWeaver BW system is created as a clone of the original system. The standard
homogeneous system copy method can be used for this purpose.
This clone is then reconfigured in such a way that both the original and the cloned BW
systems are functional and both systems can extract data from the same sources.
Detailed instructions about how to perform this cloning operation are explained in
SAP Note 886102, scenario B2.


Here is some important information that is relevant to the cloned system. Refer to
the content in SAP Note 886102 to understand the full procedure that must be
applied on the target BW system. The SAP Note states:
Caution: This step deletes all transfer rules and PSA tables of these source
systems, and the data is lost. A message is generated stating that the source
system cannot be accessed (since you deleted the host of the RFC connection).
Choose Ignore.
It is important to understand the consequences of this action and to plan the
required steps to reconfigure the target BW system so that it can again read data
from the source systems.
Persistent Staging Area (PSA) tables can be regenerated by the replication of
DataSources from the source systems, and transfer rules can be transported
from the original BW system. However, the content of these PSA tables is lost
and needs to be reloaded from the source systems.
This step might potentially cause problems where DataStore objects are used
and PSA tables contain the complete history of data.
An advantage of creating a cloned SAP NetWeaver BW system is that the
original system is not impacted and can still be used for productive tasks. The
cloned system can be tested and results compared with the original system
immediately after the clone is created and after every important project
milestone, such as a release upgrade or the conversion to SAP HANA itself.
Both systems are fully synchronized because both systems periodically extract
data from the same source systems. Therefore, after the entire project is finished,
and the new SAP NetWeaver BW system running on SAP HANA meets the
customer's expectations, the new system can fully replace the original system.
A disadvantage of this approach is the additional load imposed on the source
systems, which is caused by both SAP NetWeaver BW systems performing
extraction from the same source system, and certain limitations mentioned in the
following SAP Notes:
Note 775568 - Two and more BW systems against one OLTP system
Note 844222 - Two OR more BW systems against one OLTP system

5.6 Programming techniques using SAP HANA


The last use case scenario is based on recent developments from SAP where
applications can be built directly against the SAP HANA database leveraging all
its features, such as the embedded application server (XS Engine) or stored


procedures, which allows logic to be directly processed inside the SAP HANA
database.
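As a rough illustration of this style of development, the following hedged sketch shows a client triggering server-side logic so that the data-intensive work runs inside SAP HANA. The connection details, schema, procedure, table, and column names are hypothetical, and the SQLScript shown in the comments is only an approximation of what such a procedure could look like.

import pyodbc

connection = pyodbc.connect("DRIVER={HDBODBC};SERVERNODE=hanahost:30015;UID=DEV_USER;PWD=secret")
cursor = connection.cursor()

# The procedure would be created once in SAP HANA, for example with SQLScript
# roughly like the following (hypothetical schema, table, and column names):
#
#   CREATE PROCEDURE "MYSCHEMA"."TOP_CUSTOMERS" (IN min_revenue DECIMAL)
#     LANGUAGE SQLSCRIPT READS SQL DATA AS
#   BEGIN
#     SELECT CUSTOMER_ID, SUM(REVENUE) AS TOTAL
#       FROM "MYSCHEMA"."SALES"
#      GROUP BY CUSTOMER_ID
#     HAVING SUM(REVENUE) >= :min_revenue;
#   END;

# The client only triggers the procedure and receives the final result set.
cursor.execute('CALL "MYSCHEMA"."TOP_CUSTOMERS"(?)', 1000000)
for customer_id, total in cursor.fetchall():
    print(customer_id, total)

cursor.close()
connection.close()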
A new software component can be integrated with SAP HANA either directly, or it
can be built on top of the SAP NetWeaver stack, which can work with the SAP
HANA database using client libraries.
Because of its breadth and depth, this use case scenario is not discussed in
detail as part of this publication.


Chapter 6. The IBM Systems Solution for SAP HANA
This chapter discusses the IBM Systems Solution for SAP HANA. We describe
the hardware and software components, scale-up and scale-out approaches,
workload-optimized models, interoperability with other platforms, and support
processes. We also highlight the IBM Systems solution with the SAP Discovery System.
The following topics are covered:

6.1, IBM eX5 Systems on page 88
6.2, IBM General Parallel File System on page 102
6.3, Custom server models for SAP HANA on page 104
6.4, Scale-out solution for SAP HANA on page 110
6.5, Installation services on page 122
6.6, Interoperability with other platforms on page 122
6.7, Support process on page 123
6.8, IBM Systems Solution with SAP Discovery System on page 125


6.1 IBM eX5 Systems


IBM decided to base its offering for SAP HANA on its high-performance,
scalable IBM eX5 family of servers. These servers represent the IBM high-end
Intel-based enterprise servers. The IBM eX5 systems, all based on the eX5
Architecture, are the HX5 blade server, the x3690 X5, the x3850 X5, and the
x3950 X5. They have a common set of technical specifications and features:
The IBM System x3850 X5 is a 4U highly rack-optimized server. The x3850
X5 also forms the basis of the x3950 X5, the new flagship server of the IBM
x86 server family. These systems are designed for maximum utilization,
reliability, and performance for compute-intensive and memory-intensive
workloads, such as SAP HANA.
The IBM System x3690 X5 is a 2U rack-optimized server. This machine
brings the eX5 features and performance to the mid tier. It is an ideal match
for the smaller, two-CPU configurations for SAP HANA.
The IBM BladeCenter HX5 is a single wide (30 mm) blade server that follows
the same design as all previous IBM blades. The HX5 brings unprecedented
levels of capacity to high-density environments.
When compared with other machines in the System x portfolio, these systems
represent the upper end of the spectrum and are suited for the most demanding
x86 tasks.
For SAP HANA, the x3690 X5 and the x3950 X5 are used, which is why we
feature only these systems in this paper.
Note: For the latest information about the eX5 portfolio, see the IBM
Redpaper publication IBM eX5 Portfolio Overview: IBM System x3850 X5,
x3950 X5, x3690 X5, and BladeCenter HX5, REDP-4650, for further eX5
family members and capabilities. This paper is available at the following web
page:
http://www.redbooks.ibm.com/abstracts/redp4650.html

6.1.1 IBM System x3850 X5 and x3950 X5


The IBM System x3850 X5 (Figure 6-1 on page 89) offers improved performance
and enhanced features, including MAX5 memory expansion and
workload-optimized x3950 X5 models to maximize memory, minimize costs, and
simplify deployment.


Figure 6-1 IBM System x3850 X5 and x3950 X5

The x3850 X5 and the workload-optimized x3950 X5 are the logical successors
to the x3850 M2 and x3950 M2, featuring the IBM eX4 chipset. Compared with
previous generation servers, the x3850 X5 offers:
High memory capacity
Up to 64 DIMMS standard and 96 DIMMs with the MAX5 memory expansion
per 4-socket server
Intel Xeon processor E7 family
Exceptional scalable performance with advanced reliability for your most
data-demanding applications
Extended SAS capacity with eight HDDs and 900 GB 2.5" SAS drives or 1.6
TB of hot-swappable RAID 5 with eXFlash technology
Standard dual-port Emulex 10 GB Virtual Fabric adapter
Ten-core, 8-core, and 6-core processor options with up to 2.4 GHz (10-core),
2.13 GHz (8-core), and 1.86 GHz (6-core) speeds with up to 30 MB L3 cache
Scalable to a two-node system with eight processor sockets and 128 DIMM
sockets
Seven PCIe x8 high-performance I/O expansion slots to support hot-swap
capabilities
Optional embedded hypervisor
The x3850 X5 and x3950 X5 both scale to four processors and 2 Terabytes (TB)
of RAM. With the MAX5 attached, the system can scale to four processors and
3 TB of RAM. Two x3850 X5 servers can be connected together for a single
system image with eight processors and 4 TB of RAM.


With their massive memory capacity and computing power, the IBM System
x3850 X5 and x3950 X5 rack-mount servers are the ideal platform for
high-memory demanding, high-workload applications, such as SAP HANA.

6.1.2 IBM System x3690 X5


The IBM System x3690 X5 (Figure 6-2) is a 2U rack-optimized server that brings
new features and performance to the mid tier.

Figure 6-2 IBM System x3690 X5

This machine is a two-socket, scalable system that offers up to four times the
memory capacity of current two-socket servers. It supports the following
specifications:
- Up to two sockets for Intel Xeon E7 processors. Depending on the processor model, processors have six, eight, or ten cores.
- Scalable from 32 to 64 DIMM sockets with the addition of a MAX5 memory expansion unit.
- Advanced networking capabilities: a Broadcom 5709 dual Gb Ethernet controller standard in all models, and an Emulex 10 Gb dual-port Ethernet adapter standard on some models and optional on all others.
- Up to 16 hot-swap 2.5-inch SAS HDDs and up to 16 TB of internal storage with RAID 0, 1, or 10 to maximize throughput and ease installation. RAID 5 is optional. The system comes standard with one HDD backplane that can hold four drives; a second and third backplane are optional for an additional 12 drives.
- New eXFlash high-IOPS solid-state storage technology.
- Five PCIe 2.0 slots.
- Integrated Management Module (IMM) for enhanced systems management capabilities.


The x3690 X5 features the IBM eXFlash internal storage using solid state drives
to maximize the number of I/O operations per second (IOPS). All configurations
for SAP HANA based on x3690 X5 use eXFlash internal storage for high IOPS
log storage or for both data and log storage.
The x3690 X5 is an excellent choice for a memory-demanding and
performance-demanding business application, such as SAP HANA. It provides
maximum performance and memory in a dense 2U package.

6.1.3 Intel Xeon processor E7 family


The IBM eX5 portfolio of servers uses CPUs from the Intel Xeon processor E7 family
to maximize performance. These processors are the latest in a long line of
high-performance processors.
The Intel Xeon processor E7 family CPUs are the latest Intel scalable processors
and can be used to scale up to four or more processors. When used in the IBM
System x3850 X5 or x3950 X5, these servers can scale up to eight processors.
The Intel Xeon E7 processors have many features that are relevant to the SAP HANA workload. We cover some of these features in the following sections. For more in-depth information about the benefits of the Intel Xeon processor E7 family for SAP HANA, see the Intel white paper Analyzing Business as it Happens, April 2011, available for download at:
http://www.Intel.com/en_us/ssets/pdf/whitepaper/mc_sap_wp.pdf

Instruction set extensions


SAP HANA makes use of several instruction set extensions of the Intel Xeon E7 processors. For example, these extensions allow multiple data items to be processed with a single instruction (SIMD). SAP HANA uses these instructions to speed up compression and decompression of in-memory data and to improve search performance.
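
To illustrate the principle, the following minimal C sketch (our own illustration, not SAP HANA code) uses the SSE2 intrinsics available on these processors to compare four 32-bit values against a search key with a single instruction, similar in spirit to the way a column scan can test several encoded values at once:

#include <emmintrin.h>   /* SSE2 intrinsics, supported by the Intel Xeon E7 family */
#include <stdio.h>

/* Count how many 32-bit values in 'column' equal 'key',
   processing four values per SIMD instruction. */
static int count_matches(const int *column, int n, int key)
{
    __m128i vkey = _mm_set1_epi32(key);              /* broadcast key into all four lanes */
    int matches = 0;
    int i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128i data = _mm_loadu_si128((const __m128i *)&column[i]);
        __m128i eq   = _mm_cmpeq_epi32(data, vkey);  /* four comparisons at once */
        int mask = _mm_movemask_epi8(eq);            /* 16-bit byte mask of the result */
        matches += __builtin_popcount(mask) / 4;     /* each matching lane sets 4 bits */
    }
    for (; i < n; i++)                               /* scalar tail for the remainder */
        matches += (column[i] == key);
    return matches;
}

int main(void)
{
    int column[] = { 7, 3, 7, 9, 7, 1, 2, 7, 7 };
    printf("matches: %d\n", count_matches(column, 9, 7));   /* prints 5 */
    return 0;
}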

Intel Hyper-Threading Technology


Intel Hyper-Threading Technology enables a single physical processor to
execute two separate code streams (threads) concurrently on a single processor
core. To the operating system, a processor core with Hyper-Threading appears
as two logical processors, each of which has its own architectural state.
Hyper-Threading Technology is designed to improve server performance by
exploiting the multi-threading capability of operating systems and server
applications. SAP HANA makes extensive use of Hyper-Threading to
parallelize processing.


For more information about Hyper-Threading Technology, see the following web page:
http://www.intel.com/technology/platform-technology/hyper-threading/
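
As a simple illustration (a generic C sketch, not SAP HANA code), an application can ask the operating system how many logical processors are online; on a Hyper-Threading-enabled system this is typically twice the number of physical cores, and a heavily parallelized application can size its thread pools accordingly:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Number of logical processors currently online; with Hyper-Threading
       enabled, this is typically twice the number of physical cores. */
    long logical_cpus = sysconf(_SC_NPROCESSORS_ONLN);
    if (logical_cpus < 1) {
        perror("sysconf");
        return 1;
    }
    printf("Logical processors available: %ld\n", logical_cpus);
    /* A worker thread pool could be sized to this value, for example. */
    return 0;
}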

Intel Turbo Boost Technology 2.0


Intel Turbo Boost Technology dynamically turns off unused processor cores and
increases the clock speed of the cores in use. For example, with six cores active,
a 2.4 GHz 10-core processor can run the cores at 2.67 GHz. With only four cores
active, the same processor can run those cores at 2.8 GHz. When the cores are
needed again, they are dynamically turned back on and the processor frequency
is adjusted accordingly. When temperature, power, or current exceed
factory-configured limits and the processor is running higher than the base
operating frequency, the processor automatically reduces the core frequency to
reduce temperature, power, and current.
Availability of Turbo Boost Technology depends on the processor model installed in the eX5 systems. For ACPI-aware operating systems, no changes are required to take advantage of it. Turbo Boost Technology can be engaged with any number of cores enabled and active, resulting in increased performance of both multi-threaded and single-threaded workloads.
For more information about Turbo Boost Technology, see the following web page:
http://www.intel.com/technology/turboboost/


Quick Path Interconnect (QPI)


Earlier versions of the Intel Xeon processor were connected by a parallel bus to a core chipset, which functioned as both memory and I/O controller. The Intel Xeon E7 processors used in IBM eX5 servers integrate a memory controller into each processor. Processor-to-processor communication is carried over shared-clock, coherent quick path interconnect (QPI) links, and I/O is transported over non-coherent QPI links through I/O hubs (Figure 6-3).


Figure 6-3 Quick path interconnect, as in the eX5 portfolio

In previous designs, the entire range of memory was accessible through the core
chipset by each processor, which is called a shared memory architecture. This
new design creates a non-uniform memory access (NUMA) system in which a
portion of the memory is directly connected to the processor where a given
thread is running, and the rest must be accessed over a QPI link through another
processor. Similarly, I/O can be local to a processor or remote through another
processor.
For more information about QPI, see the following web page:
http://www.intel.com/technology/quickpath/


Reliability, availability, and serviceability


Most system errors are handled in hardware by the use of technologies, such as
error checking and correcting (ECC) memory. The E7 processors have additional
reliability, availability, and serviceability (RAS) features due to their architecture:
- Cyclic redundancy checking (CRC) on the QPI links: The data on the QPI link is checked for errors.
- QPI packet retry: If a data packet on the QPI link has errors or cannot be read, the receiving processor can request that the sending processor retry sending the packet.
- QPI clock failover: In the event of a clock failure on a coherent QPI link, the processor on the other end of the link can take over providing the clock. This is not required on the QPI links from processors to I/O hubs because these links are asynchronous.
- SMI packet retry: If a memory packet has errors or cannot be read, the processor can request that the packet be resent from the memory buffer.
- Scalable memory interconnect (SMI) retry: If there is an error on an SMI link, or a memory transfer fails, the command can be retried.
- SMI lane failover: When an SMI link exceeds the preset error threshold, it is disabled, and memory transfers are routed through the other SMI link to the memory buffer.
All these features help prevent data from being corrupted or lost in memory. This is especially important for an application such as SAP HANA, because any failure in the area of memory or inter-CPU communication leads to an outage of the application or even of the complete system. With huge amounts of data loaded into main memory, even a restart of only the application means considerable time to return to operation.

Machine Check Architecture


The Intel Xeon processor E7 family also features the machine check architecture
(MCA), which is a RAS feature that enables the handling of system errors that
otherwise require the operating system to be halted. For example, if a dead or
corrupt memory location is discovered, but it cannot be recovered at the memory
subsystem level, and provided that it is not in use by the system or an
application, an error can be logged but the operation of the server can continue.


If it is in use by a process, the application to which the process belongs can be aborted or informed about the situation.
Implementation of the MCA requires hardware support, firmware support (such as that found in the unified extensible firmware interface (UEFI)), and operating system support. Microsoft, SUSE, Red Hat, and other operating system vendors have included support for the Intel MCA on the Intel Xeon processors in their latest operating system versions.
SAP HANA is the first application to leverage the MCA to handle system errors so that the application is not terminated when such an error occurs. Figure 6-4 shows how SAP HANA leverages the Machine Check Architecture.


Figure 6-4 Intel Machine Check Architecture (MCA) with SAP HANA

If a memory error is encountered that cannot be corrected by the hardware, the processor sends an MCA recovery signal to the operating system. An operating
system supporting MCA, such as SUSE Linux Enterprise Server used in the SAP
HANA appliance, now determines whether the affected memory page is in use
by an application. If unused, it unmaps the memory page and marks it as bad. If the page is used by an application, traditionally the OS has to halt that application or, in the worst case, stop all processing and halt the system. With SAP HANA being MCA-aware, the operating system can instead signal the error situation to SAP HANA, giving it the chance to repair the effects of the memory error.


Using the knowledge of its internal data structures, SAP HANA can decide what
course of action to take. If the corrupted memory space is occupied by one of the
SAP in-memory tables, SAP HANA reloads the associated tables. In addition, it
analyzes the failure and checks whether it affects other stored or committed data,
in which case it uses savepoints and database logs to reconstruct the committed
data in a new, unaffected memory location.
With the support of MCA, SAP HANA can take appropriate action at the level of
its own data structures to ensure a smooth return to normal operation and avoid
a time-consuming restart or loss of information.
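
On Linux, an MCA-aware application is typically informed of such an uncorrectable memory error through a SIGBUS signal that carries the affected address. The following C sketch is a simplified assumption about that general mechanism, not SAP HANA's actual implementation; it shows how an application could register a handler and invoke its own recovery logic, analogous to SAP HANA reloading the affected table:

#define _GNU_SOURCE
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical recovery hook: a real in-memory database would identify the
   table or structure that owns 'addr' and rebuild it from savepoints and logs. */
static void reconstruct_data_at(void *addr)
{
    /* Illustration only; fprintf is not async-signal-safe. */
    fprintf(stderr, "rebuilding data that covered address %p\n", addr);
}

static void sigbus_handler(int sig, siginfo_t *info, void *ctx)
{
    (void)sig; (void)ctx;
    if (info->si_code == BUS_MCEERR_AO || info->si_code == BUS_MCEERR_AR) {
        /* The kernel reports an uncorrectable memory error at info->si_addr. */
        reconstruct_data_at(info->si_addr);
        return;  /* for BUS_MCEERR_AR, a real handler must also replace the page */
    }
    _exit(EXIT_FAILURE);  /* any other bus error: terminate */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_sigaction = sigbus_handler;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGBUS, &sa, NULL);  /* register the MCA-aware error handler */

    /* ... normal operation of the application ... */
    return 0;
}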

I/O hubs
The connection to I/O devices (such as keyboard, mouse, and USB) and to I/O
adapters (such as hard disk drive controllers, Ethernet network interfaces, and
Fibre Channel host bus adapters) is handled by I/O hubs, which then connect to
the processors through QPI links. Figure 6-3 on page 93 shows the I/O Hub
connectivity. Connections to the I/O devices are fault tolerant because data can
be routed over either of the two QPI links to each I/O hub.
For optimal system performance in the four processor systems (with two I/O
hubs), balance high-throughput adapters across the I/O hubs. The configurations
used for SAP HANA contain several components that require high throughput
I/O:
- Dual-port 10 Gb Ethernet adapters
- ServeRAID controllers to connect the SAS drives
- High IOPS PCIe adapters
To ensure optimal performance, the placement of these components in the PCIe
slots was optimized according to the I/O architecture outlined above.

6.1.4 Memory
For an in-memory appliance such as SAP HANA, a system's main memory, its capacity, and its performance play an important role. The Intel Xeon processor E7 family (Figure 6-5 on page 97) has a memory architecture that is well suited to the requirements of such an appliance.
The E7 processors have two SMIs. Therefore, memory needs to be installed in matched pairs. For better performance, or for systems connected together, memory must be installed in sets of four. The memory used in the eX5 systems is DDR3 SDRAM registered DIMMs. All of the memory runs at 1066 MHz or less, depending on the processor.



Figure 6-5 Memory architecture with Intel Xeon processor E7 family

Memory DIMM placement


The eX5 servers support a variety of ways to install memory DIMMs. It is
important to understand that because of the layout of the SMI links, memory
buffers, and memory channels, you must install the DIMMs in the correct
locations to maximize performance.
Figure 6-6 on page 98 shows eight possible memory configurations for the two
memory cards and 16 DIMMs connected to each processor socket in an x3850
X5. Similar configurations apply to the x3690 X5 and HX5. Each configuration
has a relative performance score. The following key information from this chart is
important:
- The best performance is achieved by populating all memory DIMMs in the server (configuration 1 in Figure 6-6 on page 98).
- Populating only one memory card per socket can result in approximately a 50% performance degradation. (Compare configuration 1 with 5.)
- Memory performance is better if you install DIMMs on all memory channels than if you leave any memory channels empty. (Compare configuration 2 with 3.)
- Two DIMMs per channel result in better performance than one DIMM per channel. (Compare configuration 1 with 2, and compare 5 with 6.)


Figure 6-6 Relative memory performance based on DIMM placement (one processor and two memory cards shown). The eight configurations vary the number of memory controllers used per processor (one or two), the number of DIMMs per channel (one or two), and the number of DIMMs per memory controller; the relative performance scores range from 1.0 for a fully populated configuration down to 0.29 for a minimally populated one, with intermediate scores of 0.94, 0.61, 0.58, 0.51, 0.47, and 0.31.

Nonuniform memory architecture


Nonuniform memory architecture (NUMA) is an important consideration when
configuring memory because a processor can access its own local memory
faster than non-local memory. The configurations used for SAP HANA do not use
all available DIMM sockets. For configurations like these, another principle to
consider when configuring memory is that of balance. A balanced configuration
has all of the memory cards configured with the same amount of memory. This
principle helps to keep remote memory access to a minimum.
A server with a NUMA architecture, such as the servers in the eX5 family, has local and
remote memory. For a given thread running in a processor core, local memory
refers to the DIMMs that are directly connected to that particular processor.
Remote memory refers to the DIMMs that are not connected to the processor
where the thread is running currently. Remote memory is attached to another processor in the system and must be accessed through a QPI link (Figure 6-3 on page 93). Using remote memory adds latency, and the more such latencies add up in a server, the more performance can degrade. Starting with a memory configuration where each CPU has the same local RAM capacity is a logical step toward keeping remote memory accesses to a minimum.
In a NUMA system, each processor has fast, direct access to its own memory
modules, reducing the latency that arises due to bus-bandwidth contention. SAP
HANA is NUMA-aware, and thus benefits from this direct connection.
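
As an illustration of NUMA awareness, the following C sketch (a generic example using the Linux libnuma library, not part of the SAP HANA appliance software) queries the number of NUMA nodes and allocates a buffer directly on node 0, so that threads running on that node's CPUs access the data locally instead of over a QPI link:

#include <numa.h>     /* Linux libnuma; link with -lnuma */
#include <stdio.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }

    int nodes = numa_max_node() + 1;   /* for example, 4 on a 4-socket x3850 X5 */
    printf("NUMA nodes: %d\n", nodes);

    /* Allocate 64 MB directly on node 0: threads bound to node 0 CPUs
       then access this buffer locally, without crossing a QPI link. */
    size_t size = 64UL * 1024 * 1024;
    void *buf = numa_alloc_onnode(size, 0);
    if (buf == NULL) {
        fprintf(stderr, "allocation on node 0 failed\n");
        return 1;
    }

    /* ... use the buffer from threads running on node 0 ... */

    numa_free(buf, size);
    return 0;
}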

Hemisphere mode
Hemisphere mode is an important performance optimization of the Intel Xeon
processor E7, 6500, and 7500 product families. Hemisphere mode is
automatically enabled by the system if the memory configuration allows it. This
mode interleaves memory requests between the two memory controllers within
each processor, enabling reduced latency and increased throughput. It also
allows the processor to optimize its internal buffers to
maximize memory throughput.
Hemisphere mode is enabled only when the memory configuration behind each
memory controller on a processor is identical. In addition, because eight DIMMs
per processor are required for using all memory channels, eight DIMMs per
processor must be installed at a time for optimized memory performance.

6.1.5 Flash technology storage


As discussed in 2.1.2, Data persistence on page 11, storage technology
providing high IOPS capabilities with low latency is a key component of the
infrastructure for SAP HANA. The IBM eX5 systems used for the IBM Systems
Solution for SAP HANA feature two kinds of flash technology storage devices:
- eXFlash, as used in the IBM System x3690 X5-based configuration
- High IOPS adapters, as used in the IBM System x3950 X5-based configurations
The following sections provide more information about these options.


eXFlash
IBM eXFlash is the name given to the eight 1.8-inch solid state drives (SSDs),
the backplanes, SSD hot swap carriers, and indicator lights that are available for
the x3850 X5/x3950 X5 and x3690 X5. Each eXFlash can be put in place of four
SAS or SATA disks. The eXFlash units connect to the same types of ServeRAID
disk controllers as the SAS/SATA disks. Figure 6-7 shows an eXFlash unit, with
the status light assembly on the left side.


Figure 6-7 IBM eXFlash unit

In addition to using less power than rotating magnetic media, the SSDs are more
reliable and can service many more I/O operations per second (IOPS). These
attributes make them suited to I/O-intensive applications, such as transaction
processing, logging, backup and recovery, and business intelligence. Built on
enterprise-grade MLC NAND flash memory, the SSD drives used in eXFlash
deliver up to 30,000 IOPS per single drive. Combined into an eXFlash unit, these
drives can deliver up to 240,000 IOPS and up to 2 GBps of sustained read
throughput per eXFlash unit.
In addition to its superior performance, eXFlash offers superior uptime with three
times the reliability of mechanical disk drives. SSDs have no moving parts to fail.
Each drive has its own backup power circuitry, error correction, data protection,
and thermal monitoring circuitry. They use Enterprise Wear-Leveling to extend
their use even longer.
A single eXFlash unit accommodates up to eight hot-swap SSDs and can be
connected to up to 2 performance-optimized controllers. The x3690 X5-based
models for SAP HANA enable RAID protection for the SSD drives by using two
ServeRAID M5015 controllers with the ServeRAID M5000 Performance
Accelerator Key for the eXFlash units.

High IOPS adapter


The IBM High IOPS SSD PCIe Adapters provide a new generation of ultra-high-performance storage based on solid state device technology for System x and BladeCenter. These adapters are alternatives to disk drives and
are available in several sizes, from 160 GB to 1.2 TB. Designed for
high-performance servers and computing appliances, these adapters deliver
throughput of up to 900,000 I/O operations per second (IOPS), while providing
the added benefits of lower power, cooling, and management overhead and a
smaller storage footprint. Based on standard PCIe architecture coupled with
silicon-based NAND clustering storage technology, the High IOPS adapters are
optimized for System x rack-mount systems and can be deployed in blades
through the PCIe expansion units. They are available in storage capacities up to
2.4 TB.
These adapters use NAND flash memory as the basic building block of
solid-state storage and contain no moving parts. Thus, they are less sensitive to
issues associated with vibration, noise, and mechanical failure. They function as
a PCIe storage and controller device, and after the appropriate drivers are
loaded, the host operating system sees them as block devices. Therefore, these
adapters cannot be used as bootable devices.
The IBM High IOPS PCIe Adapters combine high IOPS performance with low latency. As an example, with 512 KB block random reads, the IBM 1.2TB High IOPS MLC Mono Adapter can deliver 143,000 IOPS, compared with 420 IOPS for a 15 K RPM 146 GB disk drive. The read access latency is about 68 microseconds, roughly one-seventieth of the latency of a 15 K RPM 146 GB disk drive (about 5 ms, or 5,000 microseconds). The write access latency is even lower, at about 15 microseconds.
Reliability features include the use of Enterprise-grade MLC (eMLC), advanced
wear-leveling, ECC protection, and Adaptive Flashback redundancy for RAID-like
chip protection with self-healing capabilities, providing unparalleled reliability and
efficiency. Advanced bad-block management algorithms enable taking blocks out
of service when their failure rate becomes unacceptable. These reliability
features provide a predictable lifetime and up to 25 years of data retention.


The x3950 X5-based models of the IBM Systems Solution for SAP HANA come
with IBM High IOPS adapters, either with 320 GB (7143-H1x), 640 GB
(7143-H2x, -H3x), or 1.2 TB storage capacity (7143-HAx, -HBx, -HCx).
Figure 6-8 shows the IBM 1.2TB High IOPS MLC Mono adapter, which comes
with the x3950 based 2012 models (7143-HAx, -HBx, -HCx).

Figure 6-8 IBM 1.2 TB High IOPS MLC Mono adapter

6.2 IBM General Parallel File System


The IBM General Parallel File System (GPFS) is a key component of the IBM
Systems Solution for SAP HANA. It is a high-performance shared-disk file
management solution that can provide faster, more reliable access to a common
set of file data. It enables a view of distributed data with a single global
namespace.
GPFS leverages its cluster architecture to provide quicker access to your file
data. File data is automatically spread across multiple storage devices, providing
optimal use of your available storage to deliver high performance.
GPFS is designed for high-performance parallel workloads. Data and metadata
flow from all the nodes to all the disks in parallel under control of a distributed
lock manager. It has a flexible cluster architecture that enables the design of a
data storage solution that not only meets current needs but that can also quickly
be adapted to new requirements or technologies. GPFS configurations include
direct-attached storage, network block input and output (I/O), or a combination of
the two, and multi-site operations with synchronous data mirroring.


GPFS can intelligently prefetch data into its buffer pool, issuing I/O requests in
parallel to as many disks as necessary to achieve the peak bandwidth of the
underlying storage-hardware infrastructure. GPFS recognizes multiple I/O
patterns, including sequential, reverse sequential, and various forms of striped
access patterns. In addition, for high-bandwidth environments, GPFS can read or
write large blocks of data in a single operation, minimizing the overhead of I/O
operations.
Expanding beyond a storage area network (SAN) or locally attached storage, a
single GPFS file system can be accessed by nodes using a TCP/IP or InfiniBand
connection. Using this block-based network data access, GPFS can outperform
network-based sharing technologies, such as NFS and even local file systems
such as the EXT3 journaling file system for Linux or Journaled File System.
Network block I/O (also called network shared disk (NSD)) is a software layer
that transparently forwards block I/O requests from a GPFS client application
node to an NSD server node to perform the disk I/O operation and then passes
the data back to the client. Using a network block I/O, configuration can be more
cost effective than a full-access SAN.
Storage pools enable you to transparently manage multiple tiers of storage
based on performance or reliability. You can use storage pools to transparently
provide the appropriate type of storage to multiple applications or different
portions of a single application within the same directory. For example, GPFS
can be configured to use low-latency disks for index operations and high-capacity
disks for data operations of a relational database. You can make these
configurations even if all database files are created in the same directory.
For optimal reliability, GPFS can be configured to eliminate single points of
failure. The file system can be configured to remain available automatically in the
event of a disk or server failure. A GPFS file is designed to transparently fail over
token (lock) operations and other GPFS cluster services, which can be
distributed throughout the entire cluster to eliminate the need for dedicated
metadata servers. GPFS can be configured to automatically recover from node,
storage, and other infrastructure failures.
GPFS provides this functionality by supporting the following:
- Data replication to increase availability in the event of a storage media failure
- Multiple paths to the data in the event of a communications or server failure
- File system activity logging, enabling consistent fast recovery after system failures
In addition, GPFS supports snapshots to provide a space-efficient image of a file
system at a specified time, which allows online backup and can help protect
against user error.


GPFS offers time-tested reliability and has been installed on thousands of nodes across industries, from weather research to multimedia, retail, financial industry analytics, and web service providers. GPFS is also the basis of many cloud storage offerings.
The IBM Systems solution for SAP HANA benefits in several ways from the
features of GPFS:
- GPFS provides a stable, industry-proven, cluster-capable file system for SAP HANA.
- GPFS adds extra performance to the storage devices by striping data across devices.
- GPFS enables the IBM Systems solution for SAP HANA to grow beyond the capabilities of a single system, into a scale-out solution, without introducing the need for external storage.
- GPFS adds high-availability and disaster recovery features to the solution.
This makes GPFS the ideal file system for the IBM Systems solution for SAP
HANA.

6.3 Custom server models for SAP HANA


Following the appliance-like delivery model for SAP HANA, IBM created several
custom server models for SAP HANA. These workload-optimized models are
designed to match and exceed the performance requirements and the functional
requirements as specified by SAP. With a small set of IBM System x
workload-optimized models for SAP HANA, all sizes of SAP HANA solutions can
be built, from the smallest to large installations.

6.3.1 IBM System x workload-optimized models for SAP HANA


In the first half of 2011, IBM announced a full range of IBM System x
workload-optimized models for SAP HANA, covering all SAP HANA T-shirt sizes
with the newest generation technology. Because there is no direct relationship
between the workload-optimized models and the SAP HANA T-shirt sizes, we
refer to these models as building blocks. In some cases, there are several building blocks available for one T-shirt size; in others, two building blocks have to
be combined to build a specific T-shirt size. Table 6-1 shows all building blocks
announced in 2011 and their features.
Table 6-1 IBM System x workload-optimized models for SAP HANA, 2011 models

Building block | Server (MTM)        | CPUs                  | Main memory             | Log storage                 | Data storage           | Preload
XS             | x3690 X5 (7147-H1x) | 2x Intel Xeon E7-2870 | 128 GB DDR3 (8x 16 GB)  | 8x 50 GB 1.8-inch MLC SSD   | 8x 300 GB 10 K SAS HDD | Yes
S              | x3690 X5 (7147-H2x) | 2x Intel Xeon E7-2870 | 256 GB DDR3 (16x 16 GB) | 8x 50 GB 1.8-inch MLC SSD   | 8x 300 GB 10 K SAS HDD | Yes
SSD            | x3690 X5 (7147-H3x) | 2x Intel Xeon E7-2870 | 256 GB DDR3 (16x 16 GB) | 10x 200 GB 1.8-inch MLC SSD (combined log and data storage) | Yes
S+             | x3950 X5 (7143-H1x) | 2x Intel Xeon E7-8870 | 256 GB DDR3 (16x 16 GB) | 320 GB High IOPS adapter    | 8x 600 GB 10 K SAS HDD | Yes
M              | x3950 X5 (7143-H2x) | 4x Intel Xeon E7-8870 | 512 GB DDR3 (32x 16 GB) | 640 GB High IOPS adapter    | 8x 600 GB 10 K SAS HDD | Yes
L Option       | x3950 X5 (7143-H3x) | 4x Intel Xeon E7-8870 | 512 GB DDR3 (32x 16 GB) | 640 GB High IOPS adapter    | 8x 600 GB 10 K SAS HDD | No

Note: x = country-specific letter (for example, the EMEA MTM is 7147-H1G, and the US MTM is 7147-H1U). Contact your IBM representative for regional part numbers.

In addition to the models listed in Table 6-1, there are models specific to a
geographic region:
- Models 7147-H7x, -H8x, and -H9x are for Canada only and are the same configurations as H1x, H2x, and H3x, respectively.
- Models 7143-H4x and -H5x are for Canada only and are the same configurations as H1x and H2x, respectively.


In October 2012, IBM announced a new set of IBM System x workload-optimized models for SAP HANA, updating some of the components
with newer generation versions. Table 6-2 shows all building blocks announced in
2012 and their features.
Table 6-2 IBM System x workload-optimized models for SAP HANA, 2012 models

Building block | Server (MTM)        | CPUs                  | Main memory             | Log storage                 | Data storage           | Preload
XS             | x3690 X5 (7147-HAx) | 2x Intel Xeon E7-2870 | 128 GB DDR3 (8x 16 GB)  | 10x 200 GB 1.8-inch MLC SSD (combined log and data storage) | Yes
S              | x3690 X5 (7147-HBx) | 2x Intel Xeon E7-2870 | 256 GB DDR3 (16x 16 GB) | 10x 200 GB 1.8-inch MLC SSD (combined log and data storage) | Yes
S+             | x3950 X5 (7143-HAx) | 2x Intel Xeon E7-8870 | 256 GB DDR3 (16x 16 GB) | 1.2 TB High IOPS adapter    | 8x 900 GB 10 K SAS HDD | Yes
M              | x3950 X5 (7143-HBx) | 4x Intel Xeon E7-8870 | 512 GB DDR3 (32x 16 GB) | 1.2 TB High IOPS adapter    | 8x 900 GB 10 K SAS HDD | Yes
L Option       | x3950 X5 (7143-HCx) | 4x Intel Xeon E7-8870 | 512 GB DDR3 (32x 16 GB) | 1.2 TB High IOPS adapter    | 8x 900 GB 10 K SAS HDD | No

Note: x = country-specific letter (for example, the EMEA MTM is 7147-HAG, and the US MTM is 7147-HAU). Contact your IBM representative for regional part numbers.

All models (except for 7143-H3x and 7143-HCx) come with a preload comprising
SUSE Linux Enterprise Server for SAP Applications (SLES for SAP) 11 SP1,
IBM GPFS, and the SAP HANA software stack. Licenses and maintenance fees
(for three years) for SLES for SAP and GPFS are included. Section GPFS
license information on page 160 has an overview about which type of GPFS
license comes with a specific model, and the amount of Processor Value Units
(PVU) included. The licenses for the SAP software components have to be
acquired separately from SAP.
The L-Option building blocks (7143-H3x or 7143-HCx) are intended as an
extension to an M building block (7143-H2x or 7143-HBx). When building an
L-Size SAP HANA system, one M building block has to be combined with an
L-Option building block, leveraging eX5 scalability. Both systems then act as one
single eight-socket, 1 TB server. Therefore, the L-Option building blocks do not require a software preload; they do, however, come with the required additional software licenses for GPFS and SLES for SAP.
The building blocks are configured to match the SAP HANA sizing requirements.
The main memory sizes match the number of CPUs, to give the correct balance
between processing power and data volume. Also, the storage devices in the systems provide the storage capacity required to match the amount of main
memory.
All systems come with storage for both the data volume and the log volume
(Figure 6-9). Savepoints are stored on a RAID protected array of 10 K SAS hard
drives, optimized for data throughput. The SAP HANA database logs are stored
on flash technology storage devices:
- RAID-protected, hot-swap eXFlash SSD drives on the models based on IBM System x3690 X5
- Flash-based High IOPS PCIe adapters for the models based on IBM System x3950 X5
These flash technology storage devices are optimized for high IOPS
performance and low latency to provide the SAP HANA database with a log
storage that allows the highest possible performance. Because a transaction in
the SAP HANA database can only return after the corresponding log entry is
written to the log storage, high IOPS performance and low latency are key to
database performance.
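
The following C sketch illustrates the general write-ahead-logging principle described here (generic code, not SAP HANA's implementation): the commit can be acknowledged only after the log record is durable on the log device, so the latency of that synchronous write goes directly into the transaction response time, which is why low-latency flash storage is used for the log volume:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Append one log record and make it durable before the commit returns.
   The fsync() is the step whose duration is bounded by the log device,
   hence the use of SSDs or PCIe flash for the log volume. */
static int commit_transaction(int log_fd, const char *record, size_t len)
{
    if (write(log_fd, record, len) != (ssize_t)len)
        return -1;
    if (fsync(log_fd) != 0)   /* wait until the record is persistent */
        return -1;
    return 0;                 /* only now may the commit be acknowledged */
}

int main(void)
{
    int fd = open("txn.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0) { perror("open"); return 1; }

    const char *rec = "COMMIT txn=42\n";
    if (commit_transaction(fd, rec, strlen(rec)) != 0)
        perror("commit");

    close(fd);
    return 0;
}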
The building blocks based on the IBM System x3690 X5 (except for the older 7147-H1x and 7147-H2x) come with combined data and log storage on an array
of RAID-protected, hot-swap eXFlash SSD drives. Optimized for throughput, high
IOPS performance, and low latency, these building blocks give extra flexibility
when dealing with large amounts of log data, savepoint data, or backup data.

Figure 6-9 SAP HANA data persistency with the internal storage of the workload-optimized systems


6.3.2 SAP HANA T-shirt sizes


This section provides information about how the SAP HANA T-shirt sizes, as described in 3.3.1, The concept of T-shirt sizes for SAP HANA on page 26, can be realized using the IBM System x workload-optimized models for SAP HANA. The model numbers given might have to be replaced by a region-specific equivalent by changing the x to a region-specific letter identifier (see 6.3.1, IBM System x workload-optimized models for SAP HANA on page 104). The options are as follows:
- For a T-shirt size XS 128 GB SAP HANA system, building block XS (7147-H1x or 7147-HAx) is the correct choice. These x3690 X5-based building blocks are the entry-level models of the line of IBM System x workload-optimized systems for SAP HANA.
- A T-shirt size S 256 GB can be realized either with the newer S building block (7147-HBx) or with the older SSD building block (7147-H3x) with combined data and log storage on eXFlash SSD drives. The older S building block (7147-H2x), equipped with separate storage for data (SAS drives) and logs (SSD drives), is suitable too, but has limitations with regard to the scale-out solution. All three are based on IBM System x3690 X5.
- For a T-shirt size S 256 GB with upgradability to M (that is, a T-shirt size S+), the S+ building block (7143-H1x or 7143-HAx) is the perfect choice. Unlike the S and SSD building blocks, it is based on the IBM System x3950 X5 4-socket system to ensure upgradability.
- A T-shirt size M 512 GB can be realized with the M building block (7143-H2x or 7143-HBx). Because it can be upgraded to a T-shirt size L using the L-Option building block, it is also the perfect fit if a T-shirt size M+ is required.
- For a T-shirt size L 1 TB, one M building block (7143-H2x or 7143-HBx) must be combined with an L-Option building block (7143-H3x or 7143-HCx), connected together to form a single server using eX5 scaling technology.


Table 6-3 gives an overview of the SAP HANA T-shirt sizes and their relation to the IBM custom models for SAP HANA.
Table 6-3 SAP HANA T-shirt sizes and their relation to the IBM custom models

SAP T-shirt size           | XS                   | S                               | S+                   | M and M+             | L
Compressed data in memory  | 64 GB                | 128 GB                          | 128 GB               | 256 GB               | 512 GB
Server main memory         | 128 GB               | 256 GB                          | 256 GB               | 512 GB               | 1024 GB
Number of CPUs             | 2                    | 2                               | 2                    | 4                    | 8
Mapping to building blocks | 7147-HAx or 7147-H1x | 7147-HBx, 7147-H3x, or 7147-H2x | 7143-HAx or 7143-H1x | 7143-HBx or 7143-H2x | 7143-HBx or 7143-H2x combined with 7143-HCx or 7143-H3x

Note: For a region-specific equivalent, see 6.3.1, IBM System x workload-optimized models for SAP HANA on page 104.

6.3.3 Scale-up
This section discusses upgradability, or scale-up, and shows how the IBM custom models for SAP HANA can be upgraded to accommodate the need to grow into bigger T-shirt sizes.
To accommodate growth, the IBM Systems Solution for SAP HANA can be scaled in these ways:
- Scale-up approach: Increase the capabilities of a single system by adding more components.
- Scale-out approach: Increase the capabilities of the solution by using multiple systems working together in a cluster.
We discuss the scale-out approach in 6.4, Scale-out solution for SAP HANA on
page 110.


The building blocks of the IBM Systems Solution for SAP HANA, as described
previously, were designed with extensibility in mind. The following upgrade options
exist:
- An XS building block can be upgraded to an S-size SAP HANA system by adding 128 GB of main memory to the system.
- An S+ building block can be upgraded to an M-size SAP HANA system by adding two more CPUs and another 256 GB of main memory. For the 7143-H1x, another 320 GB High IOPS adapter needs to be added to the system; the newer 7143-HAx already includes the required flash capacity.
- An M building block (7143-H2x or 7143-HBx) can be extended with the L option (7143-H3x or 7143-HCx) to form an L-size SAP HANA system. The 2011 models can be combined with the 2012 models; for example, the older 7143-H2x can be extended with the new 7143-HCx.
With the option to upgrade S+ to M, and M to L, IBM can provide an unmatched upgrade path from a T-shirt size S up to a T-shirt size L, without the need to retire a single piece of hardware.
Of course, upgrading server hardware requires system downtime. However, due to GPFS's capability to add storage capacity to an existing GPFS file system by just adding devices, data residing on the system remains intact. We nevertheless recommend that you do a backup of the data before changing the system's configuration.

6.4 Scale-out solution for SAP HANA


Up to now, we have talked about single-server solutions. Although the scale-up approach gives flexibility to expand the capabilities of an SAP HANA installation, there might be cases where the required data volumes exceed the capabilities of a single server. To meet such requirements, the IBM Systems Solution for SAP HANA supports a scale-out approach (that is, combining a number of systems into a clustered solution, which represents a single SAP HANA instance). An SAP HANA system can span multiple servers, partitioning the data, to be able to hold and process larger amounts of data than a single server can accommodate.


To illustrate this scale-out solution, the following figures show a schematic depiction of such an installation. Figure 6-10 shows a single-node SAP HANA system.


Figure 6-10 Single-node SAP HANA system

This single-node solution has these components:
- The SAP HANA software (SAP HANA database with index server and statistic server)
- The shared file system (GPFS) on the two types of storage:
  - The data storage (on SAS disks), here referred to as HDD, which holds the savepoints
  - The log storage (on SSD drives or PCIe flash devices), here referred to as Flash, which holds the database logs
This single node represents one single SAP HANA database consisting of one
single database partition. Both the savepoints (data01) and the logs (log01) are
stored once (that is, they are not replicated), denoted as being primary data in
Figure 6-10.


6.4.1 Scale-out solution without high-availability capabilities


The first step towards a scale-out solution was to introduce a clustered solution
without failover or high-availability (HA) capabilities. IBM was the first hardware
partner to validate a scale-out solution for SAP HANA. SAP validated this
solution for clusters of up to four nodes, using S or M building blocks in a
homogeneous cluster (that is, no mixing of S and M building blocks).
This scale-out solution differs from a single server solution in a number of ways:
- The solution consists of a homogeneous cluster of building blocks, interconnected with two separate 10 Gb Ethernet networks (not shown in Figure 6-11 on page 113), one for the SAP HANA application and one for the GPFS file system communication.
- The SAP HANA database is split into partitions, forming a single instance of the SAP HANA database.
- Each node of the cluster holds its own savepoints and database logs on the local storage devices of the server.
- The GPFS file system spans all nodes of the cluster, making the data of each node available to all other nodes of the cluster.
Figure 6-11 on page 113 illustrates this solution, showing a 3-node configuration
as an example.



Figure 6-11 A 3-node clustered solution without failover capabilities

To an outside application connecting to the SAP HANA database, this looks like a
single instance of SAP HANA. The SAP HANA software distributes the requests
internally across the cluster to the individual worker nodes, which process the
data and exchange intermediate results, which are then combined and sent back
to the requestor. Each node maintains its own set of data, persisting it with
savepoints and logging data changes to the database log.
GPFS combines the storage devices of the individual nodes into one big file
system, making sure that the SAP HANA software has access to all data
regardless of its location in the cluster, while making sure that savepoints and
database logs of an individual database partition are stored on the appropriate
storage device of the node on which the partition is located. While GPFS
provides the SAP HANA software with the functionality of a shared storage
system, it ensures maximum performance and minimum latency by using locally
attached disks and flash devices. In addition, because server-local storage
devices are used, the total capacity and performance of the storage within the
cluster automatically increases with the addition of nodes, maintaining the same
per-node performance characteristics regardless of the size of the cluster. This
kind of scalability is not achievable with external storage systems.


The absence of failover capabilities represents a major disadvantage of this solution. With respect to availability, the cluster acts like a single-node configuration. In case one node
becomes unavailable for any reason, the database partition on that node
becomes unavailable, and with it the entire SAP HANA database. Loss of the
storage of a node means data loss (as with a single-server solution), and the
data has to be recovered from a backup. For this reason, this scale-out solution
without failover capabilities is an intermediate solution that will go away after all
of the SAP hardware partners can provide a solution featuring high-availability
capabilities. The IBM version of such a solution is described in the next section.

6.4.2 Scale-out solution with high-availability capabilities


The scale-out solution for SAP HANA with high-availability capabilities enhances the scale-out solution described in the previous section in two major areas:
- Making the SAP HANA application highly available by introducing standby nodes, which can take over from a failed node within the cluster
- Making the data provided through GPFS highly available to the SAP HANA application, even in the event of the loss of one node, including its data on the local storage devices
SAP HANA allows the addition of nodes in the role of a standby node. These
nodes run the SAP HANA application, but do not hold any data or take an active
part in the processing. In case one of the active nodes fails, a standby node
takes over the role of the failed node, including the data (that is, the database
partition) of the failed node. This mechanism allows the clustered SAP HANA
database to continue operation.


Figure 6-12 illustrates a four-node cluster with the fourth node being a standby
node.


Figure 6-12 A 4-node clustered solution with failover capabilities

To be able to take over the database partition from the failed node, the standby
node has to load the savepoints and database logs of the failed node to recover
the database partition and resume operation in place of the failed node. This is
possible because GPFS provides a global file system across the entire cluster,
giving each individual node access to all the data stored on the storage devices
managed by GPFS.
In case a node has an unrecoverable hardware error, the storage devices holding
the node's data might become unavailable or even destroyed. In contrast to the
solution without high-availability capabilities, here the GPFS file system
replicates the data of each node to the other nodes, to prevent data loss in case
one of the nodes goes down. Replication is done in a striping fashion. That is,
every node has a piece of data of all other nodes. In the example illustrated in
Figure 6-12, the contents of the data storage (that is, the savepoints, here
data01) and the log storage (that is, the database logs, here log01) of node01
are replicated to node02, node03, and node04, each holding a part of the data
on the matching device (that is, data on HDD, log on flash). The same is true for
all nodes carrying data, so that all information is available twice within the GPFS
file system, which makes it tolerant to the loss of a single node. The replication
occurs synchronously. That is, the write operation only finishes when the data is
both written locally and replicated. This ensures consistency of the data at any point in time. Although GPFS replication is done over the network and in a synchronous fashion, this solution still exceeds the performance requirements for validation by SAP.
Using replication, GPFS provides the SAP HANA software with the functionality
and fault tolerance of a shared storage system while maintaining its performance
characteristics. Again, because server-local storage devices are used,
the total capacity and performance of the storage within the cluster automatically
increases with the addition of nodes, maintaining the same per-node
performance characteristics regardless of the size of the cluster. This kind of
scalability is not achievable with external storage systems.

Example of a node takeover


To further illustrate the capabilities of this solution, this section provides a node
takeover example. In this example, we have a 4-node setup, initially configured
as illustrated in Figure 6-12 on page 115, with three active nodes and one
standby node.
First, node03 experiences a problem and fails unrecoverably. The master node
(node01) recognizes this and directs the standby node, node04, to take over from
the failed node. Remember that the standby node is running the SAP HANA
application and is part of the cluster, but in an inactive role.
To recreate database partition 3 in memory to be able to take over the role of
node03 within the cluster, node04 reads the savepoints and database logs of
node03 from the GPFS file system, reconstructs the savepoint data in memory,
and re-applies the logs so that the partition data in memory is exactly like it was
before node03 failed. Node04 is in operation, and the database cluster has
recovered.


Figure 6-13 illustrates this scenario.


Figure 6-13 Standby node 4 takes over from failed node 3

The data that node04 was reading was the data of node03, which failed,
including the local storage devices. For that reason GPFS had to deliver the data
to node04 from the replica spread across the cluster using the network. Now
when node04 starts writing savepoints and database logs again during the
normal course of operations, these are not written over the network, but to the
local drives, again with a replica striped across the cluster.
After the cause of the failure of node03 is fixed, the node can be reintegrated into the cluster as the new standby system (Figure 6-14 on page 118).



Figure 6-14 Node 3 is reintegrated into the cluster as a standby node

This example illustrates how IBM combines two independently operating high-availability measures (that is, the concept of standby nodes on the SAP
HANA application level and the reliability features of GPFS on the infrastructure
level), resulting in a highly available and scalable solution.
At the time of writing, clusters of up to 16 nodes using S building blocks (7147-HBx only), SSD building blocks, M building blocks, or L configurations (M building block extended by the L option) are validated by SAP. This means that the cluster has a total main memory of up to 16 TB, or up to 8 TB of compressed data. Depending on the compression factor, this accommodates up to 56 TB of uncompressed source data (assuming a compression factor of 7:1).
Note: SAP validated this scale-out solution (with HA), which is documented in
the SAP product availability matrix, with up to 16 nodes in a cluster. However,
the building block approach of IBM makes the solution scalable without any
known limitation. For those customers who need a scale-out configuration beyond the 16 TB offered today, IBM offers a joint validation at the customer site, working closely with SAP.


6.4.3 Networking architecture for the scale-out solution


Networking plays an integral role in the scale-out solution. The standard building
blocks are used for scale-out, interconnected by 10 Gb Ethernet, in a redundant
fashion. There are two redundant 10 Gb Ethernet networks for the
communication within the solution:
- A fully redundant 10 Gb Ethernet network for cluster-internal communication of the SAP HANA software
- A fully redundant 10 Gb Ethernet network for cluster-internal communication of GPFS, including replication
These networks are internal to the scale-out solution and have no connection to
the customer network. The networking switches for these networks are part of the appliance and cannot be substituted with anything other than the validated switch models.
Figure 6-15 illustrates the networking architecture for the scale-out solution and
shows the SAP HANA scale-out solution connected to an SAP NetWeaver BW
system as an example.


Figure 6-15 Networking architecture for the scale-out solution

All network connections within the scale-out solution are fully redundant. Both the internal GPFS network and the internal SAP HANA network are connected to two 10 Gb Ethernet switches, interconnected for full redundancy. The switch model used here is the IBM System Networking RackSwitch G8264. It delivers
exceptional performance, being both lossless and low latency. With 1.2 Tbps
throughput, the G8264 provides massive scalability and low latency that is ideal
for latency-sensitive applications, such as SAP HANA. The scale-out solution for
SAP HANA makes intensive use of the advanced capabilities of this switch, such
as virtual link aggregation groups (vLAG).
Figure 6-16 shows the back of an M building block (here: 7143-H2x) with the
network interfaces available. The letters denoting the interfaces correspond to
the letters used in Figure 6-15 on page 119.

Figure 6-16 The back of an M building block with the network interfaces available (interfaces used for the GPFS network, the SAP HANA network, and the IMM)

Each building block comes with one (2011 models) or two (2012 models)
dual-port 10 Gb Ethernet interface cards (NIC). To provide enough ports for a
fully redundant network connection to the 10 Gb Ethernet switches, an additional
dual-port 10 Gb Ethernet NIC can be added to the system (see also section
6.4.4, Hardware and software additions required for scale-out on page 121).
An exception to this is an L configuration, where each of the two chassis (the M
building block and the L option) hold one or two dual-port 10 Gb Ethernet NICs.
Therefore an L configuration does not need an additional 10 Gb Ethernet NIC for
the internal networks, even for the 2011 models.
The six available 1 Gb Ethernet interfaces (a, b, e, f, g, and h) on the system can be used to connect the systems to other networks or systems, for example, for client access, application management, systems management, data management, and so on. The interface denoted with the letter i is used to connect the integrated management module (IMM) of the server to the management network.

6.4.4 Hardware and software additions required for scale-out


The scale-out solution for the IBM Systems Solution for SAP HANA builds upon
the same building blocks as they are used in a single-server installation. There
are however additional hardware and software components needed, to
complement the basic building blocks when implementing a scale-out solution.
Depending on the building blocks used, additional GPFS licenses might be needed for the scale-out solution. The GPFS on x86 Single Server for Integrated Offerings, V3 license provides file system capabilities for single-node integrated offerings; it does not cover use in multi-node environments, such as the scale-out solution discussed here. To use building blocks that come with GPFS on x86 Single Server for Integrated Offerings licenses in a scale-out solution, GPFS on x86 Server licenses have to be obtained for these building blocks. Alternatively, GPFS File Placement Optimizer licenses can be used in conjunction with GPFS on x86 Server licenses: in a scale-out configuration, a minimum of three nodes have to use GPFS on x86 Server licenses, and the remaining nodes can use GPFS File Placement Optimizer licenses. Other setups, such as the disaster recovery solution described in section 7.2, Disaster Recovery for SAP HANA on page 137, might require more nodes using GPFS on x86 Server licenses, depending on the role of the nodes in the actual setup. Section GPFS license information on page 160 has an overview of the GPFS license types, which type of license comes with a specific model, and the amount of Processor Value Units (PVU) needed.
As discussed in section 6.4.3, "Networking architecture for the scale-out
solution" on page 119, additional 10 Gb Ethernet network interface cards have to
be added to the building blocks in some configurations, to provide redundant
network connectivity for the internal networks, and possibly also for the
connection to the customer network, in case a 10 Gb Ethernet connection to the
other systems (for example, replication server, SAP application servers) is
required. Information about supported network interface cards for this purpose is
provided in the Quick Start Guide.
For a scale-out solution built upon the SSD-only building blocks based on x3690
X5, additional 200 GB 1.8-inch MLC SSD drives are required to accommodate
the additional storage capacity required for GPFS replication. The total number
of SSD drives required is documented in the SAP Product Availability Matrix
(PAM) for SAP HANA available online at (search for HANA):


http://service.sap.com/pam

6.5 Installation services


The IBM Systems Solution for SAP HANA comes with the complete software
stack, including the operating system, GPFS, and the SAP HANA software. Due
to the nature of the software stack, and dependencies on how the IBM Systems
solution for SAP HANA is used at the customer location, the software stack
cannot be preloaded completely at manufacturing. Therefore installation services
are required. Installation services for the IBM Systems Solution for SAP HANA
typically include:
Performing an inventory and validating of the delivered system configuration
Verifying / updating the hardware to the latest level of BIOS, firmware, device
drivers, OS patches as required
Verifying / configuring the RAID configuration
Finishing the software preload according to the customer environment
Configuring / verifying network settings and operation
Performing system validation
Providing onsite skills transfer (when required) on the solution and best
practices and delivering post install documentation
To ensure the correct operation of the appliance, installation services for the IBM
Systems Solution for SAP HANA have to be performed by specifically trained
personnel, available from IBM STG Lab Services, IBM Global Technology
Services, or IBM business partners, depending on the geography.

6.6 Interoperability with other platforms


To access the SAP HANA database from a system (SAP or non-SAP), the SAP
HANA database client has to be available for the platform the system is running
on. Platform availability of the SAP HANA database client is documented in the
product availability matrix (PAM) for SAP HANA, which is available online at
(search for HANA):
http://service.sap.com/pam
At the time of writing, the SAP HANA database client is available on all major
platforms, including but not limited to:


Microsoft Windows Server 2008
Microsoft Windows XP (32 bit), Windows Vista, and Windows 7 (both 32 bit and 64 bit)
SUSE Linux Enterprise Server 11 on 32 and 64 bit x86 platforms and IBM System z
Red Hat Enterprise Linux on 64 bit x86 platforms
IBM AIX 5.2, 5.3, 6.1, and 7.1 on the IBM POWER platform
IBM i V7R1 on the IBM POWER platform
HP-UX 11.31 on Itanium
Oracle Solaris on x86 and SPARC
For up-to-date and detailed availability information, refer to the PAM.
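As an illustration only, the following sketch shows how a system with the SAP HANA database client installed could issue a query against the SAP HANA database using the hdbsql command line tool that is delivered with the client. The host name hanahost, the SQL port 30015 (instance 00), and the user are placeholders chosen for this example:

   hdbsql -n hanahost:30015 -u MYUSER -p MyPassword "SELECT * FROM DUMMY"

A successful result returns the single row of the DUMMY table and confirms that the client can reach the database.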
If there is no SAP HANA database client available for a certain platform, SAP
HANA can still be used in a scenario with replication, by using a dedicated SAP
Landscape Transformation server (for SAP Business Suite sources) or an SAP
BusinessObjects Data Services server running on a platform for which the SAP
HANA database client is available. This way data can be replicated into SAP
HANA, which then can be used for reporting or analytic purposes, using a front
end supporting SAP HANA as a data source.

6.7 Support process


The deployment of SAP HANA as an integrated solution, combining software and
hardware from both IBM and SAP, is also reflected in the support process for the IBM
Systems Solution for SAP HANA.
All SAP HANA models offered by IBM include SLES for SAP Applications with
SUSE 3-year priority support and IBM GPFS with 3-year support. The hardware
comes with a 3-year limited warranty, including customer replaceable unit (CRU)
and on-site support (IBM sends a technician after attempting to diagnose and
resolve the problem remotely). For information about the IBM Statement of
Limited Warranty, see:
http://www.ibm.com/servers/support/machine_warranties/

6.7.1 IBM SAP integrated support


SAP integrates the support process with SUSE and IBM as part of the HANA
appliance solution-level support. If you encounter software problems on your

SAP HANA system, access the SAP Online Service System (SAP OSS) website
at:
https://service.sap.com
When you reach the website, create a service request ticket using a
subcomponent of BC-HAN or BC-DB-HDB as the problem component. IBM
support works closely with SAP and SUSE and is dedicated to supporting SAP
HANA software and hardware issues.
Send all questions and requests for support to SAP using their OSS messaging
system. A dedicated IBM representative is available at SAP to work on this
solution. Even if it is clearly a hardware problem, an SAP OSS message should
be opened to provide the best direct support for the IBM Systems solution for
SAP HANA.
When it is obvious that you have a hardware problem, we recommend using the
text template provided in the Quick Start Guide when opening the SAP support
message. This procedure expedites hardware-related problems within the SAP
support organization. Otherwise, the SAP support teams will gladly help you
with questions regarding the SAP HANA appliance in general.
Before you contact support, make sure that you have taken these steps to try to
solve the problem yourself:
Check all cables to make sure that they are connected.
Check the power switches to make sure that the system and any optional
devices are turned on.
Use the troubleshooting information in your system documentation, and use
the diagnostic tools that come with your system. Information about diagnostic
tools is available in the Problem Determination and Service Guide on the IBM
Documentation CD that comes with your system.
Go to the following IBM support website to check for technical information,
hints, tips, and new device drivers or to submit a request for information:
http://www.ibm.com/supportportal/
For SAP HANA software-related issues you can search the SAP Online
Service System (OSS) website for problem resolutions. The OSS website has
a knowledge database of known issues and can be accessed here:
https://service.sap.com/notes
The main SAP HANA information source is available here:
https://help.sap.com/hana_appliance


If you have a specific operating system question or issue, contact SUSE
regarding SUSE Linux Enterprise Server for SAP Applications. Go to the SUSE
website:
http://www.suse.com/products/prioritysupportsap/
Media is available for download here:
http://download.novell.com/index.jsp?search=Search&families=2658&keywords=SAP
Note: Registration is required before you can download software packages
from the SUSE website.

6.7.2 The IBM SAP International Competence Center InfoService


The IBM SAP International Competence Center (ISICC) InfoService is the key
support function of the IBM and SAP Alliance. It serves as a single point of entry
for all SAP-related questions for customers using IBM Systems and Solutions
with SAP applications. As a managed question and answer service it has access
to a worldwide network of experts on technology topics about IBM products in
SAP environments. You can contact the ISICC InfoService using email at
[email protected].
Note: The ISICC InfoService does not provide product support. If you need
product support for the IBM Systems solution for SAP HANA, refer to section
6.7.1, IBM SAP integrated support on page 123. If you need support for
other IBM products, consult the product documentation on how to get product
support.

6.8 IBM Systems Solution with SAP Discovery System


The SAP Discovery system is a preconfigured hardware and software landscape
that can be used to test drive SAP technologies. It is an evaluation tool that
provides an opportunity to realize the joint value of the SAP business process
platform and SAP BusinessObjects tools running on a single system. It provides
a complete, fully documented system with standard SAP software components
for developing and delivering service-based applications, including all the
interfaces, functionality, data, and guidance necessary to run a complete,
end-to-end business scenario.
The SAP Discovery system allows you to interact with SAP's most current
technologies: Mobility (Sybase Unwired Platform, Afaria), SAP HANA, SAP
CRM, SAP ERP EhP5, SAP NetWeaver 7.3, SAP BusinessObjects, and more,
along with the robust IBM DB2 database. The SAP business process platform,
which is a part of the SAP Discovery system, helps organizations discover ways
to accelerate business innovation and respond to changing business needs by
designing reusable process components that make use of enterprise services.
The SAP BusinessObjects portfolio of tools and applications on the SAP
Discovery system were designed to help optimize information discovery and
delivery, information management and query, reporting, and analysis. For
business users, the SAP Discovery system helps bridge the gap between
business and IT and serves as a platform for future upgrade planning and
functional trial and gap analysis.
The SAP Discovery system includes sample business scenarios and
demonstrations that are preconfigured and ready to run. It is a preconfigured
environment with prepared demos and populated with Best Practices data. A list
of detailed components, exercises, and SAP Best Practices configuration is
available online at:
http://www.sdn.sap.com/irj/sdn/discoverysystem
The IBM Systems Solution with SAP Discovery system uses the IBM System
x3650 M4 server to provide a robust, compact and cost-effective hardware
platform for the SAP Discovery System, using VMware ESXi software with
Microsoft Windows and SUSE Linux operating systems. IBM System x3650 M4
servers offer an energy-smart, affordable, and easy-to-use rack solution for data
center environments looking to significantly lower operational and solution costs.
Figure 6-17 shows the IBM Systems solution with SAP Discovery system.

Figure 6-17 The IBM Systems solution with SAP Discovery System

With an embedded VMware hypervisor, the x3650 M4 provides a virtualized
environment for the SAP software, consolidating a wealth of applications onto a
single 2U server. The IBM Systems solution with SAP Discovery system is also
configured with eight hard drives (including one recovery drive) to create a
compact, integrated system.


The combination of the IBM Systems solution for SAP HANA and the IBM
Systems solution with SAP Discovery System is the ideal platform to explore,
develop, test, and demonstrate the capabilities of an SAP landscape including
SAP HANA. Figure 6-18 illustrates this.
Figure 6-18 IBM Systems solution with SAP Discovery System combined with SAP HANA

Whether you plan to integrate new SAP products into your infrastructure or are
preparing for an upgrade, the IBM Systems solution with SAP Discovery system
can help you thoroughly evaluate SAP applications and validate their benefits.
You gain hands-on experience, the opportunity to develop a proof of concept,
and the perfect tool for training your personnel in advance of deploying a
production system. The combination of the IBM Systems solution with SAP
Discovery system with one of the SAP HANA models based on IBM System
x3690 X5 gives you a complete SAP environment including SAP HANA in a
compact 4U package.
More information about the IBM Systems solution with SAP Discovery System is
available online at:
http://www.ibm.com/sap/discoverysystem


Chapter 7. SAP HANA operations


This chapter discusses the operational aspects of running an SAP HANA
system.
The following topics are covered:

7.1, Backing up and restoring data for SAP HANA on page 130
7.2, Disaster Recovery for SAP HANA on page 137
7.3, Monitoring SAP HANA on page 142
7.4, Sharing an SAP HANA system on page 144
7.5, Installing additional agents on page 146
7.6, Software and firmware levels on page 147


7.1 Backing up and restoring data for SAP HANA


Because SAP HANA usually plays a critical role in the overall landscape, it is
important to back up the data in the SAP HANA database and to be able to
restore it. This section gives a short overview of the basics of backup and
recovery for SAP HANA and of the integration of SAP HANA with IBM Tivoli
Storage Manager for ERP.

7.1.1 Basic Backup and Recovery


Simply copying the savepoints and the database logs cannot be done in a
consistent way, and therefore does not produce a consistent backup that can be
recovered from. For this reason, a simple file-based backup of the persistency
layer of SAP HANA is not sufficient.

Backing up
A backup of the SAP HANA database has to be triggered through the SAP HANA
Studio or, alternatively, through the SAP HANA SQL interface. SAP HANA then
creates a consistent backup, consisting of one file per cluster node. SAP HANA
always performs a full backup; incremental backups are currently not supported.
SAP HANA internally maintains transaction numbers, which are unique within a
database instance, also and especially in a scale-out configuration. To be able to
create a consistent backup across a scale-out configuration, SAP HANA
chooses a specific transaction number, and all nodes of the database instance
write their own backup files including all transactions up to this transaction
number.
The backup files are saved to a defined staging area that might be on the internal
disks, an external disk on an NFS share, or a directly attached SAN subsystem.
In addition to the data backup files, the configuration files and backup catalog
files have to be saved to be recovered. For point in time recovery, the log area
also has to be backed up.
With the IBM Systems solution for SAP HANA, one of the 1 Gbit network
interfaces of the server can be used for NFS connectivity; alternatively, an
additional 10 Gbit network interface can be added (if a PCI slot is available). It is
also supported to add a Fibre Channel HBA for SAN connectivity. The Quick Start Guide for the IBM
Systems solution for SAP HANA lists supported hardware additions to provide
additional connectivity.


Restoring a backup
It might be necessary to recover the SAP HANA database from a backup in the
following situations:
The data area is damaged
If the data area is unusable, the SAP HANA database can be recovered up to
the latest committed transaction, if all the data changes after the last
complete data backup are still available in the log backups and log area. After
the data and log backups have been restored, the SAP HANA database
uses the data and log backups and the log entries in the log area to restore
the data and replay the logs. It is also possible to recover the
database using an older data backup and log backups, as long as all relevant
log backups made after the data backup are available (see SAP Note 1705945
for help with determining the files needed for a recovery).
The log area is damaged.
If the log area is unusable, the only possibility to recover is to replay the log
backups. In consequence, any transactions committed after the most recent
log backup are lost, and all transactions that were open during the log backup
are rolled back.
After restoring the data and log backups, the log entries from the log backups
are automatically replayed in order to recover. It is also possible to recover the
database to a specific point in time, as long as it is within the existing log
backups.
The database needs to be reset to an earlier point in time because of a logical
error.
To reset the database to a specific point in time, a data backup from before
the point in time to recover to and the subsequent log backups must be
restored. During recovery the log area might be used as well, depending on
the point in time the database is reset to. All changes made after the recovery
time are (intentionally) lost.
You want to create a copy of the database.
It can be desirable to create a copy of the database for various purposes,
such as creating a test system.
A database recovery is initiated from the SAP HANA studio.
A backup can only be restored to an identical SAP HANA system with regard to
the number of nodes, node memory size, host names, and SID. Since SAP
HANA 1.0 SPS04, however, the host names and the SID can be changed during
recovery.


When restoring a backup image from a single node configuration into a scale-out
configuration, SAP HANA does not repartition the data automatically. The correct
way to bring a backup of a single-node SAP HANA installation to a scale-out
solution is as follows:
1. Back up the data from the stand-alone node.
2. Install SAP HANA on the master node.
3. Restore the backup into the master node.
4. Install SAP HANA on the slave and standby nodes as appropriate, and add
these nodes to the SAP HANA cluster.
5. Repartition the data across all worker nodes.
More detailed information about the backup and recovery processes for the SAP
HANA database is provided in the SAP HANA Backup and Recovery Guide,
available online at:
http://help.sap.com/hana_appliance

Backup tool integration


There is currently no backup tool integration with SAP HANA. In the future, SAP
will provide an interface that can be used by manufacturers of external backup
tools to back up the data and redo logs of an SAP HANA system (see SAP Note
1730932 - Using backup tools with Backint for more details).
There is, however, a possibility to integrate a backup tool with SAP HANA, by
allowing it to trigger an application-level backup using SQL and then save away
the backup files. Section 7.1.2, "IBM Tivoli Storage Manager for ERP" on
page 132 describes such an integration of IBM Tivoli Storage Manager for ERP
with SAP HANA.

7.1.2 IBM Tivoli Storage Manager for ERP


IBM Tivoli Storage Manager for ERP is a simple, scalable data protection solution
for SAP HANA and SAP ERP. Tivoli Storage Manager (TSM) for ERP V6.4
includes a one-step command that automates SAP HANA backup and TSM data
protection.
TSM customers running SAP HANA appliances can back up their instances using
their existing TSM backup environment. Until a standardized backup interface is
available for SAP HANA, these customers need a solution to move their
file-based backups to TSM. TSM for ERP Data Protection for SAP HANA V6.4
provides such backup and restore functionality for SAP HANA.

Setting up Data Protection for SAP HANA


Data Protection for SAP HANA comes with a setup.sh command, a
configuration tool that prepares the TSM for ERP configuration file, creates the
SAP HANA backup user, and sets all necessary environment variables for the
SAP HANA administration user. The setup.sh command guides you through the
configuration process. Data Protection for SAP HANA stores the backup user
and its password in the SAP HANA keystore, called hdbuserstore, to enable
unattended operation of a backup.
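As a hedged illustration of such a keystore entry, the hdbuserstore tool that is delivered with the SAP HANA client stores connection data under a key; the key name, host, port, and user below are placeholders chosen for this sketch:

   # executed as the SAP HANA administration user (<sid>adm)
   hdbuserstore SET BACKUPKEY hanahost:30015 BACKUPUSER MyPassword
   hdbuserstore LIST BACKUPKEY

The backup scripts can then connect to the database through this key without the password being typed in or stored in clear text in a script.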

Backing up the SAP HANA database with TSM


SAP HANA writes its backups (logs and data) to files in preconfigured
directories. The Data Protection for SAP HANA command backup.sh reads the
configuration files to retrieve these directories (if a non-default configuration is
used). When a backup is executed, the files created in these directories are
moved to the running TSM instance and are afterwards deleted from these
directories (except for the HANA configuration files).
Figure 7-1 illustrates this backup process.

Figure 7-1 Backup process with Data Protection for SAP HANA, using local storage for backup files


The backup process follows these steps:


1. The backup.sh command triggers a log or data backup of the SAP HANA
database.
2. The SAP HANA database performs a synchronized backup on all nodes.
3. The SAP HANA database writes a backup file on each node.
4. The backup.sh command collects the filenames of the backup files.
5. The backup files are moved to TSM (and deleted on the nodes).
Instead of having the backup files of the individual nodes written to the local
storage of the nodes, an external storage system can be used to provide space
to store the backup files. All nodes need to be able to access this storage, for
example, using NFS. Figure 7-2 illustrates this.

Figure 7-2 Backup process with Data Protection for SAP HANA, using external storage for backup files

Running log and data backups requires the DP for SAP HANA backup.sh
command to be executed as the SAP HANA administration user (<sid>adm).


The backup.sh command provides two basic functions:
1. Complete data backup (including HANA instance and landscape configuration files).
2. Complete log backup, removing successfully saved redo log files from disk.
The functions can be selected using command line arguments, so that the
backup script can be scheduled with a given parameter:
backup.sh --data    Performs a complete data and configuration file backup
backup.sh --logs    Performs a complete log backup followed by a log reclaim
By using this command, a backup of the SAP HANA database into TSM can be
fully automated.
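A minimal scheduling sketch, assuming that the Data Protection for SAP HANA scripts are in the PATH of the <sid>adm user, could use cron entries similar to the following; the times are examples only and have to be aligned with the backup windows and log volume of the actual system:

   # crontab of the <sid>adm user (illustrative schedule)
   0 1 * * *   backup.sh --data   # complete data and configuration backup, daily at 01:00
   0 * * * *   backup.sh --logs   # log backup and log reclaim, every hour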

Restoring the SAP HANA database from TSM


The SAP HANA database requires the backup files to be restored before a
recovery process can be started using the SAP HANA Studio. For SAP HANA
database revisions 30 and higher, Data Protection for SAP HANA provides a
restore.sh command that moves all required files back to their file system
location automatically, so that the user is not required to search for these files
manually. For earlier revisions of the SAP HANA database, this has to be done
manually using the TSM BACKUP-Filemanager. The SAP HANA database
expects the backup files to be restored to the same directory as they were
written to during backup. The recovery itself can then be triggered using the
SAP HANA Studio.


To restore data backups, including SAP HANA configuration files and log file
backups, TSM's BACKUP-Filemanager is used. Figure 7-3 shows a sample
panel of the BACKUP-Filemanager.
Figure 7-3 The BACKUP-Filemanager interface

The data and log backups to be restored can be selected and then restored to a
chosen location. If no directory is specified for the restore, the
BACKUP-Filemanager restores the backups to the original location from which
the backup was taken.


After the backup files have been restored, the recovery process has to be started
using SAP HANA Studio. More information about this process and the various
options for a recovery is contained in the SAP HANA Backup and Recovery
Guide, available online at:
http://help.sap.com/hana_appliance
After the recovery process has completed successfully and the backup files are
no longer needed, they must be removed from disk manually.

7.2 Disaster Recovery for SAP HANA


When talking about disaster recovery, it is important to understand the
difference between Disaster Recovery and High Availability. High Availability
covers a hardware failure (for example, one node becomes unavailable due to a
faulty CPU, memory DIMM, storage, or network failure) in a scale-out
configuration. This has been covered in section 6.4.2, "Scale-out solution with
high-availability capabilities" on page 114.
Disaster Recovery (DR) covers the event when multiple nodes in a scale-out
configuration fail, or a whole data center goes down due to a fire, flood, or other
disaster, and a secondary site needs to take over the SAP HANA system. The
ability to recover from a disaster, or to tolerate a disaster without major impact,
is sometimes also referred to as Disaster Tolerance (DT).
When running an SAP HANA side-car scenario (for example, SAP CO-PA
Accelerator, sales planning, or smart metering), the data is still available in the
source SAP Business Suite system. Planning or analytical tasks run significantly
slower without the SAP HANA system being available, but no data is lost. The
situation is more critical if SAP HANA is the primary database, for example,
when using Business Warehouse with SAP HANA as the database. In this case
the productive data is solely available within the SAP HANA database, and
depending on the business service level agreements, precautions against such
a failure are absolutely necessary.
A disaster recovery solution for SAP HANA can be based on two different levels:
On the application level, by shipping database logs from the primary site to
the secondary site. At the time of writing, this feature is not supported by the
SAP HANA database.
On the infrastructure level, either by using backups that are replicated or
otherwise shipped from the primary site to the secondary site and used for a
restore in case of a disaster, or by replicating the data written to disk by SAP
HANA's persistency layer, either synchronously or asynchronously, which
allows the SAP HANA database to be restarted and recovered on the
secondary site in the event that the primary site becomes unavailable.
Which kind of disaster recovery solution to implement depends on the Recovery
Time Objective (RTO) and the Recovery Point Objective (RPO). The RTO
describes how quickly the SAP HANA database has to be available again after a
disaster. The RPO describes the point in time to which data has to be restored
after a disaster, for example, how old the most recent backup is.

7.2.1 Using backup and restore as a disaster recovery solution


Using backup and restore as a disaster recovery solution is a basic way of
providing disaster recovery. Depending on the RPO, it might however be a viable
way to achieve disaster recovery. The basic concept is to back up the data on the
primary site regularly (at least daily) to a defined staging area, which might be an
external disk on an NFS share or a directly attached SAN subsystem (it does not
need to be dedicated to SAP HANA). After the backup is done, it has to be
transferred to the secondary site, for example, by a simple file transfer (which can
be automated) or by using replication functionality of the storage system used to
hold the backup files.
As described in "Restoring a backup" on page 131, a backup can only be
restored to an identical SAP HANA system. Therefore, an SAP HANA system has
to exist on the secondary site that is identical to the one on the primary site, at
minimum with regard to the number of nodes and the node memory size. During
normal operations this system can run other non-productive SAP HANA
instances, for example a Quality Assurance (QA), Development (DEV), Test, or
other second-tier system. In case the primary site goes down, the system needs
to be cleared of these second-tier HANA systems (a fresh install of the SAP
HANA software is recommended) and the backup can be restored. After
configuring the application systems to use the secondary site instead of the
primary one, operation can be resumed. The SAP HANA database will recover
from the latest backup in case of a disaster.
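As an illustration of such a file transfer, a script on the primary site could copy the contents of the backup staging area to the secondary site after each backup; the directory and host names are placeholders, and a real implementation would also verify the transfer and manage the retention of older backup sets:

   # run on the primary site after the backup has completed (illustrative)
   rsync -av /hana/backup/ standby-site:/hana/backup/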


Figure 7-4 illustrates the concept of using backup and restore as a basic disaster
recovery solution.

Figure 7-4 Using backup and restore as a basic disaster recovery solution

7.2.2 Disaster recovery by using replication


The replication-based disaster recovery solution for the IBM Systems solution for
SAP HANA is based on the same architecture as the scale-out solution. Using
GPFS functionality, a secondary (and tertiary) site is added to the solution to
achieve disaster recovery. Because the same setup and administration concept
as with a single-site configuration is maintained, a migration from a single-site
setup to a multi-site setup is seamless. The following sections give an overview
of a multi-site disaster recovery solution leveraging GPFS replication.

Overview
For a disaster recovery setup it is necessary to have identical scale-out
configurations on both the primary and the secondary site. In addition, there
needs to be a third site, which has the sole responsibility of acting as a quorum
site. In the configuration described here, the distance between the primary and
secondary data centers has to be within a range that allows for synchronous
replication with limited impact on the overall application performance (also
referred to as metro-mirror distance).
The major difference between a single-site solution (as described in 6.4.2,
"Scale-out solution with high-availability capabilities" on page 114) and a
multi-site solution is the placement of the replicas within GPFS. Whereas in a
single-site configuration there is only one replica of each data block in one
cluster (in addition to the primary data; in GPFS terminology these are already
two replicas, that is, the primary data and the first copy, but to avoid confusion we
do not count the primary data as a replica), a multi-site solution holds an
additional replica in the remote or secondary site. This ensures that, when the
primary site fails, a complete copy of the data is available on the second site and
operation can be resumed on that site.
A two-site solution implements the concept of synchronous data replication on
the file system level between both sites, leveraging the replication capabilities of
GPFS. Synchronous data replication means that any write request issued by the
application is only committed to the application after it has been successfully
written on both sites. In order to maintain the application performance within
reasonable limits, the network latency (and therefore the distance) between the
sites has to be limited to metro-mirror distances. The maximum achievable
distance depends on the performance requirements of the SAP HANA system
and on the network configuration in the customer environment.

Basic setup
During normal operation there is an active SAP HANA instance running on the
primary site. The SAP HANA instance on the secondary site is not active. The
implementation on each site is identical to a standard scale-out cluster with high
availability, as described in section 6.4.2, "Scale-out solution with
high-availability capabilities" on page 114. It therefore has to include standby
servers for high availability. A server failure is handled completely within one site
and does not force a site failover. Figure 7-5 illustrates this setup.

Figure 7-5 Basic setup of the disaster recovery solution using GPFS synchronous replication

The connection between the two main sites A and B depends on the customer's
network infrastructure. It is recommended to have a dual-link dark fibre
connection to allow for redundancy also on the network switch side at each site.
For full redundancy, an additional link pair is required for cross-connecting the
switches. Within each site, the 10 Gb Ethernet network connections for both the
internal SAP HANA and the internal GPFS network are implemented in a
redundant layout.
As with a standard scale-out implementation, the disaster recovery configuration
relies on GPFS functionality to enable the synchronous data replication between
sites. A single-site solution holds one replica of each data block. This is
enhanced with a second replica in the dual-site disaster recovery
implementation, and a stretched GPFS cluster is implemented between the
two sites. Figure 7-5 illustrates that there is a combined cluster on the GPFS level
between both sites, whereas the SAP HANA installations are independent of
each other. GPFS file placement policies ensure that there is one replica on the
primary site and a second replica on the secondary site. In case of a site failure,
the file system can therefore stay active with a complete data replica on the
secondary site. The SAP HANA database can then be made operational through
a manual procedure, based on the persistency and log files available in the file
system.
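To illustrate how the replication settings of such a stretched GPFS cluster could be inspected, the standard GPFS administration commands can be used; the file system device name sapmntdata is a placeholder, and the actual name depends on the installation:

   mmlsfs sapmntdata -r -m    # show the default number of data and metadata replicas
   mmlspolicy sapmntdata -L   # show the active file placement policy rules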


Site failover
During normal operation there is a running SAP HANA instance active on the
primary site. The secondary site has an installed SAP HANA instance that is
inactive. A failover to the remote SAP HANA installation has to be initiated
manually. Depending on the reason for the site failover, it can be decided whether
the secondary site becomes the production site or both sites stay offline until the
reason for the failover is removed and the primary site becomes active again.
During normal operation the GPFS file system is not mounted on the secondary
site, ensuring that there is neither read nor write access to the file system. In case
of a failover, first ensure that a second replica of the data is available on the
secondary site before the file system is mounted. This replication process is
initiated manually, and as soon as it completes correctly, the file system is
mounted. From there on, the SAP HANA instance can be started and the data
loaded into memory. The SAP HANA database is restored to the latest savepoint
and the available logs are replayed.
Any switch from one site to the other incurs a downtime of SAP HANA
operations, because the two independent instances on either site must not run at
the same time, due to the sharing of the persistency and log files on the
file system.
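The following is a rough, hedged outline of the kind of manual steps involved in such a site failover, assuming the GPFS file system device is named sapmntdata; the exact procedure for the IBM Systems solution for SAP HANA is documented in the corresponding operations guides and differs depending on the configuration:

   # on the secondary site, after the primary site has failed (illustrative outline)
   mmlsdisk sapmntdata -e         # list disks that are not up and ready
   mmrestripefs sapmntdata -r     # re-replicate data so that a full set of replicas exists on this site
   mmmount sapmntdata             # mount the shared file system on this site
   su - <sid>adm -c "HDB start"   # start the SAP HANA instance on each node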

Summary
The disaster recovery solution for the IBM Systems solution for SAP HANA
exploits the advanced replication features of GPFS, creating a cross-site cluster
that ensures availability and consistency of data across two sites. It does not
impose the need for additional storage systems, but completely builds upon the
scale-out solution for SAP HANA. This simple architecture reduces the
complexity in maintaining such a solution.

7.3 Monitoring SAP HANA


In a productive environment, administration and monitoring of an SAP HANA
appliance play an important role.

7.3.1 Monitoring with SAP HANA Studio


The SAP tool for administering and monitoring the SAP HANA appliance is
the SAP HANA Studio. It allows you to monitor the overall system state:
General system information (such as software versions).


A warning section shows the latest warnings generated by the statistics server.
Detailed information about these warnings is available as a tooltip.
Bar views provide an overview of important system resources. The amount of
available memory, CPUs, and storage space is displayed, in addition to the
used amount of these resources.
In a distributed landscape, the amount of available resources is aggregated over
all servers.
Note: More information about administration and monitoring of SAP HANA is
available in the SAP HANA administration guide, accessible online:
http://help.sap.com/hana_appliance

7.3.2 Monitoring SAP HANA with Tivoli


Most of the monitoring data visible in SAP HANA Studio is collected by the
statistics server, which is a monitoring tool for the SAP HANA database. It
collects statistical and performance information from the database using SQL
statements.
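As an example of the kind of SQL that such tools can issue, the following hedged sketch queries one of the SAP HANA monitoring views through hdbsql; the connection details are placeholders, and the available views and columns can vary between SAP HANA revisions:

   hdbsql -n hanahost:30015 -u MONITORUSER -p MyPassword "SELECT host, service_name, ROUND(total_memory_used_size/1024/1024) AS used_mb FROM SYS.M_SERVICE_MEMORY"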


Monitoring data provided by the statistics server can also be used by other
monitoring tools. Figure 7-6 shows this data integrated into IBM Tivoli
Monitoring.

Figure 7-6 Monitoring the SAP HANA database with Tivoli

Tivoli Monitoring also provides agents to monitor the operating system of the
SAP HANA appliance. Hardware monitoring of the SAP HANA appliance servers
can be achieved with IBM Systems Director, which can also be integrated into a
Tivoli Monitoring landscape.
By integrating the monitoring data collected by the statistics server, the Tivoli
Monitoring agent for the operating system, and the hardware information provided
by IBM Systems Director, Tivoli Monitoring can provide a holistic view of the SAP
HANA appliance.

7.4 Sharing an SAP HANA system


SAP HANA is a high-performance appliance, prohibiting the use of any kind of
virtualization concept. This can lead to a large number of SAP HANA appliances
in the data center, for example, production, disaster recovery, quality assurance
(QA), test, and sandbox systems, possibly for multiple application scenarios,
regions, or lines of business. Therefore the consolidation of SAP HANA
instances, at least for non-production systems, seems desirable. There are
however major drawbacks when consolidating multiple SAP HANA instances on
one system (one SAP HANA system, as referred to in this section, can consist of
a single server or multiple servers in a clustered configuration). Due to this,
consolidation is generally not supported for production systems. For
non-production systems the support status depends on the scenario:
Multiple Components on One System (MCOS)
Having multiple SAP HANA instances on one system, also referred to
as MCOS (Multiple Components on One System), is not supported,
because this poses conflicts between the different SAP HANA databases
on a single server, for example, common data and log volumes,
possible performance degradation, interference of the systems
with each other, and so on. While SAP supports this under certain
conditions (see SAP Note 1681092), IBM does not support such a
configuration.
Multiple Components on One Cluster (MCOC)
Running multiple SAP HANA instances on one scale-out cluster (for
the sake of similarity to the other abbreviations we call this MCOC)
is supported as long as each node of the cluster runs only one SAP
HANA instance. A development and a QA instance can run on one
cluster, but with dedicated nodes for each of the two SAP HANA
instances, that is, each of the nodes runs either the development
instance or the QA instance, but not both. Only the GPFS file system
is shared across the cluster.
Multiple Components in One Database (MCOD)
Having one SAP HANA instance containing multiple components,
schemas, or application scenarios, also referred to as Multiple
Components in One Database (MCOD), is supported. This means,
however, that all data is held within a single database, which is also
maintained as a single database, and this can lead to limitations in
operations, database maintenance, backup and recovery, and so on.
For example, bringing down the SAP HANA database affects all of
the scenarios; it is impossible to bring it down for only one scenario.
SAP Note 1661202 documents the implications.
Things to consider when consolidating SAP HANA instances on one system are:
An instance filling up the log volume causes all other instances on the system
to stop working properly. This can be addressed by monitoring the system
closely.



Installation of an additional instance might fail, when there are already other
instances installed and active on the system. The installation procedures
check the available space on the storage, and refuse to install when there is
less free space than expected. This might also happen when trying to
re-install an already installed instance.
Installing a new SAP HANA revision for one instance might affect other
instances already installed on the system. For example new library versions
coming with the new install might break the already installed instances.
The performance of the SAP HANA system becomes unpredictable because
the individual instances on the system share resources such as memory and
CPU.
When asking for support for such a system, you might be asked to remove the
additional instances and to recreate the issue on a single instance system.

7.5 Installing additional agents


Many organizations have processes and supporting software in place to monitor,
back up, or otherwise interact with their servers. As SAP HANA is delivered in an
appliance-like model, there are restrictions with regard to additional software,
for example, monitoring agents, being installed onto the appliance.
Only the software installed by the hardware partner is recommended on the SAP
HANA appliance. For the IBM Systems solution for SAP HANA, IBM defined
three categories of agents:
Supported: IBM provides a solution covering the respective area; no validation
by SAP is required.
Tolerated: Solutions provided by a third party that are allowed to be used on
the IBM Workload Optimized Solution for SAP HANA. It is the customer's
responsibility to obtain support for such solutions. Such solutions are not
validated by IBM and SAP. If issues with such solutions occur and cannot be
resolved, the use of such solutions might be prohibited in the future.
Prohibited: Solutions that must not be used on the IBM Systems solution for
SAP HANA. Using these solutions might compromise the performance,
stability, or data integrity of the SAP HANA appliance.

Do not install additional software that is classified as prohibited for use on the
SAP HANA appliance. As an example, initial tests


show that some agents can decrease performance or even possibly corrupt the
SAP HANA database (for example, virus scanners).
In general, all additionally installed software must be configured not to interfere
with the functionality or performance of the SAP HANA appliance. If any issue of
the SAP HANA appliance occurs, you might be asked by SAP to remove all
additional software and to reproduce the issue.
The list of agents that are supported, tolerated, or prohibited for use on the SAP
HANA appliance is published in the Quick Start Guide for the IBM Systems
Solution for SAP HANA appliance, available online at:
http://www-947.ibm.com/support/entry/myportal/docdisplay?lndocid=MIGR-5087035

7.6 Software and firmware levels


The IBM Systems solution for SAP HANA appliance contains several different
components that might at times be required to be upgraded (or downgraded)
depending on the recommendations of different support organizations. These
components can be split up into four general categories:

Firmware
Operating system
Hardware drivers
Software

The IBM System x SAP HANA support team, after being informed, reserves the
right to perform basic system tests on these levels when a change is deemed to
have a direct impact on the SAP HANA appliance. In general, IBM does not give
specific recommendations about which levels are allowed for the SAP HANA
appliance.
The IBM System x SAP HANA Development team provides new images for the
SAP HANA appliance at regular intervals. Because these images have
dependencies regarding hardware, operating system, and drivers, use the latest
image for maintenance and installation of SAP HANA systems. These images
can be obtained through IBM support. Part number information is contained in
the Quick Start Guide.
When firmware level recommendations for the IBM components of the SAP HANA
appliance that fix known code bugs are given by the individual IBM System x
support teams, it is the customer's responsibility to upgrade or downgrade to the
recommended levels as instructed by IBM Support.

Chapter 7. SAP HANA operations

147

When operating system recommendations for the SUSE Linux components of the
SAP HANA appliance that fix known code bugs are given by the SAP, SUSE, or
IBM support teams, it is the customer's responsibility to upgrade or downgrade
to the recommended levels, as instructed by SAP through an explicit SAP Note
or allowed through a customer OSS message. SAP describes its operational
concept, including the updating of operating system components, in SAP Note
1599888 - SAP HANA: Operational Concept. If the Linux kernel is updated, take
extra care to recompile the IBM High IOPS drivers and the IBM GPFS software as
well.
When a recommendation to update the IBM High IOPS driver or IBM GPFS
software to fix known code bugs is given by the individual IBM support teams
(System x, Linux, GPFS), it is not recommended to update these drivers without
first asking the IBM System x SAP HANA support team through an SAP OSS
customer message.
When other hardware or software recommendations for IBM components of the
SAP HANA appliance that fix known code bugs are given by the individual IBM
support teams, it is the customer's responsibility to upgrade or downgrade to the
recommended levels as instructed by IBM Support.


Chapter 8. Summary
This chapter summarizes the benefits of in-memory computing and the
advantages of IBM infrastructure for running the SAP HANA solution. We discuss
the following topics:

8.1, Benefits of in-memory computing on page 150


8.2, SAP HANA: An innovative analytic appliance on page 150
8.3, IBM Systems Solution for SAP HANA on page 151
8.4, Going beyond infrastructure on page 154


8.1 Benefits of in-memory computing


In today's data-driven culture, tools for business analysis are quickly evolving.
Organizations need new ways to take advantage of critical data dynamically to
not only accelerate decision making, but also to gain insights into key trends. The
ability to instantly explore, augment, and analyze all data in near real time can
deliver the competitive edge that your organization needs to make better
decisions faster and to leverage favorable market conditions, customer trends,
price fluctuations, and other factors that directly influence the bottom line.
Made possible through recent technology advances that combine large, scalable
memory, multi-core processing, fast solid-state storage, and data management,
in-memory computing leverages these technology innovations to establish a
continuous real-time link between insight, foresight, and action to deliver
significantly accelerated business performance.

8.2 SAP HANA: An innovative analytic appliance


To support today's information-critical business environment, SAP HANA gives
companies the ability to process huge amounts of data faster than ever before.
The appliance lets business users instantly access, model, and analyze all of a
company's transactional and analytical data from virtually any data source in real
time, in a single environment, without impacting existing applications or systems.
The result is accelerated business intelligence (BI), reporting, and analysis
capabilities with direct access to the in-memory data models residing in SAP
in-memory database software. Advanced analytical workflows and planning
functionality directly access operational data from SAP ERP or other sources.
SAP HANA provides a high-speed data warehouse environment, with an SAP
in-memory database serving as a next-generation, in-memory acceleration
engine.
SAP HANA efficiently processes and analyzes massive amounts of data by
packaging SAP's use of in-memory technology, columnar database design, data
compression, and massive parallel processing together with essential tools and
functionality such as data replication and analytic modeling.
Delivered as an optimized hardware appliance based on IBM eX5 enterprise
servers, the SAP HANA software includes:
High-performance SAP in-memory database and a powerful data calculation
engine
Real-time replication service to access and replicate data from SAP ERP

150

In-memory Computing with SAP HANA on IBM eX5 Systems

Data repository to persist views of business information


Highly tuned integration with SAP BusinessObjects BI solutions for insight
and analytics
SQL and MDX interfaces for third-party application access
Unified information-modeling design environment
SAP BusinessObjects Data services to provide access to virtually any SAP
and non-SAP data source
To explore, model, and analyze data in real time without impacting existing
applications or systems, SAP HANA can be leveraged as a high-performance
side-by-side data mart to an existing data warehouse. It can also replace the
database server for SAP NetWeaver Business Warehouse, adding in-memory
acceleration features.
These components create an excellent environment for business analysis, letting
organizations merge large volumes of SAP transactional and analytical
information from across the enterprise, and instantly explore, augment, and
analyze it in near-real time.

8.3 IBM Systems Solution for SAP HANA


The IBM Systems Solution for SAP HANA based on IBM eX5 enterprise servers
provides the performance and scalability to run SAP HANA, enabling customers
to drive near real-time business decisions and helping organizations stay
competitive. IBM eX5 enterprise servers provide a proven, scalable platform for
SAP HANA that enables better operational planning, simulation, and forecasting,
in addition to optimized storage, search, and ad hoc analysis of todays
information. SAP HANA running on powerful IBM eX5 enterprise servers
combines the speed and efficiency of in-memory processing with the ability of
IBM eX5 enterprise servers to analyze massive amounts of business data.
Based on scalable IBM eX5 technology included in IBM System x3690 X5 and
System x3950 X5 servers, SAP HANA running on eX5 enterprise servers offers
a solution that can help meet the need to analyze growing amounts of
transactional data, delivering significant gains in both performance and scalability
in a single, flexible appliance.

8.3.1 Workload Optimized Solution


IBM offers several Workload Optimized Solution models for SAP HANA. These
models, based on the 2-socket x3690 X5 and 4-socket x3950 X5, are optimally

Chapter 8. Summary

151

designed and certified by SAP. They are delivered preconfigured with key
software components preinstalled to help speed delivery and deployment of the
solution. The x3690 X5-based configurations offer 128 - 256 GB of memory and
the choice of only solid-state disk or a combination of spinning disk and
solid-state disk. The x3950 X5-based configurations leverage the scalability of
eX5 and offer the capability to pay as you grow, starting with a 2-processor, 256
GB configuration and growing to an 8-processor, 1 TB configuration. The x3950
X5-based configurations integrate High IOPS SSD PCIe adapters. The 8-socket
configuration uses a scalability kit that combines the 7143-H2x or 7143-HBx with
the 7143-H3x or 7143-HCx to create a single 8-socket, 1 TB system.
IBM offers the appliance in a box with no need for external storage. With the
x3690 X5-based SSD-only models, IBM has a unique offering with no spinning
hard drives, providing greater reliability and performance.

8.3.2 Leading performance


IBM eX5 enterprise servers offer extreme memory and performance scalability.
With improved hardware economics and new technology offerings, IBM is
helping SAP realize a real-time enterprise with in-memory business applications.
IBM eX5 enterprise servers have a long history of leading SAP benchmark
performance.
IBM eX5 enterprise servers come equipped with the Intel Xeon processor E7
series. These processors deliver performance that is ideal for your most
data-demanding SAP HANA workloads and offer improved scalability along with
increased memory and I/O capacity, which is critical for SAP HANA. Advanced
reliability and security features work to maintain data integrity, accelerate
encrypted transactions, and maximize the availability of SAP HANA applications.
In addition, Machine Check Architecture Recovery, a reliability, availability, and
serviceability (RAS) feature built into the Intel Xeon processor E7 series, enables
the hardware platform to generate machine check exceptions. In many cases,
these notifications enable the system to take corrective action that allows
SAP HANA to keep running when an outage would otherwise occur.
IBM eX5 features, such as eXFlash solid-state disk technology, can yield
significant performance improvements in storage access, helping deliver an
optimized system solution for SAP HANA. Standard features in the solution, such
as the High IOPS Adapters for IBM System x, can also provide fast access to
storage.


8.3.3 IBM GPFS enhancing performance, scalability, and reliability


Explosions of data, transactions, and digitally aware devices are straining IT
infrastructure and operations, while storage costs and user expectations are
increasing. The IBM General Parallel File System (GPFS), with its
high-performance enterprise file management, can help you move beyond simply
adding storage to optimizing data management for SAP HANA.
High-performance enterprise file management using GPFS gives SAP HANA
applications the following benefits:
- Performance to satisfy the most demanding SAP HANA applications
- Seamless capacity expansion to handle the explosive growth of SAP HANA
  information
- Scalability to enable support for the largest SAP HANA database requirements
- High reliability and availability to help eliminate production outages and
  provide disruption-free maintenance and capacity upgrades
Seamless capacity and performance scaling, along with the proven reliability
features and flexible architecture of GPFS, help your company foster innovation
by simplifying your environment and streamlining data workflows for increased
efficiency.

8.3.4 Scalability
IBM offers configurations allowing customers to start with a 2 CPU/256 GB RAM
model (S+), which can scale up to a 4 CPU/512 GB RAM model (M), and then to
an 8 CPU/1024 GB configuration (L). With the option to upgrade S+ to M, and M+
to L, IBM can provide an unmatched upgrade path from a T-shirt size S up to a
T-shirt size L, without the need to retire a single piece of hardware.
If you have large database requirements, you can scale the workload-optimized
solutions to multi-server configurations. IBM and SAP have validated
configurations of up to sixteen nodes with high availability, each node holding
either 256 GB, 512 GB, or 1 TB of main memory. This scale-out support allows
databases as large as 16 TB, able to hold the equivalent of about 56 TB of
uncompressed data. While the IBM solution is certified for up to 16 nodes, its
architecture is designed for extreme scalability and can even grow beyond that.
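
As a rough illustration of this sizing arithmetic, the following minimal sketch
computes the aggregate in-memory capacity of such a scale-out cluster and its
approximate uncompressed equivalent. The compression factor of about 3.5 is only
an assumption derived from the 16 TB and 56 TB figures above; actual compression
depends on the data.

# Rough scale-out sizing sketch (illustrative assumptions only).
NODE_MEMORY_TB = 1.0       # main memory per node (1 TB building blocks)
NODE_COUNT = 16            # certified scale-out maximum described above
COMPRESSION_FACTOR = 3.5   # assumed, derived from 16 TB -> about 56 TB

in_memory_capacity_tb = NODE_MEMORY_TB * NODE_COUNT
uncompressed_equivalent_tb = in_memory_capacity_tb * COMPRESSION_FACTOR

print("In-memory capacity:      %.0f TB" % in_memory_capacity_tb)
print("Uncompressed equivalent: about %.0f TB" % uncompressed_equivalent_tb)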
The IBM solution does not require external storage for the stand-alone or for the
scale-out solution. The solution is easy to grow by the simple addition of nodes to
the network. There is no need to reconfigure a storage area network for failover.
That is all covered by GPFS under the hood.


IBM uses the same base building blocks from stand-alone servers to scale out,
providing investment protection for customers who want to grow their SAP HANA
solution beyond a single server.
IBM or IBM Business Partners can provide these scale-out configurations
preassembled in a rack, helping to speed installation and setup of the SAP
HANA appliance.

8.3.5 Services to speed deployment


To help speed deployment and simplify maintenance of your x3690 X5 and
x3950 X5 Workload Optimized Solution for SAP HANA, IBM Lab Services and
IBM Global Technology Services offer quick-start services to help set up and
configure the appliance, and health-check services to ensure that it continues to
run optimally. In addition, IBM offers skills and enablement services for the
administration and management of IBM eX5 enterprise servers.

8.4 Going beyond infrastructure


Many clients require more than software and hardware products. As a globally
integrated enterprise, IBM can provide clients with a true end-to-end offering
ranging across hardware, software, infrastructure, and consulting services, all of
them provided by a single company and integrated with each other.

8.4.1 A trusted service partner


Clients need a partner to help them assess their current capabilities, identify
areas for improvement, and develop a strategy for moving forward. This is where
IBM Global Business Services provides immeasurable value with thousands of
SAP consultants in 80 countries organized by industry sectors.
IBM Global Business Services worked together with IBM Research, IBM Software
Group, and IBM hardware teams to prepare an integrated offering focused on the
business analytics and mobility space. Among other things, this offering also
covers all services around the SAP HANA appliance.
Through this offering, IBM can help you take full advantage of SAP HANA
running on IBM eX5 enterprise servers.

Defining the strategy for business analytics


An important step before implementing the SAP HANA solution is the formulation
of an overall strategy for how this new technology can be leveraged to deliver
business value and how it will be implemented in the existing customer
landscape.
Customers typically face the following challenges, where the IBM Global
Business Services offering can help:
- Mapping existing pain points to available offerings
- Designing a customer-specific use case where no existing offering is available
- Creating a business case for implementing SAP HANA technology
- Understanding long-term technology trends and their influence on individual
  decisions
- Underestimating the importance of high availability, disaster recovery, and
  operational aspects of the SAP in-memory solution
- Avoiding delays caused by poor integration between hardware and
  implementation partners
- Aligning already running projects with a newly developed in-memory strategy
IBM experts conduct a series of workshops with all important stakeholders,
including decision makers, key functional and technical leads, and architects. As
a result of these workshops, an SAP HANA implementation roadmap is defined.
The implementation roadmap is based on the existing customer landscape and
on defined functional, technical, and business-related needs and requirements. It
reflects the current analytic capabilities and the current status of existing systems.
An SAP HANA implementation roadmap contains individual use cases describing
how SAP HANA can best be integrated into the customer landscape to deliver
the desired functionality. Other technologies that can bring additional value are
identified and the required architectural changes are documented.
For certain situations, a proof of concept might be recommended to validate that
the desired key performance indicators (KPIs) can be met, including:
- Data compression rates
- Data load performance
- Data replication rates
- Backup/restore speed
- Front-end performance

After the SAP HANA implementation roadmap is accepted by the customer, IBM
expert teams work with the customer to implement the roadmap.


Implementation methods


Existing use case scenarios can be divided into two groups, based on how SAP
HANA is deployed:
- SAP HANA as a stand-alone component
The technology platform, operational reporting, and accelerator use case
scenarios describe the SAP HANA database as a stand-alone component.
IBM Global Business Services offers services to implement the SAP HANA
database using a combination of the IBM Lean implementation approach, the
ASAP 7.2 Business Add-on for SAP HANA methodology, and agile development
methodologies that are important for this type of project.
These methodologies keep strict control over the following solution components:
  - Use case (the overall approach for how SAP HANA will be implemented)
  - Sources of data (source tables containing the required information)
  - Data replication (replication methods for transferring data into SAP HANA)
  - Data models (transformation of source data into the required format)
  - Reporting (front-end components such as reports and dashboards)

This approach has the following implementation phases:
  - Project preparation
  - Project kick-off
  - Blueprint
  - Realization
  - Testing
  - Go-live preparation
  - Go-live

This methodology is designed to help both IBM and the customer keep the
defined and agreed scope under control, and to help with issue classification
and resolution management. It also gives all involved stakeholders the required
visibility into the current progress of development and testing.
- SAP HANA as the underlying database for SAP Business Suite products
An SAP NetWeaver BW system running on SAP HANA is currently the only
released solution in this category. Offerings for other products will be
announced after they are released to run on the SAP HANA database.
IBM Global Business Services uses a facility called the IBM SAP HANA
migration factory, designed specifically for this purpose. Local experts who
work directly with the clients cooperate with remote teams that perform the
required activities, based on a defined conversion methodology agreed with
SAP. This facility has the required number of trained experts
covering all key positions needed for a smooth transition from a traditional
database to SAP HANA.
The migration service for converting an existing SAP NetWeaver BW system to
run on the SAP HANA database has the following phases:
  - Initial assessment
    Local teams perform an initial assessment of the existing systems, their
    relations, and technical status. Required steps are identified, and an
    implementation roadmap is developed and presented to the customer.
  - Conversion preparation
    IBM remote teams perform all required preparations for the conversion.
    BW experts clean the BW systems to remove unnecessary objects. If
    required, the system is cloned and upgraded to the required level.
  - Migration to the SAP HANA database
    In this phase, IBM remote teams perform the conversion to the SAP HANA
    database, including all related activities. Existing InfoCubes and DataStore
    objects are converted to an in-memory optimized format. After successful
    testing, the system is released back for customer usage.

8.4.2 IBM and SAP team for long-term business innovation


With a unique combination of expertise, experience, and proven methodologies,
and with a history of shared innovation, IBM can help strengthen and optimize
your information infrastructure to support your SAP applications.
IBM and SAP have worked together for nearly 40 years to deliver innovation to
their shared customers. Since 2006, IBM has been the market leader for
implementing SAP's original in-memory appliance, the SAP NetWeaver Business
Warehouse Accelerator. Hundreds of SAP NetWeaver BW Accelerator
deployments have been successfully completed in multiple industries. These
SAP NetWeaver BW Accelerator appliances have been successfully deployed on
many of SAP's largest business warehouse implementations, which are based
on IBM hardware and DB2, optimized for SAP.
IBM and SAP offer solutions that move business forward and anticipate
organizational change by strengthening your business analytics information
infrastructure for greater operational efficiency and offering a way to make
smarter decisions faster.


Appendix A.

Appendix
This appendix provides information about the GPFS license.


GPFS license information


The models of the IBM Systems Solution for SAP HANA come with GPFS
licenses, including three years of Software Subscription and Support. Software
Subscription and Support contracts, including Subscription and Support
renewals, are managed through IBM Passport Advantage or Passport
Advantage Express.
There are currently four different types of GPFS licenses:
- The GPFS on x86 Single Server for Integrated Offerings license provides file
  system capabilities for single-node integrated offerings. This kind of GPFS
  license does not cover use in multi-node environments such as the scale-out
  solution discussed here. To use building blocks that come with GPFS on x86
  Single Server for Integrated Offerings licenses in a scale-out solution, GPFS
  on x86 Server licenses or GPFS File Placement Optimizer licenses have to be
  obtained for these building blocks.
- The GPFS Server license permits the licensed node to perform GPFS
  management functions such as cluster configuration manager, quorum node,
  manager node, and network shared disk (NSD) server. In addition, the GPFS
  Server license permits the licensed node to share GPFS data directly through
  any application, service, protocol, or method, such as NFS (Network File
  System), CIFS (Common Internet File System), FTP (File Transfer Protocol),
  or HTTP (Hypertext Transfer Protocol).
- The GPFS File Placement Optimizer license permits the licensed node to
  perform NSD server functions for sharing GPFS data with other nodes that
  have a GPFS File Placement Optimizer or GPFS Server license. This license
  cannot be used to share data with nodes that have a GPFS Client license or
  with non-GPFS nodes.
- The GPFS Client license permits the exchange of data between nodes that
  locally mount the same file system (that is, through shared storage). No other
  export of the data is permitted. The GPFS Client license may not be used for
  nodes to share GPFS data directly through any application, service, protocol,
  or method, such as NFS, CIFS, FTP, or HTTP. For these functions, a GPFS
  Server license would be required. Because the IBM Systems solution for SAP
  HANA does not use a shared storage system, this type of license cannot be
  used for the IBM solution.
Table A-1 on page 161 lists the types of GPFS license and the processor value
units (PVUs) included for each of the models.


Table A-1   GPFS licenses included in the custom models for SAP HANA

MTM        Type of GPFS license included                         PVUs included
7147-H1x   GPFS on x86 Server                                    1400
7147-H2x   GPFS on x86 Server                                    1400
7147-H3x   GPFS on x86 Server                                    1400
7147-H7x   GPFS on x86 Server                                    1400
7147-H8x   GPFS on x86 Server                                    1400
7147-H9x   GPFS on x86 Server                                    1400
7147-HAx   GPFS on x86 Single Server for Integrated Offerings    1400
7147-HBx   GPFS on x86 Single Server for Integrated Offerings    1400
7143-H1x   GPFS on x86 Server                                    1400
7143-H2x   GPFS on x86 Server                                    4000
7143-H3x   GPFS on x86 Server                                    5600
7143-H4x   GPFS on x86 Server                                    1400
7143-H5x   GPFS on x86 Server                                    4000
7143-HAx   GPFS on x86 Single Server for Integrated Offerings    4000
7143-HBx   GPFS on x86 Single Server for Integrated Offerings    4000
7143-HCx   GPFS on x86 Single Server for Integrated Offerings    5600

Licenses for IBM GPFS on x86 Single Server for Integrated Offerings, V3
(referred to as the Integrated license) cannot be ordered independently of the
selected hardware with which they are included. This type of license provides file
system capabilities for single-node integrated offerings. Therefore, the model 7143-HAx
includes 4000 PVUs of GPFS on x86 Single Server for Integrated Offerings, V3
licenses, so that an upgrade to the 7143-HBx model does not require additional
licenses. The PVU rating for the 7143-HAx model to consider when purchasing
other GPFS license types is 1400 PVUs.
Clients with highly available, multi-node clustered scale-out configurations must
purchase the GPFS on x86 Server and GPFS File Placement Optimizer product,
as described in 6.4.4, Hardware and software additions required for scale-out
on page 121.


Abbreviations and acronyms


ABAP      Advanced Business Application Programming
ACID      Atomicity, Consistency, Isolation, Durability
APO       Advanced Planner and Optimizer
BI        Business Intelligence
BICS      BI Consumer Services
BM        bridge module
BW        Business Warehouse
CD        compact disc
CPU       central processing unit
CRC       cyclic redundancy checking
CRM       Customer Relationship Management
CRU       customer replaceable unit
DB        database
DEV       development
DIMM      dual inline memory module
DR        Disaster Recovery
DSOs      DataStore Objects
DXC       Direct Extractor Connection
ECC       ERP Central Component
ECC       error checking and correcting
ERP       enterprise resource planning
ETL       Extract, Transform, and Load
FTSS      Field Technical Sales Support
GB        gigabyte
GBS       Global Business Services
GPFS      General Parallel File System
GTS       Global Technology Services
HA        high availability
HDD       hard disk drive
HPI       Hasso Plattner Institute
I/O       input/output
IBM       International Business Machines
ID        Identifier
IDs       identifiers
IMM       Integrated Management Module
IOPS      I/O operations per second
ISICC     IBM SAP International Competence Center
ITSO      International Technical Support Organization
JDBC      Java Database Connectivity
JRE       Java Runtime Environment
KPIs      key performance indicators
LM        landscape management
LUW       logical unit of work
MB        megabyte
MCA       Machine Check Architecture
MCOD      Multiple Components in One Database
MCOS      Multiple Components on One System
MDX       Multidimensional Expressions
NOS       Notes object services
NSD       network shared disk
NUMA      non-uniform memory access
ODBC      Open Database Connectivity
ODBO      OLE DB for OLAP
OLAP      online analytical processing
OLTP      online transaction processing
OS        operating system
OSS       Online Service System
PAM       Product Availability Matrix
PC        personal computer
PCI       Peripheral Component Interconnect
POC       proof of concept
PSA       Persistent Staging Area
PVU       processor value unit
PVUs      processor value units
QA        quality assurance
QPI       QuickPath Interconnect
RAID      Redundant Array of Independent Disks
RAM       random access memory
RAS       reliability, availability, and serviceability
RDS       Rapid Deployment Solution
RPM       revolutions per minute
RPO       Recovery Point Objective
RTO       Recovery Time Objective
SAN       storage area network
SAPS      SAP Application Benchmark Performance Standard
SAS       Serial Attached SCSI
SATA      Serial ATA
SCM       Supply Chain Management
SCM       software configuration management
SD        Sales and Distribution
SDRAM     synchronous dynamic random access memory
SLD       System Landscape Directory
SLES      SUSE Linux Enterprise Server
SLO       System Landscape Optimization
SMI       scalable memory interconnect
SQL       Structured Query Language
SSD       solid state drive
SSDs      solid state drives
STG       Systems & Technology Group
SUM       Software Update Manager
TB        terabyte
TCO       total cost of ownership
TCP/IP    Transmission Control Protocol/Internet Protocol
TDMS      Test Data Migration Server
TREX      Text Retrieval and Information Extraction
TSM       Tivoli Storage Manager
UEFI      Unified Extensible Firmware Interface

Related publications
The publications listed in this section are considered particularly suitable for a
more detailed discussion of the topics covered in this book.

IBM Redbooks
The following IBM Redbooks publications provide additional information about
the topic in this document. Note that some publications referenced in this list
might be available in softcopy only.
- The Benefits of Running SAP Solutions on IBM eX5 Systems, REDP-4234
- IBM eX5 Portfolio Overview: IBM System x3850 X5, x3950 X5, x3690 X5, and
  BladeCenter HX5, REDP-4650
- Implementing the IBM General Parallel File System (GPFS) in a Cross
  Platform Environment, SG24-7844
You can search for, view, download, or order these documents and other
Redbooks, Redpapers, Web Docs, draft and additional materials, at the following
website:
ibm.com/redbooks

Other publications
This publication is also relevant as a further information source:
Prof. Hasso Plattner, Dr. Alexander Zeier, In-Memory Data Management,
Springer, 2011

Online resources
These websites are also relevant as further information sources:
- IBM Systems Solution for SAP HANA
  http://www.ibm.com/systems/x/solutions/sap/hana/
- IBM Systems and Services for SAP HANA
  http://www.ibm-sap.com/hana
- IBM and SAP: Business Warehouse Accelerator
  http://www.ibm-sap.com/bwa
- SAP In-Memory Computing - SAP Help Portal
  http://help.sap.com/hana

Help from IBM


- IBM Support and downloads
  ibm.com/support
- IBM Global Services
  ibm.com/services


Back cover

In-memory Computing
with SAP HANA on IBM
eX5 Systems
IBM Systems
Solution for SAP
HANA
SAP HANA overview
and use cases
Basic in-memory
computing principles

This IBM Redbooks publication describes in-memory computing appliances from
IBM and SAP that are based on IBM eX5 flagship systems and SAP HANA. We first
discuss the history and basic principles of in-memory computing, then we describe
the SAP HANA offering, its architecture, sizing methodology, licensing policy, and
software components. We also review IBM eX5 hardware offerings from IBM. Then
we describe the architecture and components of IBM Systems solution for SAP
HANA and its delivery, operational, and support aspects. Finally, we discuss the
advantages of using IBM infrastructure platforms for running the SAP HANA
solution.

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support
Organization. Experts from IBM, Customers and Partners from around the world
create timely technical information based on realistic scenarios. Specific
recommendations are provided to help you implement IT solutions more
effectively in your environment.

For more information:


ibm.com/redbooks
SG24-8086-00

ISBN 073843762X
