Oracle® In-Memory Database Cache: Release 11.2.1
August 2011
Oracle In-Memory Database Cache Introduction, Release 11.2.1

E14261-09

Copyright © 2011, Oracle and/or its affiliates. All rights reserved.

This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited.

The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing.

If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, the following notice is applicable:

U.S. GOVERNMENT RIGHTS Programs, software, databases, and related documentation and technical data delivered to U.S. Government customers are "commercial computer software" or "commercial technical data" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, the use, duplication, disclosure, modification, and adaptation shall be subject to the restrictions and license terms set forth in the applicable Government contract, and, to the extent applicable by the terms of the Government contract, the additional rights set forth in FAR 52.227-19, Commercial Computer Software License (December 2007). Oracle America, Inc., 500 Oracle Parkway, Redwood City, CA 94065.

This software or hardware is developed for general use in a variety of information management applications. It is not developed or intended for use in any inherently dangerous applications, including applications that may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this software or hardware in dangerous applications.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group.

This software or hardware and documentation may provide access to or information on content, products, and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all warranties of any kind with respect to third-party content, products, and services. Oracle Corporation and its affiliates will not be responsible for any loss, costs, or damages incurred due to your access to or use of third-party content, products, or services.
Contents
Preface ................................................................................................................................................................ vii
Audience ............................................................................................................................. vii
Related documents .............................................................................................................. vii
Conventions ......................................................................................................................... vii
Documentation Accessibility .............................................................................................. viii
Uses for IMDB Cache
TimesTen application scenario
    Real-time quote service application
IMDB Cache application scenarios
    Call center application
    Caller usage metering application
4 Concurrent Operations
Transaction isolation ......................................................................................................... 4-1
    Read committed isolation ............................................................................................. 4-1
    Serializable isolation ..................................................................................................... 4-2
Locks .................................................................................................................................. 4-3
    Database-level locking .................................................................................................. 4-3
    Table-level locking ........................................................................................................ 4-3
    Row-level locking ......................................................................................................... 4-4
For more information ........................................................................................................ 4-4
5 Query Optimization
Optimization time and memory usage ............................................................................. 5-2
Statistics ............................................................................................................................. 5-2
Optimizer hints .................................................................................................................. 5-2
Indexes ............................................................................................................................... 5-3
Scan methods ..................................................................................................................... 5-3
Join methods ...................................................................................................................... 5-4
    Nested loop join ............................................................................................................ 5-5
    Merge join ..................................................................................................................... 5-5
Optimizer plan ................................................................................................................... 5-7
For more information ........................................................................................................ 5-8
    Writing the log buffer to disk ....................................................................................... 6-1
    When are transaction log files deleted? ....................................................................... 6-2
    TimesTen commits ........................................................................................................ 6-2
Checkpointing .................................................................................................................... 6-2
    Nonblocking checkpoints ............................................................................................. 6-2
    Blocking checkpoints .................................................................................................... 6-3
    Recovery from log and checkpoint files ...................................................................... 6-3
Replication ......................................................................................................................... 6-3
    Active standby pair ....................................................................................................... 6-4
    Other replication configurations .................................................................................. 6-5
        Unidirectional replication ........................................................................................ 6-5
        Bidirectional replication .......................................................................................... 6-6
    Asynchronous and return service replication .............................................................. 6-7
    Replication failover and recovery ................................................................................ 6-8
For more information ........................................................................................................ 6-8
7 Event Notification
Transaction Log API .......................................................................................................... 7-1
    How XLA works ........................................................................................................... 7-1
    Log update records ....................................................................................................... 7-2
Materialized views and XLA ............................................................................................ 7-2
SNMP traps ........................................................................................................................ 7-4
For more information ........................................................................................................ 7-4
8 IMDB Cache
Cache grid .......................................................................................................................... 8-1
Cache groups ..................................................................................................................... 8-2
Dynamic cache groups and explicitly loaded cache groups ............................................ 8-3
Global and local cache groups .......................................................................................... 8-4
Transmitting data between the IMDB Cache and Oracle Database ................................ 8-4
    Updating a cache group from Oracle tables ................................................................ 8-4
    Updating Oracle tables from a cache group ................................................................ 8-5
Aging feature ..................................................................................................................... 8-5
Passthrough feature ........................................................................................................... 8-5
Replicating cache groups .................................................................................................. 8-6
For more information ........................................................................................................ 8-6
    Offline upgrades ........................................................................................................... 9-3
    Online upgrades ............................................................................................................ 9-3
For more information ........................................................................................................ 9-4
Index
Preface
This guide provides an introduction to the Oracle In-Memory Database Cache.
Audience
This document is intended for readers with a basic understanding of database systems.
Related documents
TimesTen documentation is available on the product distribution media and on the Oracle Technology Network:
http://www.oracle.com/technetwork/database/timesten/documentation
Conventions
TimesTen supports multiple platforms. Unless otherwise indicated, the information in this guide applies to all supported platforms. The term Windows refers to Windows 2000, Windows XP and Windows Server 2003. The term UNIX refers to Solaris, Linux, HP-UX and AIX.
Note:
In TimesTen documentation, the terms "data store" and "database" are equivalent. Both terms refer to the TimesTen database unless otherwise noted.
Convention          Meaning

italic monospace    Italic monospace type indicates a variable in a code example that you
                    must replace. For example: Driver=install_dir/lib/libtten.sl
                    Replace install_dir with the path of your TimesTen installation
                    directory.

[ ]                 Square brackets indicate that an item in a command line is optional.

{ }                 Curly braces indicate that you must choose one of the items separated
                    by a vertical bar ( | ) in a command line.

|                   A vertical bar (or pipe) separates alternative arguments.

. . .               An ellipsis (. . .) after an argument indicates that you may use more
                    than one argument on a single command line.

%                   The percent sign indicates the UNIX shell prompt.

#                   The number (or pound) sign indicates the UNIX root prompt.
TimesTen documentation uses these variables to identify path, file and user names:
Convention          Meaning

install_dir         The path that represents the directory where the current release of
                    TimesTen is installed.

TTinstance          The instance name for your specific installation of TimesTen. Each
                    installation of TimesTen must be identified at install time with a
                    unique alphanumeric instance name. This name appears in the install
                    path.

bits or bb          Two digits, either 32 or 64, that represent either the 32-bit or 64-bit
                    operating system.

release or rr       Three numbers that represent the first three numbers of the TimesTen
                    release number, with or without a dot. For example, 1121 or 11.2.1
                    represents TimesTen Release 11.2.1.

jdk_version         Two digits that represent the version number of the major JDK
                    release. Specifically, 14 represents JDK 1.4; 5 represents JDK 5.

DSN                 The data source name.
Documentation Accessibility
For information about Oracle's commitment to accessibility, visit the Oracle Accessibility Program website at http://www.oracle.com/pls/topic/lookup?ctx=acc&id=docacc.

Access to Oracle Support

Oracle customers have access to electronic support through My Oracle Support. For information, visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=info or visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=trs if you are hearing impaired.
What's New
This section summarizes the new features of Oracle TimesTen In-Memory Database release 11.2.1 that are described in this guide. It provides links to more information.
PL/SQL support
TimesTen supports PL/SQL. See "SQL and PL/SQL functionality" on page 1-3.
Cache grid
A cache grid consists of one or more grid members, each backed by an Oracle In-Memory Database Cache (IMDB Cache). Grid members cache tables from a central Oracle database or Oracle Real Application Clusters (Oracle RAC). See "Cache grid" on page 8-1.
Oracle Clusterware
You can use Oracle Clusterware to manage recovery of a TimesTen active standby pair. See "Replication failover and recovery" on page 6-8.
Bitmap indexes
TimesTen supports bitmap indexes. See "Indexes" on page 5-3.
[Figure 1-1: Disk-based RDBMS versus TimesTen. In the disk-based RDBMS, applications issue SQL over an IPC connection; the server locates the pointer to the page in the buffer pool using hashing and linear search, copies the record to a private buffer, and then copies it to the application buffer, with data pages staged between disk and the buffer pool. In TimesTen, the database is preloaded from the checkpoint file on disk into memory; applications issue SQL, resolve a memory address directly, and copy the record to the application buffer.]
In a conventional disk-based RDBMS, client applications communicate with a database server process over some type of IPC connection, which adds performance overhead to all SQL operations. An application can link TimesTen directly into its address space to eliminate the IPC overhead and streamline query processing. This is accomplished through a direct connection to TimesTen. Traditional client/server access is also supported for functions such as reporting, or when a large number of application-tier platforms must share access to a common in-memory database. From
an application's perspective, the TimesTen API is identical whether it is a direct connection or a client/server connection.
TimesTen API support
Access Control
Database connectivity
Durability
Query optimization
Concurrency
Automatic data aging
Globalization support
Administration and utilities
Replication
IMDB Cache
ODP.NET support
Oracle Data Provider for .NET (ODP.NET) is an implementation of the Microsoft ADO.NET interface. ODP.NET support for TimesTen and IMDB Cache provides fast and efficient ADO.NET data access from .NET client applications to TimesTen databases. For more information, see Oracle Data Provider for .NET Oracle TimesTen In-Memory Database Support User's Guide.
TTClasses
TimesTen C++ Interface Classes (TTClasses) is easier to use than ODBC while maintaining fast performance. This C++ class library provides wrappers around the most common ODBC functionality. The TTClasses library is also intended to promote best practices when writing application software. For more information, see Oracle TimesTen In-Memory Database TTClasses Guide.
The TimesTen implementation of the JTA interfaces is intended to enable Java applications, application servers, and transaction managers to use TimesTen resource managers in DTP environments. For more information, see Oracle TimesTen In-Memory Database C Developer's Guide and Oracle TimesTen In-Memory Database Java Developer's Guide.
Access Control
TimesTen and IMDB Cache are installed with access control to allow only users with specific privileges to access particular TimesTen features. TimesTen Access Control uses standard SQL operations to establish user accounts with specific privileges. TimesTen offers object-level access control as well as database-level access control. For more information, see Oracle TimesTen In-Memory Database Operations Guide.
Database connectivity
TimesTen and IMDB Cache support direct driver connections for higher performance, as well as connections through a driver manager. TimesTen also supports client/server connections. These connection options allow users to choose the best tradeoff between performance and functionality for their applications. Direct driver connections are fastest. Client/server connections may provide more flexibility. Driver manager connections can provide support for ODBC applications written for a different ODBC version or for multiple RDBMS products with ODBC interfaces. See "TimesTen connection options" on page 3-4.
Durability
TimesTen and IMDB Cache achieve durability through a combination of transaction logging and periodic refreshes of a disk-resident version of the database. Log records are written to disk either asynchronously or synchronously with the completion of the transaction; the application controls this behavior at the transaction level. For systems where maximum throughput is paramount, such as non-monetary transactions within network systems, asynchronous logging allows extremely high throughput with minimal exposure to data loss. In cases where data integrity must be preserved, such as securities trading, TimesTen and IMDB Cache ensure complete durability, with no loss of data. TimesTen uses the transaction log in the following situations:
Recover transactions if the application or database fails
Undo transactions that are rolled back
Replicate changes to other TimesTen databases
Replicate TimesTen changes to Oracle tables
Enable applications to detect changes to tables (using the XLA API)
TimesTen and IMDB Cache maintain the disk-resident version of the database with a checkpoint operation that takes place in the background and has very little impact on database applications. This operation is called a "fuzzy" checkpoint and is performed automatically. TimesTen and IMDB Cache also have a blocking checkpoint that does not require transaction log files for recovery. Blocking checkpoints must be initiated by the application. TimesTen and IMDB Cache maintain two checkpoint files in case a
failure occurs mid-checkpoint. Checkpoint files should reside on disks separate from the transaction logs to minimize the impact of checkpointing on application activity. See the following sections for more information:
"Transaction logging" on page 6-1 "When are transaction log files deleted?" on page 6-2 "Checkpointing" on page 6-2
Query optimization
TimesTen and IMDB Cache have a cost-based query optimizer that chooses the best query plan based on factors such as the presence of indexes, the cardinality of tables and the presence of ORDER BY clauses in the query. Optimizer cost sensitivity is somewhat higher in TimesTen and IMDB Cache than in disk-based systems because the cost structure of a main-memory system differs from that of disk-based systems in which disk access is a dominant cost factor. Because disk access is not a factor in TimesTen and IMDB Cache, the optimization cost model includes factors not usually considered by optimizers for disk-based systems, such as the cost of evaluating predicates. TimesTen and IMDB Cache provide range, hash and bitmap indexes and support two types of join methods (nested-loop and merge-join). The optimizer can create temporary indexes as needed. The optimizer also accepts hints that give applications the flexibility to make tradeoffs between such factors as temporary space usage and performance. See "Query Optimization" on page 5-1 for more information about the query optimizer and indexing techniques.
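The two join methods can be sketched in a few lines of Python (table contents invented for illustration; the optimizer's actual implementation is in the database engine). A nested-loop join rescans the inner table for every outer row; a merge join advances two cursors over inputs sorted on the join key.

```python
# Sketch of the two join methods the optimizer chooses between
# (table and column values are invented for illustration).

quotes   = [("IBM", 135.03), ("ORCL", 16.23), ("JNPR", 15.36)]
earnings = [("IBM", 4.35), ("JNPR", 0.36), ("ORCL", 0.43)]

def nested_loop_join(outer, inner):
    # For each outer row, scan the inner table for matching keys.
    return [(k1, v1, v2) for (k1, v1) in outer
                         for (k2, v2) in inner if k1 == k2]

def merge_join(left, right):
    # Both inputs must be sorted on the join key (unique keys assumed
    # here); advance two cursors in lockstep.
    left, right = sorted(left), sorted(right)
    i = j = 0
    out = []
    while i < len(left) and j < len(right):
        if left[i][0] == right[j][0]:
            out.append((left[i][0], left[i][1], right[j][1]))
            i += 1; j += 1
        elif left[i][0] < right[j][0]:
            i += 1
        else:
            j += 1
    return out

# Both methods produce the same rows; they differ only in cost.
assert sorted(nested_loop_join(quotes, earnings)) == merge_join(quotes, earnings)
```

The nested-loop variant needs no sorted inputs but does O(n*m) comparisons; the merge variant does a single pass but requires sorted inputs, which is why the optimizer's choice depends on available indexes and cardinalities.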
Concurrency
TimesTen and IMDB Cache provide full support for shared databases. Options are available so users can choose the optimum balance between response time, throughput and transaction semantics for an application. Read-committed isolation provides nonblocking operations and is the default isolation level. For databases with extremely strict transaction semantics, serializable isolation is available. These isolation levels conform to the ODBC standard and are implemented with optimal performance in mind. As defined by the ODBC standard, a default isolation level can be set for a TimesTen or IMDB Cache database, which can be dynamically modified for each connection at runtime. For more information about managing concurrent operations in TimesTen and IMDB Cache, see "Concurrent Operations" on page 4-1.
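The behavioral difference between the two isolation levels can be sketched as follows. This is a deliberately simplified Python stand-in, not TimesTen's lock manager: a read-committed reader always sees the last committed value and never blocks on an uncommitted write, while a serializable reader additionally registers a shared lock that is held until its transaction commits.

```python
# Simplified sketch (not TimesTen's actual lock manager) of
# read-committed versus serializable reads on a single row.

class Row:
    def __init__(self, value):
        self.committed = value
        self.pending = None          # uncommitted write, if any
        self.read_locked_by = set()  # shared locks held until commit

    def write(self, txn, value):
        self.pending = (txn, value)  # visible only after commit

    def commit_write(self):
        if self.pending:
            self.committed = self.pending[1]
            self.pending = None

    def read(self, txn, isolation):
        if isolation == "serializable":
            # Serializable: take a shared lock a writer must wait for.
            self.read_locked_by.add(txn)
        return self.committed        # never a dirty read at either level

row = Row(100)
row.write("writer", 200)                       # uncommitted update
print(row.read("reader", "read-committed"))    # -> 100, no blocking
row.commit_write()
print(row.read("reader", "read-committed"))    # -> 200
```

The sketch shows why read-committed is the nonblocking default: readers proceed against the committed value regardless of in-flight writers, whereas serializable trades concurrency for stricter semantics by holding read locks to commit time.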
Time-based data aging based on timestamp values
Usage-based data aging based on the LRU algorithm
For more information, see "Implementing aging in your tables" in Oracle TimesTen In-Memory Database Operations Guide and "Implementing aging on a cache group" in Oracle In-Memory Database Cache User's Guide.
Globalization support
TimesTen and IMDB Cache provide globalization support for storing, retrieving, and processing data in native languages. Over 50 different national, multinational, and vendor-specific character sets including the most popular single-byte and multibyte encodings, plus Unicode, are supported as the database storage character set. The connection character set can be defined to enable an application running in a different encoding to communicate to the TimesTen or IMDB Cache database; character set conversion between the application and the database occurs automatically and transparently. TimesTen and IMDB Cache offer linguistic sorting capabilities that handle the complex sorting requirements of different languages and cultures. More than 80 linguistic sorts are provided. They can be extended to enable the application to perform case-insensitive and accent-insensitive sorting and searches. For more information, see "Globalization Support" in Oracle TimesTen In-Memory Database Operations Guide.
Replication
TimesTen and IMDB Cache replication enable real-time data replication between servers for high availability and load sharing. Data replication configurations can be active-standby or active-active, using asynchronous or synchronous transmission, with conflict detection and resolution and automatic resynchronization after a failed server is restored. See "Replication" on page 6-3.
IMDB Cache
The Oracle In-Memory Database Cache creates a real-time, updatable cache for Oracle data. It offloads computing cycles from Oracle databases and enables responsive and scalable real-time applications. IMDB Cache loads a subset of Oracle tables into a cache database. It can be configured to propagate updates in both directions and to automate passthrough of SQL requests for uncached data. It automatically resynchronizes data after failures. See "IMDB Cache" on page 8-1.
This chapter describes how TimesTen and IMDB Cache can be used to enable applications that require real-time access to data. It includes the following sections:
Uses for TimesTen
Uses for IMDB Cache
TimesTen application scenario
IMDB Cache application scenarios
The primary database for real-time applications. All data needed by the applications resides in the TimesTen database.

A data utility for accelerating performance-critical points in an architecture. For example, providing persistence and transactional capabilities to a message queuing system might be achieved by using TimesTen as the repository for the messages.

A data integration point for multiple data sources on top of which new applications can be built. For example, an organization may have large amounts of information stored in several data sources, but only subsets of this information may be relevant to running its daily business. A suitable architecture would be to pull the relevant information from the different data sources into one TimesTen operational database to provide a central repository for the data of immediate interest to the applications.
A real-time data manager for specific tasks in an overall workflow in collaboration with a disk-based RDBMS like the Oracle Database. For example, a phone billing application may capture and store recent call records in the IMDB Cache database while storing information about customers, their billing addresses and credit information in an Oracle database. It can also age and keep archives of all call records in the Oracle database. Thus the information that requires real-time access is stored in the IMDB Cache database while the information needed for longer-term analysis, auditing, and archival is stored in the Oracle database.
A read-only cache. Oracle data can be cached in an IMDB Cache read-only cache group. Read-only cache groups are automatically refreshed when Oracle tables are updated. A read-only cache group provides fast access to reference data such as look-up tables and subscriber profiles.

An updatable cache. Oracle data can be cached in IMDB Cache updatable cache groups. Transactions on the cache groups can be committed synchronously or asynchronously to the associated Oracle tables.

A distributed cache. Oracle data can be cached in multiple installations of IMDB Cache running on different machines to provide scalability. You can configure a dynamic distributed cache in which records are loaded automatically and aged automatically.
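The dynamic-loading behavior mentioned above can be sketched as a read-through cache. This is a conceptual Python stand-in with invented names, not the IMDB Cache API: a miss fetches the record from the backing (Oracle) store into the local cache, so repeated access is served from memory.

```python
# Sketch (invented names, not the IMDB Cache API) of dynamic loading:
# a cache miss pulls the record from the backing store into the cache.

class BackingStore:                 # stands in for the Oracle database
    def __init__(self, rows):
        self.rows = dict(rows)

    def fetch(self, key):
        return self.rows.get(key)

class DynamicCache:
    def __init__(self, store):
        self.store = store
        self.local = {}             # the in-memory cache database
        self.loads = 0              # round trips to the backing store

    def get(self, key):
        if key not in self.local:   # cache miss: load on demand
            self.loads += 1
            self.local[key] = self.store.fetch(key)
        return self.local[key]      # hit: served from memory

oracle = BackingStore({"cust42": "profile-42", "cust7": "profile-7"})
cache = DynamicCache(oracle)
cache.get("cust42")   # miss: loaded from the backing store
cache.get("cust42")   # hit: no round trip
print(cache.loads)    # -> 1
```

A real deployment adds what the sketch omits: write propagation back to Oracle, aging of inactive records, and passthrough of requests for uncached data.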
[Figure 2-1: Real-time quote service application. A data feed arrives over a message bus; NewsReader processes (primary and backup) update TimesTen databases (primary and backup) that are read by the trading application.]
As shown in Figure 2-2, the NewsReader updates stock price data in a Quotes table in the TimesTen database. Less dynamic earnings data is updated in an Earnings table. The Stock columns in the Quotes and Earnings tables are linked through a foreign key relationship. The purpose of the trading application is to track only those stocks with PE ratios below 50, then use internal logic to analyze the current stock price and trading volume to determine whether to place a trade using another part of the trading facility. For maximum performance, the trading application implements an event facility that uses the TimesTen Transaction Log API (XLA) to monitor the TimesTen transaction log for updates to the stocks of interest. To provide the fastest possible access to such updates, the company creates a materialized view, named PE_Alerts, with a WHERE clause that calculates the PE ratio from the Price column in the Quotes table and the Earns column in the Earnings table. By using the XLA event facility to monitor the transaction log for price updates in the materialized view, the trading application receives alerts only for those stocks that meet its trading criteria.
Figure 2-2 Using materialized views and XLA

Quotes (detail table):

    Stock   Price    Vol
    IBM     135.03   10
    ORCL    16.23    15
    SUNW    15.21    4
    MSFT    61.06    12
    JNPR    15.36    1

Earnings (detail table):

    Stock   Earns    Est
    IBM     4.35     4.25
    ORCL    0.43     0.55
    SUNW    -0.17    0.25
    MSFT    1.15     0.95
    JNPR    0.36     0.51

CREATE MATERIALIZED VIEW PE_Alerts AS
  SELECT Q.Stock, Q.Price, Q.Vol, E.Earns
  FROM Quotes Q, Earnings E
  WHERE Q.Stock = E.Stock AND Q.Price / E.Earns < 50;

PE_Alerts (materialized view):

    Stock   Price    Vol
    IBM     135.03   10
    ORCL    16.23    15
    JNPR    15.36    1

XLA reports the IBM, ORCL, and JNPR updates in the materialized view to the trading application.
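The filtering pattern in this scenario can be sketched as follows. This is a conceptual Python stand-in with invented names (a real application would use the XLA C or Java API): update records are replayed against the materialized view's predicate, and the application is notified only for rows that satisfy it.

```python
# Sketch (invented names, not the XLA API) of event filtering through
# a materialized view's predicate, as in the PE_Alerts scenario.

earnings = {"IBM": 4.35, "ORCL": 0.43, "SUNW": -0.17,
            "MSFT": 1.15, "JNPR": 0.36}

def pe_alert(update):
    # WHERE Q.Price / E.Earns < 50, as in the PE_Alerts view; rows
    # without positive earnings are skipped here, matching the view
    # contents shown in the figure.
    stock, price = update
    earns = earnings[stock]
    return earns > 0 and price / earns < 50

def monitor(log_updates, callback):
    # XLA-like loop: scan update records, fire only the matching ones.
    for update in log_updates:
        if pe_alert(update):
            callback(update)

alerts = []
monitor([("IBM", 135.03), ("MSFT", 61.06), ("JNPR", 15.36)],
        alerts.append)
print(alerts)   # MSFT is filtered out: 61.06 / 1.15 is about 53
```

Because the predicate is evaluated once, as the materialized view is maintained, the application's event loop sees only qualifying updates instead of rescanning the detail tables on every change.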
Call center application: Uses IMDB Cache as an application-tier cache to hold customer profiles maintained in an Oracle database.

Caller usage metering application: Uses IMDB Cache to store metering data on the activities of cellular callers. The metering data is collected from multiple TimesTen nodes distributed throughout a service area and archived in a central Oracle database for use by a central billing application.
[Figure 2-3: Call center application. Application server nodes each hold a cache group of customer profiles in an IMDB Cache database; a central server runs the Oracle database, which is accessed by the billing application.]
To manage a large volume of concurrent customer sessions, the call center has deployed several application server nodes and periodically deploys additional nodes as its customer base increases. Each node contains an IMDB Cache database. When a customer contacts the call center, the user is automatically routed to an available application server node and the appropriate customer profile is dynamically loaded from the Oracle database into the cache database. When a customer completes a call, changes to the customer profile are flushed from the IMDB Cache database to the Oracle database. Least recently used (LRU) aging is configured to remove inactive customer profiles from the IMDB Cache database. If the same customer contacts the call center again shortly after the first call and is connected to a different application server node, the customer profile is dynamically loaded to the new node from either the Oracle database or from the first IMDB Cache node, depending on where the most recent copy resides. The IMDB Cache determines where the most recent copy resides and uses peer-to-peer communication to exchange
information with other IMDB Cache databases in its grid. It also manages concurrent updates to the same data within its grid. All of the customer data is stored in the Oracle database. The Oracle database is much larger than the combined IMDB Cache databases and is best accessed by applications that do not require the real-time performance of IMDB Cache but do require access to large amounts of data. Such applications may include a billing application and a data mining application. As the customer base increases and demands to serve more customers concurrently increases, the call center may decide to deploy additional application server nodes. New IMDB Cache members can join the IMDB Cache grid with no disruption to ongoing requests in the grid. Similarly, failures or removal of individual nodes do not disrupt operations in the rest of the grid.
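The LRU aging behavior described for the call center can be illustrated with a short Python sketch. The ProfileCache class, its capacity limit and the fetch_from_oracle callback are hypothetical simplifications, not the TimesTen implementation, which ages data internally based on configured thresholds.

```python
from collections import OrderedDict

class ProfileCache:
    """Simplified sketch of LRU aging for cached customer profiles."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.profiles = OrderedDict()  # cust_id -> profile, oldest first

    def load(self, cust_id, fetch_from_oracle):
        """Return the cached profile, dynamically loading it on a miss."""
        if cust_id in self.profiles:
            self.profiles.move_to_end(cust_id)  # mark most recently used
        else:
            self.profiles[cust_id] = fetch_from_oracle(cust_id)
            if len(self.profiles) > self.capacity:
                self.profiles.popitem(last=False)  # age out the LRU profile
        return self.profiles[cust_id]

cache = ProfileCache(capacity=2)
cache.load(1, lambda i: {"id": i})
cache.load(2, lambda i: {"id": i})
cache.load(1, lambda i: {"id": i})   # touch profile 1
cache.load(3, lambda i: {"id": i})   # ages out profile 2, the LRU entry
print(sorted(cache.profiles))        # [1, 3]
```

Touching profile 1 before loading profile 3 makes profile 2 the least recently used entry, so it is the one removed when the capacity is exceeded.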
[Figure: service areas 650, 408 and 415, each served by a node running IMDB Cache replicated to a standby database, with call records archived to a central Oracle database]
A usage metering application and IMDB Cache are deployed on each node to handle the real-time processing for calls beginning and terminating at different geographical locations delineated by area code. For each call, the local node stores a separate record for the beginning and the termination of a call. This is because the beginning of a cellular call might be detected by one node and its termination by another node. Transactions that impact revenue (inserts and updates) must be durable. To ensure data availability, each IMDB Cache database is replicated to a standby database. Each time a customer makes, receives or terminates a cellular call, the application inserts a record of the activity into the Calls table in the IMDB Cache database. Each call record includes a timestamp, unique identifier, originating host's IP address, and information on the services used. An IMDB Cache process periodically archives the rows in the Calls table to the Oracle database. After the call records have been successfully archived in the Oracle database, they are deleted from the IMDB Cache database by a time-based aging process.
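The archive-then-age life cycle of a call record can be sketched as follows. The record layout, the archived_ids set and the lifetime threshold are illustrative assumptions; in IMDB Cache, archiving is performed by a cache agent process and deletion by a time-based aging policy.

```python
import time

def age_out_archived_calls(calls, archived_ids, lifetime_seconds, now=None):
    """Sketch of time-based aging: drop call records that have been
    archived to the Oracle database and are older than the lifetime."""
    now = time.time() if now is None else now
    return [c for c in calls
            if c["id"] not in archived_ids
            or now - c["timestamp"] <= lifetime_seconds]

calls = [
    {"id": 1, "timestamp": 100.0},   # archived and old -> aged out
    {"id": 2, "timestamp": 990.0},   # archived but recent -> kept
    {"id": 3, "timestamp": 100.0},   # old but not yet archived -> kept
]
remaining = age_out_archived_calls(calls, archived_ids={1, 2},
                                   lifetime_seconds=60, now=1000.0)
print([c["id"] for c in remaining])  # [2, 3]
```

Note that a record is never aged out before it has been archived, matching the rule that revenue-impacting data must reach the Oracle database first.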
3
Architectural overview
Shared libraries
Memory-resident data structures
Database processes
Administrative programs
Checkpoint and transaction log files
Cached data
Replication
TimesTen connection options
Architectural overview
This section describes the architecture of the Oracle In-Memory Database Cache. The architecture of the Oracle TimesTen In-Memory Database is the same as the architecture of the IMDB Cache except that the Oracle database and cache agent are not included. Figure 3-1 shows the architecture of the IMDB Cache.
[Figure 3-1: the application tier contains direct-linked applications using the shared libraries, replication agents, the cache agent and administrative programs; the cache agent connects to the Oracle database]
The architectural components include shared libraries, memory-resident data structures, database processes, and administrative programs. Memory-resident data structures include tables, indexes, system tables, locks, cursors, compiled commands and temporary indexes. Applications can connect to an IMDB Cache or TimesTen database through a direct link or through a client/server connection. Replication agents receive information from master databases and send information to subscriber databases. The cache agent performs all asynchronous data transfers between cache groups in the IMDB Cache and the Oracle database. These components are described in subsequent sections.
Shared libraries
The routines that implement the TimesTen functionality are embodied in a set of shared libraries that developers link with their applications and execute as a part of the application's process. This shared library approach is in contrast to a more conventional RDBMS, which is implemented as a collection of executable programs to which applications connect, typically over a client/server network. Applications can also use a client/server connection to access an IMDB Cache or TimesTen database, though in most cases the best performance will be realized with a directly linked application. See "TimesTen connection options" on page 3-4.
Database processes
TimesTen assigns a separate process to each database to perform operations including the following tasks:
Loading the database into memory from a checkpoint file on disk
Recovering the database if it needs to be recovered after loading it into memory
Performing periodic checkpoints in the background against the active database
Detecting and handling deadlocks
Performing data aging
Writing log records to files
Administrative programs
Utility programs are explicitly invoked by users, scripts, or applications to perform services such as interactive SQL, bulk copy, backup and restore, database migration and system monitoring.
Cached data
When the IMDB Cache is used to cache portions of an Oracle database in a TimesTen in-memory database, a cache group is created to hold the cached data. A cache group is a collection of one or more tables arranged in a logical hierarchy by using primary key and foreign key relationships. Each table in a cache group is related to an Oracle database table. A cache table can contain all rows and columns or a subset of the rows and columns in the related Oracle table. You can create or modify cache groups by using SQL statements or by using Oracle SQL Developer. Cache groups support these features:
Applications can read from and write to cache groups.
Cache groups can be refreshed from Oracle data automatically or manually.
Updates to cache groups can be propagated to Oracle tables automatically or manually.
Changes to either Oracle tables or the cache group can be tracked automatically.
When rows in a cache group are updated by applications, the corresponding rows in Oracle tables can be updated synchronously as part of the same transaction or asynchronously immediately afterward depending on the type of cache group. The asynchronous configuration produces significantly higher throughput and much faster application response times. Changes that originate in the Oracle tables are refreshed into the cache by the cache agent. See "IMDB Cache" on page 8-1 for more information.
Replication
TimesTen replication enables you to achieve near-continuous availability or workload distribution by sending updates between two or more servers. A master server is configured to send updates and a subscriber server is configured to receive them. A server can be both a master and a subscriber in a bidirectional replication scheme. Time-based conflict detection and resolution are used to establish precedence if the same data is updated in multiple locations at the same time. When replication is configured, a replication agent is started for each database. If multiple databases on the same server are configured for replication, each database has a separate replication agent. Each replication agent can send updates to one or more subscribers and receive updates from one or more masters. Each of these connections is implemented as a separate thread of execution inside the replication agent process. Replication agents communicate through TCP/IP stream sockets. For maximum performance, the replication agent detects updates to a database by monitoring the existing transaction log. It sends updates to the subscribers in batches, if possible. Only committed transactions are replicated. On the subscriber node, the replication agent updates the database through an efficient low-level interface, avoiding the overhead of the SQL layer. See "Replication" on page 6-3 for more information.
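The master-side behavior described above, reading the transaction log, filtering to committed work, and shipping batches past the subscriber's applied point, can be sketched as follows. The log record layout and LSN bookkeeping are hypothetical simplifications of what a replication agent actually maintains.

```python
def replicate(master_log, last_applied_lsn, batch_size=100):
    """Sketch of log-based replication: ship only committed log
    records newer than the subscriber's last applied position,
    grouped into batches."""
    committed = [r for r in master_log
                 if r["lsn"] > last_applied_lsn and r["committed"]]
    return [committed[i:i + batch_size]
            for i in range(0, len(committed), batch_size)]

# Records 1..6; only even LSNs belong to committed transactions here.
log = [{"lsn": n, "committed": n % 2 == 0} for n in range(1, 7)]
batches = replicate(log, last_applied_lsn=2, batch_size=2)
print([[r["lsn"] for r in b] for b in batches])  # [[4, 6]]
```

Because the master keeps log records until they are applied, a subscriber that was down simply presents an older last-applied position when it reconnects.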
Direct driver connection
JDBC applications access the ODBC direct driver through the JDBC library. OCI applications access the ODBC direct driver through the OCI library. An application can create a direct driver connection when it runs on the same machine as the IMDB Cache or TimesTen database. In a direct driver connection, the ODBC driver directly loads the IMDB Cache or TimesTen database into a shared memory segment. The application then uses the direct driver to access the memory image of the database. Because no inter-process communication (IPC) of any kind is required, a direct-driver connection provides extremely fast performance and is the preferred way for applications to access the IMDB Cache or TimesTen database.
Client/server connection
The TimesTen client driver and server daemon processes accommodate connections from remote client machines to databases across a network. The server daemon spawns a separate server child process for each client connection to the database. Applications on a client machine issue ODBC, JDBC or OCI calls. These calls access a local ODBC client driver that communicates with a server child process on the TimesTen server machine. The server child process, in turn, issues native ODBC requests to the ODBC direct driver to access the IMDB Cache or TimesTen database. If the client and server reside on separate nodes in a network, they communicate by using sockets and TCP/IP. If both the client and server reside on the same machine, they can communicate more efficiently by using a shared memory segment as IPC. Traditional database systems are typically structured in this client/server model, even when the application and the database are on the same system. Client/server communication adds extra cost to all database operations.
4
Concurrent Operations
A database can be accessed in shared mode. When a shared database is accessed by multiple transactions, there must be a way to coordinate concurrent changes to data with reads of the same data in the database. TimesTen and IMDB Cache use transaction isolation and locks to coordinate concurrent access to data. This chapter includes the following topics:
Transaction isolation
Locks
Transaction isolation
Transaction isolation provides an application with the appearance that the system performs one transaction at a time, even though there are concurrent connections to the database. Applications can use the Isolation general connection attribute to set the isolation level for a connection. Concurrent connections can use different isolation levels. Isolation level and concurrency are inversely related. A lower isolation level enables greater concurrency, but with greater risk of data inconsistencies. A higher isolation level provides a higher degree of data consistency, but at the expense of concurrency. TimesTen has two isolation levels:
Read committed
Serializable
Read committed isolation
Read committed isolation provides increased concurrency because readers do not block writers and writers do not block readers. This isolation level is useful for applications that have long-running scans that may conflict with other operations needing access to a scanned row. However, the disadvantage when using this isolation level is that non-repeatable reads are possible within a transaction or even a single statement (for example, the inner loop of a nested join). When using this isolation level, DDL statements that operate on a table can block readers and writers of that table. For example, an application cannot read a row from a table if another application has an uncommitted DROP TABLE, CREATE INDEX, or ALTER TABLE operation on that table. In addition, blocking checkpoints will block readers and writers. Read committed isolation does acquire read locks as needed during materialized view maintenance to ensure that views are consistent with their detail tables. These locks are not held until the end of the transaction but are instead released when maintenance has been completed.
Serializable isolation
When an application uses serializable isolation, locks are acquired within a transaction and are held until the transaction commits or rolls back for both reads and writes. This level of isolation provides for repeatable reads and increased isolation within a transaction at the expense of decreased concurrency. Transactions use serializable isolation when database-level locking is chosen. Figure 4-2 shows that locks are held until the transaction is committed.
Figure 4-2 Serializable isolation
[Figure 4-2: the application reads and fetches rows, and the read locks are held until the transaction commits]
Serializable isolation level is useful for transactions that require the strongest level of isolation. Concurrent applications that must modify the data that is read by a transaction may encounter lock timeouts because read locks are held until the transaction commits.
Locks
Locks are used to serialize access to resources to prevent one user from changing an element that is being read or changed by another user. TimesTen and IMDB Cache automatically perform locking if a database is accessed in shared mode. Serializable transactions acquire share locks on the items they read and exclusive locks on the items they write. These locks are held until the transaction commits or rolls back. Read-committed transactions acquire exclusive locks on the items they write and hold these locks until the transactions are committed. Read-committed transactions do not acquire locks on the items they read. Committing or rolling back a transaction closes all cursors and releases all locks held by the transaction. TimesTen and IMDB Cache perform deadlock detection to report and eliminate deadlock situations. If an application is denied a lock because of a deadlock error, it should roll back the entire transaction and retry it. Applications can select from three lock levels:
Database-level locking
Locking at the database level locks an entire database when it is accessed by a transaction. All database-level locks are exclusive. A transaction that requires a database-level lock cannot start until there are no active transactions on the database. After a transaction has obtained a database-level lock, all other transactions are blocked until the transaction commits or rolls back. Database-level locking restricts concurrency more than table-level locking and is useful only for initialization operations such as bulkloading, when no concurrency is necessary. Database-level locking has better response time than row-level or table-level locking at the cost of diminished concurrency and diminished throughput. Different transactions can coexist with different levels of locking, but the presence of even one transaction that uses database-level locking leads to reduced concurrency. Use the LockLevel general connection attribute or the ttLockLevel built-in procedure to implement database-level locking.
Table-level locking
Table-level locking locks a table when it is accessed by a transaction. It is useful when a statement accesses most of the rows in a table. Applications can call the ttOptSetFlag built-in procedure to request that the optimizer use table locks. The optimizer determines when a table lock should be used. Table locks can reduce throughput, so they should be used only when a substantial portion of the table must be locked or when high concurrency is not needed. For example, tables can be locked for operations such as bulk updates. In read-committed isolation, TimesTen and IMDB Cache do not use table-level locking for read operations unless it is explicitly requested by the application.
Row-level locking
Row-level locking locks only the rows that are accessed by a transaction. It provides the best concurrency by allowing concurrent transactions to access rows in the same table. Row-level locking is preferable when there are many concurrent transactions, each operating on different rows of the same tables. Applications can use the LockLevel general connection attribute, the ttLockLevel built-in procedure and the ttOptSetFlag built-in procedure to manage row-level locking.
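The share and exclusive lock semantics described in this chapter can be summarized in a small compatibility check. This is a standard two-mode compatibility matrix, a simplification of what a real lock manager maintains, not TimesTen internals.

```python
# Share (S) locks are compatible with other share locks; an
# exclusive (X) lock conflicts with every other lock.
COMPATIBLE = {
    ("S", "S"): True,
    ("S", "X"): False,
    ("X", "S"): False,
    ("X", "X"): False,
}

def can_grant(requested, held_modes):
    """Grant the requested lock only if it is compatible with every
    lock mode already held by other transactions on the resource."""
    return all(COMPATIBLE[(requested, h)] for h in held_modes)

print(can_grant("S", ["S", "S"]))  # True: readers do not block readers
print(can_grant("X", ["S"]))       # False: a writer waits for readers
```

Under serializable isolation both reads (S) and writes (X) hold their locks to commit; under read committed only the write locks are held, which is why readers and writers do not block each other at that level.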
5
Query Optimization
TimesTen and IMDB Cache have a cost-based query optimizer that ensures efficient data access by automatically searching for the best way to answer queries. Optimization is performed in the third stage of the compilation process. The stages of compilation are shown in Figure 5-1.
Figure 5-1 Compilation stages
SQL Query -> Parser -> Semantic Analyzer -> Optimizer -> Code Generator -> Executable Code
The role of the optimizer is to determine the lowest cost plan for executing queries. By "lowest cost plan" we mean an access path to the data that will take the least amount of time. The optimizer determines the cost of a plan based on:
Table and column statistics
Metadata information (such as referential integrity, primary key)
Index choices (including automatic creation of temporary indexes)
Scan methods (full table scan, rowid lookup, range index scan, bitmap index lookup, hash index lookup)
Join algorithm choice (nested loop joins, nested loop joins with indexes, or merge join)
Optimization time and memory usage
Statistics
Optimizer hints
Indexes
Scan methods
Join methods
Optimizer plan
Statistics
When determining the execution path for a query, the optimizer examines statistics about the data referenced by the query, such as the number of rows in the tables; the minimum and maximum values, interval statistics and number of unique values of columns used in predicates; the existence of primary keys within a table; and the size and configuration of any existing indexes. These statistics are stored in the SYS.TBL_STATS and SYS.COL_STATS tables, which are populated when an application calls the ttOptUpdateStats built-in procedure. The optimizer uses the statistics for each table to calculate the selectivity of predicates, such as t1.a=4, or a combination of predicates, such as t1.a=4 AND t1.b<10. Selectivity is an estimate of the fraction of rows in a table that a predicate selects. If a predicate selects a small percentage of rows, it is said to have high selectivity, while a predicate that selects a large percentage of rows has low selectivity.
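As a rough sketch of how such statistics feed into selectivity estimates, the following assumes uniformly distributed, independent column values. This is a common textbook simplification, not the exact TimesTen cost model.

```python
def equality_selectivity(num_rows, num_unique):
    """Estimate the fraction of rows matched by an equality predicate
    such as t1.a = 4, assuming uniformly distributed values."""
    return 1.0 / num_unique

def combined_selectivity(*selectivities):
    # AND-ed predicates: multiply, assuming independence.
    result = 1.0
    for s in selectivities:
        result *= s
    return result

# Hypothetical table: 10,000 rows; column a has 100 unique values;
# a range predicate on b is estimated to match 10% of rows.
sel = combined_selectivity(equality_selectivity(10000, 100), 0.10)
print(round(sel, 6))       # 0.001
print(int(10000 * sel))    # about 10 rows estimated
```

Highly selective predicate combinations like this one are what make an index lookup cheaper than a full table scan in the optimizer's cost comparison.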
Optimizer hints
The optimizer allows applications to provide hints that adjust the way plans are generated. For example, applications can use the ttOptSetFlag built-in procedure to provide the optimizer with hints about how best to optimize a particular query. These hints take the form of directives that restrict the use of particular join algorithms, the use of temporary indexes and types of index, the use of locks, whether to optimize for all the rows or only the first n rows in a table, and whether to materialize intermediate results. You can view the existing hints set for a database by using the ttOptGetFlag built-in procedure.
Applications can also use the ttOptSetOrder built-in procedure to specify the order in which tables are to be joined by the optimizer, as well as the ttOptUseIndex built-in procedure to specify which indexes should be considered for each correlation in a query.
Indexes
The query optimizer uses indexes to speed up the execution of a query. The optimizer uses existing indexes or creates temporary indexes to generate an execution plan when preparing a SELECT, INSERT SELECT, UPDATE, or DELETE statement. An index is a map of keys to row locations in a table. Strategic use of indexes is essential to obtain maximum performance from a TimesTen system. TimesTen uses these types of indexes:
Range index: Range indexes are useful for finding rows with column values within a range specified as an equality or inequality. Range indexes can be created over one or more columns of a table. They can be designated as unique or not unique. Multiple NULL values are permitted in a unique range index. When sorting data values, TimesTen considers NULL values to be larger than all non-NULL values. When you create an index using the CREATE INDEX SQL statement and do not specify the index type, TimesTen creates a range index.
Bitmap index: Bitmap indexes encode information about a unique value in a row in a bitmap. Each bit in the bitmap corresponds to a row in the table. Use a bitmap index for columns that do not have many unique values. An example of such a column is a column that records gender as one of two values. Bitmap indexes are widely used in data warehousing environments. These environments typically have large amounts of data and ad hoc queries, but a low level of concurrent DML transactions. Bitmap indexes are compressed and have smaller storage requirements than other indexing techniques.
Hash index: Hash indexes are created for tables with a primary key when you specify the UNIQUE HASH INDEX clause in the CREATE TABLE statement. There can be only one hash index for each table. In general, hash indexes are faster than range indexes for exact match lookups and equijoins. However, hash indexes cannot be used for lookups involving ranges or the prefix of a key, and they can require more space than range indexes and bitmap indexes.
Scan methods
The optimizer can select from multiple types of scan methods. The most common scan methods are:
Full table scan
Rowid lookup
Range index scan (on either a permanent or temporary index)
Hash index lookup (on either a permanent or temporary index)
Bitmap index lookup (on a permanent index)
TimesTen and IMDB Cache perform fast exact matches through hash indexes, bitmap indexes and rowid lookups. They perform range matches through range indexes. The ttOptSetFlag built-in procedure can be used to allow or disallow the optimizer from considering certain scan methods when choosing a query plan.
A full table scan examines every row in a table. Because it is the least efficient way to evaluate a query predicate, a full scan is only used when no other method is available.

TimesTen assigns a unique ID, called a rowid, to each row stored in a table. A rowid lookup is applicable if, for example, an application has previously selected a rowid and then uses a WHERE ROWID= clause to fetch that same row. Rowid lookups are faster than index lookups.

A hash index lookup uses a hash index to find rows based on their primary keys. Such lookups are applicable if the table has a primary key that has a hash index and the predicate specifies an exact match over the primary key columns.

A bitmap index lookup uses a bitmap index to find rows that satisfy an equality predicate such as customer.gender='male'. Bitmap indexes are appropriate for columns with few unique values. They are particularly useful in evaluating several predicates, each of which can use a bitmap index lookup, because the combined predicates can be efficiently evaluated through bit operations on the indexes themselves. For example, if table customer has a bitmap index on the column gender and table sweater has a bitmap index on the column color, the predicates customer.gender='male' and sweater.color='pink' could rapidly find all male customers who purchased pink sweaters by performing a logical AND operation on the two bitmap indexes.

A range index scan uses a range index to access a table. Such a scan is applicable to exact match predicates such as t1.a=2 or to range predicates such as t1.a>2 and t1.a<10, as long as the column used in the predicate has a range index defined over it. If a range index is defined over multiple columns, it can be used for multiple column predicates. For example, the predicates t1.b=100 and t1.c>'ABC' result in a range index scan if a range index is defined over columns t1.b and t1.c. The index can be used even if it is defined over more columns.
For example, if a range index is defined over t1.b, t1.c and t1.d, the optimizer uses the index prefix over columns b and c and returns all the values for column d that match the stated predicate over columns b and c.
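The bitmap index lookup described in this section can be sketched as follows, with one bitmap per distinct value and AND-ed predicates reduced to a bitwise AND. The four-row tables and values are illustrative only.

```python
def build_bitmap_index(column_values):
    """Sketch of a bitmap index: one bit per row, one bitmap per
    distinct column value, stored here as Python integers."""
    index = {}
    for row, value in enumerate(column_values):
        index[value] = index.get(value, 0) | (1 << row)
    return index

gender = build_bitmap_index(["male", "female", "male", "male"])
color = build_bitmap_index(["pink", "pink", "blue", "pink"])

# customer.gender='male' AND sweater.color='pink' as one bit operation
matches = gender["male"] & color["pink"]
rows = [r for r in range(4) if matches >> r & 1]
print(rows)  # [0, 3]
```

The AND of the two bitmaps evaluates both predicates at once without visiting rows that fail either one, which is why bitmap indexes suit ad hoc multi-predicate queries over low-cardinality columns.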
Join methods
The optimizer can select from multiple join methods. When the rows from two tables are joined, one table is designated the outer table and the other the inner table. During a join, the optimizer scans the rows in the outer and inner tables to locate the rows that match the join condition. The optimizer analyzes the statistics for each table and, for example, might identify the smallest table or the table with the best selectivity for the query as the outer table. If indexes exist for one or more of the tables to be joined, the optimizer takes them into account when selecting the outer and inner tables. If more than two tables are to be joined, the optimizer analyzes the various combinations of joins on table pairs to determine which pair to join first, which table to join with the result of the join, and so on for the optimum sequence of joins. The cost of a join is largely influenced by the method in which the inner and outer tables are accessed to locate the rows that match the join condition. The optimizer can select from two join methods:
Nested loop join
In a nested loop join, the inner table is scanned for each row in the outer table to find the rows that match the join condition. Figure 5-2 shows an example of a nested loop join. The join condition is:
WHERE t1.a=t2.a
t1 is the outer table and t2 is the inner table. Values in column a in table t1 that match values in column a in table t2 are 1 and 7. The join results concatenate the rows from t1 and t2. For example, the first join result is the following row: 7 50 43.54 21 13.69 It concatenates a row from t1: 7 50 43.54 with the first row from t2 in which the values in column a match: 7 21 13.69
Figure 5-2 Nested loop join
[Figure 5-2: each row of the outer table t1 is scanned against the inner table t2; rows whose values in column a match are concatenated into the results]
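A nested loop join can be sketched as follows. The abridged rows reuse column a values from the example; the full inner scan performed for each outer row is what makes this method expensive unless an index on the inner table can replace the scan.

```python
def nested_loop_join(outer, inner, key):
    """Sketch of a nested loop join: for each row of the outer table,
    scan the inner table for rows matching the join condition."""
    results = []
    for o in outer:
        for i in inner:  # full inner-table scan per outer row
            if o[key] == i[key]:
                results.append((o, i))
    return results

t1 = [{"a": 4}, {"a": 7}, {"a": 1}]            # outer table (abridged)
t2 = [{"a": 7}, {"a": 9}, {"a": 1}, {"a": 7}]  # inner table (abridged)
pairs = [(o["a"], i["a"]) for o, i in nested_loop_join(t1, t2, "a")]
print(pairs)  # [(7, 7), (7, 7), (1, 1)]
```

With an index on t2.a, the inner scan becomes an index lookup per outer row, the "nested loop join with indexes" variant the optimizer also considers.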
Merge join
A merge join is used only when the join columns are sorted by range indexes. In a merge join, a cursor advances through each index one row at a time. Because the rows are already sorted on the join columns in each index, a simple formula is applied to
efficiently advance the cursors through each row in a single scan. The formula looks something like:
If Inner.JoinColumn < Outer.JoinColumn, then advance the inner cursor
If Inner.JoinColumn = Outer.JoinColumn, then read the match
If Inner.JoinColumn > Outer.JoinColumn, then advance the outer cursor
Unlike a nested loop join, there is no need to scan the entire inner table for each row in the outer table. A merge join can be used when range indexes have been created for the tables before preparing the query. If no range indexes exist for the tables being joined before preparing the query, the optimizer may in some situations create temporary range indexes in order to use a merge join. Figure 5-3 shows an example of a merge join. The join condition is:
WHERE t1.a=t2.a
x1 is the index on table t1, sorting on column a. x2 is the index on table t2, sorting on column a. The merge join results concatenate the rows in x1 with rows in x2 in which the values in column a match. For example, the first merge join result is: 1 20 23.09 20 43.59 It concatenates a row in x1: 1 20 23.09 with the first row in x2 in which the values in column a match: 1 20 43.59
Figure 5-3 Merge join
[Figure 5-3: the original tables t1 and t2 are sorted into range indexes x1 and x2 on column a; a cursor advances through each sorted index in a single scan, concatenating the rows whose values in column a match]
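The cursor-advancing formula above can be sketched directly over sorted inputs. The abridged rows reuse column a values from the example; duplicate matches on the inner side are emitted before the outer cursor advances.

```python
def merge_join(outer, inner, key):
    """Sketch of a merge join over inputs already sorted on the join
    column, advancing two cursors in a single pass."""
    results = []
    o, i = 0, 0
    while o < len(outer) and i < len(inner):
        if inner[i][key] < outer[o][key]:
            i += 1  # advance the inner cursor
        elif inner[i][key] > outer[o][key]:
            o += 1  # advance the outer cursor
        else:
            # Read every inner match for this outer row, then move on.
            j = i
            while j < len(inner) and inner[j][key] == outer[o][key]:
                results.append((outer[o], inner[j]))
                j += 1
            o += 1
    return results

x1 = [{"a": 1}, {"a": 3}, {"a": 7}]  # sorted index on t1.a (abridged)
x2 = [{"a": 1}, {"a": 7}, {"a": 7}]  # sorted index on t2.a (abridged)
pairs = [(o["a"], i["a"]) for o, i in merge_join(x1, x2, "a")]
print(pairs)  # [(1, 1), (7, 7), (7, 7)]
```

Each input is traversed once, so the cost is roughly linear in the two index sizes rather than their product as in a nested loop join.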
Optimizer plan
Like most database optimizers, the query optimizer stores the details on how to most efficiently perform SQL operations in an execution plan, which can be examined and customized by application developers and administrators. The execution plan data is stored in the TimesTen SYS.PLAN table and includes information about which tables are to be accessed and in what order, which tables are to be joined, and which indexes are to be used. Users can direct the query optimizer to enable or disable the creation of an execution plan in the SYS.PLAN table by setting the GenPlan optimizer flag in the ttOptSetFlag built-in procedure. The execution plan designates a separate step for each database operation to be performed to execute the query. The steps in the plan are organized into levels that designate which steps must be completed to generate the results required by the step or steps at the next level. Consider this query:
SELECT COUNT(*) FROM t1, t2, t3 WHERE t3.b/t1.b > 1 AND t2.b <> 0 AND t1.a = -t2.a
In this example, the optimizer breaks down the query into its individual operations and generates a five-step execution plan to be performed at three levels, as shown in Figure 5-4.
Figure 5-4 Example execution plan
Level 3:
Step 1: Scan table t1 and sort it using a temporary range index.
Step 2: Scan table t2 and sort it using a temporary range index. After the scan, return the rows that match: t2.b <> 0.
Level 2:
Step 3: Merge the results from Steps 1 and 2 and join the rows that match: t1.a = -t2.a.
Step 4: Scan table t3 and sort it using a temporary range index.
Level 1:
Step 5: Merge the results from Steps 3 and 4 and join the rows that match: t2.a = t3.a. After the join, return the rows that match: t3.b / t1.b > 1.
6
TimesTen and IMDB Cache ensure the availability, durability, and integrity of data through the following mechanisms:
Transaction logging
The TimesTen or IMDB Cache transaction log is used for the following purposes:
Redo transactions if a system failure occurs
Undo transactions that are rolled back
Replicate changes to other TimesTen databases or IMDB Cache databases
Replicate changes to an Oracle database
Enable applications to monitor changes to tables through the XLA interface
The transaction log is stored in files on disk. The end of the transaction log resides in an in-memory buffer.
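The relationship between the in-memory log tail and the on-disk log files can be sketched as follows. The buffer_limit flush trigger is a simplification; real log flushing is driven by commits, buffer pressure and checkpoints.

```python
class TransactionLog:
    """Sketch of a transaction log whose tail lives in an in-memory
    buffer that is periodically flushed to the on-disk log files."""

    def __init__(self, buffer_limit):
        self.on_disk = []            # stands in for the log files on disk
        self.buffer = []             # in-memory tail of the log
        self.buffer_limit = buffer_limit

    def append(self, record):
        self.buffer.append(record)
        if len(self.buffer) >= self.buffer_limit:
            self.flush()

    def flush(self):
        # A durable commit would flush before acknowledging the commit.
        self.on_disk.extend(self.buffer)
        self.buffer.clear()

log = TransactionLog(buffer_limit=2)
log.append("begin T1")
print(len(log.on_disk), len(log.buffer))  # 0 1
log.append("commit T1")                   # buffer full, so it is flushed
print(len(log.on_disk), len(log.buffer))  # 2 0
```

Records still in the buffer have not yet reached disk, which is exactly the window a durable-commit flush closes before a commit is acknowledged.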
A transaction log file can be purged when all of the following conditions are met:
All transactions writing log records to the transaction log file (or a previous transaction log file) have committed or rolled back.
All changes recorded in the transaction log file have been written to the checkpoint files on disk.
All changes recorded in the transaction log file have been replicated (if replication is used).
All changes recorded in the transaction log file have been propagated to the Oracle database (if the IMDB Cache has been configured for that behavior).
All changes recorded in the transaction log file have been reported to XLA applications (if XLA is used).
TimesTen commits
ODBC provides an autocommit mode that forces a commit after each statement. By default, autocommit is enabled so that an implicit commit is issued immediately after a statement executes successfully. TimesTen recommends that you turn autocommit off so that commits are intentional. TimesTen issues an implicit commit before and after any data definition language (DDL) statement by default. This behavior is controlled by the DDLCommitBehavior general connection attribute. You can use the attribute to specify instead that DDL statements be executed as part of the current transaction and committed or rolled back along with the rest of the transaction.
Checkpointing
Checkpoints are used to keep a snapshot of the database. If a system failure occurs, TimesTen and the IMDB Cache can use a checkpoint file with transaction log files to restore a database to its last transactionally consistent state. Only the data that has changed since the last checkpoint operation is written to the checkpoint file. The checkpoint operation scans the database for blocks that have changed since the last checkpoint. It then updates the checkpoint file with the changes and removes any transaction log files that are no longer needed. TimesTen and IMDB Cache provide two kinds of checkpoints:
Nonblocking checkpoints
TimesTen and IMDB Cache initiate nonblocking checkpoints in the background automatically. Nonblocking checkpoints are also known as fuzzy checkpoints. The frequency of these checkpoints can be adjusted by the application. Nonblocking checkpoints do not require any locks on the database, so multiple applications can asynchronously commit or roll back transactions on the same database while the checkpoint operation is in progress.
Blocking checkpoints
An application can call the ttCkptBlocking built-in procedure to initiate a blocking checkpoint in order to construct a transaction-consistent checkpoint. While a blocking checkpoint operation is in progress, any other new transactions are put in a queue behind the checkpointing transaction. If any transaction is long-running, it may cause many other transactions to be held up. No log is needed to recover from a blocking checkpoint because the checkpoint record contains the information needed to recover.
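The incremental behavior described for checkpoints, writing only blocks changed since the previous checkpoint, can be sketched as follows. The block-dictionary layout and the dirty set are hypothetical simplifications of the on-disk checkpoint files.

```python
def write_checkpoint(database, checkpoint, dirty_blocks):
    """Sketch of an incremental checkpoint: only blocks changed since
    the previous checkpoint are written to the checkpoint file."""
    for block_id in dirty_blocks:
        checkpoint[block_id] = database[block_id]
    dirty_blocks.clear()  # written blocks are no longer dirty
    return checkpoint

db = {0: "aaa", 1: "bbb", 2: "ccc"}
ckpt = dict(db)          # the last checkpoint captured the whole database
db[1] = "BBB"            # a transaction modifies block 1
dirty = {1}
write_checkpoint(db, ckpt, dirty)
print(ckpt[1], len(dirty))  # BBB 0
```

Because only the dirty block is rewritten, checkpoint cost tracks the volume of change rather than the size of the database; recovery then replays the transaction log forward from this checkpoint image.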
Replication
The fundamental motivation behind replication is to make data highly available to applications with minimal performance impact. In addition to its role in failure recovery, replication is also useful for distributing application workloads across multiple databases for maximum performance and for facilitating online upgrades and maintenance.

Replication is the process of copying data from a master database to a subscriber database. Replication at each master and subscriber database is controlled by replication agents that communicate through TCP/IP stream sockets. The replication agent on the master database reads the records from the transaction log for the master database. It forwards changes to replicated elements to the replication agent on the subscriber database. The replication agent on the subscriber then applies the updates to its database. If the subscriber agent is not running when the updates are forwarded by the master, the master retains the updates in its transaction log until they can be applied at the subscriber.

TimesTen recommends the active standby pair configuration for highest availability. It is the only replication configuration that you can use for replicating IMDB Cache. The rest of this section includes the following topics:
Active standby pair
Other replication configurations
Asynchronous and return service replication
Replication failover and recovery
Figure 6-1 Active standby pair
In an active standby pair, two databases are defined as masters. One is an active database, and the other is a standby database. The active database is updated directly. The standby database cannot be updated directly. It receives the updates from the active database and propagates the changes to read-only subscribers. This arrangement ensures that the standby database is always ahead of the read-only subscribers and enables rapid failover to the standby database if the active database fails.

Only one of the master databases can function as an active database at a specific time. If the active database fails, the role of the standby database must be changed to active before recovering the failed database as a standby database. The replication agent must be started on the new standby database. If the standby database fails, the active database replicates changes directly to the read-only subscribers. After the standby database has recovered, it contacts the active database to receive any updates that have been sent to the read-only subscribers while the standby was down or was recovering. When the active and the standby databases have been synchronized, the standby resumes propagating changes to the subscribers.

Active standby replication can be used with IMDB Cache to achieve cross-tier high availability. Active standby replication is available for both read-only and asynchronous writethrough cache groups. When used with read-only cache groups, updates are sent from the Oracle database to the active database. Thus the Oracle
database plays the role of the application in this configuration. When used with asynchronous writethrough cache groups, the standby database propagates updates that it receives from the active database to the Oracle database. In this scenario, the Oracle database plays the role of one of the read-only subscribers. An active standby pair that replicates one of these types of cache groups can perform failover and recovery automatically with minimal chance of data loss. See "Active standby pairs with cache groups" in Oracle TimesTen In-Memory Database TimesTen to TimesTen Replication Guide.
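As a sketch, an active standby pair with one read-only subscriber might be declared on the active database as follows; the database and host names are hypothetical:

```sql
-- Two masters (active and standby) plus one read-only subscriber
CREATE ACTIVE STANDBY PAIR
  cachedb1 ON "host1",
  cachedb2 ON "host2"
  SUBSCRIBER reportdb ON "host3";
```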
Unidirectional replication
Figure 6-2 shows a unidirectional replication scheme. The application is configured on both nodes so that the subscriber is ready to take over if the master node fails. While the master is up, updates from the application to the master database are replicated to the subscriber database. The application on the subscriber node does not execute any updates against the subscriber database, but may read from that database. If the master fails, the application on the subscriber node takes over the update function and starts updating the subscriber database.
Figure 6-2 Unidirectional replication scheme
Replication can also be used to copy updates from a master database to many subscriber databases. Figure 6-3 shows a replication scheme with multiple subscribers.
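A unidirectional scheme like the one in Figure 6-2 is declared with a CREATE REPLICATION statement. The following is a minimal sketch; the scheme, table, database, and host names are hypothetical:

```sql
-- Replicate one table from a master database to one subscriber
CREATE REPLICATION ttuser.repscheme
  ELEMENT e TABLE ttuser.accounts
    MASTER masterds ON "host1"
    SUBSCRIBER subscriberds ON "host2";
```

Additional SUBSCRIBER clauses can be listed to replicate the same element to multiple subscriber databases.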
Figure 6-3 Replication scheme with multiple subscribers
Figure 6-4 shows a propagation configuration. One master propagates updates to three subscribers. The subscribers are also masters that propagate updates to additional subscribers.
Figure 6-4 Propagation configuration
Bidirectional replication
Bidirectional replication schemes are used for load balancing. The workload can be split between two bidirectionally replicated databases. There are two basic types of load-balancing configurations:
Split workload, where each database bidirectionally replicates a portion of its data to the other database. Figure 6-5 shows a split workload configuration.
Distributed workload, where user access is distributed across duplicate application/database combinations that replicate updates to each other. In a distributed workload configuration, the application has the responsibility to divide the work between the two systems so that replication collisions do not
occur. If collisions do occur, TimesTen has a timestamp-based collision detection and resolution capability. Figure 6-6 shows a distributed workload configuration.
Figure 6-5 Split workload replication
Figure 6-6 Distributed workload replication
The return receipt service synchronizes the application with the replication mechanism by blocking the application until replication confirms that the update has been received by the subscriber replication agent. The return twosafe service enables fully synchronous replication by blocking the application until replication confirms that the update has been both received and committed on the subscriber.
Note:
Do not use the return twosafe service in a distributed workload configuration. This can produce deadlocks.
Applications that use the return services trade some performance to ensure higher levels of consistency and reduce the risk of transaction loss between the master and subscriber databases. In the event of a master failure, the application has a higher degree of confidence that a transaction committed at the master persists in the subscribing database. Return receipt replication has less performance impact than return twosafe at the expense of potential loss of transactions.
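Return services are requested in the replication scheme definition. The following sketches show where the clauses appear; all object, database, and host names are hypothetical:

```sql
-- Block the application until the subscriber replication agent
-- acknowledges receipt of the update
CREATE REPLICATION ttuser.rr_scheme
  ELEMENT e TABLE ttuser.accounts
    MASTER masterds ON "host1"
    SUBSCRIBER subscriberds ON "host2"
      RETURN RECEIPT;

-- For an active standby pair, RETURN TWOSAFE blocks until the
-- update has also been committed on the standby database
CREATE ACTIVE STANDBY PAIR
  cachedb1 ON "host1", cachedb2 ON "host2"
  RETURN TWOSAFE;
```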
7
Event Notification
TimesTen and IMDB Cache event notification is done through the Transaction Log API (XLA), which provides functions to detect changes to the database. XLA monitors log records. A log record describes an insert, update or delete on a row in a table. XLA can be used with materialized views to focus the scope of notification on changes made to specific rows across multiple tables. TimesTen and IMDB Cache also use SNMP traps to send asynchronous alerts of events. This chapter includes the following topics:
XLA uses bookmarks to track an application's position in the log update stream. Bookmarks are stored in the database, so they are persistent across database connections, shutdowns, and failures.
Figure 7-1 How XLA works
XLA also operates in nonpersistent mode. In nonpersistent mode, XLA obtains update records from the transaction log buffer and stages them in an XLA staging buffer. After records are read by the application from the staging buffer, they are removed and are no longer available. XLA in nonpersistent mode does not use bookmarks.
Each XLA update record includes header information that describes:
The table to which the updated row applies
Whether the record is the first or last commit record in the transaction
The type of transaction it represents
The length of the returned row data
Which columns in the row were updated
When a materialized view is present, an XLA application needs to monitor only update records that are of interest from a single materialized view. Without a materialized view, the XLA application would have to monitor all of the update records from all of the detail tables, including records reflecting updates to rows and columns of no interest to the application. Figure 7-2 shows an update made to a column in a detail table that is part of the materialized view result set. The XLA application monitoring updates to the materialized view captures the updated record. Updates to other columns and rows in the same detail table that are not part of the materialized view result set are not seen by the XLA application.
Figure 7-2 Using XLA to detect updates on a materialized view table
See "Real-time quote service application" on page 2-2 for an example of a trading application that uses XLA and a materialized view to detect updates to select stocks. The TimesTen and IMDB Cache implementation of materialized views emphasizes performance as well as the ability to detect updates across multiple tables. Readers familiar with other implementations of materialized views should note that the following tradeoffs have been made:
The application must explicitly create materialized views. The TimesTen query optimizer has no facility to create materialized views automatically.
The query optimizer does not rewrite queries on the detail tables to reference materialized views. Application queries must reference views directly.
There are some restrictions on the SQL used to create materialized views.
When creating a materialized view, the application must specify whether the maintenance of the view should be immediate or deferred. With immediate maintenance, a view is refreshed as soon as changes are made to its detail tables. With deferred maintenance, a view is refreshed only after the transaction that updated the detail tables is committed. A view with deferred maintenance is called an asynchronous materialized view. The refreshes may be automatic or may be initiated by the application, and they may be incremental or full. The application must specify the frequency of automatic refreshes. Note that the order of XLA notifications for an asynchronous materialized view is not necessarily the same as the order of transactions for the associated detail tables.
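As a sketch, a materialized view with immediate maintenance over two hypothetical detail tables might be created as follows:

```sql
-- Immediate maintenance: the view is refreshed as soon as
-- changes are made to the detail tables
CREATE MATERIALIZED VIEW cust_orders AS
  SELECT c.cust_id, c.cust_name, o.order_id, o.order_total
  FROM customers c, orders o
  WHERE c.cust_id = o.cust_id;
```

An XLA application can then monitor this single view rather than both detail tables.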
SNMP traps
Simple Network Management Protocol (SNMP) is a protocol for network management services. Network management software typically uses SNMP to query or control the state of network devices like routers and switches. These devices sometimes also generate asynchronous alerts in the form of UDP/IP packets, called SNMP traps, to inform the management systems of problems. TimesTen and IMDB Cache cannot be queried or controlled through SNMP. However, TimesTen and IMDB Cache send SNMP traps for certain critical events to facilitate user recovery mechanisms. TimesTen sends traps for the following events:
IMDB Cache autorefresh failure
Database out of space
Replicated transaction failure
Death of daemons
Database invalidation
Assertion failure
These events also cause log entries to be written by the TimesTen daemon, but exposing them through SNMP traps allows properly configured network management software to take immediate action.
8
IMDB Cache
IMDB Cache provides the ability to transfer data between an Oracle database and an IMDB Cache database. You can cache Oracle data in an IMDB Cache database by defining a cache grid and then creating cache groups in TimesTen where each cache group maps to a single table in the Oracle database or to a group of tables related by foreign key constraints. This chapter includes the following topics:
Cache grid
Cache groups
Dynamic cache groups and explicitly loaded cache groups
Global and local cache groups
Transmitting data between the IMDB Cache and Oracle Database
Aging feature
Passthrough feature
Replicating cache groups
Cache grid
A cache grid is a collection of IMDB Cache databases that collectively manage the application data. A cache grid consists of one or more grid members that are each backed by an IMDB Cache database. Grid members cache tables from a central Oracle database or Oracle Real Application Clusters (Oracle RAC) database. Cached data is dynamically distributed across multiple grid members without shared storage. This architecture allows the capacity of the cache grid to scale based on the processing needs of the application. When the workload increases or decreases, new grid members attach to the grid or existing grid members detach from the grid without interrupting operations on other grid members. An IMDB Cache database within a cache grid can contain explicitly loaded and dynamic cache groups as well as global and local cache groups of any cache group type. A cache grid ensures that data is consistent across nodes.

Figure 8-1 shows a cache grid. The grid has three members: two standalone IMDB Cache databases and an active standby pair with a read-only subscriber. The read-only subscriber is not part of the grid.
Figure 8-1 Cache grid
Cache groups
You can cache Oracle data by creating a cache group in an IMDB Cache database. A cache group can be created to cache a single Oracle table or a set of related Oracle tables. The cached Oracle data can consist of all the rows and columns or a subset of the rows and columns in the Oracle tables. IMDB Cache supports the following features:
Applications can both read from and write to cache groups.
Cache groups can be refreshed (bringing Oracle data into the cache group) automatically or manually.
Cache updates can be sent to the Oracle database automatically or manually. The updates can be sent synchronously or asynchronously.
The IMDB Cache database interacts with the Oracle database to perform all of the synchronous cache group operations, such as creating a cache group and propagating updates between the cache group and the Oracle database. A process called the cache agent performs asynchronous cache operations, such as loading data into the cache group, manually refreshing the data from the Oracle database to the cache group, and automatically refreshing the data from the Oracle database to the cache group. Figure 8-2 illustrates the IMDB Cache features and processes.
Each cache group has a root table that contains the primary key for the cache group. Rows in the root table may have one-to-many relationships with rows in child tables, each of which may have one-to-many relationships with rows in other child tables. A cache instance is the set of rows that are associated by foreign key relationships with a particular row in the root table. Each primary key value in the root table specifies a cache instance. Cache instances form the unit of cache loading and cache aging. No table in the cache group can be a child to more than one parent in the cache group. Each IMDB Cache record belongs to only one cache instance and has only one parent in its cache group. The most commonly used cache group types are:
Read-only cache group - A read-only cache group enforces a caching behavior in which committed updates to Oracle tables are automatically refreshed to the corresponding cache tables in the IMDB Cache database. Asynchronous writethrough (AWT) cache group - An AWT cache group enforces a caching behavior in which committed updates to cache tables in the IMDB Cache database are automatically propagated to the corresponding Oracle tables asynchronously.
Synchronous writethrough (SWT) cache group - An SWT cache group enforces a caching behavior in which committed updates to cache tables in the IMDB Cache database are automatically propagated to the corresponding Oracle tables synchronously. User managed cache group - A user managed cache group defines customized caching behavior. For example, individual cache tables in a user managed cache are not constrained to be all of the same type. Some tables may be defined as read-only while others may be defined as updatable.
In dynamic cache groups, cache instances are automatically loaded into the IMDB Cache from the Oracle database when the application references cache instances that are not already in the IMDB Cache. The use of dynamic cache groups is typically coupled with least recently used (LRU) aging so that less recently used cache instances are aged out of the cache to free up space for recently used cache instances. Using dynamic cache groups is appropriate when the size of the data that qualifies for caching exceeds the size of the memory available for the IMDB Cache database. All cache group types (read-only, AWT, SWT, user managed) can be defined as explicitly loaded or dynamic.
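As a sketch, a dynamic read-only cache group with incremental autorefresh and LRU aging might be defined as follows; the cache group, schema, table, and column names are hypothetical:

```sql
-- Cache instances load on demand; autorefresh pulls committed
-- Oracle changes every 5 seconds; LRU aging frees space
CREATE DYNAMIC READONLY CACHE GROUP ro_customers
  AUTOREFRESH MODE INCREMENTAL INTERVAL 5 SECONDS
  FROM sales.customers (
    cust_id   NUMBER       NOT NULL,
    cust_name VARCHAR2(100),
    PRIMARY KEY (cust_id))
  AGING LRU ON;
```

Omitting the DYNAMIC keyword would create an explicitly loaded cache group instead.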
Transmitting data between the IMDB Cache and Oracle Database

Updating a cache group from Oracle tables
Updating Oracle tables from a cache group
Autorefresh - An incremental autorefresh operation updates only records that have been modified in the Oracle database since the last refresh. The IMDB Cache automatically performs the incremental refresh at specified time intervals. You can also specify a full autorefresh operation, which automatically refreshes the entire cache group at specified time intervals.
Manual refresh - An application issues a REFRESH CACHE GROUP statement to refresh either an entire cache group or a specific cache instance. It is equivalent to unloading and then loading the cache group or cache instance.
These mechanisms are useful under different circumstances. A full autorefresh may be the best choice if the Oracle table is updated only once a day and many rows are changed. An incremental autorefresh is the best choice if the Oracle table is updated often, but only a few rows are changed with each update. A manual refresh is the best choice if the application logic knows when the refresh should happen.
Propagate - The most common way to propagate cache group data to the Oracle database is by using an asynchronous writethrough (AWT) cache group. Other methods of updating the Oracle tables are using a synchronous writethrough (SWT) cache group or specifying the PROPAGATE option in a user managed cache group. Changes to an AWT cache group are committed without waiting for the changes to be applied to the Oracle tables. AWT cache groups provide better response times and performance than SWT cache groups and user managed cache groups with the PROPAGATE option, but the IMDB Cache database and the Oracle database do not always contain the same data because changes are applied to the Oracle tables asynchronously.
Flush - A flush operation can be used to propagate updates manually from a user managed cache group to the Oracle database. An application initiates a flush operation by issuing a FLUSH CACHE GROUP statement. Flush operations are useful when frequent updates occur for a limited period of time over a set of records. Flush operations do not propagate deletes.
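The manual statements above can be sketched as follows; the cache group names are hypothetical:

```sql
-- Manually refresh an entire cache group from the Oracle tables,
-- committing in batches of 256 rows
REFRESH CACHE GROUP ro_customers COMMIT EVERY 256 ROWS;

-- Manually propagate pending inserts and updates from a user
-- managed cache group to Oracle (deletes are not propagated)
FLUSH CACHE GROUP umg_customers;
```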
Aging feature
Records can be automatically aged out of a TimesTen database, and cache instances can be automatically aged out of an IMDB Cache database. Aging can be usage-based or time-based. You can configure both usage-based and time-based aging in the same system, but you can define only one type of aging on a specific cache group. Dynamic load can be used to reload a requested cache instance that has been deleted by aging.
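Time-based aging is declared with an AGING clause on the table. The following sketch uses a hypothetical table; the lifetime and cycle values are illustrative:

```sql
-- Rows whose ts column is older than 2 days are deleted;
-- the aging sweep runs every 30 minutes
CREATE TABLE web_sessions (
  sess_id NUMBER    NOT NULL PRIMARY KEY,
  ts      TIMESTAMP NOT NULL)
  AGING USE ts LIFETIME 2 DAYS CYCLE 30 MINUTES ON;
```

Usage-based (LRU) aging is declared with an AGING LRU clause instead, as shown in the dynamic cache group example earlier in this chapter.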
Passthrough feature
Applications can send SQL statements to either a cache group or to the Oracle database through a single connection to an IMDB Cache. This single-connection capability is enabled by a passthrough feature that checks whether the SQL statement can be handled locally by the cached tables in the IMDB Cache or if it must be redirected to the Oracle database, as shown in Figure 8-3. The passthrough feature provides settings that specify what types of statements are to be passed through and under what circumstances. The specific behavior of the passthrough feature is controlled by the PassThrough IMDB Cache general connection attribute.
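The passthrough level is typically set in the DSN definition or the connection string. As a sketch, in ttIsql (the DSN name is hypothetical):

```sql
-- Connect with PassThrough=1: statements that reference tables
-- not cached in the IMDB Cache are redirected to Oracle
connect "DSN=cachedb;PassThrough=1";
```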
For more information about the passthrough feature, see "Setting a passthrough level" in Oracle In-Memory Database Cache User's Guide. For more information about replicating cache groups, see "Cache groups and replication" in Oracle TimesTen In-Memory Database TimesTen to TimesTen Replication Guide.
9
This chapter includes the following topics:

Installing TimesTen and IMDB Cache
Access Control
Command line administration
SQL administration
SQL Developer
ODBC Administrator
Upgrading TimesTen and the IMDB Cache
Access Control
TimesTen and IMDB Cache are installed with Access Control to allow only users with specific privileges to access particular TimesTen features. TimesTen Access Control uses standard SQL statements to establish TimesTen user accounts with specific privilege levels. TimesTen offers object-level access control as well as database-level access control.
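As a sketch, an administrator might create a user and grant a database-level and an object-level privilege; the user name, password, and table are hypothetical:

```sql
-- Create a user account
CREATE USER alice IDENTIFIED BY secretpwd;

-- Database-level privilege: allow the user to connect
GRANT CREATE SESSION TO alice;

-- Object-level privilege: allow reads on one table
GRANT SELECT ON hr.employees TO alice;
```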
Command line administration

TimesTen and IMDB Cache provide utilities for administration from the command line:

ttBulkCp - Used to transfer data between TimesTen tables and ASCII files.
ttIsql - Used to run SQL interactively from the command line. Also provides a number of administrative commands to reconfigure and monitor databases.
ttMigrate - Used to save tables and cache group definitions to a binary data file. Also used to restore tables and cache group definitions from the binary file.
ttRepAdmin - Used to monitor replication status.
ttSize - Used to estimate the amount of space to allocate for a table in the database.
ttStatus - Used to display information that describes the current state of TimesTen or IMDB Cache.
ttTraceMon - Used to enable and disable the TimesTen and IMDB Cache internal tracing facilities.
ttXactAdmin - Used to list ownership, status, log and lock information for each outstanding transaction. The ttXactAdmin utility also allows users to commit, abort or forget an XA transaction branch.
SQL administration
TimesTen provides SQL statements for administrative activities such as creating and managing tables, replication schemes, cache groups, materialized views, and indexes. The metadata for each TimesTen database is stored in a group of system tables. Applications can use SQL SELECT queries on these tables to monitor the current state of a database. Administrators can use the ttIsql utility for SQL interaction with a database. For example, there are several built-in ttIsql commands that display information on database structures.
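For example, the system tables can be queried with ordinary SELECT statements. The following sketch assumes the standard SYS.TABLES and SYS.MONITOR system tables:

```sql
-- List tables owned by a particular user
SELECT tblname FROM sys.tables WHERE tblowner = 'HR';

-- Check permanent memory region usage for the database
SELECT perm_allocated_size, perm_in_use_size FROM sys.monitor;
```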
SQL Developer
Oracle SQL Developer is a graphical tool for database development tasks. Use SQL Developer to:
Browse, create, and edit database objects and PL/SQL programs
Automate cache group operations
Manipulate and export data
Execute SQL and PL/SQL statements and scripts
View and create reports
SQL Developer is a Java application that supports direct-linked and client/server connections to the TimesTen databases. Support for connecting to multiple databases enables SQL Developer users to work with data in the TimesTen and the Oracle databases concurrently.
ODBC Administrator
The ODBC Administrator is a utility program used on Windows to create, configure and delete data source definitions. You can use it to define a data source and set connection attributes.
In-place upgrades
In-place upgrades are typically used to move to a new patch release of TimesTen or IMDB Cache. In-place upgrades can be done without destroying the existing databases. However, all applications must first disconnect from the databases, and the databases must be unloaded from shared memory. After uninstalling the old release of TimesTen or IMDB Cache and installing the new release, applications can reconnect to the databases and resume operation.
Offline upgrades
Offline upgrades are performed by using the ttMigrate utility to export the database into an external file and to restore the database with the desired changes. Use offline upgrades to perform the following tasks:
Move to a new major TimesTen or IMDB Cache release
Move to a different directory or machine
Reduce database size
During an offline upgrade, the database is not available to applications. Offline upgrades usually require enough disk space for an extra copy of the upgraded database.
Online upgrades
TimesTen replication enables online upgrades, which are performed with the ttMigrate and ttRepAdmin utilities while the database and its applications remain operational and available to users. Online upgrades are useful for applications where continuous availability of the database is critical. Use online upgrades to perform the following tasks:
Move to a new major release of TimesTen or IMDB Cache and retain continuous availability to the database
Increase or reduce the database size
Move the database to a new location or machine
Updates made to the database during the upgrade are transmitted to the upgraded database at the end of the upgrade process. Because an online upgrade requires that the database be replicated to another database, it can require more memory and disk space than offline upgrades.
Index
A
Access Control, 9-1 database level, 1-5 object level, 1-5 active standby pair, 6-4 administration command line utilities, 9-1 aging cache group, 8-4 cache groups, 8-5 data, 1-6 architecture IMDB cache, 3-1 TimesTen, 3-1 asynchronous materialized view, 7-3 autorefresh, 8-4 AWT cache group, 8-3 character sets, 1-7 checkpoint recovery, 6-3 checkpoint operation overview, 1-5 checkpoints blocking, 6-3 fuzzy, 6-2 nonblocking, 6-2 purpose, 3-3 client configuring automatic failover on Windows, 6-8 client/server connection, 1-5, 3-5 cluster managers, 6-8 commit behavior, 6-2 concurrency, 1-6 connection client/server, 3-5 direct driver, 3-4 driver manager, 3-5
B
bitmap index, 5-3
D
data structures, 3-3 deadlock detection description, 4-3 direct driver connection, 1-5, 3-4 disaster recovery, 8-6 driver manager connection, 3-5 durability, 1-5 durable commits, 6-1 dynamic cache group, 8-3 definition, 8-4
C
C++ interface, 1-4, 7-1 cache grid definition, 8-1 cache group aging, 8-4 asynchronous writethrough, 8-3 definition, 3-3 description, 8-2 dynamic, 8-4 explicitly loaded, 8-3 global, 8-4 local, 8-4 passthrough feature, 8-5 read-only, 8-3 synchronous writethrough, 8-3 user managed, 8-3 cache groups replicating, 8-6 cache instance, 8-3 caching Oracle data in TimesTen overview, 3-3
E
explicitly loaded cache group definition, 8-3
F
failover configuring for client on Windows, 6-8 flush from IMDB Cache to Oracle database, 8-5
Index-1
G
global cache group definition, 8-4 dynamic, 8-4 explicitly loaded, 8-4 globalization support, 1-7
asynchronous, 7-3 materialized views and XLA, 7-2 comparing with other databases, 7-3 memory usage query optimization, 5-2
H
hash index, 5-3
O
OCI support, 1-4 ODBC Administrator, 9-3 ODBC interface, 1-3 optimizer description, 5-1 plan, 5-7 scan methods, 5-3 optimizer hints, 5-2 Oracle Call Interface support, 1-4 Oracle Clusterware, 6-8 Oracle In-Memory Database Cache, 1-1
I
IMDB Cache, 1-8 architecture, 3-1 scenarios, 2-1 using, 2-1 index bitmap, 5-3 hash, 5-3 range, 5-3 indexes and query optimizer, 5-3 supported types, 5-3 isolation, 1-6 read committed, 4-1 serializable, 4-2 transactions, 4-1
P
passthrough feature, 8-5 PL/SQL support, 1-3 Pro*C/C++ Precompiler support, 1-4 processes database, 3-3 propagate changes from IMDB Cache to Oracle database, 8-5
J
JDBC interface, 1-3 join merge, 5-5 methods, 5-4 nested loop, 5-5 JTA support overview, 1-4
Q
query optimizer, 1-6 description, 5-1 hints, 5-2 memory usage, 5-2 plan, 5-7 using statistics, 5-2
L
linguistic sorting, 1-7 local cache group definition, 8-4 locks database level, 4-3 description, 4-3 row level, 4-4 table level, 4-3 log buffer writing to disk, 6-1 log files interaction with checkpoints, 3-3 when deleted, 6-2 logging, 1-5 transaction, 6-1
R
range index, 5-3 read committed isolation description, 4-1 read-only cache group, 8-3 recovery using checkpoint files, 6-3 refresh manual (cache group), 8-5 replication active standby pair, 6-4 as part of architecture, 3-4 bidirectional, 6-6 distributed workload, 6-6 failover, 6-8 multiple subscribers, 6-5 propagation to subscribers, 6-6 split workload, 6-6 unidirectional, 6-5
M
materialized view
S
scan methods, 5-3 serializable isolation description, 4-2 shared libraries, 3-2 SNMP traps, 7-4 SQL Developer, 9-2 statistics query optimizer, 5-2 SWT cache group, 8-3
T
TimesTen architecture, 3-1 scenarios, 2-1 using, 2-1 transaction isolation overview, 4-1 read committed, 4-1 Transaction Log API, 7-1 overview, 1-4 transaction logging, 1-5, 6-1 transactions recovery, 1-5 replication, 1-5 rollback, 1-5 TTClasses, 1-4
U
upgrade in place, 9-3 offline, 9-3 online, 9-3 upgrading TimesTen, 9-3 user managed cache group, 8-3
X
XA support
overview, 1-4
XLA, 7-1
materialized views, 7-2
overview, 1-4