Java Persistence
Wikibooks.org
March 18, 2013
On the 28th of April 2012 the contents of the English as well as German Wikibooks and Wikipedia
projects were licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported license. A
URI for this license is given in the list of figures on page 283. If this document is a derived work
from the contents of one of these projects, and the content was still licensed by the project under
this license at the time of derivation, this document has to be licensed under the same, a similar, or a
compatible license, as stated in section 4b of the license. The list of contributors is included in the chapter
Contributors on page 281. The licenses GPL, LGPL and GFDL are included in the chapter Licenses on
page 287, since this book and/or parts of it may or may not be licensed under one or more of these
licenses, and thus require inclusion of these licenses. The licenses of the figures are given in the list of
figures on page 283.
Contents
1 Preface 3
3 What is Java? 7
3.1 Google Trends . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
3.2 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
4 What is a database? 9
5 What is JPA? 11
9 Persistence Products 19
9.1 Existing Persistence Products . . . . . . . . . . . . . . . . . . . . . . . . . 19
10 EclipseLink 23
11 TopLink 25
12 Hibernate 27
13 TopLink Essentials 29
14 Kodo 31
15 Open JPA 33
16 Ebean 35
16.1 Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
17 Mapping 37
18 Mapping 39
18.1 Access Type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
18.2 Common Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
19 Tables 45
20 Advanced 47
20.1 Multiple tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
20.2 Multiple tables with foreign keys . . . . . . . . . . . . . . . . . . . . . . . 49
20.3 Multiple table joins . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
20.4 Multiple table outer joins . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
20.5 Tables with special characters and mixed case . . . . . . . . . . . . . . . . 52
20.6 Table qualifiers, schemas, or creators . . . . . . . . . . . . . . . . . . . . . 52
21 Identity 55
22 Sequencing 57
22.1 Sequence Strategies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
23 Advanced 65
23.1 Composite Primary Keys . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
23.2 Primary Keys through OneToOne and ManyToOne Relationships . . . . . 69
23.3 Advanced Sequencing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
23.4 Primary Keys through Triggers . . . . . . . . . . . . . . . . . . . . . . . . 74
23.5 Primary Keys through Events . . . . . . . . . . . . . . . . . . . . . . . . . 75
23.6 No Primary Key . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
23.7 Single Table Inheritance . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
23.8 Joined, Multiple Table Inheritance . . . . . . . . . . . . . . . . . . . . . . 79
24 Advanced 83
24.1 Table Per Class Inheritance . . . . . . . . . . . . . . . . . . . . . . . . . . 83
24.2 Mapped Superclasses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
25 Embeddables 89
26 Advanced 93
26.1 Sharing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
26.2 Embedded Ids . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
26.3 Nulls . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
26.4 Nesting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
26.5 Inheritance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
26.6 Relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
26.7 Collections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
26.8 Querying . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
27 Locking 99
27.1 Optimistic Locking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
27.2 Advanced . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
28 Basics 111
28.1 Common Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
29 Advanced 115
29.1 Temporal, Dates, Times, Timestamps and Calendars . . . . . . . . . . . . 115
29.2 Enums . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
30 Relationships 125
30.1 Lazy Fetching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
30.2 Cascading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
30.3 Orphan Removal (JPA 2.0) . . . . . . . . . . . . . . . . . . . . . . . . . . 130
30.4 Target Entity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
30.5 Collections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
30.6 Common Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
31 Advanced 139
31.1 Advanced Relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
31.2 Maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
31.3 Join Fetching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
31.4 Batch Fetching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
31.5 Filtering, Complex Joins . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
31.6 Variable and Heterogeneous Relationships . . . . . . . . . . . . . . . . . . 148
31.7 Nested Collections, Maps and Matrices . . . . . . . . . . . . . . . . . . . . 149
32 OneToOne 151
32.1 Inverse Relationships, Target Foreign Keys and Mapped By . . . . . . . . 154
32.2 See Also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
32.3 Common Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
33 Advanced 157
33.1 Target Foreign Keys, Primary Key Join Columns, Cascade Primary Keys 157
33.2 Mapping a OneToOne Using a Join Table . . . . . . . . . . . . . . . . . . 161
34 ManyToOne 165
34.1 See Also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
34.2 Common Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
35 Advanced 169
35.1 Target Foreign Keys, Primary Key Join Columns, Cascade Primary Keys 169
36 OneToMany 171
36.1 Join Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
36.2 See Also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
36.3 Common Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
37 Advanced 177
37.1 Unidirectional OneToMany, No Inverse ManyToOne, No Join Table (JPA 2.0 ONLY) . . . . 177
38 ManyToMany 179
38.1 Bi-directional Many to Many . . . . . . . . . . . . . . . . . . . . . . . . . 180
38.2 See Also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
38.3 Common Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
39 Advanced 183
39.1 Mapping a Join Table with Additional Columns . . . . . . . . . . . . . . . 183
40 ElementCollection 187
40.1 Embedded Collections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
40.2 Basic Collections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
40.3 See Also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
40.4 Common Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
42 Runtime 207
44 Querying 215
44.1 Named Queries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
44.2 Dynamic Queries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
44.3 JPQL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
44.4 Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
44.5 Query Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
44.6 Common Queries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
45 Advanced 225
45.1 Join Fetch and Query Optimization . . . . . . . . . . . . . . . . . . . . . . 225
45.2 Timeouts, Fetch Size and other JDBC Optimizations . . . . . . . . . . . . 225
47 Persisting 241
47.1 Persist . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
47.2 Merge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
47.3 Remove . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 245
48 Advanced 247
48.1 Refresh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
48.2 Lock . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
48.3 Get Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
48.4 Flush . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
48.5 Clear . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
48.6 Close . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
48.7 Get Delegate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
48.8 Unwrap (JPA 2.0) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
49 Transactions 253
49.1 Resource Local Transactions . . . . . . . . . . . . . . . . . . . . . . . . . . 253
49.2 JTA Transactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
50 Advanced 257
50.1 Join Transaction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
50.2 Retrying Transactions, Handling Commit Failures . . . . . . . . . . . . . . 258
50.3 Nested Transactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
50.4 Transaction Isolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
51 Caching 261
51.1 Object Identity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
51.2 Object Cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
51.3 Data Cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
51.4 Cache Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
51.5 Query Cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
51.6 Stale Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265
51.7 Cache Transaction Isolation . . . . . . . . . . . . . . . . . . . . . . . . . . 271
51.8 Common Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
52 Spring 273
52.1 Persistence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
52.2 JPA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
53 Databases 275
54 MySQL 277
54.1 Installing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
54.2 Configuration tips . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
55 References 279
56 Contributors 281
57 Licenses 287
57.1 GNU GENERAL PUBLIC LICENSE . . . . . . . . . . . . . . . . . . . . 287
57.2 GNU Free Documentation License . . . . . . . . . . . . . . . . . . . . . . 288
57.3 GNU Lesser General Public License . . . . . . . . . . . . . . . . . . . . . . 289
1 Preface
This book is meant to cover Java persistence, that is, storing data from the Java programming
language in a persistent storage medium. Specifically it covers using the Java Persistence API (JPA)
to store Java objects in relational databases, but I would like it to have a somewhat wider scope
than just JPA and concentrate more on general persistence patterns and use cases; after all, JPA is
just the newest of many failed Java persistence standards, and this book should be able to evolve
beyond JPA when it is replaced by the next persistence standard. I do not want this to be just a
regurgitation of the JPA spec, nor a user manual for one of the JPA products, but something more
focused on the real-world use cases of users and applications trying to make use of JPA (or another
Java persistence solution), the patterns they evolved and the pitfalls they hit.
This book is intended to be useful to anyone learning to develop, or developing, Java applications
that require persisting data to a database. It is mainly intended for Java developers intending
to persist Java objects through the Java Persistence API (JPA) standard to a relational database.
Please don't just read this book: if you're learning or developing with JPA, please contribute your
experiences to this book.
1.0.3 Style
This book is meant to be written in a casual manner. The goal is to avoid sounding dry, overly
technical, or impersonal. The book should sound casual, like a co-worker explaining to you how to
use something, or a fellow consultant relating their latest engagement to another. Please refrain
from being overly critical of any product, ranting about bugs, or marketing your own product or
services.
1.0.4 Authors
Everyone is encouraged to participate in the ongoing development of this book. You do not need
to be a Java persistence superstar to contribute; many times the best advice and information for
other users comes from first-time users who have not yet been conditioned to think that something
confusing is obvious.
List of authors: (please contribute and sign your name)
James Sutherland1 : Currently working on Oracle2 TopLink3 and Eclipse4 EclipseLink5 ,
over 12 years of experience in object persistence and ORM.
Doug Clarke6 : Oracle7 TopLink8 and Eclipse9 EclipseLink10 , over 10 years of experience in
the object persistence industry.
1 http://en.wikibooks.org/wiki/User%3AJamesssss
2 http://en.wikipedia.org/wiki/Oracle%20Corporation
3 http://en.wikipedia.org/wiki/TopLink
4 http://en.wikipedia.org/wiki/Eclipse%20Foundation
5 http://en.wikipedia.org/wiki/EclipseLink
6 http://en.wikibooks.org/wiki/User%3ADjclarke
7 http://en.wikipedia.org/wiki/Oracle%20Corporation
8 http://en.wikipedia.org/wiki/TopLink
9 http://en.wikipedia.org/wiki/Eclipse%20Foundation
10 http://en.wikipedia.org/wiki/EclipseLink
11 http://en.wikibooks.org/wiki/Category%3AJava%20Persistence
2 What is Java persistence?
Persistence1, in computer science, refers to data that outlives the process that created it. Java
persistence could be defined as storing anything to any level of persistence using the Java
programming language, but obviously this would be too broad a definition to cover in a single
book. That is why this book focuses on storing Java objects2 in relational databases3, in particular
using the Java Persistence API4 (JPA).
There are many ways to make data persist in Java, including (to name a few): JDBC5 ,
serialization6 , file IO, JCA7 , object databases8 , and XML databases9 . However, the
majority of data is persisted in databases, specifically relational databases. Most things
that you do on a computer or web site that involve storing data, involve accessing a
relational database. Relational databases are the standard mode of persistent storage for
most industries, from banking to manufacturing.
There are many things that can be stored in databases with Java. Java data includes strings,
numbers, dates, byte arrays, images, XML, and Java objects. Many Java applications
use Java objects to model their application data. Because Java is an Object Oriented10
language, storing Java objects is a natural and common approach to persisting data from
Java.
There are many ways to access a relational database from Java; JPA is just the latest of
many different specifications, but it seems to be the direction in which most programmers are
heading.
1 http://en.wikipedia.org/wiki/Persistence_%28computer_science%29
2 http://en.wikipedia.org/wiki/Object%20%28computer_science%29%20
3 http://en.wikipedia.org/wiki/Relational_databases%20
4 http://en.wikipedia.org/wiki/Java_Persistence_API%20
5 http://en.wikipedia.org/wiki/JDBC%20
6 http://en.wikipedia.org/wiki/Serialization%20
7 http://en.wikipedia.org/wiki/J2EE_Connector_Architecture%20
8 http://en.wikipedia.org/wiki/Object_databases%20
9 http://en.wikipedia.org/wiki/XML_database%20
10 http://en.wikipedia.org/wiki/Object%20Oriented%20
3 What is Java?
Java1 is an object oriented2 programming language originally developed by Sun Microsystems3. Its
syntax is similar to C++4, and its object model was influenced by Smalltalk5.
3.1 Google Trends
• Programming Languages67
• JEE Servers89
3.2 See also
• Java Programming10
1 http://en.wikipedia.org/wiki/Java%20%28programming%20language%29%20
2 http://en.wikipedia.org/wiki/object%20oriented%20
3 http://en.wikipedia.org/wiki/Sun%20Microsystems%20
4 http://en.wikipedia.org/wiki/C%2B%2B%20
5 http://en.wikipedia.org/wiki/Smalltalk%20
6 http://www.google.com/trends?q=c%23%2C+php%2C+java%2C+c%2B%2B%2C+perl&ctab=0&geo=all&date=all&sort=0
7 Unfortunately it is hard to separate the Java island from the Java language.
8 http://www.google.com/trends?q=websphere%2C+weblogic%2C+jboss%2C+glassfish%2C+geronimo+apache&ctab=0&geo=all&date=all&sort=0
9 Could not include Oracle because of Oracle database hits.
10 http://en.wikibooks.org/wiki/Java%20Programming
4 What is a database?
A database1 is a program that stores data. There are many types of databases: flat-file,
hierarchical, relational, object-relational, object-oriented, XML, and others. The original
databases were mainly proprietary and non-standardized.
Relational databases were the first databases to achieve great success and standardization.
Relational databases are characterized by the SQL2 (structured query language) standard
to query and modify the database, their client/server3 architecture, and relational table
storage structure. Relational databases achieved great success because their standardization
allowed many different vendors such as Oracle4 , IBM5 , and Sybase6 to produce interoperable
products giving users the flexibility to switch their vendor and avoid vendor lock-in to a
proprietary solution. Their client/server architecture allows the client programming language
to be decoupled from the server, allowing the database server to support interface APIs into
multiple different programming languages and clients.
Although relational databases are relatively old technology, they still dominate the industry.
There have been many attempts to replace the relational model, first with object-oriented
databases, then with object-relational databases, and finally with XML databases, but none
of the new database models achieved much success and relational databases remain the
overwhelmingly dominant database model.
The main relational databases used today are: Oracle7 , MySQL8 (Oracle), PostgreSQL9 ,
DB210 (IBM), SQL Server11 (Microsoft).
• Google trend for databases12
1 http://en.wikipedia.org/wiki/database
2 http://en.wikipedia.org/wiki/SQL
3 http://en.wikipedia.org/wiki/client%2Fserver
4 http://en.wikipedia.org/wiki/Oracle%20Corporation
5 http://en.wikipedia.org/wiki/IBM
6 http://en.wikipedia.org/wiki/Sybase
7 http://en.wikipedia.org/wiki/Oracle%20database
8 http://en.wikipedia.org/wiki/MySQL
9 http://en.wikipedia.org/wiki/PostgreSQL
10 http://en.wikipedia.org/wiki/DB2
11 http://en.wikipedia.org/wiki/Microsoft%20SQL%20Server
12 http://www.google.com/trends?q=oracle%2C+sybase%2C+sql+server%2C+mysql%2C+db2&ctab=0&geo=all&date=all&sort=0
5 What is JPA?
The Java Persistence API (JPA1) is a Java specification for accessing, persisting,
and managing data between Java objects/classes and a relational database. JPA was
defined as part of the EJB2 3.0 specification as a replacement for the EJB 2 CMP Entity
Beans specification. JPA is now considered the industry standard approach for Object
Relational Mapping (ORM) in Java.
JPA itself is just a specification, not a product; it cannot perform persistence or anything
else by itself. JPA is just a set of interfaces, and requires an implementation. There are
open-source and commercial JPA implementations to choose from and any Java EE 5
application server should provide support for its use. JPA also requires a database to persist
to.
JPA allows POJO3 (Plain Old Java Object) classes to be easily persisted without requiring the
classes to implement any interfaces or methods, as the EJB 2 CMP specification required. JPA
allows an object's object-relational mappings to be defined through standard annotations or
XML, defining how the Java class maps to a relational database table. JPA also defines a
runtime EntityManager API for processing queries and transactions on the objects against
the database. JPA defines an object-level query language, JPQL, to allow querying of the
objects from the database.
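As a quick, minimal sketch of these pieces (the Employee entity and the persistence unit name
"example" are assumptions for illustration, not part of the specification), persisting and querying an
object looks roughly like this:

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;
import java.util.List;

public class JPAExample {
    public static void main(String[] args) {
        // The persistence unit "example" would be defined in META-INF/persistence.xml.
        EntityManagerFactory factory = Persistence.createEntityManagerFactory("example");
        EntityManager em = factory.createEntityManager();
        // Resource-local transaction, i.e. running outside a Java EE container.
        em.getTransaction().begin();

        // Employee is assumed to be a POJO marked with @Entity and @Id.
        Employee employee = new Employee();
        employee.setFirstName("Bob");
        em.persist(employee);

        // JPQL queries are written against the object model, not the tables.
        List<Employee> bobs = em.createQuery(
                "Select e from Employee e where e.firstName = :name", Employee.class)
            .setParameter("name", "Bob")
            .getResultList();

        em.getTransaction().commit();
        em.close();
        factory.close();
    }
}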
JPA is the latest of several Java persistence specifications. The first was the OMG persistence
service Java binding, which was never very successful; I'm not sure of any commercial products
supporting it. Next came EJB 1.0 CMP Entity Beans, which was very successful in being
adopted by the big Java EE providers (BEA, IBM), but there was a backlash against the
spec by some users who thought its requirements on Entity Beans overly complex, its overhead
high and its performance poor. EJB 2.0 CMP tried to reduce some of the complexity
of Entity Beans by introducing local interfaces, but the majority of the complexity
remained. EJB 2.0 also lacked portability, in that the deployment descriptors defining the
object-relational mapping were not specified and were all proprietary.
This backlash in part led to the creation of another Java persistence specification, JDO4
(Java Data Objects). JDO obtained somewhat of a "cult" following of several independent
vendors such as Kodo JDO, and several open-source implementations, but never had much
success with the big Java EE vendors.
Despite the two competing Java persistence standards of EJB CMP and JDO, the majority
of users continued to prefer proprietary API solutions, mainly TopLink5 (which had been
1 http://en.wikipedia.org/wiki/Java%20Persistence%20API
2 http://en.wikipedia.org/wiki/EJB
3 http://en.wikipedia.org/wiki/POJO
4 http://en.wikipedia.org/wiki/Java%20Data%20Objects
5 Chapter 11 on page 25
around for some time and had its own POJO API) and Hibernate6 (which was a relatively
new open-source product that also had its own POJO API and was quickly becoming the
open-source industry standard). The TopLink product formerly owned by WebGain was
also acquired by Oracle, increasing its influence on the Java EE community.
The EJB CMP backlash was only part of a backlash against all of Java EE which was seen
as too complex in general and prompted such products as the Spring container. This led
the EJB 3.0 specification to have a main goal of reducing the complexity, which led the spec
committee down the path of JPA. JPA was meant to unify the EJB 2 CMP, JDO, Hibernate,
and TopLink APIs and products, and seems to have been very successful in doing so.
Currently most of the persistence vendors have released implementations of JPA confirming
its adoption by the industry and users. These include Hibernate (acquired by JBoss, acquired
by Red Hat), TopLink (acquired by Oracle), and Kodo JDO (acquired by BEA, acquired
by Oracle). Other products that have added support for JPA include Cocobase (owned by
Thought Inc.) and JPOX.
• EJB JPA Spec7
• JPA ORM XML Schema8
• JPA Persistence XML Schema9
• JPA JavaDoc10
• JPQL BNF11
6 Chapter 12 on page 27
7 http://jcp.org/aboutJava/communityprocess/final/jsr220/index.html
8 http://java.sun.com/xml/ns/persistence/orm_1_0.xsd
9 http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd
10 https://java.sun.com/javaee/5/docs/api/javax/persistence/package-summary.html
11 Chapter 45.8 on page 232
6 What is new in JPA 2.0?
The JPA 2.0 specification adds several enhancements to the JPA 1.0 specification including:
• Extended Map support1 - Support for maintaining a key column for a Basic, Embeddable,
or Entity key value in any collection relationship using a Map.
• Derived Identifiers2
• Nested embedding3
• New collection mappings4 - Support for collections of Basic or Embeddable types.
• Unidirectional OneToMany5
• Ordered List mappings6 - Support for maintaining an index column in any collection
relationship using a List.
• Orphan removal7 - Automatic deletion of objects removed from relationships, as sketched after this list.
• Pessimistic Locking8
• EntityManager API updates9
• Cache APIs10
• Standard Properties11
• Metadata
• Criteria API12
• JPQL enhancements13
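As a small illustration of two of these additions (a sketch only; the Phone class and its owner
back-reference are assumptions for illustration), orphan removal and an ordered List mapping are
enabled directly on the collection mapping:

@Entity
public class Employee {
    @Id
    private long id;

    // JPA 2.0: removing a Phone from this collection also deletes its row (orphan removal).
    @OneToMany(mappedBy="owner", orphanRemoval=true)
    // JPA 2.0: maintain the List order in a dedicated index column.
    @OrderColumn(name="PHONE_INDEX")
    private List<Phone> phones;
    ...
}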
6.0.1 Resources
7 Other Persistence Specs
There are many specifications related to persistence in Java. The following table summarises
the latest version of each specification.

Specification                                                    Version    Year
JDO (Java Data Objects)                                          3.0        2010
JPA (Java Persistence API)                                       2.0        2009
JCA (Java EE Connector Architecture)                             1.6        2009
SDO (Service Data Objects)                                       2.1        2006
EJB CMP (Enterprise Java Beans, Container Managed Persistence)   2.1 (EJB)  2003
8 Why use JPA or ORM?
This is an intriguing question. There are many reasons to use an ORM framework or
persistence product, and many reasons to use JPA in particular.
• Leverages large persistence library to avoid developing solutions to problems that others
have already solved.
• Avoids low level JDBC and SQL code.
• Leverages object oriented programming and object model usage.
• Provides database and schema independence.
• Most ORM products are free and open source.
• Many enterprise corporations provide support and services for ORM products.
• Provides high end performance features such as caching and sophisticated database and
query optimizations.
9 Persistence Products
There are many persistence products to choose from. Most persistence products now support
a JPA interface, although there still are some exceptions. Which product you use depends
on your preference, but most people would recommend you use the JPA standard whichever
product you choose. This gives you the flexibility to switch persistence providers, or port
your application to another server platform which may use a different persistence provider.
Determining which persistence product to use involves many criteria. Valid things to consider
include:
• Which persistence product does your server platform support and integrate with?
• What is the cost of the product, is it free and open source, can you purchase enterprise
level support and services?
• Do you have an existing relationship with the company producing the product?
• Is the product active and does it have a large user base?
• How does the product perform and scale?
• Does the product integrate with your database platform?
• Does the product have active and open forums, do questions receive useful responses?
• Is the product JPA compliant, what functionality does the product offer beyond the JPA
specification?
9.1 Existing Persistence Products

• Hibernate3 (Red Hat): JPA 1.0, JPA 2.0; open source; last release 4.1 (2012); application
  servers: JBoss.
• EclipseLink4 (Eclipse): JPA 1.0, JPA 2.0; open source; application servers: Oracle ...
• OpenJPA6 (Apache): JPA 1.0, JPA 2.0; open source; last release 2.1.0 (2011); application
  servers: Geronimo, WebSphere Application Server (8.0).
• DataNucleus7 (DataNucleus): JPA 1.0, JPA 2.0, JDO 2.0, JDO 3.0; open source; last
  release 3.1.3 (2013).
• TopLink Essentials8 (java.net): JPA 1.0; open source; last release 2.0 (2007); application
  servers: Glassfish (v2), SunAS (9), OracleAS (10.1.3).
• Kodo9 (Oracle): JPA 1.0, JDO 2.0; last release 4.1 (2007); application servers: Oracle
  WebLogic.

3 Chapter 12 on page 27
4 Chapter 10 on page 23
5 Chapter 11 on page 25
6 Chapter 15 on page 33
7 http://www.datanucleus.org/
8 Chapter 13 on page 29
9 Chapter 14 on page 31
10 EclipseLink
EclipseLink1 is the open source Eclipse Persistence Services Project from the Eclipse
Foundation. The product provides an extensible framework that allows Java developers to
interact with various data services, including databases, XML, and Enterprise Information
Systems (EIS). EclipseLink supports a number of persistence standards including the Java
Persistence API (JPA), Java API for XML Binding (JAXB), Java Connector Architecture
(JCA), and Service Data Objects (SDO).
EclipseLink is based on the TopLink2 product; Oracle3 contributed the TopLink source code
to create the EclipseLink project. The original contribution was from TopLink's 11g
code base, and the entire code base and feature set was contributed, with only EJB 2 CMP
and some minor Oracle AS specific integration removed. This differs from the TopLink
Essentials Glassfish contribution, which did not include some key enterprise features. The
package names were changed and some of the code was moved around.
The TopLink Mapping Workbench UI has also been contributed to the project.
EclipseLink is the intended path forward for persistence for Oracle and TopLink. It is
intended that the next major releases of both Oracle TopLink and Oracle AS will include
EclipseLink.
EclipseLink supports usage in an OSGi4 environment.
EclipseLink has been announced as the JPA 2.0 reference implementation and as the JPA
provider for Glassfish v3.
• EclipseLink Home5
• EclipseLink Newsgroup6
• EclipseLink Wiki7
1 http://en.wikipedia.org/wiki/EclipseLink
2 Chapter 11 on page 25
3 http://en.wikipedia.org/wiki/Oracle%20Corporation
4 http://en.wikipedia.org/wiki/OSGi
5 http://www.eclipse.org/eclipselink/
6 http://www.eclipse.org/newsportal/thread.php?group=eclipse.technology.eclipselink
7 http://wiki.eclipse.org/EclipseLink
11 TopLink
TopLink1 is one of the leading Java persistence products and JPA implementations. TopLink
is produced by Oracle2 and part of Oracle's OracleAS, WebLogic, and OC4J servers.
As of TopLink 11g, TopLink bundles the open source project EclipseLink3 for most of its
functionality.
The TopLink 11g release supports the JPA 1.0 specification. TopLink 10.1.3 also supports
EJB CMP and is the persistence provider for OracleAS OC4J 10.1.3 for both JPA and
EJB CMP. TopLink provides advanced object-relational mapping functionality beyond the
JPA specification, as well as providing persistence for object-relational data-types, and
Enterprise Information Systems (EIS/mainframes). TopLink includes sophisticated object
caching and performance features. TopLink provides a Grid extension that integrate with
Oracle Coherence. TopLink provides object-XML mapping support and provides a JAXB
implementation and web service integration. TopLink provides a Service Data Object (SDO)
implementation.
TopLink provides a rich user interface through the TopLink Mapping Workbench. The
Mapping Workbench allows for graphical mapping of an object model to a data model,
generation of a data model from an object model, generation of an object model from a
data model, and auto-mapping of an existing object and data model. The TopLink Mapping
Workbench functionality is also integrated with Oracle's JDeveloper IDE.
TopLink contributed part of its source code to become the JPA 1.0 reference implementation
under the Sun java.net Glassfish project. This open-source product is called TopLink
Essentials, and despite a different package name (oracle.toplink.essentials) it is basically a
branch of the source code of the TopLink product with some advanced functionality stripped
out.
TopLink contributed practically its entire source code to the Eclipse Foundation EclipseLink
product. This is an open source product, currently in incubation, that represents the path
forward for TopLink. The package name is different (org.eclipse.persistence) but the source
code is basically a branch of the TopLink 11g release. Oracle also contributed its Mapping
Workbench source code to the project. The TopLink Mapping Workbench developers were
also major contributors to the Eclipse Dali project for JPA support.
TopLink was first developed in Smalltalk and ported to Java in the 90's, and has over
15 years' worth of object persistence solutions behind it. TopLink originally provided a proprietary
POJO persistence API; when EJB was first released, TopLink provided one of the most
popular EJB CMP implementations, although it continued to recommend its POJO solution.
1 http://en.wikipedia.org/wiki/TopLink
2 http://en.wikipedia.org/wiki/Oracle%20Corporation
3 Chapter 10 on page 23
TopLink also provided a JDO 1.0 implementation for a few releases, but this was eventually
deprecated and removed once the JPA specification had been formed. Oracle and TopLink
have been involved in each of the EJB, JDO and EJB3/JPA expert groups, and Oracle was
the co-lead for the EJB3/JPA specification.
• Oracle TopLink Home4
• Oracle TopLink Forum5
• Oracle TopLink Wiki6
TopLink Resources
• TopLink Automatic Schema Generation Options7
4 http://www.oracle.com/technology/products/ias/toplink/index.html
5 http://forums.oracle.com/forums/forum.jspa?forumID=48
6 http://wiki.oracle.com/page/TopLink
7 http://docs.sun.com/app/docs/doc/819-3672/gbwmk?a=view
12 Hibernate
Hibernate1 is an open source project originally developed by a team of Java software developers
around the world led by Gavin King. JBoss, Inc. (now part of Red Hat) later hired the lead
Hibernate developers and worked with them in supporting Hibernate.
The current version of Hibernate is 4.1.x. Hibernate provides both a proprietary
POJO API and JPA support.
1 http://en.wikipedia.org/wiki/Hibernate%20%28Java%29
13 TopLink Essentials
TopLink Essentials1 is an open source project from the Sun java.net Glassfish community. It
is the EJB3 JPA 1.0 reference implementation, and is the JPA provider for the Sun Glassfish
v1 application server.
TopLink Essentials was based on the TopLink2 product; Oracle3 contributed some of
the TopLink source code to create the TopLink Essentials project. The original contribution
was from TopLink's 10.1.3 code base; only some of the TopLink product source code was
contributed, and it did not include some key enterprise features. The package names were
changed and some of the code was moved around.
TopLink Essentials has been replaced by the EclipseLink4 project. EclipseLink will be the
JPA 2.0 reference implementation and be part of Sun Glassfish v3.
• TopLink Essentials Home5
• TopLink Essentials Forum6
• TopLink Essentials Wiki7
1 http://en.wikipedia.org/wiki/TopLink
2 Chapter 11 on page 25
3 http://en.wikipedia.org/wiki/Oracle%20Corporation
4 Chapter 10 on page 23
5 https://glassfish.dev.java.net/javaee5/persistence/index.html
6 http://www.nabble.com/java.net---glassfish-persistence-f13455.html
7 http://wiki.glassfish.java.net/Wiki.jsp?page=TopLinkEssentials
14 Kodo
Kodo1 is a commercial persistence product originally developed by BEA and now owned by Oracle.
Kodo supports both the JDO and JPA specifications and is integrated with the Oracle WebLogic
application server. Part of the Kodo source code was donated to the Apache OpenJPA project.
1 http://www.bea.com/kodo/
15 Open JPA
OpenJPA1 is an Apache project for supporting the JPA specification. Its source code was
originally donated from (part of) BEA's Kodo product.
1 http://en.wikipedia.org/wiki/OpenJPA
16 Ebean
16.1 Index
1. /Why Ebean/3
2. /Example Model/4
3. /Quick Start - examples/5
4. /Mapping via JPA Annotations/6
5. /Query/7
a) /Basics/8
b) /Future - Asychronous query execution/9
c) /PagingList/10
d) /Aggregation - Group By/11
6. /Save and Delete/12
7. /Transactions/13
8. /Caching/14
9. /Using raw SQL/15
1 http://www.avaje.org
2 Chapter 17 on page 37
3 http://en.wikibooks.org/wiki/%2FWhy%20Ebean%2F
4 http://en.wikibooks.org/wiki/%2FExample%20Model%2F
5 http://en.wikibooks.org/wiki/%2FQuick%20Start%20-%20examples%2F
6 http://en.wikibooks.org/wiki/%2FMapping%20via%20JPA%20Annotations%2F
7 http://en.wikibooks.org/wiki/%2FQuery%2F
8 http://en.wikibooks.org/wiki/%2FBasics%2F
9 http://en.wikibooks.org/wiki/%2FFuture%20-%20Asychronous%20query%20execution%2F
10 http://en.wikibooks.org/wiki/%2FPagingList%2F
11 http://en.wikibooks.org/wiki/%2FAggregation%20-%20Group%20By%2F
12 http://en.wikibooks.org/wiki/%2FSave%20and%20Delete%2F
13 http://en.wikibooks.org/wiki/%2FTransactions%2F
14 http://en.wikibooks.org/wiki/%2FCaching%2F
15 http://en.wikibooks.org/wiki/%2FUsing%20raw%20SQL%2F
17 Mapping
18 Mapping
The first thing that you need to do to persist something in Java is define how it is to be
persisted. This is called the mapping process (details1). There have been many different
solutions to the mapping process over the years, including object databases that didn't
require you to map anything but let you persist objects directly; object-relational mapping
tools that generated an object model from a data model, with the mapping and persistence
logic included in it; and ORM products that provided mapping tools for mapping an existing
object model to an existing data model, storing the mapping meta-data in flat files, database
tables, XML and, finally, annotations.
In JPA, mappings can be defined either through Java annotations or in XML files. One
significant aspect of JPA is that only a minimal amount of mapping is required. JPA
implementations are required to provide defaults for almost all aspects of mapping an object.
The minimum requirement for mapping an object in JPA is to define which objects can be
persisted. This is done through either marking the class with the @Entity2 annotation,
or adding an <entity> tag for the class in the persistence unit's ORM XML file. Also the
primary key, or unique identifier attribute(s), must be defined for the class. This is done
through marking one of the class's fields or properties (get method) with the @Id annotation,
or adding an <id> tag for the class's attribute in the ORM XML3 file.
The JPA implementation will default all other mapping information, including defaulting
the table name, column names for all defined fields or properties, cardinality and mapping
of relationships, all SQL and persistence logic for accessing the objects. Most JPA imple-
mentations also provide the option of generating the database tables at runtime, so very
little work is required by the developer to rapidly develop a persistent JPA application.
1 http://en.wikipedia.org/wiki/Object-Relational_impedance_mismatch
2 https://java.sun.com/javaee/5/docs/api/javax/persistence/Entity.html
3 http://java.sun.com/xml/ns/persistence/orm_1_0.xsd
import javax.persistence.*;
...
@Entity
public class Employee {
@Id
private long id;
private String firstName;
private String lastName;
private Address address;
private List<Phone> phones;
private Employee manager;
private List<Employee> managedEmployees;
...
}
<entity-mappings xmlns="http://java.sun.com/xml/ns/persistence/orm"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/persistence/orm
orm_1_0.xsd">
<description>The minimal mappings for a persistent entity in
XML.</description>
<entity name="Employee" class="org.acme.Employee" access="FIELD">
<attributes>
<id name="id"/>
</attributes>
</entity>
</entity-mappings>
JPA allows annotations to be placed on either the class field, or on the get method for the
property. This defines the access type of the class, which is either FIELD or PROPERTY. Either
all annotations must be on the fields, or all annotations must be on the get methods, but not
both (unless the @Access annotation is used). JPA does not define a default access type
(oddly enough), so the default may depend on the JPA provider. The default is assumed to
be based on where the @Id annotation is placed: if it is placed on a field, then the class is
using FIELD access; if it is placed on a get method, then the class is using PROPERTY access. The
access type can also be defined through XML on the <entity> element. The access type
can be configured using the @Access4 annotation (with the AccessType enum) or the access XML attribute.
For FIELD access the class field value will be accessed directly to store and load the value
from the database. This is normally done either through reflection, or through generated
byte code, but depends on the JPA provider and configuration. The field can be private or
any other access type. FIELD is normally safer, as it avoids any unwanted side-effect code
that may occur in the application get/set methods.
For PROPERTY access the class get and set methods will be used to store and load the value
from the database. This is normally done either through reflection, or through generated
byte code, but depends on the JPA provider and configuration. PROPERTY has the advantage
of allowing the application to perform conversion of the database value when storing it in
the object. The user should be careful to not put any side effects in the get/set methods
that could interfere with persistence.
JPA 2.0 allows the access type to also be set on a specific field or get method. This allows
the class to use one default access mechanism, but one attribute to use a different
access type. This can be used for attributes that need to be converted through database-specific
get/set methods, or to allow a specific attribute to avoid side effects in its get/set
methods. This is done through specifying the @Access annotation or XML attribute
on the mapped attribute. You may also need to mark the field/property as @Transient if it
has a different attribute name than the mapped attribute.
JPA also allows a persistence unit default <access> element to be set in
persistence-unit-defaults, or a mapping file default <access> element in entity-mappings.
4 https://java.sun.com/javaee/5/docs/api/javax/persistence/AccessType.html
@Entity
@Access(AccessType.FIELD)
public class Employee {
@Id
private long id;
private String firstName;
private String lastName;
@Transient
private Money salary;
@ManyToOne(fetch=FetchType.LAZY)
private Employee manager;
@Access(AccessType.PROPERTY)
private BigDecimal getBDSalary() {
return this.salary.toNumber();
}
private void setBDSalary(BigDecimal salary) {
this.salary = new Money(salary);
}
}
18.2 Common Problems
This type of problem typically occurs when you annotate both the fields and the methods (properties)
of the class. You must choose either field or property access, and be consistent. Also, when
annotating properties you must put the annotation on the get method, not the set method.
Ensure that you have not defined the same mappings in XML, which may be overriding
the annotations. You may also have a classpath issue, such as having an old version of the
class on the classpath.
5 Chapter 11 on page 25
6 Chapter 10 on page 23
Odd behavior
There are many reasons that odd behavior can occur with persistence. One common issue
that can cause odd behavior is using property access and putting side effects in your get or
set methods. For this reason it is generally recommended to use field access in mapping,
i.e. putting your annotations on your variables not your get methods.
For example consider:
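The following is only a sketch of the kind of setter meant here; the Phone class and its owner
back-reference are assumptions for illustration:

@OneToMany(mappedBy="owner")
private List<Phone> phones;

public List<Phone> getPhones() {
    return phones;
}

public void setPhones(List<Phone> phones) {
    this.phones = phones;
    // Side effect: iterating the new value touches every element, which forces a lazy
    // collection to be instantiated, and mutating the related objects here can confuse
    // merge, persist and other operations in some JPA implementations.
    for (Phone phone : phones) {
        phone.setOwner(this);
    }
}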
This may look innocent, but these side effects can have unexpected consequences. For
example if the relationship was lazy this would have the effect of always instantiating the
collection when set from the database. It could also have consequences with certain JPA
implementations for persisting, merging and other operations, causing duplicate inserts,
missed updates, or a corrupt object model.
I have also seen simply incorrect property methods, such as a get method that always
returns a new object, or a copy, or set methods that don't actually set the value.
In general if you are going to use property access, ensure your property methods are free of
side effects. Perhaps even use different property methods than your application uses.
19 Tables
A table1 is the basic persistence structure of a relational database. A table contains a list of
columns, which define the table's structure, and a list of rows, which hold the table's data.
Each column has a specific type and generally a size. The standard set of relational types
is limited to basic types including numeric, character, date-time, and binary (although
most modern databases have additional types and typing systems). Tables can also have
constraints that define the rules which restrict the row data, such as primary key, foreign
1 http://en.wikipedia.org/wiki/Table%20%28database%29
key, and unique constraints. Tables also have other artifacts such as indexes, partitions and
triggers.
A typical mapping of a persistent class will map the class to a single table. In JPA this is
defined through the @Table2 annotation or <table> XML element. If no table annotation
is present, the JPA implementation will automatically assign a table for the class. The JPA
default table name is the name of the class (minus the package) with the first letter capitalized.
Each attribute of the class will be stored in a column of the table.
...
@Entity
@Table(name="EMPLOYEE")
public class Employee {
...
}
2 https://java.sun.com/javaee/5/docs/api/javax/persistence/Table.html
20 Advanced
Although in the ideal case each class would map to a single table, this is not always possible.
Other scenarios include:
• Multiple tables1 : One class maps to 2 or multiple tables.
• Sharing tables2 : 2 or multiple classes are stored in the same table.
• Inheritance3 : A class is involved in inheritance and has an inherited and local table.
• Views4 : A class maps to a view.
• Stored procedures5 : A class maps to a set of stored procedures.
• Partitioning6 : Some instances of a class map to one table, and other instances to another
table.
• Replication7 : A class's data is replicated to multiple tables.
• History8 : A class has historical data.
These are all advanced cases; some are handled by the JPA spec and many are not. The
following sections investigate each of these scenarios further and include what is supported
by the JPA spec, what can be done to work around the issue within the spec, and how to
use some JPA implementations' extensions to handle the scenario.
20.1 Multiple tables
Sometimes a class maps to multiple tables. This typically occurs on legacy or existing
data models where the object model and data model do not match. It can also occur in
inheritance when subclass data is stored in additional tables. Multiple tables may also be
used for performance, partitioning or security reasons.
JPA allows multiple tables to be assigned to a single class. The @SecondaryTable9 and
@SecondaryTables annotations or <secondary-table> elements can be used. By default
the @Id column(s) are assumed to be in both tables, such that the secondary table's @Id
column(s) are the primary key of the secondary table and a foreign key to the first table. If
the first table's @Id column(s) are not named the same, the @PrimaryKeyJoinColumn10 annotation
or <primary-key-join-column> element can be used to define the foreign key join condition.
In a multiple table entity, each mapping must define which table the mapping's columns are
from. This is done using the table attribute of the @Column or @JoinColumn annotations
or XML elements. By default the primary table of the class is used, so you only need to set
the table for secondary tables. For inheritance the default table is the primary table of the
subclass being mapped.
...
@Entity
@Table(name="EMPLOYEE")
@SecondaryTable(name="EMP_DATA",
pkJoinColumns = @PrimaryKeyJoinColumn(name="EMP_ID",
referencedColumnName="ID")
)
public class Employee {
...
@Column(name="YEAR_OF_SERV", table="EMP_DATA")
private int yearsOfService;
@OneToOne
@JoinColumn(name="MGR_ID", table="EMP_DATA",
referencedColumnName="ID")
private Employee manager;
...
}
9 https://java.sun.com/javaee/5/docs/api/javax/persistence/SecondaryTable.html
10 https://java.sun.com/javaee/5/docs/api/javax/persistence/PrimaryKeyJoinColumn.html
<entity name="Employee" class="org.acme.Employee" access="FIELD">
    <table name="EMPLOYEE"/>
    <secondary-table name="EMP_DATA">
        <primary-key-join-column name="EMP_ID" referenced-column-name="ID"/>
    </secondary-table>
    <attributes>
        <id name="id"/>
        <basic name="yearsOfService">
            <column name="YEAR_OF_SERV" table="EMP_DATA"/>
        </basic>
        <one-to-one name="manager">
            <join-column name="MGR_ID" table="EMP_DATA"
                referenced-column-name="ID"/>
        </one-to-one>
    </attributes>
</entity>
With @PrimaryKeyJoinColumn the name refers to the foreign key column in the secondary
table and the referencedColumnName refers to the primary key column in the first table. If
you have multiple secondary tables, they must always refer to the first table. When defining
the table's schema you will typically define the join columns in the secondary table as the
primary key of that table, and as a foreign key to the first table. Depending on how you have
defined your foreign key constraints, the order of the tables can be important; the order will
typically match the order in which the JPA implementation inserts into the tables, so ensure
the table order matches your constraint dependencies.
For relationships to a class that has multiple tables, the foreign key (join column) always
maps to the primary table of the target. JPA does not allow a foreign key to map to a
table other than the target object's primary table. Normally this is not an issue, as foreign
keys almost always map to the id/primary key of the primary table, but in some advanced
scenarios it may be. Some JPA products allow the column or join column to
use the qualified name of the column (i.e. @JoinColumn(referencedColumnName="EMP_DATA.EMP_NUM"))
to allow this type of relationship. Some JPA products may also support
this through their own API, annotations or XML.
20.2 Multiple tables with foreign keys
Sometimes you may have a secondary table that is referenced through a foreign key from
the primary table to the secondary table instead of a foreign key from the secondary table
to the primary table. You may even have a foreign key between two of the secondary tables.
Consider having an EMPLOYEE and ADDRESS table where EMPLOYEE refers to ADDRESS through
an ADDRESS_ID foreign key, and (for some strange reason) you only want a single Employee
class that has the data from both tables. The JPA spec does not cover this directly, so if
you have this scenario the first thing to consider, if you have the flexibility, is to change
your data model to stay within the confines of the spec. You could also change your object
model to define a class for each table, in this case an Employee class and an Address class,
which is typically the best solution. You should also check with your JPA implementation to
see what extensions it supports in this area.
One way to solve the issue is simply to swap your primary and secondary tables. This will result in having the secondary table reference the primary table's primary key, and is within the spec. This however will have side-effects, one being that you have now changed the primary key of your object from EMP_ID to ADDRESS_ID, and it may have other mapping and querying implications. If you have more than two tables this also may not work.
Another option is to just use the foreign key column in the @PrimaryKeyJoinColumn; this is technically backward, and perhaps not supported by the spec, but may work for some JPA implementations. However this will result in the table insert order not matching the foreign key constraints, so the constraints will need to be removed, or deferred.
It is also possible to map the scenario through a database view. A view could be defined
joining the two tables and the class could be mapped to the view instead of the tables.
Views are read-only on some databases, but many also allow writes, or allow triggers to be
used to handle writes.
Some JPA implementations provide extensions to handle these scenarios.
TopLink11 , EclipseLink12 : Provide a proprietary API for their mapping model, ClassDescriptor.addForeignKeyFieldNameForMultipleTable(), that allows arbitrarily complex foreign key relationships to be defined among the secondary tables. This can be configured through using a @DescriptorCustomizer annotation and DescriptorCustomizer class.
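As a rough sketch (not the provider's documented example), such a customizer might look like the following in EclipseLink, where the table and column names are illustrative and assume EMPLOYEE holds an ADDRESS_ID foreign key to a secondary ADDRESS table:

import org.eclipse.persistence.config.DescriptorCustomizer;
import org.eclipse.persistence.descriptors.ClassDescriptor;

public class EmployeeCustomizer implements DescriptorCustomizer {
    public void customize(ClassDescriptor descriptor) {
        // Tell the descriptor how the primary EMPLOYEE table joins to the
        // secondary ADDRESS table (source foreign key -> target primary key).
        descriptor.addForeignKeyFieldNameForMultipleTable(
            "EMPLOYEE.ADDRESS_ID", "ADDRESS.ADDRESS_ID");
    }
}

The customizer class is then registered on the entity, for example with EclipseLink's @Customizer(EmployeeCustomizer.class) annotation.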
11 Chapter 11 on page 25
12 Chapter 10 on page 23
20.3 Multiple table joins
Occasionally the data model and object model do not get along very well at all. The database
could be a legacy model and not fit very well with the new application model, or the DBA
or object architect may be a little crazy. In these cases you may require advanced multiple
table joins.
Examples of these include having two tables related not by their primary or foreign keys, but
through some constant or computation. Consider having an EMPLOYEE table and an ADDRESS
table, the ADDRESS table has an EMP_ID foreign key to the EMPLOYEE table, but there are
several address for each employee and only the address with the TYPE of "HOME" is desired.
In this case data from both of the tables is desired to be mapped in the Employee object. A
join expression is required where the foreign key matches and the constant matches.
Again this scenario could be handled through redesigning the data or object model, or through using a view. Some JPA implementations provide extensions to handle these scenarios.
TopLink13 , EclipseLink14 : Provide a proprietary API for their mapping model, DescriptorQueryManager.setMultipleTableJoinExpression(), that allows arbitrarily complex multiple table joins to be defined. This can be configured through using a @DescriptorCustomizer annotation and DescriptorCustomizer class.
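As a rough sketch of the EMPLOYEE/ADDRESS "HOME" address scenario above (the table and column names are illustrative, and the exact API should be verified against your provider's documentation):

import org.eclipse.persistence.config.DescriptorCustomizer;
import org.eclipse.persistence.descriptors.ClassDescriptor;
import org.eclipse.persistence.expressions.Expression;
import org.eclipse.persistence.expressions.ExpressionBuilder;

public class HomeAddressJoinCustomizer implements DescriptorCustomizer {
    public void customize(ClassDescriptor descriptor) {
        ExpressionBuilder builder = new ExpressionBuilder();
        // Join ADDRESS on the EMP_ID foreign key and restrict to the "HOME" row.
        Expression join = builder.getField("ADDRESS.EMP_ID")
            .equal(builder.getField("EMPLOYEE.EMP_ID"))
            .and(builder.getField("ADDRESS.TYPE").equal("HOME"));
        descriptor.getQueryManager().setMultipleTableJoinExpression(join);
    }
}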
20.4 Multiple table outer joins
Another perversion of multiple table mapping is the desire to outer join the secondary table. This may be desired if the secondary table may or may not have a row defined for the object. Typically the object should be read-only if this is to be attempted, as writing to a row that may or may not be there can be tricky.
This is not directly supported by JPA, and it is best to reconsider the data model or object
model design if faced with this scenario. Again it is possible to map this through a database
view, where an outer join is used to join the tables in the view.
Some JPA implementations support using outer joins for multiple tables.
Hibernate15 : This can be accomplished through using the Hibernate @Table annotation
and set its optional attribute to true. This will configure Hibernate to use an outer join
to read the table, and will not write to the table if all of the attributes mapping to the
table are null.
TopLink16 , EclipseLink17 : If the database supports usage of outer join syntax in the where
clause (Oracle, Sybase, SQL Server), then the multiple table join expression could be used
to configure an outer join to be used to read the table.
13 Chapter 11 on page 25
14 Chapter 10 on page 23
15 Chapter 12 on page 27
16 Chapter 11 on page 25
17 Chapter 10 on page 23
20.5 Tables with special characters and mixed case
Some JPA providers may have issues with table and column names with special characters,
such as spaces. In general it is best to use standard characters, no spaces, and all uppercase
names. International languages should be ok, as long as the database and JDBC driver
supports the character set.
It may be required to "quote" table and column names with special characters or in some
cases with mixed case. For example if the table name had a space it could be defined as the
following:
@Table(name="\"Employee Data\"")
Some databases support mixed case table and column names, and others are case insensitive.
If your database is case insensitive, or you wish your data model to be portable, it is best to
use all uppercase names. This is normally not a big deal with JPA where you rarely use the
table and column names directly from your application, but can be an issue in certain cases
if using native SQL queries.
20.6 Table qualifiers, schemas, or creators
A database table may need to be prefixed with a table qualifier, such as the table's creator, or its namespace, schema, or catalog. Some databases also support linking to tables on other databases, so the link name can also be a table qualifier.
In JPA a table qualifier can be set on a table through the schema or catalog attribute.
Generally it does not matter which attribute is used, as both just result in prefixing the table name. Technically you could even include the full name "schema.table" as the table's name and it would work. The benefit of setting the prefix in the schema or catalog is that a default table qualifier can be set for the entire persistence unit; also, not setting the real table name may impact native SQL queries.
If all of your tables require the same table qualifier, you can set the default in the orm.xml.
...
@Entity
@Table(name="EMPLOYEE", schema="ACME")
public class Employee {
...
}
20.6.2 Example mapping XML for default (entire persistence unit) table
qualifier
<entity-mappings>
<persistence-unit-metadata>
<persistence-unit-defaults>
<schema name="ACME"/>
</persistence-unit-defaults>
</persistence-unit-metadata>
....
</entity-mappings>
20.6.3 Example mapping XML for default (orm file) table qualifier
<entity-mappings>
<schema name="ACME"/>
...
</entity-mappings>
21 Identity
...
@Entity
public class Employee {
@Id
private long id;
...
}
Common Problems
You must never change the id of an object. Doing so will cause errors, or strange behavior
depending on your JPA provider. Also do not create two objects with the same id, or try
persisting an object with the same id as an existing object. If you have an object that may already exist, use the EntityManager merge() API; do not use persist() for an existing object, and avoid relating an un-managed existing object to other managed objects.
1 http://en.wikipedia.org/wiki/Primary%20key
2 https://java.sun.com/javaee/5/docs/api/javax/persistence/Id.html
55
Identity
22 Sequencing
An object id can either be a natural id or a generated id. A natural id is one that occurs in
the object and has some meaning in the application. Examples of natural ids include user
ids, email addresses, phone numbers, and social insurance numbers. A generated id is one
that is generated by the system. A sequence number in JPA is a sequential id generated
by the JPA implementation and automatically assigned to new objects. The benefits of
using sequence numbers are that they are guaranteed to be unique, allow all other data of
the object to change, are efficient values for querying and indexes, and can be efficiently
assigned. The main issue with natural ids is that everything always changes at some point;
even a person's social insurance number can change. Natural ids can also make querying,
foreign keys and indexing less efficient in the database.
In JPA an @Id can be easily assigned a generated sequence number through the
@GeneratedValue1 annotation, or <generated-value> element.
...
@Entity
public class Employee {
@Id
@GeneratedValue
private long id;
...
}
There are several strategies for generating unique ids. Some strategies are database agnostic and others make use of built-in database support.
1 https://java.sun.com/javaee/5/docs/api/javax/persistence/GeneratedValue.html
22.1 Sequence Strategies
JPA provides support for several strategies for id generation defined through the
GenerationType2 enum values: TABLE, SEQUENCE and IDENTITY.
The choice of which sequence strategy to use is important as it affects performance, concur-
rency and portability.
Table sequencing uses a table in the database to generate unique ids. The table has two
columns, one stores the name of the sequence, the other stores the last id value that was
assigned. There is a row in the sequence table for each sequence object. Each time a new id
is required the row for that sequence is incremented and the new id value is passed back to
the application to be assigned to an object. This is just one example of a sequence table schema; for other table sequencing schemas see Customizing3 .
Table sequencing is the most portable solution because it just uses a regular database table,
so unlike sequence and identity can be used on any database. Table sequencing also provides
good performance because it allows for sequence pre-allocation, which is extremely important
to insert performance, but can have potential concurrency issues4 .
In JPA the @TableGenerator5 annotation or <table-generator> element is used to define
a sequence table. The TableGenerator defines a pkColumnName for the column used to
store the name of the sequence, valueColumnName for the column used to store the last id
allocated, and pkColumnValue for the value to store in the name column (normally the
sequence name).
SEQUENCE_TABLE
  SEQ_NAME    SEQ_COUNT
  EMP_SEQ     123
  PROJ_SEQ    550
...
@Entity
public class Employee {
@Id
@TableGenerator(name="TABLE_GEN", table="SEQUENCE_TABLE",
pkColumnName="SEQ_NAME",
valueColumnName="SEQ_COUNT", pkColumnValue="EMP_SEQ")
@GeneratedValue(strategy=GenerationType.TABLE, generator="TABLE_GEN")
private long id;
...
}
2 https://java.sun.com/javaee/5/docs/api/javax/persistence/GenerationType.html
3 Chapter 23.3.2 on page 74
4 Chapter 23.3 on page 73
5 https://java.sun.com/javaee/5/docs/api/javax/persistence/TableGenerator.html
Common Problems
Errors such as "table not found", "invalid column" can occur if you do not have a SEQUENCE
table defined in your database, or its schema does not match what you have configured,
or what your JPA provider is expecting by default. Ensure you create the sequence table
correctly, or configure your @TableGenerator to match the table that you created, or let
your JPA provider create you tables for you (most JPA provider support schema creation).
You may also get an error such as "sequence not found"; this means you did not create a row in the table for your sequence. You must insert an initial row in the sequence table for your sequence with the initial id (i.e. INSERT INTO SEQUENCE_TABLE (SEQ_NAME, SEQ_COUNT) VALUES ('EMP_SEQ', 0)), or let your JPA provider create your schema for you.
Sequence objects use special database objects to generate ids. Sequence objects are only
supported in some databases, such as Oracle, DB2, and Postgres. Usually, a SEQUENCE
object has a name, an INCREMENT, and other database object settings. Each time the
<sequence>.NEXTVAL is selected the sequence is incremented by the INCREMENT.
Sequence objects provide the optimal sequencing option, as they are the most efficient and have the best concurrency; however, they are the least portable, as most databases do not support them. Sequence objects support sequence preallocation through setting the INCREMENT on the database sequence object to the sequence preallocation size.
In JPA the @SequenceGenerator7 annotation or <sequence-generator> element is used
to define a sequence object. The SequenceGenerator defines a sequenceName for the name
of the database sequence object, and an allocationSize for the sequence preallocation size
or sequence object INCREMENT.
...
@Entity
public class Employee {
@Id
@GeneratedValue(strategy=GenerationType.SEQUENCE,
generator="EMP_SEQ")
@SequenceGenerator(name="EMP_SEQ", sequenceName="EMP_SEQ",
allocationSize=100)
private long id;
...
}
Common Problems
Errors such as "sequence not found", can occur if you do not have a SEQUENCE object
defined in your database. Ensure you create the sequence object, or let your JPA provider
create your schema for you (most JPA providers support schema creation). When creating
your sequence object, ensure the sequence's INCREMENT matches your SequenceGenerator's
allocationSize. The DDL to create a sequence object depends on the database, for
Oracle it is, CREATE SEQUENCE EMP_SEQ INCREMENT BY 100 START WITH 100.
7 https://java.sun.com/javaee/5/docs/api/javax/persistence/SequenceGenerator.html
Duplicate or negative id values can occur if your sequence object's INCREMENT does not match your allocationSize. This results in the JPA provider thinking it got back more sequence values than it really did, and it ends up duplicating values, or producing negative numbers. This can also occur on some JPA providers if your sequence object's START WITH is 0 instead of a value equal to or greater than the allocationSize.
Identity sequencing uses special IDENTITY columns in the database to allow the database
to automatically assign an id to the object when its row is inserted. Identity columns are
supported in many databases, such as MySQL, DB2, SQL Server, Sybase, and PostgreSQL.
Oracle does not support IDENTITY columns, but it is possible to simulate them using sequence objects and triggers.
Although identity sequencing seems like the easiest method to assign an id, it has several issues. One is that, since the id is not assigned by the database until the row is inserted, the id cannot be obtained in the object until after commit or after a flush call. Identity sequencing also does not allow for sequence preallocation, so it can require a select for each object that is inserted, potentially causing a major performance problem, so in general it is not recommended.
In JPA there is no annotation or element for identity sequencing as there is no additional
information to specify. Only the GeneratedValue's strategy needs to be set to IDENTITY.
...
@Entity
public class Employee {
@Id
@GeneratedValue(strategy=GenerationType.IDENTITY)
private long id;
...
}
Sequence objects are database constructs that can be instantiated as distinct objects as needed. They provide incremental values when their "next" method is called. Note that the increments can be values greater than one, for example incrementing by twos, threes, etc. An 'on insert' trigger can be constructed for each table that calls a sequence object created for that specific table's primary key, asking for the next value and inserting it into the table with the new record. Identity datatypes in products like MySQL or SQL Server are encapsulated types that do the same thing without requiring the setup of triggers and sequences. PostgreSQL at least has automated much of this with its Serial and Big Serial pseudo datatypes; these actually create an int/bigint column and run a "macro" creating the sequence and 'on insert' trigger when the CREATE TABLE statement is executed.
Common Problems
An unassigned id typically occurs because the @Id was not configured to use @GeneratedValue(strategy=GenerationType.IDENTITY). Ensure it is configured correctly. It could also be that your JPA provider does not support identity sequencing on the database platform that you are using, or you have not configured your database platform. Most providers require that you set the database platform through a persistence.xml property; most providers also allow you to customize your own platform if it is not directly supported.
It may also be that you did not set your primary key column in your table to be an identity
type.
Identity sequencing requires the insert to occur before the id can be assigned, so it is not
assigned on persist like other types of sequencing. You must either call commit() on the
current transaction, or call flush() on the EntityManager. It may also be that you did
not set your primary key column in your table to be an identity type.
A common issue is that the generated Id is part of a child object's Id through a OneToOne
or ManyToOne mapping. In this case, because JPA requires that the child define a duplicate
Basic mapping for the Id, its Id will be inserted as null. One solution to this is to mark the Column on the Id mapping in the child as insertable=false, updatable=false, and define the OneToOne or ManyToOne using a normal JoinColumn; this will ensure the foreign key field is populated by the OneToOne or ManyToOne, not the Basic. Another option is to first persist the parent, then call flush() before persisting the child.
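A minimal sketch of the first option (the class, table and column names are illustrative, not from the spec):

@Entity
public class Address {
    @Id
    @Column(name="OWNER_ID", insertable=false, updatable=false)
    private long ownerId; // duplicate Basic mapping for the id, made read-only

    @OneToOne
    @JoinColumn(name="OWNER_ID", referencedColumnName="EMP_ID")
    private Employee owner; // this mapping writes the OWNER_ID foreign key
    ...
}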
Identity sequencing does not support sequence preallocation, so requires a select after each
insert, in some cases doubling the insert cost. Consider using a sequence table, or sequence
object to allow sequence preallocation.
MySQL bug 1998 causes its autoincrement counter to be lost on restart. So if the last
entity is removed and the MySQL server restarted, the same id will be re-used and thus
not be unique.
8 http://bugs.mysql.com/bug.php?id=199
23 Advanced
23.1 Composite Primary Keys
A composite primary key is one that is made up of several columns in the table. A composite
primary key can be used if no single column in the table is unique. It is generally more
efficient and simpler to have a one-column primary key, such as a generated sequence number,
but sometimes a composite primary key is desirable and unavoidable.
Composite primary keys are common in legacy database schemas, where cascaded keys
can sometimes be used. This refers to a model where dependent objects' key definitions
include their parents' primary key; for example, COMPANY's primary key is COMPANY_ID,
DEPARTMENT's primary key is composed of a COMPANY_ID and a DEP_ID, EMPLOYEE's primary
key is composed of COMPANY_ID, DEP_ID, and EMP_ID, and so on. Although this generally does
not match object-oriented design principles, some DBA's prefer this model. Difficulties with
the model include the restriction that employees cannot switch departments, that foreign-key
relationships become more complex, and that all primary-key operations (including queries,
updates, and deletes) are less efficient. However, each department has control over its own
employee IDs, and if needed the database EMPLOYEE table can be partitioned based on the
COMPANY_ID or DEP_ID, as these are included in every query.
Other common usages of composite primary keys include many-to-many relationships where the join table has additional columns (so the table itself is mapped to an object whose primary key consists of the pair of foreign-key columns), and dependent or aggregate one-to-many relationships where the child object's primary key consists of its parent's primary key and a locally unique field.
There are two methods of declaring a composite primary key in JPA, IdClass and
EmbeddedId.
23.1.1 Id Class
An IdClass defines a separate Java class to represent the primary key. It is defined through
the @IdClass1 annotation or <id-class> XML element. The IdClass must define an
attribute (field/property) that mirrors each Id attribute in the entity. It must have the
same attribute name and type. When using an IdClass you are still required to mark each Id attribute in the entity with @Id.
The main purpose of the IdClass is to be used as the structure passed to the EntityManager
find() and getReference() API. Some JPA products also use the IdClass as a cache key
to track an object's identity. Because of this, it is required (depending on JPA product)
to implement an equals() and hashCode() method on the IdClass. Ensure that the
equals() method checks each part of the primary key, and correctly uses equals for objects
and == for primitives. Ensure that the hashCode() method will return the same value for
two equal objects.
TopLink2 / EclipseLink3 : Do not require the implementation of equals() or hashCode()
in the id class.
...
@Entity
@IdClass(EmployeePK.class)
public class Employee {
@Id
private long employeeId;
@Id
private long companyId;
@Id
private long departmentId;
...
}
1 https://java.sun.com/javaee/5/docs/api/javax/persistence/IdClass.html
2 Chapter 11 on page 25
3 Chapter 10 on page 23
Example id class
...
public class EmployeePK {
private long employeeId;
private long companyId;
private long departmentId;
...
}
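For example, a minimal sketch of the equals() and hashCode() methods such an id class might define (the field names follow the Employee example above; whether they are actually required depends on your JPA product):

@Override
public boolean equals(Object object) {
    if (this == object) {
        return true;
    }
    if (!(object instanceof EmployeePK)) {
        return false;
    }
    EmployeePK pk = (EmployeePK) object;
    // Compare each part of the primary key, using == because the parts are primitives.
    return employeeId == pk.employeeId
        && companyId == pk.companyId
        && departmentId == pk.departmentId;
}

@Override
public int hashCode() {
    // Must return the same value for two equal ids.
    return (int) (employeeId + companyId + departmentId);
}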
23.1.2 Embedded Id
An EmbeddedId defines a separate Embeddable Java class to contain the entity's primary key. It is defined through the @EmbeddedId4 annotation or <embedded-id> XML element. The EmbeddedId's Embeddable class must define each id attribute for the entity using Basic mappings. All attributes in the EmbeddedId's Embeddable are assumed to be part of the primary key.
The EmbeddedId is also used as the structure passed to the EntityManager find() and
getReference() API. Some JPA products also use the EmbeddedId as a cache key to
track an object's identity. Because of this, it is required (depending on JPA product) to
implement an equals() and hashCode() method on the EmbeddedId. Ensure that the equals() method checks each part of the primary key, and correctly uses equals for objects and == for primitives. Ensure that the hashCode() method will return the same value for two equal objects.
4 https://java.sun.com/javaee/5/docs/api/javax/persistence/EmbeddedId.html
TopLink5 / EclipseLink6 : Do not require the implementation of equals() or hashCode()
in the id class.
...
@Entity
public class Employee {
@EmbeddedId
private EmployeePK id;
...
}
...
@Embeddable
public class EmployeePK {
@Basic
private long employeeId;
@Basic
private long companyId;
@Basic
private long departmentId;
...
}
5 Chapter 11 on page 25
6 Chapter 10 on page 23
Primary Keys through OneToOne and ManyToOne Relationships
A common model is to have a dependent object share the primary key of its parent. In the
case of a OneToOne the child's primary key is the same as the parent, and in the case of a
ManyToOne the child's primary key is composed of the parent's primary key and another
locally unique field.
JPA 1.0 does not allow @Id on a OneToOne or ManyToOne, but JPA 2.0 does.
One of the biggest pitfalls when working with composite primary keys comes with the
implementation of associations to entity classes whose tables have multi-column primary
keys (using the @JoinColumns annotation). Many JPA implementations may throw seemingly
inconsistent exceptions when not specifying referencedColumnName for every @JoinColumn
annotation, which JPA requires (even if all referenced column names are equal to the
ones in the referencing table). See http://download.oracle.com/javaee/5/api/javax/
persistence/JoinColumns.html
Unfortunately JPA 1.0 does not handle this model well, and things become complicated,
so to make your life a little easier you may consider defining a generated unique id for the
child. JPA 1.0 requires that all @Id mappings be Basic mappings, so if your Id comes from
a foreign key column through a OneToOne or ManyToOne mapping, you must also define a
Basic @Id mapping for the foreign key column. The reason for this is in part that the Id
must be a simple object for identity and caching purposes, and for use in the IdClass or
the EntityManager find() API.
Because you now have two mappings for the same foreign key column you must define which
one will be written to the database (it must be the Basic one), so the OneToOne or ManyToOne
foreign key must be defined to be read-only. This is done through setting the JoinColumn
attributes insertable and updatable to false, or by using the @PrimaryKeyJoinColumn
instead of the @JoinColumn.
A side effect of having two mappings for the same column is that you now have to keep
the two in synch. This is typically done through having the set method for the OneToOne
attribute also set the Basic attribute value to the target object's id. This can become very complicated if the target object's primary key is a GeneratedValue; in this case you must ensure that the target object's id has been assigned before relating the two objects.
Sometimes I think that JPA primary keys would be much simpler if they were just defined on the entity using a collection of Columns instead of mixing them up with the attribute mapping. This would leave you free to map the primary key field in any manner you desired. A generic List could be used to pass the primary key to find() methods, and it would be the JPA provider's responsibility to hash and compare the primary key correctly instead of the user's IdClass. But perhaps for simple singleton primary key models the JPA model is more straightforward.
TopLink7 / EclipseLink8 : Allow the primary key to be specified as a list of columns
instead of using Id mappings. This allows OneToOne and ManyToOne mapping foreign keys
to be used as the primary key without requiring a duplicate mapping. It also allows the
primary key to be defined through any other mapping type. This is set through using a
DescriptorCustomizer and the ClassDescriptor addPrimaryKeyFieldName API.
Hibernate9 / Open JPA10 / EclipseLink11 (as of 1.2): Allows the @Id annotation to be
used on a OneToOne or ManyToOne mapping.
...
@Entity
public class Address {
@Id
@Column(name="OWNER_ID")
private long ownerId;
@OneToOne
@PrimaryKeyJoinColumn(name="OWNER_ID",
referencedColumnName="EMP_ID")
private Employee owner;
...
7 Chapter 11 on page 25
8 Chapter 10 on page 23
9 Chapter 12 on page 27
10 Chapter 15 on page 33
11 Chapter 10 on page 23
<entity class="Address">
<attributes>
<id name="ownerId">
<column name="OWNER_ID"/>
</id>
<one-to-one name="owner">
<primary-key-join-column name="OWNER_ID"
referenced-column-name="EMP_ID"/>
</one-to-one>
</attributes>
</entity>
...
@Entity
@IdClass(PhonePK.class)
public class Phone {
@Id
@Column(name="OWNER_ID")
private long ownerId;
@Id
private String type;
@ManyToOne
@PrimaryKeyJoinColumn(name="OWNER_ID",
referencedColumnName="EMP_ID")
private Employee owner;
...
Defining an Id for a OneToOne or ManyToOne in JPA 2.0 is much simpler. The @Id annotation
or id XML attribute can be added to a OneToOne or ManyToOne mapping. The Id used for
the object will be derived from the target object's Id. If the Id is a single value, then the
source object's Id is the same as the target object's Id. If it is a composite Id, then the
IdClass will contain the Basic Id attributes, and the target object's Id as the relationship
value. If the target object also has a composite Id, then the source object's IdClass will
contain the target object's IdClass.
...
@Entity
@IdClass(PhonePK.class)
public class Phone {
@Id
private String type;
@ManyToOne
@Id
@JoinColumn(name="OWNER_ID", referencedColumnName="EMP_ID")
private Employee owner;
...//getters and setters
}
...
public class PhonePK {
private String type;
private long owner;
public PhonePK() {}
...
}
Advanced Sequencing
One issue with table sequencing is that the sequence table can become a concurrency
bottleneck, even causing deadlocks. If the sequence ids are allocated in the same transaction
as the insert, this can cause poor concurrency, as the sequence row will be locked for
the duration of the transaction, preventing any other transaction that needs to allocate a
sequence id. In some cases the entire sequence table or the table page could be locked causing
even transactions allocating other sequences to wait or even deadlock. If a large sequence
pre-allocation size is used this becomes less of an issue, because the sequence table is rarely
accessed. Some JPA providers use a separate (non-JTA) connection to allocate the sequence
ids in, avoiding or limiting this issue. In this case, if you use a JTA data-source connection,
it is important to also include a non-JTA data-source connection in your persistence.xml.
Table sequencing also allows for truly sequential ids to be allocated. Sequence and identity
sequencing are non-transactional and typically cache values on the database, leading to
large gaps in the ids that are allocated. Typically this is not an issue and desired to have
good performance, however if performance and concurrency are less of a concern, and true
sequential ids are desired then a table sequence can be used. By setting the allocationSize
of the sequence to 1 and ensuring the sequence ids are allocated in the same transaction of
the insert, you can guarantee sequence ids without gaps (but generally it is much better to
live with the gaps and have good performance).
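A sketch of what such a gapless (but low-concurrency) configuration might look like, with illustrative names:

@Id
@TableGenerator(name="INVOICE_SEQ", table="SEQUENCE_TABLE",
    pkColumnName="SEQ_NAME", valueColumnName="SEQ_COUNT",
    pkColumnValue="INVOICE_SEQ", allocationSize=1)
@GeneratedValue(strategy=GenerationType.TABLE, generator="INVOICE_SEQ")
private long id;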
One paranoid delusional fear that programmers frequently have is running out of sequence
numbers. Since most sequence strategies just keep incrementing a number it is unavoidable
that you will eventually run out. However, as long as a large enough numeric precision is
used to store the sequence id this is not an issue. For example if you stored your id in a
NUMBER(5) column, this would allow 99,999 different ids, which on most systems would
eventually run out. However if you store your id in a NUMBER(10) column, which is
more typical, this would store 9,999,999,999 ids, or one id each second for about 300 years
(longer than most databases exist). But perhaps your system will process a lot of data, and
(hopefully) be around a very long time. If you store your id in a NUMBER(20) this would
be 99,999,999,999,999,999,999 ids, or one id each millisecond for about 3,000,000,000 years,
which is pretty safe.
But you also need to store this id in Java. If you store the id in a Java int, this would be a 32 bit number, which is 4,294,967,296 different ids (well, actually 2,147,483,648 positive ids), or one id each second for about 100 years. If you instead use a long, this would be a 64 bit number, which is 9,223,372,036,854,775,808 different ids, or one id each millisecond for about 300,000,000 years, which is pretty safe. I would recommend using a long vs. an int however, as I have seen int ids run out on large databases before (what happens is they become negative until they wrap around to 0 and start getting constraint errors).
23.3.2 Customizing
JPA supports three different strategies for generating ids, however there are many other
methods. Normally the JPA strategies are sufficient, so you would only use a different
method in a legacy situation.
Sometimes the application has an application-specific strategy for generating ids, such as prefixing ids with the country code, or branch number. There are several ways to integrate a custom id generation strategy; the simplest is to just define the id as a normal id and have the application assign the id value when the object is created.
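A minimal sketch of such an application-assigned id (the class and prefix scheme are purely illustrative):

@Entity
public class Invoice {
    @Id
    private String id; // no @GeneratedValue; the application assigns the id

    public Invoice(String branchCode) {
        // Illustrative scheme: prefix a branch code onto a random UUID.
        this.id = branchCode + "-" + java.util.UUID.randomUUID();
    }

    protected Invoice() {
        // no-arg constructor required by JPA
    }
}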
Some JPA products provide additional sequencing and id generation options, and configuration hooks.
TopLink12 , EclipseLink13 : Several additional sequencing options are provided. A UnaryTableSequence allows a single column table to be used. A QuerySequence allows for custom SQL or stored procedures to be used. An API also exists to allow a user to supply their own code for allocating ids.
Hibernate14 : A GUID id generation option is provided through the @GenericGenerator annotation.
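For example, a sketch using Hibernate's annotation (the generator name is arbitrary, and the available strategy names depend on your Hibernate version):

@Id
@GeneratedValue(generator="system-uuid")
@GenericGenerator(name="system-uuid", strategy="uuid2") // org.hibernate.annotations.GenericGenerator
private String id;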
Primary Keys through Events
A database table can be defined to have a trigger that automatically assigns its primary key. This is generally not a good idea (although some DBAs may think it is), and it is better to use a JPA provider generated sequence id, or assign the id in the application. The
main issue with the id being assigned in a trigger is that the application and object require
this value back. For non-primary key values assigned through triggers it is possible to refresh
the object after committing or flushing the object to obtain the values back. However this is
not possible for the id, as the id is required to refresh an object.
If you have an alternative way to select the id generated by the trigger, such as selecting
the object's row using another unique field, you could issue this SQL select after the insert
to obtain the id and set it back in the object. You could perform this select in a JPA
@PostPersist15 event. Some JPA providers may not allow/like a query execution during
12 Chapter 11 on page 25
13 Chapter 10 on page 23
14 Chapter 12 on page 27
15 https://java.sun.com/javaee/5/docs/api/javax/persistence/PostPersist.html
an event; they also may not pick up a change to an object during an event callback, so there
may be issues with doing this. Also some JPA providers may not allow the primary key
to be un-assigned/null when not using a GeneratedValue, so you may have issues. Some
JPA providers have built-in support for returning values assigned in a trigger (or stored
procedure) back into the object.
TopLink16 / EclipseLink17 : Provide a ReturningPolicy that allows for any field values
including the primary key to be returned from the database after an insert or update. This is
defined through the @ReturnInsert, @ReturnUpdate annotations, or the <return-insert>,
<return-update> XML elements in the eclipselink-orm.xml.
No Primary Key
Sometimes your object or table has no primary key. The best solution in this case is normally
to add a generated id to the object and table. If you do not have this option, sometimes
there is a column or set of columns in the table that make up a unique value. You can use
this unique set of columns as your id in JPA. The JPA Id does not always have to match the
database table primary key constraint, nor is a primary key or a unique constraint required.
If your table truly has no unique columns, then use all of the columns as the id. Typically
when this occurs the data is read-only, so even if the table allows duplicate rows with the
same values, the objects will be the same anyway, so it does not matter that JPA thinks
they are the same object. The issue with allowing updates and deletes is that there is no
way to uniquely identify the object's row, so all of the matching rows will be updated or
deleted.
If your object does not have an id, but its table does, this is fine. Make the object an Embeddable object; embeddable objects do not have ids. You will need an Entity that contains this Embeddable to persist and query it.
16 Chapter 11 on page 25
17 Chapter 10 on page 23
18 https://java.sun.com/javaee/5/docs/api/javax/persistence/PrePersist.html
19 http://en.wikibooks.org/wiki/Category%3AJava%20Persistence
20 http://en.wikibooks.org/wiki/Category%3AJava%20Persistence%2FMapping
75
Advanced
21 https://java.sun.com/javaee/5/docs/api/javax/persistence/Inheritance.html
22 https://java.sun.com/javaee/5/docs/api/javax/persistence/InheritanceType.html
76
Single Table Inheritance
Single table inheritance is the simplest and typically the best performing solution. In single table inheritance a single table is used to store all of the instances of the entire inheritance hierarchy. The table will have a column for every attribute of every class in the hierarchy. A discriminator column is used to determine which class a particular row belongs to; each class in the hierarchy defines its own unique discriminator value.
@Entity
@Inheritance
@DiscriminatorColumn(name="PROJ_TYPE")
@Table(name="PROJECT")
public abstract class Project {
@Id
private long id;
...
}
@Entity
@DiscriminatorValue("L")
public class LargeProject extends Project {
private BigDecimal budget;
}
@Entity
@DiscriminatorValue("S")
public class SmallProject extends Project {
}
<entity class="Project">
<table name="PROJECT"/>
<inheritance/>
<discriminator-column name="PROJ_TYPE"/>
<attributes>
<id name="id"/>
...
</attributes>
</entity>
If you are mapping to an existing database schema, your table may not have a class discriminator column. Some JPA providers do not require a class discriminator when using a joined inheritance strategy, so this may be one solution. Otherwise you need some way to determine the class for a row. Sometimes the inherited value can be computed from several columns, or there is a discriminator but not a one-to-one mapping from value to class. Some JPA providers provide extended support for this. Another option is to create a database view that manufactures the discriminator column, and then map your hierarchy to this view instead of the table. In general the best solution is just to add a discriminator column to the table (truth be told, ALTER TABLE is your best friend in ORM).
TopLink24 / EclipseLink25 : Support computing the inheritance discriminator through
Java code. This can be done through using a DescriptorCustomizer and the
ClassDescriptor's InheritancePolicy's setClassExtractor() method.
Hibernate26 : This can be accomplished through using the Hibernate @DiscriminatorFormula annotation. This allows database-specific SQL or functions to be used to compute the discriminator value.
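A sketch of what this might look like (the SQL case expression and column name are illustrative):

import org.hibernate.annotations.DiscriminatorFormula;

@Entity
@DiscriminatorFormula("case when BUDGET is not null then 'L' else 'S' end")
public abstract class Project {
    @Id
    private long id;
    ...
}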
Subclasses cannot define attributes as not allowing null, as the other subclasses must insert null into those columns. A workaround to this issue is, instead of defining a not null constraint on the column, to define a table constraint that checks the discriminator value and the not nullable value. In general the best solution is to just live without the constraint (odds are you have enough constraints in your life to deal with as it is).
24 Chapter 11 on page 25
25 Chapter 10 on page 23
26 Chapter 12 on page 27
Joined, Multiple Table Inheritance
Joined inheritance is the most logical inheritance solution because it mirrors the object model
in the data model. In joined inheritance a table is defined for each class in the inheritance
hierarchy to store only the local attributes of that class. Each table in the hierarchy must
also store the object's id (primary key), which is only defined in the root class. All classes in
the hierarchy must share the same id attribute. A discriminator column is used to determine
which class a particular row belongs to; each class in the hierarchy defines its own unique discriminator value.
Some JPA providers support joined inheritance with or without a discriminator column, some require the discriminator column, and some do not support the discriminator column. So joined inheritance does not seem to be fully standardized yet.
Hibernate27 : A discriminator column on joined inheritance is not supported. http://opensource.atlassian.com/projects/hibernate/browse/ANN-140
27 Chapter 12 on page 27
PROJECT (table)
  ID  PROJ_TYPE  NAME
  1   L          Accounting
  2   S          Legal

SMALLPROJECT (table)
  ID
  2

LARGEPROJECT (table)
  ID  BUDGET
  1   50000
@Entity
@Inheritance(strategy=InheritanceType.JOINED)
@DiscriminatorColumn(name="PROJ_TYPE")
@Table(name="PROJECT")
public abstract class Project {
@Id
private long id;
...
}
@Entity
@DiscriminatorValue("L")
@Table(name="LARGEPROJECT")
public class LargeProject extends Project {
private BigDecimal budget;
}
@Entity
@DiscriminatorValue("S")
@Table(name="SMALLPROJECT")
public class SmallProject extends Project {
}
The main disadvantage of the joined model is that to query any class, join queries are required. Querying the root or branch classes is even more difficult, as either multiple queries are required, or outer joins or unions are required. One solution is to use single table inheritance instead; this is good if the classes have a lot in common, but if it is a big hierarchy and the subclasses have little in common this may not be desirable. Another solution is to remove the inheritance and instead use a MappedSuperclass, but this means that you can no longer query or have relationships to the class.
The poorest performing queries will be those to the root or branch classes. Avoiding
queries and relationships to the root and branch classes will help to alleviate this burden.
If you must query the root or branch classes there are two methods that JPA providers
use, one is to outer join all of the subclass tables, the second is to first query the root
table, then query only the required subclass table directly. The first method has the
advantage of only requiring one query, the second has the advantage of avoiding outer joins
which typically have poor performance in databases. You may wish to experiment with
each to determine which mechanism is more efficient in your application and see if your
JPA provider supports that mechanism. Typically the multiple query mechanism is more
efficient, but this generally depends on the speed of your database connection.
TopLink28 / EclipseLink29 : Support both querying mechanisms. The multiple
query mechanism is used by default. Outer joins can be used instead through
using a DescriptorCustomizer and the ClassDescriptor's InheritancePolicy's
setShouldOuterJoinSubclasses() method.
Most inheritance hierarchies do not fit with either the joined or the single table inheritance
strategy. Typically the desired strategy is somewhere in between, having joined tables
in some subclasses and not in others. Unfortunately JPA does not directly support this.
One workaround is to map your inheritance hierarchy as single table, but then add the additional tables in the subclasses, either through defining a Table or SecondaryTable in each subclass as required. Depending on your JPA provider, this may work (don't forget to sacrifice the chicken). If it does not work, then you may need to use a JPA provider specific solution if one exists for your provider, otherwise live within the constraints of having either a single table or one per subclass. You could also change your inheritance hierarchy so it matches your data model, so if the subclass does not have a table, then collapse its class into its superclass.
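A sketch of the workaround, mapping the hierarchy as single table but giving one subclass its own secondary table (the names are illustrative, and whether this works depends on your provider):

@Entity
@Inheritance // single table by default
@DiscriminatorColumn(name="PROJ_TYPE")
@Table(name="PROJECT")
public abstract class Project {
    @Id
    private long id;
    ...
}

@Entity
@DiscriminatorValue("L")
@SecondaryTable(name="LARGEPROJECT_DATA",
    pkJoinColumns=@PrimaryKeyJoinColumn(name="PROJ_ID", referencedColumnName="ID"))
public class LargeProject extends Project {
    @Column(name="BUDGET", table="LARGEPROJECT_DATA")
    private BigDecimal budget;
}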
If you are mapping to an existing database schema, your table may not have a class
discriminator column. Some JPA providers do not require a class discriminator when using
a joined inheritance strategy, so this may be one solution. Otherwise you need some way
to determine the class for a row. Sometimes the inherited value can be computed from several columns, or there is a discriminator but not a one-to-one mapping from value to class. Some JPA providers provide extended support for this. Another option is to create
a database view that manufactures the discriminator column, and then map your hierarchy
to this view instead of the table.
28 Chapter 11 on page 25
29 Chapter 10 on page 23
24 Advanced
Table per class inheritance allows inheritance to be used in the object model, when it does
not exist in the data model. In table per class inheritance a table is defined for each concrete
class in the inheritance hierarchy to store all the attributes of that class and all of its
superclasses. Be cautious using this strategy as it is optional in the JPA spec, and querying
root or branch classes can be very difficult and inefficient.
SMALLPROJECT (table)
  ID  NAME
  2   Legal

LARGEPROJECT (table)
  ID  NAME        BUDGET
  1   Accounting  50000
@Entity
@Inheritance(strategy=InheritanceType.TABLE_PER_CLASS)
public abstract class Project {
@Id
private long id;
...
}
@Entity
@Table(name="LARGEPROJECT")
public class LargeProject extends Project {
private BigDecimal budget;
}
@Entity
@Table(name="SMALLPROJECT")
public class SmallProject extends Project {
}
The main disadvantage of the table per class model is that queries or relationships to the root or branch classes become expensive. Querying the root or branch classes requires multiple queries, or unions. One solution is to use single table inheritance instead; this is good if the classes have a lot in common, but if it is a big hierarchy and the subclasses have little in common this may not be desirable. Another solution is to remove the table per class inheritance and instead use a MappedSuperclass, but this means that you can no longer query or have relationships to the class.
Because table per class inheritance requires multiple queries, or unions, you cannot join to, fetch join, or traverse such classes in queries. Also, when ordering is used the results will be ordered by class, then by the ordering. These limitations depend on your JPA provider; some JPA providers may have other limitations, or not support table per class at all, as it is optional in the JPA spec.
Mapped Superclasses
Mapped superclass inheritance allows inheritance to be used in the object model, when it does not exist in the data model. It is similar to table per class inheritance, but does not allow querying, persisting, or relationships to the superclass. Its main purpose is to allow mapping information to be inherited by its subclasses. The subclasses are responsible for defining the table, id and other information, and can modify any of the inherited mappings. A common usage of a mapped superclass is to define a common PersistentObject for your application to define common behavior and mappings such as the id and version. A mapped superclass normally should be an abstract class. A mapped superclass is not an Entity, but is instead defined through the @MappedSuperclass1 annotation or the <mapped-superclass> element.
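A minimal sketch of such a common superclass (the exact mappings and behavior are up to the application):

@MappedSuperclass
public abstract class PersistentObject {
    @Id
    @GeneratedValue
    private long id;
    @Version
    private long version;
    // common behavior, such as equals()/hashCode() based on the id, could go here
}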
SMALLPROJECT (table)
  ID  NAME
  2   Legal

LARGEPROJECT (table)
  ID  PROJECT_NAME  BUDGET
  1   Accounting    50000
@MappedSuperclass
public abstract class Project {
@Id
private long id;
@Column(name="NAME")
private String name;
...
}
@Entity
@Table(name="LARGEPROJECT")
@AttributeOverride(name="name", column=@Column(name="PROJECT_NAME"))
public class LargeProject extends Project {
private BigDecimal budget;
}
@Entity
@Table("SMALLPROJECT")
public class SmallProject extends Project {
}
1 https://java.sun.com/javaee/5/docs/api/javax/persistence/MappedSuperclass.html
<mapped-superclass class="Project">
<attributes>
<id name="id"/>
...
</attributes>
</mapped-superclass>
The main disadvantage of mapped superclasses is that they cannot be queried or persisted.
You also cannot have a relationship to a mapped superclass. If you require any of these
then you must use another inheritance model, such as table per class, which is virtually
identical to a mapped superclass except it (may) not have these limitations. Another
alternative is to change your model such that your classes do not have relationships to the
superclass, such as changing the relationship to a subclass, or removing the relationship
and instead querying for its value by querying each possible subclass and collecting the
results in Java.
Sometimes you have a subclass that needs to be mapped differently than its parent, or is similar to its parent but does not have one of the fields, or uses it very differently. Unfortunately it is very difficult not to inherit everything from your parent in JPA: you can override a mapping, but you cannot remove one, or change the type of mapping, or the target class. If you define your mappings as properties (get methods), or through XML, you may be able to attempt to override or mark the inherited mapping as Transient; this may work depending on your JPA provider (don't forget to sacrifice a chicken).
Another solution is to actually fix the inheritance in your object model. If you inherit foo from Bar but don't want to inherit it, then remove it from Bar; if the other subclasses need it, either add it to each of them, or create a FooBar subclass of Bar that has the foo and have the other subclasses extend this.
Some JPA providers may provide ways to be less stringent on inheritance.
25 Embeddables
In an application object model some objects are considered independent, and others are considered dependent parts of other objects. In UML a relationship to a dependent object is considered an aggregate or composite association. In a relational database this kind of relationship could be modeled in two ways: the dependent object could have its own table, or its data could be embedded in the independent object's table.
In JPA a relationship where the target object's data is embedded in the source object's table
is considered an embedded relationship, and the target object is considered an Embeddable
object. Embeddable objects have different requirements and restrictions than Entity objects
and are defined by the @Embeddable1 annotation or <embeddable> element.
An embeddable object cannot be directly persisted, or queried2 , it can only be persisted or
queried in the context of its parent. An embeddable object does not have an id or table.
1 https://java.sun.com/javaee/5/docs/api/javax/persistence/Embeddable.html
2 Chapter 26.8 on page 98
The JPA spec does not support embeddable objects having inheritance3 , although some JPA
providers may allow this. Relationships4 from embeddable objects are supported in JPA 2.0.
Relationships to embeddable objects are defined through the @Embedded5 annotation or
<embedded> element. The JPA 2.0 spec also supports collection6 relationships to embeddable
objects.
@Embeddable
public class EmploymentPeriod {
@Column(name="START_DATE")
private java.sql.Date startDate;
@Column(name="END_DATE")
private java.sql.Date endDate;
...
}
@Entity
public class Employee {
@Id
private long id;
...
@Embedded
private EmploymentPeriod period;
...
}
<entity class="Employee">
<attributes>
<id name="id"/>
<embedded name="period"/>
</attributes>
</entity>
26 Advanced
26.1 Sharing
An embeddable object can be shared between multiple classes. Consider a Name object that both an Employee and a User contain. Both Employee and User have their own tables, with different column names in which they desire to store the name. Embeddables support this
through allowing each embedded mapping to override the columns used in the embeddable.
This is done through the @AttributeOverride1 annotation or <attribute-override>
element.
Note that an embeddable cannot be shared between multiple instances. If you desire to
share an embeddable object instance, then you must make it an independent object with its
own table.
@Entity
public class Employee {
@Id
private long id;
...
@Embedded
@AttributeOverrides({
@AttributeOverride(name="startDate",
column=@Column(name="START_DATE")),
@AttributeOverride(name="endDate",
column=@Column(name="END_DATE"))
})
private Period employmentPeriod;
...
}
@Entity
public class User {
@Id
private long id;
...
@Embedded
@AttributeOverrides({
@AttributeOverride(name="startDate",
column=@Column(name="SDATE")),
@AttributeOverride(name="endDate", column=@Column(name="EDATE"))
})
private Period period;
...
}
1 https://java.sun.com/javaee/5/docs/api/javax/persistence/AttributeOverride.html
26.3 Nulls
An embeddable object's data is contained in several columns in its parent's table. Since there
is no single field value, there is no way to know if a parent's reference to the embeddable
is null. One could assume that if every field value of the embeddable is null, then the
reference should be null, but then there is no way to represent an embeddable with all
null values. JPA does not allow embeddables to be null, but some JPA providers may
support this.
TopLink3 / EclipseLink4 : Support an embedded reference being null. This is set through
using a DescriptorCustomizer and the AggregateObjectMapping setIsNullAllowed
API.
26.4 Nesting
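The example below nests one embeddable inside another by keeping the nested EmploymentPeriod transient and exposing its values through Basic properties on the outer embeddable, so only the outer embeddable's flattened properties are mapped (support for directly mapping nested embeddables varies by JPA version and provider).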
@Embeddable
public class EmploymentDetails {
private EmploymentPeriod period;
private int yearsOfService;
private boolean fullTime;
....
public EmploymentDetails() {
this.period = new EmploymentPeriod();
}
@Transient
public EmploymentPeriod getEmploymentPeriod() {
return period;
}
@Basic
public Date getStartDate() {
return getEmploymentPeriod().getStartDate();
}
public void setStartDate(Date startDate) {
getEmploymentPeriod().setStartDate(startDate);
}
@Basic
public Date getEndDate() {
return getEmploymentPeriod().getEndDate();
}
public void setEndDate(Date endDate) {
getEmploymentPeriod().setEndDate(endDate);
}
....
}
26.5 Inheritance
Embeddable inheritance is when one embeddable class subclasses another embeddable class. The JPA spec does not allow inheritance in embeddable objects; however, some JPA products may support this. Technically there is nothing preventing the @DiscriminatorColumn annotation being used in an embeddable object, so this may just work depending on your JPA provider (cross your fingers). Inheritance in embeddables is always single table, as an embeddable must live within its parent's table. Generally attempting to mix inheritance between embeddables and entities is not a good idea, but may work in some cases.
TopLink7 / EclipseLink8 : Support inheritance with embeddables. This is set through
using a DescriptorCustomizer and the InheritancePolicy.
7 Chapter 11 on page 25
8 Chapter 10 on page 23
26.6 Relationships
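As noted earlier, relationships from embeddable objects are only supported in JPA 2.0; the examples below work around this by defining the relationship on the owning entity and delegating to the embeddable, and by having the entity set a back-pointer to itself when its embeddable is assigned.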
@Entity
public class Employee {
....
private EmploymentDetails details;
....
@Embedded
public EmploymentDetails getEmploymentDetails() {
return details;
}
@OneToOne
public Address getEmploymentAddress() {
return getEmploymentDetails().getAddress();
}
public void setEmploymentAddress(Address address) {
getEmploymentDetails().setAddress(address);
}
}
@Entity
public class Employee {
....
private EmploymentDetails details;
....
@Embedded
public EmploymentDetails getEmploymentDetails() {
return details;
}
public void setEmploymentDetails(EmploymentDetails details) {
this.details = details;
details.setParent(this);
}
}
26.7 Collections
A collection of embeddable objects is similar to a OneToMany, except the target objects are embeddables and have no Id. This allows a OneToMany to be defined without an inverse ManyToOne, as the parent is responsible for storing the foreign key in the target object's table. JPA 1.0 did not support collections of embeddable objects, but some JPA providers support this.
JPA 2.0 does support collections of embeddable objects through the ElementCollection mapping. See ElementCollection12 .
EclipseLink 13 (as of 1.2) : Supports the JPA 2.0 ElementCollection mapping.
11 Chapter 12 on page 27
12 Chapter 39.1.2 on page 185
13 Chapter 10 on page 23
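A sketch of a JPA 2.0 ElementCollection of embeddables (the table and column names are illustrative):

@Entity
public class Employee {
    @Id
    private long id;
    @ElementCollection
    @CollectionTable(name="PHONE", joinColumns=@JoinColumn(name="EMP_ID"))
    private List<Phone> phones; // Phone is an @Embeddable
    ...
}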
26.8 Querying
Embeddable objects cannot be queried directly, but they can be queried in the context of
their parent. Typically it is best to select the parent, and access the embeddable from the
parent. This will ensure the embeddable is registered with the persistence context. If the
embeddable is selected in a query, the resulting objects will be detached, and changes will
not be tracked.
27 Locking
Locking is perhaps the most ignored persistence consideration. Most applications tend to
ignore thinking about concurrency issues during development, and then smush in a locking
mechanism before going into production. Considering the large percentage of software
projects that fail or are canceled, or never achieve a large user base, perhaps this is logical.
However, locking and concurrency is a critical, or at least a very important, issue for most applications, so it should probably be considered earlier in the development cycle.
If the application will have concurrent writers to the same objects, then a locking strategy
is critical so that data corruption can be prevented. There are two strategies for preventing
concurrent modification of the same object/row; optimistic1 and pessimistic2 locking. Tech-
nically there is a third strategy, ostrich3 locking, or no locking, which means put your head
in the sand and ignore the issue.
There are various ways to implement both optimistic and pessimistic locking. JPA has
support for version optimistic locking, but some JPA providers support other methods of
optimistic locking, as well as pessimistic locking.
Locking and concurrency can be a confusing thing to consider, and there are a lot of misconceptions out there. Correctly implementing locking in your application typically involves more than setting some JPA or database configuration option (although that is all that many applications which think4 they are using locking actually do). Locking may also involve application level changes, and ensuring that other applications accessing the database also do so correctly for the locking policy being used.
Optimistic locking assumes that the data will not be modified between when you read it and when you write it. This is the most common style of locking used and recommended in today's persistence solutions. The strategy involves checking that one or more values from the original object read are still the same when updating it. This verifies that the object has not been changed by another user in between the read and the write.
JPA supports using an optimistic locking version field that gets updated on each update.
The field can either be numeric or a timestamp value. A numeric value is recommended
as a numeric value is more precise, portable, performant and easier to deal with than a
timestamp.
The @Version5 annotation or <version> element is used to define the optimistic lock
version field. The annotation is defined on the version field or property for the object, similar
to an Id mapping. The object must contain an attribute to store the version field.
The object's version attribute is automatically updated by the JPA provider, and should
not normally be modified by the application. The one exception is if the application reads
the object in one transaction, sends the object to a client, and updates/merges the object
in another transaction. In this case the application must ensure that the original object
version is used, otherwise any changes in between the read and write will not be detected.
The EntityManager merge() API will always merge the version, so the application is only
responsible for this if manually merging.
When a locking contention is detected an OptimisticLockException6 will be thrown. This could be wrapped inside a RollbackException, or other exceptions if using JTA, but it
should be set as the cause of the exception. The application can handle the exception, but
should normally report the error to the user, and let them determine what to do.
@Entity
public abstract class Employee{
@Id
private long id;
@Version
private long version;
...
}
Probably the most common mistake in locking in general is locking the wrong section of
code. This is true no matter what form of locking is used, whether it be optimistic or
pessimistic. The basic scenario is:
5 https://java.sun.com/javaee/5/docs/api/javax/persistence/Version.html
6 https://java.sun.com/javaee/5/docs/api/javax/persistence/OptimisticLockException.html
1. User requests some data, the server reads the data from the database and sends it to
the user in their client (doesn't matter if the client is html, rmi, web service).
2. The user edits the data in the client.
3. The user submits the data back to the server.
4. The server begins a transaction, reads the object, merges the data and commits the
transaction.
The issue is that the original data was read in step 1, but the lock was not obtained until step 4, so any changes made to the object in between steps 1 and 4 would not result in a conflict. This means there is little point to using any locking.
A key point is that when using database pessimistic locking or database transaction isolation, this will always be the case: the database locks will only occur in step 4, and any conflicts will not be detected. This is the main reason why database locking does not scale to web applications; for the locking to be valid, the database transaction must be started at step 1 and not committed until step 4. This means a live database connection and transaction, as well as locks, must be held open while waiting for the web client. Since there is no guarantee that the web client will not sit on the data for hours, go to lunch, or disappear off the face of the earth, holding database resources and locking data for all other users can be very undesirable.
For optimistic locking the solution is relatively simple: the object's version must be sent to the client along with the data (or kept in the HTTP session). When the user submits the
data back, the original version must be merged into the object read from the database, to
ensure that any changes made between step 1 and 4 will be detected.
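A minimal sketch of this round trip, assuming a resource-local EntityManagerFactory named factory, and that the detached Employee (including its @Version value) is what the client sends back:
// Transaction 1: read the object and send it, with its version, to the client.
EntityManager em = factory.createEntityManager();
Employee detached = em.find(Employee.class, id);
em.close();

// ... the user edits the detached object in the client ...

// Transaction 2: merge the detached object; merge() carries the original version,
// so any change committed in between will be detected at commit.
em = factory.createEntityManager();
em.getTransaction().begin();
Employee managed = em.merge(detached);
try {
    em.getTransaction().commit();
} catch (RollbackException exception) {
    // The cause should be an OptimisticLockException; report the conflict to the user.
}
em.close();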
Unfortunately programmers can frequently be too clever for their own good. The first issue
that comes up when using optimistic locking is what to do when an OptimisticLockException
occurs. The typical response of the friendly neighborhood super programmer is to automatically handle the exception. They will just create a new transaction, refresh the object to reset its version, merge the data back into the object, and re-commit it. Presto, problem solved. Or is it?
This actually defeats the whole point of locking in the first place. If this is what you desire, you may as well use no locking. The OptimisticLockException should rarely be handled automatically; you really need to bother the user about the issue. You should report the conflict to the user, and either tell them that an edit conflict occurred and they are going to have to redo their work, or, in the best case, refresh the object, present the user with the current data and the data that they submitted, and help them merge the two if appropriate.
Some automated merge tools will compare the two conflicting versions of the data, and if none of the individual fields conflict, the data will just be automatically merged without the user's aid. This is what most software version control systems do. Unfortunately, the user is typically better able to decide when something is a conflict than the program. Just because two versions of a .java file did not change the same line of code does not mean there was no conflict: the first user could have deleted a method that the other user added a call to, and several other such issues cause the typical nightly build to break every so often.
Paranoid Delusionment
Locking can prevent most concurrency issues, but be careful of going overboard and analyzing to death every possible hypothetical occurrence. Sometimes in a concurrent application (or any software application) bad stuff can happen. Users are pretty used to this by now, and I don't think anyone out there thinks computers are perfect.
A good example is a source code control system. Allowing users to overwrite each other's changes is a bad thing, so most systems avoid this by versioning the source files. If a user submits changes to a file that originated from an older version than the current version, the source code control system will raise a conflict and make the user merge the two files. This is essentially optimistic locking. But what if one user removes or renames a method in one file, and then another user adds a new call to that old method in another file? No source code control system that I know of will detect this issue, yet it is a conflict and will cause the build to break. The solution to this would be to start locking, or checking the lock on, every file in the system (or at least every possibly related file). This is similar to using optimistic read locking on every possibly related object, or pessimistically locking every possibly related object. It could be done, but would probably be very expensive, and more importantly would now raise possible conflicts every time a user checked in, so would be entirely useless.
So, in general be careful of being too paranoid, such that you sacrifice the usability of your
system.
Any form of locking that is going to work requires that all applications accessing the same
data follow the same rules. If you use optimistic locking in one application, but no locking
in another application accessing the same data, they will still conflict. One partial solution is to configure an update trigger to always increment the version value (unless it was incremented in the update).
This will allow the new application to avoid overwriting the old application's changes, but
the old application will still be able to overwrite the new application's changes. This still
may be better than no locking at all, and perhaps the old application will eventually go
away.
One common misconception is that if you use pessimistic locking, instead of adding a version
field, you will be ok. Again pessimistic locking requires that all applications accessing the
same data use the same form of locking. The old application can still read data (without
locking), then update the data after the new application reads, locks, and updates the
same data, overwriting its changes.
Does the database's transaction isolation already prevent these conflicts? Possibly, but most likely not. Most databases default to read committed transaction isolation. This means that you will never see uncommitted data, but this does not prevent concurrent transactions from overwriting the same data.
1. Transaction A reads row x.
2. Transaction B reads row x.
3. Transaction A writes row x.
4. Transaction B writes row x (and overwrites A's changes).
5. Both commit successfully.
This is the case with read committed, but with serializable this conflict would not occur. With serializable, either Transaction B would block on its access and wait (perhaps a long time) until Transaction A commits, or, in some databases, the transaction would not wait but would fail on commit. However, even with serializable isolation the typical web
application would still have a conflict. This is because each server request operates in a
different database transaction. The web client reads the data in one transaction, then
updates it in another transaction. So optimistic locking is really the only viable locking
option for the typical web application. Even if the read and write occurs in the same
transaction, serializable is normally not the solution because of concurrency implications
and deadlock potential.
See Serializable Transaction Isolation7
If your table does not have a version column, the best solution is probably just to add one. Field locking is another solution, as well as pessimistic locking in some cases.
See Field Locking8
27.2 Advanced
Timestamp version locking is supported by JPA and is configured the same as numeric
version locking, except the attribute type will be a java.sql.Timestamp or other date/time
type. Be cautious in using timestamp locking as timestamps have different levels of precision
in different databases, and some databases do not store a timestamp's milliseconds, or do
not store them precisely. In general timestamp locking is less efficient than numeric version
locking, so numeric version locking is recommended.
Timestamp locking is frequently used if the table already has a last updated timestamp column,
and is also a convenient way to auto update a last updated column. The timestamp version
value can be more useful than a numeric version, as it includes the relevant information on
when the object was last updated.
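A sketch of a timestamp version field; the attribute name is illustrative.
@Entity
public class Employee {
    @Id
    private long id;
    // Updated by the provider on every update; also serves as a last-updated column.
    @Version
    private java.sql.Timestamp lastUpdated;
    ...
}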
The timestamp value in timestamp version locking can either come from the database, or
from Java (mid-tier). JPA does not allow this to be configured, however some JPA providers
may provide this option. Using the database's current timestamp can be very expensive, as
it requires a database call to the server.
An object can only have one version in JPA. Even if the object maps to multiple tables,
only the primary table will have the version. If any fields in any of the tables changes, the
version will be updated. If you desire multiple versions, you may need to map multiple
version attributes in your object and manually maintain the duplicate versions, perhaps
through events. Technically there is nothing preventing you from annotating multiple attributes with @Version, and potentially some JPA providers may support this (don't forget to sacrifice a chicken).
Locking objects is different than locking rows in the database. An object can be more
complex than a simple row; an object can span multiple tables, have inheritance, have
relationships, and have dependent objects. So determining when an object has changed and
needs to update its version can be more difficult than determining when a row has changed.
JPA does define that when any of the object's tables changes the version is updated. However
it is less clear on relationships. If Basic, Embedded, or a foreign key relationship (OneToOne,
ManyToOne) changes, the version will be updated. But what about OneToMany, ManyToMany,
and a target foreign key OneToOne? For changes to these relationships the update to the
version may depend on the JPA provider.
What about changes made to dependent objects? JPA does not have a cascade option for
locking, and has no direct concept of dependent objects, so this is not an option. Some JPA
providers may support this. One way to simulate this is to use write locking12 . JPA defines
the EntityManager lock()13 API. You can define a version only in your root parent objects,
and when a child (or relationship) is changed, you can call the lock API with the parent to
cause a WRITE lock. This will cause the parent version to be updated. You may also be able
to automate this through persistence events.
Usage of cascaded locking depends on your application. If in your application you consider
one user updating one dependent part of an object, and another user updating another part
of the object to be a locking contention, then this is what you want. If your application
does not consider this to be a problem, then you do not want cascaded locking. One of
the advantages of cascaded locking is you have fewer version fields to maintain, and only
the update to the root object needs to check the version. This can make a difference in
optimizations such as batch writing, as the dependent objects may not be able to be batched
if they have their own version that must be checked.
TopLink14 / EclipseLink15 : Support cascaded locking through their @OptimisticLocking
and @PrivateOwned annotations and XML.
If you do not have a version field in your table, optimistic field locking is another solution.
Field locking involves comparing certain fields in the object when updating. If those fields
have changed, then the update will fail. JPA does not support field locking, but some JPA
providers do support it.
Field locking can also be used when a finer level of locking is desired. For example if one
user changes the object's name and another changes the object's address, you may desire for these updates to not conflict, and only desire optimistic lock errors when users change the
same fields. You may also only be concerned about conflicts in changes to certain fields, and
not desire lock errors from conflicts in the other fields.
Field locking can also be used on legacy schemas, where you cannot add a version column, or
to integrate with other applications accessing the same data which are not using optimistic
locking (note if the other applications are not also using field locking, you can only detect
conflicts in one direction).
There are several types of field locking:
• All fields compared in the update - This can lead to a very big where clause, but will
detect any conflicts.
• Selected fields compared in the update - This is useful if conflicts in only certain fields
are desired.
• Changed fields compared in the update - This is useful if only changes to the same fields
are considered to be conflicts.
If your JPA provider does not support field locking, it is difficult to simulate, as it requires
changes to the update SQL. Your JPA provider may allow overriding the update SQL, in
which case, All or Selected field locking may be possible (if you have access to the original
values), but Changed field locking is more difficult because the update must be dynamic.
Another way to simulate field locking is to flush your changes, then refresh the object using a separate EntityManager and connection, and compare the current values with your original object.
When using field locking it is important to keep the original object that was read. If you
read the object in one transaction and send it to a client, then update in another, you are
not really locking. Any changes made between the read and write will not be detected. You
must keep the original object read managed in an EntityManager for your locking to have
any effect.
TopLink16 / EclipseLink17 : Support field locking through their @OptimisticLocking
annotation and XML.
It is sometimes desirable to lock something that you did not change. Normally this is done
when making a change to one object, that is based on the state of another object, and you
wish to ensure that the other object represents the current state of the database at the point
of the commit. This is what serializable transaction isolation gives you, but optimistic read
and write locking allow this requirement to be met declaratively and optimistically (and
without deadlock, concurrency, and open transaction issues).
JPA supports read and write locks through the EntityManager.lock()18 API. The
LockModeType19 argument can either be READ or WRITE. A READ lock will ensure that
16 Chapter 11 on page 25
17 Chapter 10 on page 23
18 https://java.sun.com/javaee/5/docs/api/javax/persistence/EntityManager.html#lock(java.lang.Object,%20javax.persistence.LockModeType)
19 https://java.sun.com/javaee/5/docs/api/javax/persistence/LockModeType.html
the state of the object does not change on commit. A WRITE lock will ensure that this
transaction conflicts with any other transaction changing or locking the object. Essentially
the READ lock checks the optimistic version field, and the WRITE checks and increments it.
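A sketch of an optimistic read lock, assuming an active EntityManager em, and that the Employee change is computed from a SalaryScale object (an illustrative class) whose state must still be current at commit:
em.getTransaction().begin();
Employee employee = em.find(Employee.class, employeeId);
SalaryScale scale = em.find(SalaryScale.class, scaleId);
employee.setSalary(scale.getBaseSalary() * 2);
// READ verifies the SalaryScale's version at commit without incrementing it.
em.lock(scale, LockModeType.READ);
em.getTransaction().commit();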
Write locking can also be used to provide object-level locks. If you desire for a change to
a dependent object to conflict with any change to the parent object, or any other of its
dependent objects, this can be done through write locking. This can also be used to lock
relationships, when you change a OneToMany or ManyToMany relationship you can also
force the parent's version to be incremented.
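A sketch of simulating cascaded locking with a write lock, assuming only the Employee root has a @Version field, Phone is one of its dependent objects, and the accessor names are illustrative:
em.getTransaction().begin();
Phone phone = em.find(Phone.class, phoneId);
phone.setNumber("555-1234");
// Incrementing the owning Employee's version makes this change conflict with any
// other transaction that changes the Employee or any of its other dependents.
em.lock(phone.getOwner(), LockModeType.WRITE);
em.getTransaction().commit();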
Conceptually people may scoff and be alarmed at the thought of no locking, but it is probably
the most common form of locking in use. Some call it Ostrich locking as the strategy is to
stick your head in the sand and ignore the issue. Most prototypes or small applications
frequently do not have the requirement or in most cases the need for locking, and handling
what to do when a locking contention does occur is beyond the scope of the application, so
best to just ignore the issue.
In general it is probably best in JPA to always enable optimistic locking, as it is fairly simple to do, at least in concept. But what does occur on a conflict without any form of locking? Essentially it is last in wins, so if two users edit the same object at the same time, the last one to commit will have their changes reflected in the database. This is true at least for users editing the same fields, but if two users edit different fields in the same object, it depends on the JPA implementation. Some JPA providers only update exactly the fields that were changed, whereas others update all fields in the object. So in one case the first user's changes would be overridden, but in the second they would not.
Pessimistic locking means acquiring a lock on the object before you begin to edit the
object, to ensure that no other users are editing the object. Pessimistic locking is typically
implemented through using database row locks, such as through the SELECT ... FOR
UPDATE SQL syntax. The data is read and locked, the changes are made and the transaction
is committed, releasing the locks.
JPA 1.0 did not support pessimistic locking, but some JPA 1.0 providers do. JPA 2.020
supports pessimistic locking. It is also possible to use JPA native SQL queries to issue
SELECT ... FOR UPDATE and use pessimistic locking. When using pessimistic locking you
must ensure that the object is refreshed when it is locked; locking a potentially stale object is of no use. The SQL syntax for pessimistic locking is database specific, and different
databases have different syntax and levels of support, so ensure your database properly
supports your locking requirements.
EclipseLink21 (as of 1.2) : Supports JPA 2.0 pessimistic locking.
TopLink22 / EclipseLink23 : Support pessimistic locking through the
"eclipselink.pessimistic-lock" query hint.
The main issue with pessimistic locking is that it uses database resources, so it requires a database transaction and connection to be held open for the duration of the edit. This is typically not desirable for interactive web applications. Pessimistic locking can also have concurrency issues and cause deadlocks. The main advantage of pessimistic locking is that once the lock is obtained, it is fairly certain that the edit will be successful. This can be desirable in highly concurrent applications, where optimistic locking may cause too many optimistic locking errors.
There are other ways to implement pessimistic locking; it could be implemented at the application level, or through serializable transaction24 isolation.
Application level pessimistic locking can be implemented through adding a locked field to
your object. Before an edit you must update the field to locked (and commit the change).
Then you can edit the object, and set the locked field back to false. To avoid conflicts in
acquiring the lock, you should also use optimistic locking, to ensure the lock field is not
updated to true by another user at the same time.
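A sketch of this application-level lock, assuming an active EntityManager em, and that Employee has a boolean locked attribute mapped to a column plus a @Version field to guard acquisition of the lock itself (accessor names are illustrative):
// Acquire the lock in its own short transaction; the version check prevents
// two users from both setting the locked flag at the same time.
em.getTransaction().begin();
Employee employee = em.find(Employee.class, id);
if (employee.isLocked()) {
    throw new IllegalStateException("The object is being edited by another user.");
}
employee.setLocked(true);
em.getTransaction().commit();

// ... the user edits the object ...

// Commit the edit and release the lock together.
em.getTransaction().begin();
employee = em.merge(employee);
employee.setLocked(false);
em.getTransaction().commit();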
JPA 2.0 adds support for pessimistic locking, as well as other locking options. A lock can
be acquired using the EntityManager.lock()25 API, or passing a LockModeType to an
EntityManager find() or refresh() operation, or setting the lockMode of a Query or
NamedQuery.
The JPA 2.0 lock modes are defined in the LockModeType26 enum:
• OPTIMISTIC (was READ in JPA 1.0) - The Entity will have its optimistic lock version
checked on commit, to ensure no other transaction updated the object.
• OPTIMISTIC_FORCE_INCREMENT (was WRITE in JPA 1.0) - The Entity will have
its optimistic lock version incremented on commit, to ensure no other transaction updated
(or READ locked) the object.
• PESSIMISTIC_READ - The Entity is locked on the database, prevents any other
transaction from acquiring a PESSIMISTIC_WRITE lock.
• PESSIMISTIC_WRITE - The Entity is locked on the database, prevents any other
transaction from acquiring a PESSIMISTIC_READ or PESSIMISTIC_WRITE lock.
• PESSIMISTIC_FORCE_INCREMENT - The Entity is locked on the database, prevents
any other transaction from acquiring a PESSIMISTIC_READ or PESSIMISTIC_WRITE
lock, and the Entity will have its optimistic lock version incremented on commit. This
is unusual as it does both an optimistic and pessimistic lock, normally an application
would only use one locking model.
• NONE - No lock is acquired, this is the default to any find, refresh or query operation.
JPA 2.0 also adds two new standard query hints. These can be passed to any Query,
NamedQuery, or find(), lock() or refresh() operation.
• "javax.persistence.lock.timeout" - Number of milliseconds to wait on the lock before giving
up and throwing a PessimisticLockException.
• "javax.persistence.lock.scope" - The valid scopes are defined in PessimisticLockScope27 ,
either NORMAL or EXTENDED. EXTENDED will also lock the object's owned join
tables and element collection tables.
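A sketch of acquiring a JPA 2.0 pessimistic lock with a timeout hint, assuming an active EntityManager em; on most databases this issues a SELECT ... FOR UPDATE for the row.
em.getTransaction().begin();
Map<String, Object> properties = new HashMap<String, Object>();
properties.put("javax.persistence.lock.timeout", 5000);
Employee employee = em.find(Employee.class, id, LockModeType.PESSIMISTIC_WRITE, properties);
employee.setSalary(100000);
em.getTransaction().commit();   // the database row lock is released on commit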
Serializable transaction isolation guarantees that anything read in the transaction will not
be updated by any other user. Through using serializable transaction isolation and ensuring
the data being edited is read in the same transaction, you can achieve pessimistic locking.
It is important to ensure the objects are refreshed from the database in the transaction, as
editing cached or potentially stale data defeats the point of locking.
Serializable transaction isolation can typically be enabled on the database, some databases
even have this as the default. It can also be set on the JDBC Connection, or through native
SQL, but this is database specific and different databases have different levels of support.
The main issues with serializable transaction isolation are the same as with using SELECT ... FOR UPDATE (see above for the gory details); in addition, everything read is locked, so you cannot decide to lock only certain objects at certain times, but instead lock everything all the time.
This can be a major concurrency issue for transactions with common read-only data, and
can lead to deadlocks.
How databases implement serializable transaction isolation differs between databases. Some databases (such as Oracle) can perform serializable transaction isolation in more of an optimistic sense than the typically pessimistic implementation. Instead of each transaction requiring locks on all the data as it is read, the row versions are not checked until the
27 https://java.sun.com/javaee/6/docs/api/javax/persistence/PessimisticLockScope.html
transaction is committed; if any of the data has changed, an exception is thrown and the transaction is not allowed to commit.
28 Basics
A basic attribute is one where the attribute class is a simple type such as String, Number,
Date or a primitive. A basic attribute's value can map directly to the column value in the
database. The following table summarizes the basic types and the database types they map
to.
Java type                                    Database type
String (char, char[])                        VARCHAR (CHAR, VARCHAR2, CLOB, TEXT)
Number (BigDecimal, BigInteger, Integer,
  Double, Long, Float, Short, Byte)          NUMERIC (NUMBER, INT, LONG, FLOAT, DOUBLE)
int, long, float, double, short, byte        NUMERIC (NUMBER, INT, LONG, FLOAT, DOUBLE)
byte[]                                       VARBINARY (BINARY, BLOB)
boolean (Boolean)                            BOOLEAN (BIT, SMALLINT, INT, NUMBER)
java.util.Date                               TIMESTAMP (DATE, DATETIME)
java.sql.Date                                DATE (TIMESTAMP, DATETIME)
java.sql.Time                                TIME (TIMESTAMP, DATETIME)
java.sql.Timestamp                           TIMESTAMP (DATETIME, DATE)
java.util.Calendar                           TIMESTAMP (DATETIME, DATE)
java.lang.Enum                               NUMERIC (VARCHAR, CHAR)
java.util.Serializable                       VARBINARY (BINARY, BLOB)
In JPA a basic attribute is mapped through the @Basic1 annotation or the <basic> element.
The types and conversions supported depend on the JPA implementation and database
platform. Some JPA implementations may support conversion between many different
data-types or additional types, or have extended type conversion support, see the advanced2
section for more details. Any basic attribute using a type that does not map directly to a
database type can be serialized to a binary database type.
The easiest way to map a basic attribute in JPA is to do nothing. Any attributes that have
no other annotations and do not reference other entities will be automatically mapped as
basic, and even serialized if not a basic type. The column name for the attribute will be
defaulted, named the same as the attribute name, in uppercase. Sometimes auto-mapping
can be unexpected if you have an attribute in your class that you did not intend to have
1 https://java.sun.com/javaee/5/docs/api/javax/persistence/Basic.html
2 Chapter 29 on page 115
persisted. You must mark any such non-persistent fields using the @Transient3 annotation
or <transient> element.
Although auto-mapping makes rapid prototyping easy, you typically reach a point where you
want control over your database schema. To specify the column name for a basic attribute
the @Column4 annotation or <column> element is used. The column annotation also allows
for other information to be specified such as the database type, size, and some constraints.
@Entity
public class Employee {
// Id mappings are also basic mappings.
@Id
@Column(name="ID")
private long id;
@Basic
@Column(name="F_NAME")
private String firstName;
// The @Basic is not required in general because it is the default.
@Column(name="L_NAME")
private String lastName;
// Any un-mapped field will be automatically mapped as basic and its
// column name defaulted.
private BigDecimal salary;
// Non-persistent fields must be marked as transient.
@Transient
private EmployeeService service;
...
}
3 https://java.sun.com/javaee/5/docs/api/javax/persistence/Transient.html
4 https://java.sun.com/javaee/5/docs/api/javax/persistence/Column.html
See Conversion5
A common issue is that data, such as Strings, written from the object are truncated when
read back from the database. This is normally caused by the column length not being large
enough to handle the object's data. In Java there is no maximum size for a String, but in
a database VARCHAR field, there is a maximum size. You must ensure that the length you
set in your column when you create the table is large enough to handle any object value.
For very large Strings CLOBs can be used, but in general CLOBs should not be over used, as
they are less efficient than a VARCHAR.
If you use JPA to generate your database schema, you can set the column length through
the Column annotation or element, see Column Definition and Schema Generation6 .
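For example, a minimal sketch of setting the column length to avoid truncation (the names are illustrative):
@Column(name="DESCRIPTION", length=2000)
private String description;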
See Timezones7
29 Advanced
Dates, times, and timestamps are common types both in the database and in Java, so in theory mapping these types should be simple, right? Well, sometimes this is the case and just a normal Basic mapping can be used; however, sometimes it becomes more complex. Some databases do not have DATE and TIME types, only TIMESTAMP fields; however, some do have separate types, and some just have DATE and TIMESTAMP. Originally, in Java 1.0, Java only had a java.util.Date type, which was a date, a time and milliseconds. In
Java 1.1 this was expanded to support the common database types with java.sql.Date,
java.sql.Time, and java.sql.Timestamp, then to support internationalization Java cre-
ated the java.util.Calendar type and virtually deprecated (almost all of the methods)
the old date types (which JDBC still uses).
If you map a Java java.sql.Date type to a database DATE, this is just a basic mapping
and you should not have any issues (ignore Oracle's DATE type that is/was a timestamp for
now). You can also map java.sql.Time to TIME, and java.sql.Timestamp to TIMESTAMP.
However, if you have a java.util.Date or java.util.Calendar in Java and wish to map it to a DATE or TIME, you may need to indicate that the JPA provider should perform some sort of conversion for this. In JPA the @Temporal1 annotation or <temporal> element is used
to map this. You can indicate that just the DATE or TIME portion of the date/time value
be stored to the database. You could also use Temporal to map a java.sql.Date to a
TIMESTAMP field, or any other such conversion.
@Entity
public class Employee {
...
@Basic
@Temporal(TemporalType.DATE)
private Calendar startDate;
...
}
1 https://java.sun.com/javaee/5/docs/api/javax/persistence/Temporal.html
<entity class="Employee">
    <attributes>
        ...
        <basic name="startDate">
            <temporal>DATE</temporal>
        </basic>
    </attributes>
</entity>
29.1.3 Milliseconds
The precision of milliseconds is different for different temporal classes and database types,
and on different databases. The java.util.Date and Calendar classes support millisec-
onds. The java.sql.Date and java.sql.Time classes do not support milliseconds. The
java.sql.Timestamp class supports nanoseconds.
On many databases the TIMESTAMP type supports milliseconds. On Oracle prior to Oracle 9,
there was only a DATE type, which was a date and a time, but had no milliseconds. Oracle 9
added a TIMESTAMP type that has milliseconds (and nanoseconds), and now treats the old
DATE type as only a date, so be careful using it as a timestamp. MySQL has DATE, TIME
and DATETIME types. DB2 has a DATE, TIME and TIMESTAMP types, the TIMESTAMP supports
microseconds. Sybase and SQL Server just have a DATETIME type which has milliseconds, but at least on some versions it has precision issues: it seems to store an estimate of the milliseconds, not the exact value.
If you use timestamp version locking you need to be very careful of your milliseconds
precision. Ensure your database supports milliseconds precisely, otherwise you may have issues, especially if the value assigned in Java differs from what gets stored in the database, which will cause the next update to fail for the same object.
In general I would not recommend using a timestamp as a primary key or for version locking. There are too many database compatibility issues, as well as the obvious issue of not supporting two operations in the same millisecond.
29.1.4 Timezones
Temporals become a lot more complex when you start to consider time zones, internationalization, eras, locales, daylight savings time, etc. In Java only Calendar supports time zones.
Normally a Calendar is assumed to be in the local time zone, and is stored and retrieved
from the database with that assumption. If you then read that same Calendar on another
computer in another time zone, the question is if you will have the same Calendar or will
you have the Calendar of what the original time would have been in the new time zone? It
depends on if the Calendar is stored as the GMT time, or the local time, and if the time
zone was stored in the database.
Some databases support time zones, but most database types do not store the time zone.
Oracle has two special types for timestamps with time zones, TIMESTAMPTZ (time zone is
stored) and TIMESTAMPLTZ (local time zone is used). Some JPA providers may have extended
support for storing Calendar objects and time zones.
TopLink2 , EclipseLink3 : Support the Oracle TIMESTAMPTZ and TIMESTAMPLTZ types using
the @TypeConverter annotation and XML.
Forum Posts
• Investigation of storing timezones in MySQL4
29.2 Enums
Java Enums5 are typically used as constants in an object model. For example an Employee
may have a gender of enum type Gender (MALE, FEMALE).
By default in JPA an attribute of type Enum will be stored as a Basic to the database,
using the integer Enum values as codes (i.e. 0, 1). JPA also defines an @Enumerated6
annotation and <enumerated> element (on a <basic>) to define an Enum attribute. This
can be used to store the Enum as the STRING value of its name (i.e. "MALE", "FEMALE").
For translating Enum types to values other than the integer or String name, such as character
constants, see Translating Values7 .
@Entity
public class Employee {
...
@Basic
@Enumerated(EnumType.STRING)
private Gender gender;
...
}
2 Chapter 11 on page 25
3 Chapter 10 on page 23
4 http://old.nabble.com/MySQL%27s-datetime-and-time-zones--td21006801.html
5 https://java.sun.com/java/5/docs/api/java/lang/Enum.html
6 https://java.sun.com/javaee/5/docs/api/javax/persistence/Enumerated.html
7 Chapter 29.1.4 on page 116
A LOB is a Large OBject, such as a BLOB (Binary LOB), or a CLOB (Character LOB). It
is a database type that can store a large binary or string value, as the normal VARCHAR
or VARBINARY types typically have size limitations. A LOB is often stored as a locator in
the database table, with the actual data stored outside of the table. In Java a CLOB will
normally map to a String, and a BLOB will normally map to a byte[], although a BLOB
may also represent some serialized object.
By default in JPA any Serializable attribute that is not a relationship or a basic type
(String, Number, temporal, primitive), will be serialized to a BLOB field.
JPA defines the @Lob8 annotation and <lob> element (on a <basic>) to define that an
attribute maps to a LOB type in the database. The annotation is just a hint to the JPA
implementation that this attribute will be stored in a LOB, as LOBs may need to be persisted
specially. Sometimes just mapping the LOB as a normal Basic will work fine as well.
Various databases and JDBC drivers have various limits for LOB sizes. Some JDBC drivers
have issues beyond 4k, 32k or 1meg. The Oracle thin JDBC drivers had a 4k limitation
in some versions for binding LOB data. Oracle provided a workaround for this limitation,
which some JPA providers support. For reading LOBs, some JDBC drivers prefer using
streams, some JPA providers also support this option.
Typically the entire LOB will be read and written for the attribute. For very large LOBs, always reading the value, or reading the entire value, may not be desired. The fetch type of the Basic could be set to LAZY to avoid reading a LOB unless accessed. Support for LAZY fetching on Basic is optional in JPA, so some JPA providers may not support it. A workaround, which is often a good idea in general given the large performance cost of LOBs, is to store the LOB in a separate table and class and define a OneToOne to the LOB object instead of a Basic. If the entire LOB is never desired to be read, then it should not be mapped; it is best to use direct JDBC to access and stream the LOB in this case. It may be possible to map the LOB to a java.sql.Blob/java.sql.Clob in your object to avoid reading the entire LOB, but these require a live connection, so may have issues with detached objects.
@Entity
public class Employee {
...
@Basic(fetch=FetchType.LAZY)
@Lob
private Image picture;
...
}
8 https://java.sun.com/javaee/5/docs/api/javax/persistence/Lob.html
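A sketch of the workaround mentioned above: isolating a very large LOB in its own class and table behind a lazy OneToOne. EmployeePicture and the column names are assumed for illustration.
@Entity
public class Employee {
    @Id
    private long id;
    // The LOB lives in its own row and table, and is only selected if accessed.
    @OneToOne(fetch=FetchType.LAZY, cascade=CascadeType.ALL)
    @JoinColumn(name="PICTURE_ID")
    private EmployeePicture picture;
    ...
}

@Entity
public class EmployeePicture {
    @Id
    @GeneratedValue
    private long id;
    @Lob
    private byte[] image;
    ...
}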
The fetch attribute can be set on a Basic mapping to use LAZY fetching. By default all
Basic mappings are EAGER, which means the column is selected whenever the object is
selected. By setting the fetch to LAZY, the column will not be selected with the object. If
the attribute is accessed, then the attribute value will be selected in a separate database
select. Support for LAZY is an optional feature of JPA, so some JPA providers may not
support it. Typically support for lazy on basics will require some form of byte code weaving,
or dynamic byte code generation, which may have issues in certain environments or JVMs,
or may require preprocessing your application's persistence unit jar.
Only attributes that are rarely accessed should be marked lazy, as accessing the attribute
causes a separate database select, which can hurt performance. This is especially true if a
large number of objects is queried. The original query will require one database select, but
if each object's lazy attribute is accessed, this will require n database selects, which can be
a major performance issue.
Using lazy fetching on basics is similar to the concept of fetch groups. Lazy basics is basically
support for a single default fetch group. Some JPA providers support fetch groups in general,
which allow more sophisticated control over what attributes are fetched per query.
TopLink9 , EclipseLink10 : Support lazy basics and fetch groups. Fetch groups can be
configured through the EclipseLink API using the FetchGroup class.
29.5 Optional
A Basic attribute can be optional if its value is allowed to be null. By default everything
is assumed to be optional, except for an Id, which can not be optional. Optional is basically
only a hint that applies to database schema generation, if the persistence provider is
configured to generate the schema. It adds a NOT NULL constraint to the column if false.
Some JPA providers also perform validation of the object for optional attributes, and will
throw a validation error before writing to the database, but this is not required by the JPA
specification. Optional is defined through the optional attribute of the Basic annotation
or element.
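A minimal sketch of a non-optional attribute; if the provider generates the schema, the column gets a NOT NULL constraint.
@Basic(optional=false)
@Column(name="L_NAME")
private String lastName;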
9 Chapter 11 on page 25
10 Chapter 10 on page 23
There are various attributes on the Column11 annotation and element for database schema
generation. If you do not use JPA to generate your schema you can ignore these. Many JPA
providers do provide the feature of auto generation of the database schema. By default the
Java types of the object's attributes are mapped to their corresponding database type for
the database platform you are using. You may need to configure your database platform with your provider (such as through a persistence.xml property) to allow schema generation for your database, as many databases use different type names.
The columnDefinition attribute of Column can be used to override the default database
type used, or enhance the type definition with constraints or other such DDL. The length,
scale and precision can also be set to override defaults. Since the defaults for the length
are just defaults, it is normally a good idea to set these to be correct for your data model's
expected data, to avoid data truncation. The unique attribute can be used to define a
unique constraint on the column, most JPA providers will automatically define primary key
and foreign key constraints based on the Id and relationship mappings.
JPA does not define any options to define an index. Some JPA providers may provide extensions for this. You can also create your own indexes through native queries.
@Entity
public class Employee {
@Id
@Column(name="ID")
private long id;
@Column(name="SSN" unique=true optional=false)
private long ssn;
@Column(name="F_NAME" length=100)
private String firstName;
@Column(name="L_NAME" length=200)
private String lastName;
@Column(name="SALARY" scale=10 precision=2)
private BigDecimal salary;
@Column(name="S_TIME" columnDefinition="TIMESTAMPTZ")
private Calendar startTime;
@Column(name="E_TIME" columnDefinition ="TIMESTAMPTZ")
private Calendar endTime;
...
}
11 https://java.sun.com/javaee/5/docs/api/javax/persistence/Column.html
If using BigDecimal with PostgreSQL, JPA maps salary to a table column of type NUMERIC(38,0). You can adjust the scale and precision for BigDecimal within the @Column annotation.
@Column(precision=8, scale=2)
private BigDecimal salary;
The Column annotation and XML element define insertable and updatable options.
These allow for this column, or foreign key field to be omitted from the SQL INSERT or
UPDATE statement. These can be used if constraints on the table prevent insert or update
operations. They can also be used if multiple attributes map to the same database column,
such as with a foreign key field through a ManyToOne and Id or Basic mapping. Setting
both insertable and updatable to false effectively marks the attribute as read-only.
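A sketch of a read-only duplicate mapping of a foreign key column that is written through a ManyToOne relationship; the class and column names are illustrative.
@Entity
public class Phone {
    @Id
    private long id;
    // The OWNER_ID column is written through the ManyToOne below, so this
    // duplicate basic mapping must be read-only.
    @Column(name="OWNER_ID", insertable=false, updatable=false)
    private long ownerId;
    @ManyToOne
    @JoinColumn(name="OWNER_ID")
    private Employee owner;
    ...
}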
insertable and updatable can also be used if the database table defaults, or automatically assigns, values to the column on insert or update. Be careful in doing this though, as this means that the object's values will be out of sync with the database unless it is refreshed. For
IDENTITY or auto assigned id columns a GeneratedValue should normally be used, instead
of setting insertable to false. Some JPA providers also support returning auto assigned
field values from the database after insert or update operations. The cost of refreshing or returning fields back into the object can affect performance, so it is normally better to initialize field values in the object model, not in the database.
TopLink12 , EclipseLink13 : Support returning insert and update values back into the object
using the ReturnInsert and ReturnUpdate annotations and XML elements.
12 Chapter 11 on page 25
13 Chapter 10 on page 23
29.8 Conversion
A common problem in storing values to the database is that the value desired in Java differs
from the value used in the database. Common examples include using a boolean in Java
and a 0, 1 or a 'T', 'F' in the database. Other examples are using a String in Java and a
DATE in the database.
One way to accomplish this is to translate the data through property get/set methods.
@Entity
public class Employee {
...
private boolean isActive;
...
@Transient
public boolean getIsActive() {
return isActive;
}
public void setIsActive(boolean isActive) {
this.isActive = isActive;
}
@Basic
private String getIsActiveValue() {
if (isActive) {
return "T";
} else {
return "F";
}
}
private void setIsActiveValue(String isActive) {
this.isActive = "T".equals(isActive);
}
}
JPA defines support for most common database types; however, some databases and JDBC drivers have additional types that may require additional support.
Some custom database types include:
• TIMESTAMPTZ, TIMESTAMPLTZ (Oracle)
• TIMESTAMP WITH TIMEZONE (Postgres)
• XMLTYPE (Oracle)
• XML (DB2)
30 Relationships
A relationship is a reference from one object to another. In Java, relationships are defined
through object references (pointers) from a source object to the target object. Technically,
in Java there is no difference between a relationship to another object and a "relationship"
to a data attribute such as a String or Date (primitives are different), as both are pointers;
however, logically and for the sake of persistence, data attributes are considered part of the
object, and references to other persistent objects are considered relationships.
In a relational database relationships are defined through foreign keys. The source row
contains the primary key of the target row to define the relationship (and sometimes the
inverse). A query must be performed to read the target objects of the relationship using the
foreign key and primary key information.
In Java, if a relationship is to a collection of other objects, a Collection or array type is
used in Java to hold the contents of the relationship. In a relational database, collection
relations are either defined by the target objects having a foreign key back to the source
object's primary key, or by having an intermediate join table to store the relationship (both
objects' primary keys).
All relationships in Java and JPA are unidirectional, in that if a source object references
a target object there is no guarantee that the target object also has a relationship to the
source object. This is different than a relational database, in which relationships are defined
through foreign keys and querying such that the inverse query always exists.
allowing either a join table, or foreign key in the target, and collection mappings also allow
Collection types and Map types. There are also other possible complex relationship types,
see Advanced Relationships7 .
The cost of retrieving and building an object's relationships far exceeds the cost of selecting the object. This is especially true for relationships such as manager or managedEmployees,
such that if any employee were selected it would trigger the loading of every employee through
the relationship hierarchy. Obviously this is a bad thing, and yet having relationships in
objects is very desirable.
The solution to this issue is lazy fetching (lazy loading). Lazy fetching allows the fetching of
a relationship to be deferred until it is accessed. This is important not only to avoid the
database access, but also to avoid the cost of building the objects if they are not needed.
In JPA lazy fetching can be set on any relationship using the fetch attribute. The fetch
can be set to either LAZY or EAGER as defined in the FetchType8 enum. The default fetch
type is LAZY for all relationships except for OneToOne and ManyToOne, but in general it is a
good idea to make every relationship LAZY. The EAGER default for OneToOne and ManyToOne
is for implementation reasons (more difficult to implement), not because it is a good idea.
Technically in JPA LAZY is just a hint, and a JPA provider is not required to support it,
however in reality all main JPA providers support it, and they would be pretty useless if
they did not.
@Entity
public class Employee {
@Id
private long id;
...
@OneToOne(fetch=FetchType.LAZY)
@JoinColumn(name="ADDR_ID")
private Address address;
...
}
30.1.3 Magic
Lazy fetching normally involves some sort of magic in the JPA provider to transparently
fault in the relationships as they are accessed. The typical magic for collection relationships
is for the JPA provider to set the relationships to its own Collection, List, Set or Map
implementation. When any (or most) method is accessed on this collection proxy, it loads
the real collection and forwards the method. This is why JPA requires that all collection
relationships use one of the collection interfaces (although some JPA providers support
collection implementations too).
For OneToOne and ManyToOne relationships the magic normally involves some sort of byte
code manipulation of the entity class, or creation of a subclass. This allows the access to
the field or get/set methods to be intercepted, and for the relationships to be first retrieved
before allowing access to the value. Some JPA providers use different methods, such as
wrapping the reference in a proxy object, although this can have issues with null values and
primitive methods. To perform the byte code magic normally an agent or post-processor is
required. Ensure that you correctly use your providers agent or post-processor otherwise
lazy may not work. You may also notice additional variables when in a debugger, but in
general debugging will still work as normal.
30.1.4 Basics
A Basic attribute can also be made LAZY, but this is normally a different mechanism than
lazy relationships, and should normally be avoided unless the attribute is rarely accessed.
See Basic Attributes : Lazy Fetching9 .
A major issue with lazy relationships, is ensuring that the relationship is still available after
the object has been detached, or serialized. For most JPA providers, after serialization any
lazy relationship that was not instantiated will be broken, and either throw an error when
accessed, or return null.
The naive solution is to make every relationship eager. Serialization suffers from the same
issue as persistence, in that you can very easily serialize your entire database if you have no
lazy relationships. So lazy relationships are just as necessary for serialization, as they are
for database access, however you need to ensure you have everything you will need after
serialization instantiated upfront. You may mark only the relationships that you think you
will need after serialization as EAGER, this will work, however there are probably many cases
when you do not need these relationships.
A second solution is to access any relationship you will need before returning the object for
serialization. This has the advantage of being use case specific, so different use cases can
instantiate different relationships. For collection relationships sending size() is normally
the best way to ensure a lazy relationship is instantiated. For OneToOne and ManyToOne rela-
tionships, normally just accessing the relationship is enough (i.e. employee.getAddress()),
although for some JPA providers that use proxies you may need to send the object a message
(i.e. employee.getAddress().hashCode()).
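A sketch of instantiating only the relationships a use case needs before detaching or serializing the object; em, getPhones and getAddress are assumed names.
Employee employee = em.find(Employee.class, id);
employee.getPhones().size();        // instantiates the lazy collection
employee.getAddress().hashCode();   // instantiates a proxied one-to-one
return employee;                    // now safe to serialize for this use case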
A third solution is to use the JPQL JOIN FETCH for the relationship when querying the
objects. A join fetch will normally ensure the relationship has been instantiated. Some
caution should be used with join fetch however, as it can become inefficient if used on
collection relationships, especially multiple collection relationships, as it requires an n^2 join on the database.
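A sketch of a JPQL join fetch that instantiates the address relationship in the same SQL SELECT as the employees (JPA 2.0 typed query API, assuming an active EntityManager em):
List<Employee> employees = em.createQuery(
        "SELECT e FROM Employee e JOIN FETCH e.address", Employee.class)
    .getResultList();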
Some JPA providers may also provide certain query hints, or other such serialization options.
The same issue can occur without serialization, if a detached object is accessed after the end
of the transaction. Some JPA providers allow lazy relationship access after the end of the
transaction, or after the EntityManager has been closed, however some do not. If your JPA
provider does not, then you may require that you ensure you have instantiated all the lazy
relationships that you will need before ending the transaction.
One common misconception is that EAGER means that the relationship should be join fetched,
i.e. retrieved in the same SQL SELECT statement as the source object. Some JPA providers
do implement eager this way. However, just because something is desired to be loaded, does
not mean that it should be join fetched. Consider Employee - Phone: a Phone's employee reference is made EAGER as the employee is almost always loaded before the phone. However, when loading the phone, you do not want to join the employee; the employee has already been read and is already in the cache or persistence context. Also, just because you want two collection relationships loaded does not mean you want them join fetched, which would result in a very inefficient join that would return n^2 data.
Join fetching is something that JPA currently only provides through JPQL, which is normally
the correct place for it, as each use case has different relationship requirements. Some JPA
providers also provide a join fetch option at the mapping level to always join fetch a relationship, but this is normally not the same thing as EAGER. Join fetching is not normally
the most efficient way to load a relationship anyway, normally batch reading a relationship
is much more efficient when supported by your JPA provider.
See Join Fetching10
See Batch Reading11
30.2 Cascading
Relationship mappings have a cascade option that allows the relationship to be cascaded
for common operations. cascade is normally used to model dependent relationships, such
as Order -> OrderLine. Cascading the orderLines relationship allows for the Order's ->
OrderLines to be persisted, removed, merged along with their parent.
The following operations can be cascaded, as defined in the CascadeType12 enum:
• PERSIST - Cascades the EntityManager.persist() operation. If persist() is called
on the parent, and the child is also new, it will also be persisted. If it is existing, nothing
will occur, although calling persist() on an existing object will still cascade the persist
operation to its dependents. If you persist an object, and it is related to a new object,
and the relationship does not cascade persist, then an exception will occur. This may
require that you first call persist on the related object before relating it to the parent.
In general it may seem odd, or even desirable, to always cascade the persist operation: if a new object is related to another object, then it should probably be persisted. There is most likely not a major issue with always cascading persist on every relationship, although it may have an impact on performance. Calling persist on a related object is not required;
on commit any related object whose relationship is cascade persist will automatically be
persisted. The advantage of calling persist up front is that any generated ids will (unless
using identity) be assigned, and the prePersist event will be raised.
• REMOVE - Cascades the EntityManager.remove() operation. If remove() is called on
the parent then the child will also be removed. This should only be used for dependent
relationships. Note that only the remove() operation is cascaded; if you remove a
dependent object from a OneToMany collection it will not be deleted, JPA requires that
you explicitly call remove() on it. Some JPA providers may support an option to have
objects removed from a dependent collection deleted; JPA 2.0 also defines an option for
this.
• MERGE - Cascades the EntityManager.merge() operation. If merge() is called on the
parent, then the child will also be merged. This should normally be used for dependent
relationships. Note that this only affects the cascading of the merge; the relationship
reference itself will always be merged. This can be a major issue if you use transient
variables to limit serialization; you may need to manually merge, or reset transient
relationships in this case. Some JPA providers provide additional merge operations.
• REFRESH - Cascades the EntityManager.refresh() operation. If refresh() is called
on the parent then the child will also be refreshed. This should normally be used for
dependent relationships. Be careful enabling this for all relationships, as it could cause
changes made to other objects to be reset.
• ALL - Cascades all of the above operations.
@Entity
public class Employee {
    @Id
    private long id;
    ...
    @OneToMany(cascade=CascadeType.ALL)
    private List<Phone> phones;
}
12 https://java.sun.com/javaee/5/docs/api/javax/persistence/CascadeType.html
Cascading of the remove operation only occurs when the remove is called on the object. This
is not normally what is desired on a dependent relationship. If the related objects cannot
exist without the source, then it is normally desired to have them deleted when the source
is deleted, but also have them deleted when they are no longer referenced from the source.
JPA 1.0 did not provide an option for this, so when a dependent object was removed from
the source relationship, it had to be explicitly removed from the EntityManager. JPA 2.0
provides an orphanRemoval option on the OneToMany13 and OneToOne14 annotations and
XML. Orphan removal will ensure that any object no longer referenced from the relationship
is deleted from the database.
@Entity
public class Employee {
@Id
private long id;
...
@OneToMany(orphanRemoval=true, cascade={CascadeType.ALL})
private List<PhoneNumbers> phones;
...
}
13 https://java.sun.com/javaee/6/docs/api/javax/persistence/OneToMany.html
14 https://java.sun.com/javaee/6/docs/api/javax/persistence/OneToOne.html
Target Entity
Relationship mappings have a targetEntity attribute that allows the reference class (target)
of the relationship to be specified. This is normally not required to be set as it is defaulted
from the field type, get method return type, or collection's generic type.
This can also be used if your field uses a public interface type, for example the field type is the
interface Address, but the mapping needs to be to the implementation class AddressImpl. Another
usage is if your field is a superclass type, but you want to map the relationship to a subclass.
@Entity
public class Employee {
@Id
private long id;
...
@OneToMany(targetEntity=Phone.class)
@JoinColumn(name="OWNER_ID")
private List phones;
...
}
30.5 Collections
Your field should not be of a collection implementation type, such as ArrayList. Some JPA
providers may support using collection implementations, and many allow EAGER collection
relationships to use the implementation class. You can set any implementation as the
instance value of the collection, but when reading an object from the database, if the
relationship is LAZY the JPA provider will normally put in a special LAZY collection.
30.5.2 Duplicates
A List in Java supports duplicate entries, and a Set does not. In the database, duplicates
are generally not supported. Technically it could be possible if a JoinTable is used, but
JPA does not require duplicates to be supported, and most providers do not.
If you require duplicate support, you may need to create an object that represents and maps
to the join table. This object would still require a unique Id, such as a GeneratedValue.
See Mapping a Join Table with Additional Columns18 .
30.5.3 Ordering
JPA allows the collection values to be ordered by the database when retrieved. This is done
through the @OrderBy19 annotation or <order-by> XML element.
The value of the OrderBy is a JPQL ORDER BY20 string. The value can be an attribute name,
followed by ASC or DESC for ascending or descending ordering. You can also use a path to a
nested attribute, or a comma-separated list of multiple attributes. If no OrderBy value is given it is assumed
to be the Id of the target object.
The OrderBy value must be a mapped attribute of the target object. If you want to have
an ordered List you need to add an index attribute to your target object and an index
column to its table. You will also have to ensure you set the index values. JPA 2.0 adds
extended support for an ordered List using an OrderColumn.
Note that using an OrderBy does not ensure the collection is ordered in memory. You are
responsible for adding to the collection in the correct order. Java does define a SortedSet
interface and TreeSet collection implementation that does maintain an order. JPA does not
specifically support SortedSet, but some JPA providers may allow you to use a SortedSet
or TreeSet for your collection type, and maintain the correct ordering. By default these
require your target object to implement the Comparable interface, or set a Comparator.
You can also use the Collections.sort() method to sort a List when required. One
option to sort in memory is to use property access and call Collections.sort() in your
set and add methods.
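A sketch of this approach, assuming Phone implements Comparable so that Collections.sort() orders by area code (the addPhone() method is illustrative):

@Entity
public class Employee {
    private List<Phone> phones = new ArrayList<Phone>();
    ...
    @OneToMany
    @OrderBy("areaCode")
    public List<Phone> getPhones() {
        return phones;
    }
    public void setPhones(List<Phone> phones) {
        this.phones = phones;
        // Keep the in-memory List in the same order as the database ordering.
        Collections.sort(this.phones);
    }
    public void addPhone(Phone phone) {
        this.phones.add(phone);
        Collections.sort(this.phones);
    }
    ...
}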
@Entity
public class Employee {
@Id
private long id;
...
@OneToMany
@OrderBy("areaCode")
private List<Phone> phones;
...
}
JPA 2.0 adds support for an OrderColumn. An OrderColumn can be used to define an ordered
List on any collection mapping. It is defined through the @OrderColumn21 annotation or
<order-column> XML element.
The OrderColumn is maintained by the mapping and should not be an attribute of the target
object. The table for the OrderColumn depends on the mapping. For a OneToMany mapping
it will be in the target object's table. For a ManyToMany mapping or a OneToMany using a
JoinTable it will be in the join table. For an ElementCollection mapping it will be in the
target table.
EMPLOYEE (table)
EMPLOYEE_PHONE (table)
PHONE (table)
ID AREACODE NUMBER
1 613 792-7777
2 416 798-6666
3 613 792-9999
4 416 798-5555
21 https://java.sun.com/javaee/6/docs/api/javax/persistence/OrderColumn.html
@Entity
public class Employee {
@Id
private long id;
...
@OneToMany
@OrderColumn(name="INDEX")
private List<Phone> phones;
...
}
Common Problems
Object corruption, one side of the relationship is not updated after updating
the other side
A common problem with bi-directional relationships is that the application updates one side
of the relationship, but the other side does not get updated, and becomes out of sync. In
JPA, as in Java in general, it is the responsibility of the application, or the object model, to
maintain relationships. If your application adds to one side of a relationship, then it must
add to the other side.
This is commonly resolved through add or set methods in the object model that handle
both sides of the relationships, so the application code does not need to worry about it.
There are two ways to go about this: you can either only add the relationship maintenance
code to one side of the relationship, and only use the setter from one side (such as making
the other side protected), or add it to both sides and ensure you avoid an infinite loop.
For example:
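A sketch using the Employee and Phone model, where Phone.setOwner() maintains both sides of the relationship:

@Entity
public class Phone {
    @Id
    private long id;
    @ManyToOne
    private Employee owner;
    ...
    public void setOwner(Employee employee) {
        this.owner = employee;
        // Maintain the other side of the bi-directional relationship.
        if (!employee.getPhones().contains(this)) {
            employee.getPhones().add(this);
        }
    }
    ...
}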
In some cases it is undesirable to instantiate a large collection when adding a child object.
One solution is to not map the bi-directional relationship, and instead query for it as required.
Also, some JPA providers optimize their lazy collection objects to handle this case, so you
can still add to the collection without instantiating it.
The most common issue leading to poor performance is the usage of EAGER relationships.
This requires the related objects to be read when the source objects are read. So, for
example, reading the president of the company with EAGER managedEmployees will cause
every Employee in the company to be read. The solution is to make all relationships
LAZY. By default OneToMany and ManyToMany are LAZY, but OneToOne and ManyToOne are
not, so make sure you configure them to be. See Lazy Fetching22. Sometimes you have
LAZY configured but it does not work; see Lazy is not working23.
Another common problem is the n+1 issue. For example, consider that you read all Employee
objects and then access their Address. Since each Address is accessed separately this will cause
n+1 queries, which can be a major performance problem. This can be solved through Join
Fetching24 and Batch Reading25.
Lazy OneToOne and ManyToOne relationships typically require some form of weaving or byte-
code generation. Normally when running in JSE an agent option is required to allow the
byte-code weaving, so ensure you have the agent configured correctly. Some JPA providers
perform dynamic subclass generation, so do not require an agent.
Example agent
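For example, with EclipseLink dynamic weaving in JSE the application is typically launched with the weaving agent (the jar path and main class are illustrative):

java -javaagent:/path/to/eclipselink.jar com.acme.Main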
Some JPA providers also provide static weaving instead of, or in addition to, dynamic weaving.
For static weaving a preprocessor must be run on your JPA classes.
When running in JEE lazy should normally work, as the class loader hook is required by
the EJB specification. However some JEE providers may not support this, so static weaving
may be required.
Also ensure that you are not accessing the relationship when you shouldn't be. For example,
if you use property access, and your set method accesses the related lazy value, this will
cause it to be loaded. Either remove the set method side-effects, or use field access.
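A sketch of the kind of side-effect to avoid (the setOwner() call on the old value is illustrative):

@Entity
public class Employee {
    private Address address;
    ...
    @OneToOne(fetch=FetchType.LAZY)
    public Address getAddress() {
        return address;
    }
    public void setAddress(Address address) {
        // Side-effect: accessing the related lazy value here forces it to be
        // loaded every time the setter is called, defeating LAZY fetching.
        if (getAddress() != null) {
            getAddress().setOwner(null);
        }
        this.address = address;
    }
    ...
}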
If your relationship is marked as lazy and it has not been instantiated before the object
is serialized, then it may not get serialized. This may cause an error, or return null if it is
accessed after deserialization.
See, Serialization, and Detaching26
When you remove an object from a collection, if you also want the object deleted from
the database you must call remove() on the object. In JPA 1.0, even if your relationship
is cascade REMOVE, you still must call remove(); only the remove of the parent object is
cascaded, not removal from the collection.
JPA 2.0 will provide an option for having removes from the collection trigger deletion. Some
JPA providers support an option for this in JPA 1.0.
See, Cascading27
If your relationship field's type is a public interface of your class, and the interface only has
a single implementer, then this is simple to solve: you just need to set a targetEntity on your
mapping. See Target Entity28.
If your interface has multiple implementers, then this is more complex. JPA does not directly
support mapping interfaces. One solution is to convert the interface to an abstract class and
use inheritance to map it. You could also keep the interface, create the abstract class and
make sure each implementer extends it, and set the targetEntity to be the abstract class.
Another solution is to define virtual attributes using get/set methods for each possible
implementer, map these separately, and mark the interface get/set as transient. You
could also leave the attribute unmapped, and instead query for it as required.
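A sketch of the virtual attribute approach, assuming a Contact interface with an EmailContact implementer (these names are illustrative, not from the examples above):

@Entity
public class Employee {
    private Contact contact;
    ...
    // The interface-typed attribute itself is not mapped.
    @Transient
    public Contact getContact() {
        return contact;
    }
    public void setContact(Contact contact) {
        this.contact = contact;
    }
    // A virtual attribute mapped for one concrete implementer;
    // repeat for each possible implementer.
    @OneToOne
    public EmailContact getEmailContact() {
        return (contact instanceof EmailContact) ? (EmailContact) contact : null;
    }
    public void setEmailContact(EmailContact contact) {
        if (contact != null) {
            this.contact = contact;
        }
    }
    ...
}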
See, Variable and Heterogeneous Relationships29
Some JPA providers have support for interfaces and variable relationships.
TopLink30 , EclipseLink31 : Support variable relationships through their
@VariableOneToOne annotation and XML. Mapping to and querying interfaces
are also supported through their ClassDescriptor's InterfacePolicy API.
31 Advanced
31.2 Maps
Java defines the Map interface to represent collections whose values are indexed on a key.
There are several Map implementations, the most common being HashMap; Hashtable
and TreeMap are also common.
JPA allows a Map to be used for any collection mapping, including OneToMany, ManyToMany
and ElementCollection. JPA requires that the Map interface be used as the attribute type,
although some JPA providers may also support using Map implementations.
In JPA 1.0 the map key must be a mapped attribute of the collection values. The @MapKey10
annotation or <map-key> XML element is used to define a map relationship. If the MapKey
is not specified it defaults to the target object's Id.
@Entity
public class Employee {
@Id
private long id;
...
@OneToMany(mappedBy="owner")
@MapKey(name="type")
private Map<String, PhoneNumber> phoneNumbers;
...
}
@Entity
public class PhoneNumber {
@Id
private long id;
@Basic
private String type; // Either "home", "work", or "fax".
...
@ManyToOne
private Employee owner;
...
}
10 https://java.sun.com/javaee/5/docs/api/javax/persistence/MapKey.html
In XML, the same Map mapping would look something like the following:
<entity class="Employee">
    <attributes>
        <one-to-many name="phoneNumbers" mapped-by="owner">
            <map-key name="type"/>
        </one-to-many>
    </attributes>
</entity>
<entity class="PhoneNumber">
    <attributes>
        <many-to-one name="owner"/>
    </attributes>
</entity>
JPA 2.0 allows for a Map where the key is not part of the target object to be persisted. The
Map key can be any of the following:
• A Basic value, stored in the target's table or join table.
• An Embedded object, stored in the target's table or join table.
• A foreign key to another Entity, stored in the target's table or join table.
Map columns can be used for any collection mapping, including OneToMany, ManyToMany
and ElementCollection.
This allows for great flexibility in the different models that can
be mapped. The type of mapping used is always determined by the value of the Map, not
the key. So if the key is a Basic but the value is an Entity, a OneToMany mapping is still
used. But if the value is a Basic and the key is an Entity, an ElementCollection mapping
is used.
This allows some very sophisticated database schemas to be mapped. For example, a three way
join table can be mapped using a ManyToMany with a MapKeyJoinColumn for the third
foreign key. For a ManyToMany the key is always stored in the JoinTable. For a OneToMany
it is stored in the JoinTable if defined, otherwise it is stored in the target Entity's table,
even though the target Entity does not map this column. For an ElementCollection the
key is stored in the element's table.
The @MapKeyColumn11 annotation or <map-key-column> XML element is used to de-
fine a map relationship where the key is a Basic value, the @MapKeyEnumerated12
and @MapKeyTemporal13 can also be used with this for Enum or Calendar types. The
@MapKeyJoinColumn14 annotation or <map-key-join-column> XML element is used to de-
fine a map relationship where the key is an Entity value, the @MapKeyJoinColumns15 can
also be used with this for composite foreign keys. The annotation @MapKeyClass16 or
<map-key-class> XML element can be used when the key is an Embeddable or to specify
the target class or type if generics are not used.
EMPLOYEE (table)
11 https://java.sun.com/javaee/6/docs/api/javax/persistence/MapKeyColumn.html
12 https://java.sun.com/javaee/6/docs/api/javax/persistence/MapKeyEnumerated.html
13 https://java.sun.com/javaee/6/docs/api/javax/persistence/MapKeyTemporal.html
14 https://java.sun.com/javaee/6/docs/api/javax/persistence/MapKeyJoinColumn.html
15 https://java.sun.com/javaee/6/docs/api/javax/persistence/MapKeyJoinColumns.html
16 https://java.sun.com/javaee/6/docs/api/javax/persistence/MapKeyClass.html
PHONE (table)
@Entity
public class Employee {
@Id
private long id;
...
@OneToMany(mappedBy="owner")
@MapKeyColumn(name="PHONE_TYPE")
private Map<String, Phone> phones;
...
}
@Entity
public class Phone {
@Id
private long id;
...
@ManyToOne
private Employee owner;
...
}
EMPLOYEE (table)
PHONE (table)
PHONETYPE (table)
ID TYPE
1 home
2 cell
3 fax
4 work
@Entity
public class Employee {
@Id
private long id;
...
@OneToMany(mappedBy="owner")
@MapKeyJoinColumn(name="PHONE_TYPE_ID")
private Map<PhoneType, Phone> phones;
...
}
@Entity
public class Phone {
@Id
private long id;
...
@ManyToOne
private Employee owner;
...
}
@Entity
public class PhoneType {
@Id
private long id;
...
@Basic
private String type;
...
}
EMPLOYEE (table)
EMPLOYEE_PHONE (table)
PHONE (table)
ID AREACODE NUMBER
1 613 792-7777
2 613 798-6666
3 416 792-9999
4 416 798-5555
@Entity
public class Employee {
@Id
private long id;
...
@OneToMany
@MapKeyClass(PhoneType.class)
private Map<PhoneType, Phone> phones;
...
}
@Entity
public class Phone {
@Id
private long id;
...
}
@Embeddable
public class PhoneType {
@Basic
private String type;
...
}
Join Fetching
Join fetching is a query optimization technique for reading multiple objects in a single
database query. It involves joining the two object's tables in SQL and selecting both object's
data. Join fetching is commonly used for OneToOne relationships, but also can be used for
any relationship including OneToMany and ManyToMany.
Join fetching is one solution to the classic ORM n+1 performance problem. The issue is if
you select n Employee objects, and access each of their addresses, in basic ORM (including
JPA) you will get 1 database select for the Employee objects, and then n database selects,
one for each Address object. Join fetching solves this issue by only requiring one select, and
selecting both the Employee and its Address.
JPA supports join fetching through JPQL using the JOIN FETCH syntax.
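For example, a query of this form (a sketch using the Employee and Address model from this chapter):

SELECT emp FROM Employee emp JOIN FETCH emp.address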
This causes both the Employee and Address data to be selected in a single query.
Using the JPQL JOIN FETCH syntax a normal INNER join is performed. This has the side
effect of filtering from the result set any Employee that does not have an address. An OUTER
join in SQL is one that does not filter absent rows on the join, but instead joins a row
of all null values. If your relationship allows null, or an empty collection for collection
relationships, then you can use an OUTER join fetch; this is done in JPQL using the LEFT
syntax.
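For example, using the same model:

SELECT emp FROM Employee emp LEFT JOIN FETCH emp.address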
Note that OUTER joins can be less efficient on some databases, so avoid using an OUTER if it
is not required.
JPA has no way to specify that a join fetch always be used for a relationship. Normally it is
better to specify the join fetch at the query level, as some use cases may require the related
objects, and other use cases may not. JPA does support an EAGER option on mappings,
but this means that the relationship will be loaded, not that it will be joined. It may be
desirable to mark all relationships as EAGER so that everything is loaded, but join
fetching everything in one huge select could result in an inefficient, overly complex, or invalid
join on the database.
Some JPA providers do interpret EAGER as join fetch, so this may work on some JPA providers.
Some JPA providers support a separate option for always join fetching a relationship.
TopLink17 , EclipseLink18 : Support a @JoinFetch annotation and XML on a mapping to
define that the relationship always be join fetched.
17 Chapter 11 on page 25
18 Chapter 10 on page 23
JPA 1.0 does not allow nested join fetches in JPQL, although this may be supported by
some JPA providers. You can join fetch multiple relationships, but not nested relationships.
SELECT emp FROM Employee emp LEFT JOIN FETCH emp.address LEFT JOIN
FETCH emp.phoneNumbers
One issue with join fetching is that duplicate data can be returned. For example, consider join
fetching an Employee's phoneNumbers relationship. If each Employee has 3 Phone objects
in its phoneNumbers collection, the join will bring back n*3 rows. As there are 3
phone rows for each employee row, the employee row will be duplicated 3 times. So you are
reading more data than if you had selected the objects in n+1 queries. Normally the fact
that you are executing fewer queries makes up for the fact that you may be reading duplicate
data, but if you join fetch multiple collection relationships you can start to get back a
multiplicative amount of duplicate data, which can become an issue. Even with ManyToOne relationships
you can be selecting duplicate data. Consider join fetching an Employee's manager: if all or
most employees have the same manager, you will end up selecting this manager's data many
times; in this case you would be better off not using join fetch, and allowing a single query
for the manager.
If you start join fetching every relationship, you can start to get some pretty huge joins.
This can sometimes be an issue for the database, especially with huge outer joins.
One alternative solution to join fetch that does not suffer from duplicate data is using Batch
Fetching19 .
Batch Fetching
Batch fetching is a query optimization technique for reading multiple related objects in a
finite set of database queries. It involves executing the query for the root objects as normal.
But for the related objects the original query is joined with the query for the related objects,
allowing all of the related objects to be read in a single database query. Batch fetching can
be used for any type of relationship.
Batch fetching is one solution to the classic ORM n+1 performance problem. The issue is if
you select n Employee objects, and access each of their addresses, in basic ORM (including
JPA) you will get 1 database select for the Employee objects, and then n database selects,
one for each Address object. Batch fetching solves this issue by requiring one select for the
Employee objects and one select for the Address objects.
Batch fetching is more optimal for reading collection relationships and multiple relationships
as it does not require selecting duplicate data as in join fetching.
JPA does not support batch reading, but some JPA providers do.
TopLink20, EclipseLink21: Support a @BatchFetch annotation and XML element, and an
"eclipselink.batch" query hint, to enable batch reading. Three forms of batch fetching
are supported: JOIN, EXISTS, and IN.
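A rough sketch of using the query hint (the entity and relationship names are illustrative):

Query query = em.createQuery("SELECT e FROM Employee e");
query.setHint("eclipselink.batch", "e.phones");
List employees = query.getResultList();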
See also,
• Batch fetching - optimizing object graph loading 22
Variable and Heterogeneous Relationships
It is sometimes desirable to define a relationship where the type of the relationship can
be one of several unrelated, heterogeneous values. This could be a OneToOne,
ManyToOne, OneToMany or ManyToMany relationship. The related values may share a
common interface, or may share nothing in common other than subclassing Object. It is
also possible to conceive of a relationship that could be any Basic value, or even an
Embeddable.
20 Chapter 11 on page 25
21 Chapter 10 on page 23
22 http://java-persistence-performance.blogspot.com/2010/08/batch-fetching-optimizing-object-graph.html
In general JPA does not support variable, interface, or heterogeneous relationships. JPA
does support relationships to inheritance classes, so the easiest workaround is normally to
define a common superclass for the related values.
Another solution is to define virtual attributes using get/set methods for each possible
implementer, map these separately, and mark the heterogeneous get/set as transient. You
could also leave the attribute unmapped, and instead query for it as required.
For heterogeneous Basic or Embeddable relationships one solution is to serialize the value
to a binary field. You could also convert the value to a String representation that you can
restore the value from, or store the value to two columns, one storing the String value and
the other the class name or type.
Some JPA providers have support for interfaces, variable relationships, and/or heterogeneous
relationships.
TopLink25 , EclipseLink26 : Support variable relationships through their
@VariableOneToOne annotation and XML. Mapping to and querying interfaces
are also supported through their ClassDescriptor's InterfacePolicy API.
25 Chapter 11 on page 25
26 Chapter 10 on page 23
Nested Collections, Maps and Matrices
JPA does not directly support nested collections, such as a List of Lists or a Map of Lists.
The usual workaround is to model the nesting with an intermediate entity and its own
tables, as in the following example schema.
EMPLOYEE (table)
PROJECTTYPE (table)
ID EMPLOYEE_ID PROJ_TYPE
1 1 small
2 1 medium
3 2 large
PROJECTTYPE_PROJECT (table)
PROJECTTYPE_ID PROJECT_ID
1 1
1 2
2 3
3 4
32 OneToOne
A OneToOne relationship in Java is where the source object has an attribute that references
another target object and, if that target object had the inverse relationship back to the
source object, it would also be a OneToOne relationship. All relationships in Java and JPA
are unidirectional, in that if a source object references a target object there is no guarantee
that the target object also has a relationship to the source object. This is different from a
relational database, in which relationships are defined through foreign keys and querying
such that the inverse query always exists.
JPA also defines a ManyToOne1 relationship, which is similar to a OneToOne relationship
except that the inverse relationship (if it were defined) is a OneToMany relationship. The main
difference between a OneToOne and a ManyToOne relationship in JPA is that a ManyToOne
always contains a foreign key from the source object's table to the target object's table, whereas
for a OneToOne relationship the foreign key may either be in the source object's table or the
target object's table. In JPA a OneToOne relationship is defined through the @OneToOne2
annotation or the <one-to-one> XML element. The foreign key is defined through the
@JoinColumn3 annotation, or @JoinColumns4 for a composite foreign key.
EMPLOYEE (table)
ADDRESS (table)
2 https://java.sun.com/javaee/5/docs/api/javax/persistence/OneToOne.html
3 https://java.sun.com/javaee/5/docs/api/javax/persistence/JoinColumn.html
4 https://java.sun.com/javaee/5/docs/api/javax/persistence/JoinColumns.html
ADDRESS_ID STREET CITY PROVINCE COUNTRY P_CODE
6 17 Bank St Ottawa ON Canada K2H7Z5
7 22 Main St Toronto ON Canada L5H2D5
@Entity
public class Employee {
@Id
@Column(name="EMP_ID")
private long id;
...
@OneToOne(fetch=FetchType.LAZY)
@JoinColumn(name="ADDRESS_ID")
private Address address;
...
}
A typical object-centric perspective of a OneToOne relationship has the data model mirror
the object model, in that the source object has a pointer to the target object, so the database
source table has a foreign key to the target table. This is not how things always work out in
the database though; in fact many database developers would consider having the foreign key
in the target table to be more logical, as this enforces the uniqueness of the OneToOne relationship.
Personally I prefer the object perspective, however you will most likely encounter both.
To start with, consider a bi-directional OneToOne relationship: you do not require two foreign
keys, one in each table; a single foreign key on the owning side of the relationship
is sufficient. In JPA the inverse OneToOne must use the mappedBy attribute (with some
exceptions5); this makes the JPA provider use the foreign key and mapping information in
the source mapping to define the target mapping.
See also, Target Foreign Keys, Primary Key Join Columns, Cascade Primary Keys6 .
The following gives an example of what the inverse address relationship would look like.
@Entity
public class Address {
@Id
private long id;
...
@OneToOne(fetch=FetchType.LAZY, mappedBy="address")
private Employee owner;
...
}
See Also
• Relationships7
• Cascading8
• Lazy Fetching9
• Target Entity10
• Join Fetching11
• Batch Reading12
• Common Problems13
• ManyToOne14
If you use the same field in two different mappings, you typically need to make one of
them read-only using insertable, updatable = false.
See Target Foreign Keys, Primary Key Join Columns, Cascade Primary Keys16 .
This typically occurs because you have incorrectly mapped the foreign key in a OneToOne
relationship.
See Target Foreign Keys, Primary Key Join Columns, Cascade Primary Keys17 .
It can also occur if your JPA provider does not support referential integrity, or does not
resolve bi-directional constraints. In this case you may either need to remove the constraint,
or use EntityManager flush() to ensure the order your objects are written in.
Ensure you set the value of the object's OneToOne. If the OneToOne is part of a bi-directional
OneToOne relationship, ensure you set the OneToOne in both objects; JPA does not maintain
bi-directional relationships for you.
Also check that you defined the JoinColumn correctly; ensure you did not set insertable,
updatable = false or use a PrimaryKeyJoinColumn, or mappedBy.
33 Advanced
Target Foreign Keys, Primary Key Join Columns, Cascade Primary Keys
If a OneToOne relationship uses a target foreign key (the foreign key is in the target table, not
the source table), then JPA requires that you define a OneToOne mapping in both directions,
and that the target foreign key mapping use the mappedBy attribute. The reason for this is that
the mapping in the source object only affects the row JPA writes to the source table; if
the foreign key is in the target table, JPA has no easy way to write this field.
There are other ways around this problem however. In JPA the JoinColumn defines
insertable and updatable attributes; these can be used to instruct the JPA provider that
the foreign key is actually in the target object's table. With insertable and updatable set to
false, JPA will not write anything to the source table, and most JPA providers will also infer
that the foreign key constraint is in the target table to preserve referential integrity on
insertion. JPA also defines
the @PrimaryKeyJoinColumn1 that can be used to define the same thing. You still must
map the foreign key in the target object in some fashion though, but could just use a Basic
mapping to do this.
Some JPA providers may support an option for a unidirectional OneToOne mapping for
target foreign keys.
Target foreign keys can be tricky to understand, so you might want to read this section
twice. They can get even more complex though. If you have a data model that cascades
primary keys then you can end up with a single OneToOne that has both a logical foreign
key, but has some fields in it that are logically target foreign keys.
For example, consider Company, Department, Employee. Company's id is COM_ID,
Department's id is a composite primary key of COM_ID and DEP_ID, and Employee's id is a
composite primary key of COM_ID, DEP_ID, and EMP_ID. For an Employee, its relationship
to company uses a normal ManyToOne with a foreign key, and its relationship to department
uses a ManyToOne with a foreign key, but the COM_ID part uses insertable, updatable = false
or PrimaryKeyJoinColumn, because it is actually mapped through the company relationship.
The Employee's relationship to its address then uses a normal foreign key for ADD_ID but
a target foreign key for COM_ID, DEP_ID, and EMP_ID.
This may work in some JPA providers; others may require different configuration, or not
support this type of data model.
1 https://java.sun.com/javaee/5/docs/api/javax/persistence/PrimaryKeyJoinColumn.html
COMPANY(table)
COM_ID NAME
1 ACME
2 Wikimedia
DEPARTMENT (table)
EMPLOYEE (table)
COM_ID DEP_ID EMP_ID NAME MNG_ID ADD_ID
1 1 1 Bob Way null 1
1 1 2 Joe Smith 1 2
1 2 1 Sarah Way null 1
1 2 2 John Doe 1 2
2 1 1 Jane Doe null 1
2 2 1 Alice Smith null 1
ADDRESS (table)
@Entity
@IdClass(EmployeeId.class)
public class Employee {
@Id
@Column(name="EMP_ID")
private long employeeId;
@Id
@Column(name="DEP_ID", insertable=false, updatable=false)
private long departmentId;
@Id
@Column(name="COM_ID", insertable=false, updatable=false)
private long companyId;
...
@ManyToOne(fetch=FetchType.LAZY)
@JoinColumn(name="COM_ID")
private Company company;
@ManyToOne(fetch=FetchType.LAZY)
@JoinColumns({
@JoinColumn(name="DEP_ID"),
@JoinColumn(name="COM_ID", insertable=false, updatable=false)
})
private Department department;
@ManyToOne(fetch=FetchType.LAZY)
@JoinColumns({
@JoinColumn(name="MNG_ID"),
@JoinColumn(name="DEP_ID", insertable=false, updatable=false),
@JoinColumn(name="COM_ID", insertable=false, updatable=false)
})
private Employee manager;
@OneToOne(fetch=FetchType.LAZY)
@JoinColumns({
@JoinColumn(name="ADD_ID")
@JoinColumn(name="DEP_ID", insertable=false, updatable=false),
@JoinColumn(name="COM_ID", insertable=false, updatable=false)
})
private Address address;
...
}
Mapping a OneToOne Using a Join Table
In some data models, you may have a OneToOne relationship defined through a join table.
For example consider you had existing EMPLOYEE and ADDRESS tables with no foreign key,
and wanted to define a OneToOne relationship without changing the existing tables. To do
this you could define an intermediate table that contained the primary key of both objects.
This is similar to a ManyToMany relationship, but if you add a unique constraint to each
foreign key you can enforce that it is OneToOne (or even OneToMany).
JPA defines a join table using the @JoinTable2 annotation and <join-table> XML
element. A JoinTable can be used on ManyToMany or OneToMany mappings, but the
JPA 1.0 specification is vague about whether it can be used on a OneToOne. The JoinTable
documentation does not state that it can be used in a OneToOne, but the XML schema
for <one-to-one> does allow a nested <join-table> element. Some JPA providers may
support this, and others may not.
If your JPA provider does not support this, you can workaround the issue by instead defining
a OneToMany or ManyToMany relationship and just define a get/set method that returns/sets
the first element on the collection.
2 https://java.sun.com/javaee/5/docs/api/javax/persistence/JoinTable.html
EMPLOYEE (table)
EMP_ADD(table)
EMP_ID ADDR_ID
1 6
2 7
ADDRESS (table)
ADDRESS_ID STREET CITY PROVINCE COUNTRY P_CODE
6 17 Bank St Ottawa ON Canada K2H7Z5
7 22 Main St Toronto ON Canada L5H2D5
@OneToOne(fetch=FetchType.LAZY)
@JoinTable(
name="EMP_ADD"
joinColumns=
@JoinColumn(name="EMP_ID", referencedColumnName="EMP_ID"),
inverseJoinColumns=
@JoinColumn(name="ADDR_ID",
referencedColumnName="ADDRESS_ID"))
private Address address;
...
@OneToMany
@JoinTable(
name="EMP_ADD"
joinColumns=
@JoinColumn(name="EMP_ID", referencedColumnName="EMP_ID"),
inverseJoinColumns=
@JoinColumn(name="ADDR_ID",
referencedColumnName="ADDRESS_ID"))
private List<Address> addresses;
...
public Address getAddress() {
if (this.addresses.isEmpty()) {
return null;
}
return this.addresses.get(0);
}
public void setAddress(Address address) {
if (this.addresses.isEmpty()) {
this.addresses.add(address);
} else {
this.addresses.set(0, address);
}
}
...
34 ManyToOne
A ManyToOne relationship in Java is where the source object has an attribute that references
another target object and, if that target object had the inverse relationship back to the
source object, it would be a OneToMany relationship. All relationships in Java and JPA are
unidirectional, in that if a source object references a target object there is no guarantee
that the target object also has a relationship to the source object. This is different from a
relational database, in which relationships are defined through foreign keys and querying
such that the inverse query always exists.
JPA also defines a OneToOne1 relationship, which is similar to a ManyToOne relationship
except that the inverse relationship (if it were defined) is a OneToOne relationship. The main
difference between a OneToOne and a ManyToOne relationship in JPA is that a ManyToOne
always contains a foreign key from the source object's table to the target object's table,
whereas for a OneToOne relationship the foreign key may either be in the source object's table
or the target object's table. In JPA a ManyToOne relationship is defined through the
@ManyToOne2 annotation or the <many-to-one> XML element.
EMPLOYEE (table)
PHONE (table)
@Entity
public class Phone {
@Id
private long id;
...
@ManyToOne(fetch=FetchType.LAZY)
@JoinColumn(name="OWNER_ID")
private Employee owner;
...
}
2 https://java.sun.com/javaee/5/docs/api/javax/persistence/ManyToOne.html
3 Chapter 35.1 on page 169
See Also
• Relationships4
• Cascading5
• Lazy Fetching6
• Target Entity7
• Join Fetching8
• Batch Reading9
• Common Problems10
• OneToOne11
• Mapping a OneToOne Using a Join Table12
• OneToMany13
If you use the same field in two different mappings, you typically need to make one of
them read-only using insertable, updatable = false.
See Target Foreign Keys, Primary Key Join Columns, Cascade Primary Keys15 .
This typically occurs because you have incorrectly mapped the foreign key in a OneToOne
relationship.
See Target Foreign Keys, Primary Key Join Columns, Cascade Primary Keys16 .
It can also occur if your JPA provider does not support referential integrity, or does not
resolve bi-directional constraints. In this case you may either need to remove the constraint,
or use EntityManager flush() to ensure the order your objects are written in.
Ensure you set the value of the object's ManyToOne. If the ManyToOne is part of a bi-directional
OneToMany relationship, ensure you set the object's ManyToOne when adding the object to
the OneToMany; JPA does not maintain bi-directional relationships for you.
Also check that you defined the JoinColumn correctly; ensure you did not set insertable,
updatable = false or use a PrimaryKeyJoinColumn.
35 Advanced
In complex data models it may be required to use a target foreign key, or read-only
JoinColumn in mapping a ManyToOne if the foreign key/JoinColumn is shared with other
ManyToOne or Basic mappings.
See, Target Foreign Keys, Primary Key Join Columns, Cascade Primary Keys1
36 OneToMany
A OneToMany relationship in Java is where the source object has an attribute that stores a
collection of target objects and, if those target objects had the inverse relationship back to
the source object, it would be a ManyToOne relationship. All relationships in Java and JPA
are unidirectional, in that if a source object references a target object there is no guarantee
that the target object also has a relationship to the source object. This is different from a
relational database, in which relationships are defined through foreign keys and querying
such that the inverse query always exists.
JPA also defines a ManyToMany1 relationship, which is similar to a OneToMany relationship
except that the inverse relationship (if it were defined) is a ManyToMany relationship. The
main difference between a OneToMany and a ManyToMany relationship in JPA is that a
ManyToMany always makes use of an intermediate relational join table to store the relationship,
whereas a OneToMany can either use a join table, or a foreign key in the target object's table
referencing the source object table's primary key. If the OneToMany uses a foreign key in the
target object's table, JPA requires that the relationship be bi-directional (the inverse ManyToOne
relationship must be defined in the target object), and the source object must use the
mappedBy attribute to define the mapping.
In JPA a OneToMany relationship is defined through the @OneToMany2 annotation or the
<one-to-many> element.
EMPLOYEE (table)
PHONE (table)
@Entity
public class Employee {
@Id
@Column(name="EMP_ID")
private long id;
...
@OneToMany(mappedBy="owner")
private List<Phone> phones;
...
}
@Entity
public class Phone {
@Id
private long id;
...
@ManyToOne(fetch=FetchType.LAZY)
@JoinColumn(name="OWNER_ID")
private Employee owner;
...
}
2 https://java.sun.com/javaee/5/docs/api/javax/persistence/OneToMany.html
As the relationship is bi-directional, when the application updates one side of the relationship,
the other side should also get updated, and be kept in sync. In JPA, as in Java in general, it is
the responsibility of the application, or the object model, to maintain relationships. If your
application adds to one side of a relationship, then it must add to the other side.
This can be resolved through add or set methods in the object model that handle both sides
of the relationships, so the application code does not need to worry about it. There are two
ways to go about this: you can either only add the relationship maintenance code to one
side of the relationship, and only use the setter from one side (such as making the other side
protected), or add it to both sides and ensure you avoid an infinite loop.
For example:
@Entity
public class Phone {
    ...
    @ManyToOne
    private Employee owner;
    ...
    public void setOwner(Employee employee) {
        this.owner = employee;
        // Maintain the other side of the bi-directional relationship.
        if (!employee.getPhones().contains(this)) {
            employee.getPhones().add(this);
        }
    }
    ...
}
Some expect the JPA provider to have magic that automatically maintains relationships.
This was actually part of the EJB CMP 2 specification. However, if the objects are detached
or serialized to another VM, or new objects are related before being managed, or the object
model is used outside the scope of JPA, then the magic is gone and the application is left
figuring things out, so in general it may be better to add the code to the object model.
However, some JPA providers do have support for automatically maintaining relationships.
In some cases it is undesirable to instantiate a large collection when adding a child object.
One solution is to not map the bi-directional relationship, and instead query for it as required.
Also some JPA providers optimize their lazy collection objects to handle this case, so you
can still add to the collection without instantiating it.
A common mismatch between objects and relational tables is that a OneToMany does not
require a back reference in Java, but requires a back reference foreign key in the database.
Normally it is best to define the ManyToOne back reference in Java; if you cannot or don't
want to do this, then you can use an intermediate join table to store the relationship. This
is similar to a ManyToMany relationship, but if you add a unique constraint to the target
foreign key you can enforce that it is OneToMany.
JPA defines a join table using the @JoinTable4 annotation and <join-table> XML
element. A JoinTable can be used on ManyToMany or OneToMany mappings.
See also, Unidirectional OneToMany5
EMPLOYEE (table)
EMP_PHONE (table)
EMP_ID PHONE_ID
1 1
1 2
2 3
4 https://java.sun.com/javaee/5/docs/api/javax/persistence/JoinTable.html
5 Chapter 37.1 on page 177
PHONE (table)
@Entity
public class Employee {
@Id
@Column(name="EMP_ID")
private long id;
...
@OneToMany
@JoinTable
(
name="EMP_PHONE",
joinColumns={ @JoinColumn(name="EMP_ID",
referencedColumnName="EMP_ID") },
inverseJoinColumns={ @JoinColumn(name="PHONE_ID",
referencedColumnName="ID", unique=true) }
)
private List<Phone> phones;
...
}
See Also
• Relationships6
• Cascading7
• Lazy Fetching8
• Target Entity9
• Collections10
• Maps11
• Join Fetching12
• Batch Reading13
• Common Problems14
• ManyToOne15
• ManyToMany16
37 Advanced
Unidirectional OneToMany
JPA 1.0 does not support a unidirectional OneToMany relationship without a JoinTable.
JPA 2.0 adds support for a unidirectional OneToMany. In JPA 2.0 a @JoinColumn1 can
be used on a OneToMany to define the foreign key; some JPA providers may support this
already.
The main issue with a unidirectional OneToMany is that the foreign key is owned by the
target object's table, so if the target object has no knowledge of this foreign key, inserting
and updating the value is difficult. In a unidirectional OneToMany the source object takes
ownership of the foreign key field, and is responsible for updating its value.
The target object in a unidirectional OneToMany is an independent object, so it should not
rely on the foreign key in any way; i.e. the foreign key cannot be part of its primary key,
nor generally have a not null constraint on it. You can model a collection of objects where
the target has no foreign key mapped, but uses it as its primary key, or has no primary key,
using an Embeddable collection mapping; see Embeddable Collections2.
If your JPA provider does not support unidirectional OneToMany relationships, then you will
need to either add a back reference ManyToOne or a JoinTable. In general it is best to use
a JoinTable if you truly want to model a unidirectional OneToMany on the database.
There are some creative workarounds to defining a unidirectional OneToMany. One is to map
it using a JoinTable, but make the target table the JoinTable. This will cause an extra
join, but will work for the most part for reads; writes of course will not work correctly, so this
is only a read-only solution, and a hacky one at that.
EMPLOYEE (table)
1 https://java.sun.com/javaee/5/docs/api/javax/persistence/JoinColumn.html
2 Chapter 26.7 on page 97
PHONE (table)
@Entity
public class Employee {
@Id
@Column(name="EMP_ID")
private long id;
...
@OneToMany
@JoinColumn(name="OWNER_ID", referencedColumnName="EMP_ID")
private List<Phone> phones;
...
}
@Entity
public class Phone {
@Id
private long id;
...
}
38 ManyToMany
A ManyToMany relationship in Java is where the source object has an attribute that stores a
collection of target objects and, if those target objects had the inverse relationship back
to the source object, it would also be a ManyToMany relationship. All relationships in Java
and JPA are unidirectional, in that if a source object references a target object there is
no guarantee that the target object also has a relationship to the source object. This is
different from a relational database, in which relationships are defined through foreign keys
and querying such that the inverse query always exists.
JPA also defines a OneToMany1 relationship, which is similar to a ManyToMany relationship
except that the inverse relationship (if it were defined) is a ManyToOne relationship. The main
difference between a OneToMany and a ManyToMany relationship in JPA is that a ManyToMany
always makes use of an intermediate relational join table to store the relationship, whereas a
OneToMany can either use a join table, or a foreign key in the target object's table referencing
the source object table's primary key.
In JPA a ManyToMany relationship is defined through the @ManyToMany2 annotation or the
<many-to-many> element.
All ManyToMany relationships require a JoinTable. The JoinTable is defined using the
@JoinTable3 annotation and <join-table> XML element. The JoinTable defines a foreign
key to the source object's primary key (joinColumns), and a foreign key to the target object's
primary key (inverseJoinColumns). Normally the primary key of the JoinTable is the
combination of both foreign keys.
EMPLOYEE (table)
ID FIRSTNAME LASTNAME
1 Bob Way
2 Sarah Smith
EMP_PROJ (table)
EMP_ID PROJ_ID
1 1
1 2
2 1
PROJECT (table)
ID NAME
1 GIS
2 SIG
@Entity
public class Employee {
@Id
@Column(name="ID")
private long id;
...
@ManyToMany
@JoinTable(
name="EMP_PROJ",
joinColumns={@JoinColumn(name="EMP_ID",
referencedColumnName="ID")},
inverseJoinColumns={@JoinColumn(name="PROJ_ID",
referencedColumnName="ID")})
private List<Project> projects;
...
}
A ManyToMany relationship can be made bi-directional by defining the inverse mapping in the
target class and using the mappedBy attribute on the non-owning side.
If the mappedBy is not used, then the persistence provider will assume there are two
independent relationships, and you will end up getting duplicate rows inserted into the
join table. If you have a conceptual bi-directional relationship, but have two different join
tables in the database, then you must not use the mappedBy, as you need to maintain two
independent tables.
As with all bi-directional relationships, it is your object model's and application's responsibility
to maintain the relationship in both directions. There is no magic in JPA: if you add or
remove from one side of the collection, you must also add or remove from the other side,
see object corruption4. Technically the database will be updated correctly if you only
add/remove from the owning side of the relationship, but then your object model will be
out of sync, which can cause issues.
@Entity
public class Project {
@Id
@Column(name="PROJ_ID")
private long id;
...
@ManyToMany(mappedBy="projects")
private List<Employee> employees;
...
}
See Also
• Relationships5
• Cascading6
• Lazy Fetching7
• Target Entity8
• Collections9
• Maps10
• Join Fetching11
• Batch Reading12
• Common Problems13
• OneToMany14
If you have a bi-directional ManyToMany relationship, ensure that you add to both sides of
the relationship.
See Object corruption15 .
If you have a bi-directional ManyToMany relationship, you must use mappedBy on one side of
the relationship, otherwise it will be assumed to be two different relationships and you
will get duplicate rows inserted into the join table.
39 Advanced
Mapping a Join Table with Additional Columns
A frequent problem is that two classes have a ManyToMany relationship, but the relational join
table has additional data. For example, Employee may have a ManyToMany with Project, but the
PROJ_EMP join table also has an IS_PROJECT_LEAD column. In this case the best solution is
to create a class that models the join table. So a ProjectAssociation class would be created.
It would have a ManyToOne to Employee and Project, and attributes for the additional
data. Employee and Project would have a OneToMany to the ProjectAssociation. Some
JPA providers also provide additional support for mapping to join tables with additional
data.
Unfortunately, mapping this type of model becomes more complicated1 in JPA because it
requires a composite primary key. The association object's Id is composed of the Employee
and Project ids. The JPA 1.0 spec does not allow an Id to be used on a ManyToOne, so
the association class must have two duplicate attributes to also store the ids, and use an
IdClass; these duplicate attributes must be kept in sync with the ManyToOne attributes.
Some JPA providers may allow a ManyToOne to be part of an Id, so this may be simpler
with some JPA providers. To make your life simpler, I would recommend adding a generated
Id attribute to the association class. This will give the object a simpler Id and not require
duplicating the Employee and Project ids.
This same pattern can be used no matter what the additional data in the join table is.
Another usage is if you have a Map relationship between two objects, with a third unrelated
object or data representing the Map key. The JPA spec requires that the Map key be an
attribute of the Map value, so the association object pattern can be used to model the
relationship.
If the additional data in the join table is only required on the database and not used in
Java, such as auditing information, it may also be possible to use database triggers to
automatically set the data.
EMPLOYEE (table)
ID FIRSTNAME LASTNAME
1 Bob Way
2 Sarah Smith
PROJ_EMP (table)
PROJECT (table)
ID NAME
1 GIS
2 SIG
@Entity
public class Employee {
@Id
private long id;
...
@OneToMany(mappedBy="employee")
private List<ProjectAssociation> projects;
...
}
@Entity
public class Project {
@Id
private long id;
...
@OneToMany(mappedBy="project")
private List<ProjectAssociation> employees;
...
// Add an employee to the project.
// Create an association object for the relationship and set its data.
public void addEmployee(Employee employee, boolean teamLead) {
ProjectAssociation association = new ProjectAssociation();
association.setEmployee(employee);
association.setProject(this);
association.setEmployeeId(employee.getId());
association.setProjectId(this.getId());
association.setIsTeamLead(teamLead);
this.employees.add(association);
// Also add the association object to the employee.
employee.getProjects().add(association);
}
}
@Entity
@Table(name="PROJ_EMP")
@IdClass(ProjectAssociationId.class)
public class ProjectAssociation {
    @Id
    ...
}
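A sketch of how the rest of the association class and its IdClass might look, following the pattern described above (the EMP_ID and PROJ_ID column names are assumptions):

@Entity
@Table(name="PROJ_EMP")
@IdClass(ProjectAssociationId.class)
public class ProjectAssociation {
    @Id
    @Column(name="EMP_ID")
    private long employeeId;
    @Id
    @Column(name="PROJ_ID")
    private long projectId;
    // The ManyToOne mappings share the id columns, so they are mapped read-only,
    // and the id attributes must be kept in sync with them.
    @ManyToOne
    @JoinColumn(name="EMP_ID", insertable=false, updatable=false)
    private Employee employee;
    @ManyToOne
    @JoinColumn(name="PROJ_ID", insertable=false, updatable=false)
    private Project project;
    @Column(name="IS_PROJECT_LEAD")
    private boolean isTeamLead;
    ...
}

public class ProjectAssociationId implements Serializable {
    private long employeeId;
    private long projectId;
    // equals() and hashCode() must be implemented using both ids.
    ...
}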
• If the given examples do not suit your needs, try the solution indicated in this link:
http://giannigar.wordpress.com/2009/09/04/mapping-a-many-to-many-join-table-with-extra-col
40 ElementCollection
In JPA 2.0 an ElementCollection mapping can be used to define a collection of Embeddable
or Basic values. The values are stored in a separate collection table, defined through the
@CollectionTable annotation or <collection-table> XML element.
EMPLOYEE (table)
PHONE (table)
@Entity
public class Employee {
    @Id
    @Column(name="EMP_ID")
    private long id;
    ...
}
4 https://java.sun.com/javaee/5/docs/api/javax/persistence/JoinColumn.html
5 https://java.sun.com/javaee/5/docs/api/javax/persistence/JoinColumns.html
@Embeddable
public class Phone {
private String type;
private String areaCode;
@Column(name="P_NUMBER")
private String number;
...
}
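A sketch of how the Phone Embeddable collection might be mapped on the Employee (the collection table and join column names are assumptions, mirroring the Basic example below):

@Entity
public class Employee {
    @Id
    @Column(name="EMP_ID")
    private long id;
    ...
    @ElementCollection
    @CollectionTable(
        name="PHONE",
        joinColumns=@JoinColumn(name="OWNER_ID")
    )
    private List<Phone> phones;
    ...
}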
Basic Collections
A collection of Basic values, such as a List of String, can also be stored in a collection
table, using an @ElementCollection with a @Column to define the value column.
EMPLOYEE (table)
PHONE (table)
OWNER_ID PHONE_NUMBER
1 613-792-0001
1 613-494-1234
2 416-892-0005
@Entity
public class Employee {
@Id
@Column(name="EMP_ID")
private long id;
...
@ElementCollection
@CollectionTable(
name="PHONE",
joinColumns=@JoinColumn(name="OWNER_ID")
)
@Column(name="PHONE_NUMBER")
private List<String> phones;
...
}
See Also
• Relationships6
• Lazy Fetching7
• Target Entity8
• Collections9
• Maps10
• Join Fetching11
• Batch Reading12
• Common Problems13
• OneToMany14
• ManyToMany15
• Embeddables16
The JPA 2.0 specification does not provide a way to define the Id in the Embeddable.
However, to delete or update an element of the ElementCollection mapping, some unique
key is normally required. Otherwise, on every update the JPA provider would need to
delete everything from the CollectionTable for the Entity, and then insert the values
back. So, the JPA provider will most likely assume that the combination of all of the fields
in the Embeddable is unique, in combination with the foreign key (JoinColumn(s)). This
however could be inefficient, or just not feasible if the Embeddable is big or complex.
Some JPA providers may allow the Id to be specified in the Embeddable, to resolve this
issue. Note in this case the Id only needs to be unique for the collection, not the table, as the
foreign key is included. Some may also allow the unique option on the CollectionTable
to be used for this. Otherwise, if your Embeddable is complex, you may consider making it
an Entity and use a OneToMany instead.
41 Advanced Topics
41.1 Events
An event is a hook into a system that allows the execution of some code when the event
occurs. Events can be used to extend, integrate, debug, audit or monitor a system.
JPA defines several events for the persistent life-cycle of Entity objects. JPA events are
defined through annotations or in the orm.xml. Any method of a persistent class can
be annotated with an event annotation to be called for all instances of that class. An
event listener can also be configured for a class using the EntityListeners1 annotation or
<entity-listeners> XML element. The specified listener class does not need to implement
any interface (JPA does not use the Java event model), it only needs to annotate its methods
with the desired event annotation.
JPA defines the following events:
• PostLoad2 - Invoked after an Entity is loaded into the persistence context
(EntityManager), or after a refresh operation.
• PrePersist3 - Invoked before the persist operation is invoked on an Entity. Also
invoked on merge for new instances, and on cascade of a persist operation. The Id of
the object may not have been assigned yet, and could be assigned by the event.
• PostPersist4 - Invoked after a new instance is persisted to the database. This occurs
during a flush or commit operation after the database INSERT has occurred, but before
the transaction is committed. It does not occur during the persist operation. The Id
of the object should be assigned.
• PreUpdate5 - Invoked before an instance is updated in the database. This occurs during
a flush or commit operation, before the database UPDATE is executed. It does not occur
during the merge operation.
• PostUpdate6 - Invoked after an instance is updated in the database. This occurs during
a flush or commit operation after the database UPDATE has occurred, but before the
transaction is committed. It does not occur during the merge operation.
• PreRemove7 - Invoked before the remove operation is invoked on an Entity. Also invoked
for cascade of a remove operation. It is also invoked during a flush or commit for
orphanRemoval in JPA 2.0.
1 http://download.oracle.com/javaee/6/api/javax/persistence/EntityListeners.html
2 http://download.oracle.com/javaee/6/api/javax/persistence/PostLoad.html
3 http://download.oracle.com/javaee/6/api/javax/persistence/PrePersist.html
4 http://download.oracle.com/javaee/6/api/javax/persistence/PostPersist.html
5 http://download.oracle.com/javaee/6/api/javax/persistence/PreUpdate.html
6 http://download.oracle.com/javaee/6/api/javax/persistence/PostUpdate.html
7 http://download.oracle.com/javaee/6/api/javax/persistence/PreRemove.html
• PostRemove8 - Invoked after an instance is deleted from the database. This occurs during
a flush or commit operation after the database DELETE has occurred, but before the
transaction is committed. It does not occur during the remove operation.
@Entity
public class Employee {
@Id
private String uid;
@Basic
private Calendar lastUpdated;
...
@PrePersist
public void prePersist() {
this.uid = UIDGenerator.newUUID();
this.lastUpdated = Calendar.getInstance();
}
@PreUpdate
public void preUpdate() {
this.lastUpdated = Calendar.getInstance();
}
}
@Entity
@EntityListeners(EmployeeEventListener.class)
public class Employee {
@Id
private String uid;
@Basic
private Calendar lastUpdated;
...
}
public class EmployeeEventListener {
@PreUpdate
public void preUpdate(Object object) {
Employee employee = (Employee)object;
employee.setLastUpdated(Calendar.getInstance());
}
}
8 http://download.oracle.com/javaee/6/api/javax/persistence/PostRemove.html
It is also possible to configure a default Entity listener. This listener will receive events
for all of the Entity classes in the persistence unit. Default listeners can only be defined
through XML.
If a default Entity listener is defined, and a class wants to define its own listener or does
not want the default listener, the default listener can be disabled using the ExcludeDefaultListeners9
annotation or <exclude-default-listeners> XML element.
<entity-mappings xmlns="http://java.sun.com/xml/ns/persistence/orm"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/persistence/orm orm_2_0.xsd"
    version="2.0">
<persistence-unit-metadata>
<persistence-unit-defaults>
<entity-listeners>
<entity-listener class="org.acme.ACMEEventListener">
<pre-persist method-name="prePersist"/>
<pre-update method-name="preUpdate"/>
</entity-listener>
</entity-listeners>
</persistence-unit-defaults>
</persistence-unit-metadata>
</entity-mappings>
9 http://download.oracle.com/javaee/6/api/javax/persistence/ExcludeDefaultListeners.html
Entity listeners are inherited. If a subclass does not wish to inherit a superclass En-
tity listener, then it must define the ExcludeSuperclassListeners10 annotation or
<exclude-superclass-listeners> XML element.
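For example, an Entity might opt out of both the default listener and any inherited listeners (a minimal sketch; the class name is illustrative):

@Entity
@ExcludeDefaultListeners
@ExcludeSuperclassListeners
public class ContractEmployee extends Employee {
    ...
}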
JPA does not define any events for Embeddables. Some JPA providers may allow defining
events for Embeddable objects.
The JPA events are only defined for the Entity life-cycle. There are no EntityManager
events, or system level events.
Some JPA providers may provide additional events.
TopLink11 / EclipseLink12 : Provide an extended event mechanism. Additional Entity
level events are defined through the DescriptorEventListener API. A session level event
mechanism is also provided through the SessionEventListener API. The event objects
also provide additional information including the database row and set of object changes.
41.2 Views
A database VIEW is a virtual view of a table or query. Views are useful for hiding the
complexity of a table, set of tables, or data-set.
In JPA you can map to a VIEW the same as a table, using the @Table annotation. You can
then map each column in the view to your object's attributes. Views are normally read-only,
so objects mapping to views are normally also read-only. In most databases views can
also be updatable, depending on how complex the query is that they encapsulate. Even for
complex queries, database triggers can normally be used to allow updates through the view.
Views can often be used in JPA to work around mapping limitations. For example, if a table
join is not supported by JPA, or a database function needs to be called to transform
data, this can normally be done inside a view, so JPA can just map to the simplified data.
Using views does require database expertise, and the definition of views can be database
dependent. A basic mapping to a view is sketched below.
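A minimal sketch of mapping a read-only object to a view (EMP_VIEW and its columns are assumed names):

@Entity
@Table(name="EMP_VIEW")
public class EmployeeView {
    @Id
    @Column(name="EMP_ID")
    private long id;
    @Column(name="FULL_NAME")
    private String fullName;
    ...
}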
10 http://download.oracle.com/javaee/6/api/javax/persistence/ExcludeSuperclassListeners.html
11 Chapter 11 on page 25
12 Chapter 10 on page 23
41.3 Interfaces
Interfaces can be used for two main purposes in a model. The first is as a public interface
that defines the public API for the class. The second is as a common type that allows
multiple distinct implementers.
If you have a public interface in JPA, you just need to map the implementation class and
are fine for the most part. One issue is that you need to use the implementation class for
queries, as JPA does not know about the interface. For JPQL the default alias is also the
implementation class, but you could redefine this to be the public interface by setting the
name of the Entity to be the public interface.
Some JPA providers allow interfaces to be defined.
TopLink13 / EclipseLink14 : Support defining and querying on public interfaces using a
DescriptorCustomizer and the InterfacePolicy.
@Entity(name="Employee")
public class EmployeeImpl {
...
}
If you have a common interface with multiple distinct implementers, this can cause some
issues. If you use the interface to define a variable relationship, then this is difficult
to map. JPA has no direct support for interfaces or variable relationships. You could
change the interface to be an abstract class, then use TABLE_PER_CLASS inheritance to
map it, as sketched below. Otherwise, you could split the relationship into multiple relationships, one per
implementer, or you could just remove the relationship and query for the related
objects instead. Querying on an interface is also difficult: you would need to query on each
implementer of the interface, and then union the results in memory.
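A minimal sketch of the abstract class approach, assuming a common Asset interface is replaced by an abstract class (the class names are illustrative):

@Entity
@Inheritance(strategy=InheritanceType.TABLE_PER_CLASS)
public abstract class Asset {
    @Id
    @GeneratedValue
    private long id;
    ...
}

@Entity
public class Vehicle extends Asset {
    ...
}

@Entity
public class Building extends Asset {
    ...
}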
Some JPA providers have support for mapping interfaces, and for variable relationships.
TopLink15 / EclipseLink16 : Support mapping interfaces using a SessionCustomizer and
a RelationalDescriptor and InterfacePolicy. Variable relationships can be defined
using the @VariableOneToOne annotation or XML.
13 Chapter 11 on page 25
14 Chapter 10 on page 23
15 Chapter 11 on page 25
16 Chapter 10 on page 23
41.4 Stored Procedures
A stored procedure is a procedure or function that resides on the database. Stored procedures
are typically written in some database specific language that is similar to SQL, such as
PL/SQL on Oracle. Some databases such as Oracle also support stored procedures written
in Java.
Stored procedures can be useful to perform batch data processing tasks. By writing the task
in the database, it avoids the cost of sending the data to and from the database client, so
can operate much more efficiently. Stored procedures can also be used to access database
specific functionality that can only be accessed on the server. Stored procedures can also be
used when strict security requirements are in place, to avoid giving users access to the raw tables
or to unverified SQL operations. Some legacy applications have also been written in database
procedural languages, and need to be integrated with.
The disadvantages of using stored procedures are that they are less flexible than SQL, and
that they require developing and maintaining functionality that is often written in a different
language than the application developers are used to, is more difficult to develop and debug,
and typically uses a limited procedural programming language. There is also a general
misconception that using stored procedures will improve performance: that if you put the
same SQL the application is executing inside a stored procedure it will somehow become
faster. This is false, and normally the opposite is true, as stored procedures restrict the
dynamic ability of the persistence layer to optimize data retrieval. Stored procedures only
improve performance when they use more optimal SQL than the application, typically when
they perform an entire task on the database. To achieve optimal performance from SQL
generated in an application you must use prepared statements - otherwise the database will
have to create a new execution plan each time you submit a query.
JPA does not have any direct support for stored procedures. Some types of stored procedures
can be executed in JPA through using native queries. Native queries in JPA allow any SQL
that returns nothing, or returns a database result set to be executed. The syntax to execute
a stored procedure depends on the database. JPA does not support stored procedures that
use OUTPUT or INOUT parameters. Some databases such as DB2, Sybase and SQL Server
allow for stored procedures to return result sets. Oracle does not allow result sets to be
returned, only OUTPUT parameters, but does define a CURSOR type that can be returned as an
OUTPUT parameter. Oracle also supports stored functions, that can return a single value. A
stored function can normally be executed using a native SQL query by selecting the function
value from the Oracle DUAL table.
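For example, a stored function could be called through a native query (a sketch; CALC_BONUS is an assumed function name):

EntityManager em = getEntityManager();
Query query = em.createNativeQuery("SELECT CALC_BONUS(?) FROM DUAL");
query.setParameter(1, empId);
Object bonus = query.getSingleResult();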
Some JPA providers have extended support for stored procedures, some also support
overriding any CRUD operation for an Entity with a stored procedure or custom SQL.
Some JPA providers have support for CURSOR OUTPUT parameters.
TopLink17 / EclipseLink18 : Support stored procedures and stored functions using the
@NamedStoredProcedureQuery, @NamedStoredFunctionQuery annotations or XML, or
the StoredProcedureCall, StoredFunctionCall classes. Overriding any CRUD opera-
tion for a class or relationship are also supported using a DescriptorCustomizer and the
17 Chapter 11 on page 25
18 Chapter 10 on page 23
DescriptorQueryManager class. IN, OUT, INOUT, and CURSOR OUTPUT parameters are
supported.
EntityManager em = getEntityManager();
Query query = em.createNativeQuery("BEGIN VALIDATE_EMP(P_EMP_ID=>?);
END;");
query.setParameter(1, empId);
query.executeUpdate();
In Oracle stored procedures are typically written in Oracle's PL/SQL language. PL/SQL in
Oracle supports some additional data-types, that Oracle does not support through SQL or
JDBC. These include types such as BOOLEAN, TABLE and RECORD. Accessing these types or
procedures is difficult from Java, as these types are not supported by JDBC. One workaround
is to wrap the PL/SQL stored procedures with normal stored procedures that transform the
PL/SQL types to standard SQL/JDBC types, such as INT, VARRAY (Array), and OBJECT
TYPE (Struct). Some JPA providers have extended support for calling PL/SQL stored
procedures.
TopLink19 / EclipseLink20 : Support PL/SQL stored procedures and functions us-
ing the @NamedPLSQLStoredProcedureQuery, @NamedPLSQLStoredFunctionQuery anno-
tations or XML, or the PLSQLStoredProcedureCall, PLSQLStoredFunctionCall classes.
41.5 Structured Object-Relational Data Types
Back in the heyday of object-oriented databases (OODBMS) many of the relational database
vendors decided to add object-oriented concepts to relational data. These new hybrid
databases were called Object-Relational in that they could store both object and relational
data. These object-relational data-types were standardized as part of SQL3 and support was
added for them from Java in the JDBC 2.0 API. Although there was lots of hype around the
new forms of data, object-relational data never caught on much, as people seemed to prefer
their standard relational data. I would not normally recommend using object-relational
data, as relational data is much more standard, but if you have really complex data, it may
be something to investigate.
Some common object-relational database features include:
• Object types (structures)
• Arrays and array types
• Nested tables
• Inheritance
19 Chapter 11 on page 25
20 Chapter 10 on page 23
41.6 XML Data Types
With the advent of XML databases, many relational databases decided to add enhanced
XML support. Although it was always possible to store XML in a relational database just
using a VARCHAR or CLOB column, having the database aware of the XML data does have its
advantages. The main advantage is databases that offer XML support allow querying of the
XML data using XPath or XQuery syntax. Some databases also allow the XML data to be
stored more efficiently than in Lob storage.
Databases with XML support include:
• Oracle (XDB)
• DB2
• PostgreSQL
JPA has no extended support for XML data, although it is possible to store an XML String
into the database, just mapped as a Basic. Some JPA providers may offer extended XML
data support, such as query extensions, or may allow mapping an XML DOM.
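A minimal sketch of storing XML as a basic mapping (the class and column names are illustrative):

@Entity
public class Catalog {
    @Id
    private long id;
    @Lob
    @Column(name="XML_DATA")
    private String xml;
    ...
}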
21 Chapter 11 on page 25
22 Chapter 10 on page 23
23 http://ronaldoblanc.blogspot.com/2011/11/jpa-eclipselink-e-stored-procedures-com.html
If you wish to map the XML data into objects, you could make use of the JAXB specification.
You may even be able to integrate this with your JPA objects.
TopLink24 / EclipseLink25 : Support Oracle XDB XMLType columns using their
DirectToXMLTypeMapping. XMLTypes can be mapped either as String or as an XML
DOM (Document). Query extensions are provided for XPath queries within Expression
queries. EclipseLink also includes a JAXB implementation for object-XML mapping.
41.7 Filters
Sometimes it is desirable to filter some of the contents of a table from all queries. This
is normally because the table is shared, either by multiple types, applications, tenants, or
districts, and the JPA application is only interested in a subset of the rows. It may also
be that the table includes historical or archive rows that should be ignored by the JPA
application.
JPA does not provide any specific support for filtering data, but there are some options
available. Inheritance can be used to include a type check on the rows for a class. For
example, if you had a STATUS column in an EMPLOYEE table, you could define an Employee
and a CurrentEmployee subclass whose discriminator STATUS was ACTIVE, and always use
CurrentEmployee in the application. Similarly you could define an ACMEEmployee subclass
that used the TENANT column as its class discriminator of value ACME. Another solution is to
use database views to filter the data and map the entities to the views.
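A minimal sketch of the inheritance approach described above (the STATUS column and ACTIVE value are the assumed discriminator):

@Entity
@Inheritance
@DiscriminatorColumn(name="STATUS")
public class Employee {
    @Id
    private long id;
    ...
}

@Entity
@DiscriminatorValue("ACTIVE")
public class CurrentEmployee extends Employee {
}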
These solutions do not work with dynamic filtering, where the filter criteria parameters
are not known until runtime (such as tenant, or district). Also, complex criteria cannot be
modeled through inheritance, although database views should still work. One solution is to
always append the criteria to any query, such as appending a JPQL string, or adding JPA Criteria,
in the application code.
Virtual Private Database (VPD) support may also provide a solution. Some databases such
as Oracle support VPD, which allows context based filtering of rows based on the connected user or
proxy certificate.
Some JPA providers have specific support for filtering data.
TopLink26 / EclipseLink27 : Support filtering data through their @AdditionalCriteria
annotation and XML. This allows an arbitrary JPQL fragment to be appended to all queries
for the entity. The fragment can contain parameters that can be set through persistence
unit or context properties at runtime. Oracle VPD is also supported, including Oracle proxy
authentication and isolated data.
24 Chapter 11 on page 25
25 Chapter 10 on page 23
26 Chapter 11 on page 25
27 Chapter 10 on page 23
41.8 History
A common desire in database applications is to maintain a record and history of the database
changes. This can be used for tracking and auditing purposes, or to allow undoing of
changes, or to keep a record of the system's data over time. Many databases have auditing
functionality that allows some level of tracking of changes made to the data. Some databases,
such as Oracle with its Flashback feature, allow the automatic tracking of history at the row level,
and even allow querying on past versions of the data.
It is also possible for an application to maintain its own history through its data model. All
that is required is to add a START and END timestamp column to the table. The current
row is then the one in which the END timestamp is null. When a row is inserted its START
timestamp will be set to the current time. When a row is updated, instead of updating the
row a new row will be inserted with the same id and data, but a different START timestamp,
and the old row will be updated to set its END timestamp to the current time. The primary
key of the table will need to have the START timestamp added to it.
History can also be used to avoid deletion. Instead of deleting a row, the END timestamp
can just be set to the current time. Another solution is to add a DELETED boolean column
to the table, and record when a row is deleted.
The history data could either be stored in-place in the altered table, or the table could be
left to only contain the current version of the data, and a mirror history table could be
added to store the data. In the mirror case, database triggers could be used to write to the
history table. For the in-place case a database view could be used to give a view of the table
as of the current time.
To query the current data from a history table, any query must include the clause where
the END is NULL. To query as of a point in time, the where clause must include where the
point in time is between the START and END timestamps.
JPA does not define any specific history support.
Oracle flashback can be used with JPA, but any queries for historical data will need to use
native SQL.
If a mirror history table is used with triggers, JPA can still be used to query the current data.
A subclass or sibling class could also be mapped to the history table to allow querying of
history data.
If a database view is used, JPA could be used by mapping the Entity to the view.
If a history table is used, JPA could still be used to map to the table, and start and
end attributes could be added to the object. Queries for the current data could append the
current time to the query, as sketched below. Relationships are more difficult, as JPA requires
relationships to be by primary key, and historical relationships would not be.
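A minimal sketch of querying the current version when the object has start and end attributes mapped (startDate and endDate are assumed names):

Query query = em.createQuery(
    "Select e from Employee e where e.id = :id and e.endDate is null");
query.setParameter("id", id);
Employee current = (Employee)query.getSingleResult();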
Some JPA providers have support for history.
41.9 Logical Deletes
41.10 Auditing
41.11 Replication
Data replication can be used to backup data, for fault tolerance and fail-over, or for load
balancing and scaling the database.
For replication, changes are written to multiple databases, either by the application, JPA
provider, or database back-end. For fail-over, if one of the databases goes down, the other
can be used without loss of data or application downtime. For load-balancing, read requests
can be load balanced across the replicated databases to reduce the load on each database,
and improve the scalability of the application.
Most enterprise databases support some form of automatic backup or replication. Clustered
databases such as Oracle RAC also allow for load balancing, fail-over and high availability. If
your database supports replication or clustering, then it is normally transparent to JPA. A
specialized DataSource (such as Oracle UCP, or WebLogic GridLink) may need to be used to
handle load-balancing and fail-over.
JPA does not define any specific support for data replication, but some JPA providers provide
replication support. If your database does not support replication, you can implement it
yourself by having multiple persistence units, and persisting and merging your objects
to both databases, as sketched below.
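A minimal sketch of application-level replication across two persistence units (the unit names are illustrative, and without JTA the two commits are not atomic):

EntityManagerFactory primary = Persistence.createEntityManagerFactory("acme");
EntityManagerFactory backup = Persistence.createEntityManagerFactory("acme-backup");
EntityManager em1 = primary.createEntityManager();
EntityManager em2 = backup.createEntityManager();
em1.getTransaction().begin();
em2.getTransaction().begin();
em1.persist(employee);
em2.merge(employee); // write the same data to the backup database
em1.getTransaction().commit();
em2.getTransaction().commit();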
TopLink31 / EclipseLink32 : Support replication, load-balancing and fail-over. Replication
and load balancing is supported through EclipseLink's partitioning support using the
@ReplicationPartitioning, and @RoundRobinPartitioning annotations and XML.
28 Chapter 11 on page 25
29 Chapter 10 on page 23
30 http://en.wikibooks.org/wiki/Java_Persistence%2FAuditing_and_Security
31 Chapter 11 on page 25
32 Chapter 10 on page 23
41.12 Partitioning
Data partitioning can be used to scale an application across multiple database machines, or
with a clustered database such as Oracle RAC.
Partitioning splits your data across each of the database nodes. There is horizontal parti-
tioning, and vertical partitioning. Vertical partitioning is normally the easiest to implement.
You can just put half of your classes on one database, and the other half on another. Ideally
the two sets would be isolated from each other and not have any cross database relationships.
This can be done directly in JPA, by having two different persistence units, one for each
database.
For horizontal partitioning you need to split your data across multiple database nodes.
Each database node will have the same tables, but each node's table will only store part
of the data. You can partition the data by the data values, such as range partitioning,
value partitioning, hash partitioning, or even round robin. JPA does not define any data
partitioning support, so you either need to define a different class per partition, or use JPA
vendor specific functionality.
TopLink33 / EclipseLink34 : Support both horizontal and vertical data partitioning.
Hash, value, range, pinned and custom partitioning are supported at the Session, Entity,
and Query level. Partitioning is supported through the @Partitioning, @HashPartitioning,
@RangePartitioning, @ValuePartitioning, @PinnedPartitioning, and @Partitioned
annotations and XML.
See also,
• Data Partitioning - Scaling the Database (Blog)35
See, NoSQL36
33 Chapter 11 on page 25
34 Chapter 10 on page 23
35 http://java-persistence-performance.blogspot.com/2011/05/data-partitioning-scaling-database.html
36 http://en.wikibooks.org/wiki/Java_Persistence%2FNoSQL
41.15 Multi-Tenancy
42 Runtime
Once you have mapped your object model, the second step in persistence development is
to access and process your objects from your application; this is referred to as the runtime
usage of persistence. Various persistence specifications have had various runtime models.
The most common model is to have a runtime API; a runtime API typically will define APIs
for connecting to a data-source, querying, and transactions.
43 Entity Manager
JPA provides a runtime API defined by the javax.persistence1 package. The main
runtime class is the EntityManager2 class. The EntityManager provides API for creating
queries, accessing transactions, and finding, persisting, merging and deleting objects. The
JPA API can be used in any Java environment including JSE and JEE.
An EntityManager can be created through an EntityManagerFactory3 , or can be injected
into an instance variable in an EJB SessionBean, or can be looked up in JNDI in a JEE
server.
JPA is used differently in Java Standard Edition (JSE) versus Java Enterprise Edition (JEE).
1 https://java.sun.com/javaee/5/docs/api/javax/persistence/package-summary.html
2 https://java.sun.com/javaee/5/docs/api/javax/persistence/EntityManager.html
3 https://java.sun.com/javaee/5/docs/api/javax/persistence/EntityManagerFactory.html
4 https://java.sun.com/javaee/5/docs/api/javax/persistence/Persistence.html
<persistence xmlns="http://java.sun.com/xml/ns/persistence"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/persistence persistence_1_0.xsd"
    version="1.0">
<persistence-unit name="acme" transaction-type="RESOURCE_LOCAL">
<!-- EclipseLink -->
<provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
<!--provider>org.hibernate.ejb.HibernatePersistence</provider-->
<!-- DataNucleus-->
<!--provider>org.datanucleus.jpa.PersistenceProviderImpl</provider-->
<exclude-unlisted-classes>false</exclude-unlisted-classes>
<properties>
<property name="eclipselink.jdbc.driver"
value="org.acme.db.Driver"/>
<property name="eclipselink.jdbc.url"
value="jdbc:acmedb://localhost/acme"/>
<property name="eclipselink.jdbc.user" value="wile"/>
<property name="eclipselink.jdbc.password"
value="elenberry"/>
</properties>
</persistence-unit>
</persistence>
<persistence xmlns="http://java.sun.com/xml/ns/persistence"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/persistence persistence_2_0.xsd"
    version="2.0">
<persistence-unit name="acme" transaction-type="RESOURCE_LOCAL">
<!-- EclipseLink -->
<provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
<!--provider>org.hibernate.ejb.HibernatePersistence</provider-->
<!--provider>org.datanucleus.jpa.PersistenceProviderImpl</provider-->
<exclude-unlisted-classes>false</exclude-unlisted-classes>
<properties>
<property name="javax.persistence.jdbc.driver"
value="org.acme.db.Driver"/>
<property name="javax.persistence.jdbc.url"
value="jdbc:acmedb://localhost/acme"/>
<property name="javax.persistence.jdbc.user"
value="wile"/>
<property name="javax.persistence.jdbc.password"
value="elenberry"/>
</properties>
</persistence-unit>
</persistence>
EntityManagerFactory factory =
Persistence.createEntityManagerFactory("acme");
EntityManager entityManager = factory.createEntityManager();
...
entityManager.close();
A non-managed EntityManager must be closed, and typically does not integrate with JTA,
but this is possible through the joinTransaction5 API. The entities in a non-managed
EntityManager do not become detached after a transaction completes, and can continue to
be used in subsequent
transactions.
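A minimal sketch of joining an active JTA transaction from a non-managed EntityManager:

EntityManager em = factory.createEntityManager();
// assumes a JTA transaction is already in progress
em.joinTransaction();
em.persist(employee);
em.close();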
<persistence xmlns="http://java.sun.com/xml/ns/persistence"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/persistence persistence_1_0.xsd"
    version="1.0">
<persistence-unit name="acme" transaction-type="JTA">
<jta-data-source>jdbc/ACMEDataSource</jta-data-source>
</persistence-unit>
</persistence>
<business-remote>org.acme.EmployeeService</business-remote>
<ejb-class>org.acme.EmployeeServiceBean</ejb-class>
<session-type>Stateless</session-type>
<persistence-context-ref>
<persistence-context-ref-name>persistence/acme/entity-manager</persistence-context-ref-name>
<persistence-unit-name>acme</persistence-unit-name>
</persistence-context-ref>
<persistence-unit-ref>
<persistence-unit-ref-name>persistence/acme/factory</persistence-unit-ref-name>
<persistence-unit-name>acme</persistence-unit-name>
</persistence-unit-ref>
</session>
</enterprise-beans>
</ejb-jar>
@Stateless(name="EmployeeService", mappedName="acme/EmployeeService")
@Remote(EmployeeService.class)
public class EmployeeServiceBean implements EmployeeService {
@PersistenceContext(unitName="acme")
private EntityManager entityManager;
@PersistenceUnit(unitName="acme")
private EntityManagerFactory factory;
...
}
44 Querying
1 http://en.wikibooks.org/wiki/Java%20Persistence%2FJPQL
2 Chapter 45.8 on page 232
3 http://en.wikibooks.org/wiki/Java%20Persistence%2FCriteria
4 Chapter 45.6 on page 229
5 Chapter 45.8 on page 232
6 http://en.wikipedia.org/wiki/SQL
7 http://wiki.eclipse.org/EclipseLink/UserGuide/JPA/Basic_JPA_Development/Querying/JPQL#EclipseLink_Extensions_.28EQL.29
8 http://en.wikipedia.org/wiki/Object%20Query%20Language
9 http://www.querydsl.com/
44.1 Named Queries
There are two main types of queries in JPA, named queries and dynamic queries. A named
query is used for a static query that will be used many times in the application. The
advantage of a named query is that it can be defined once, in one place, and reused in the
application. Most JPA providers also pre-parse/compile named queries, so they are more
optimized than dynamic queries which typically must be parsed/compiled every time they
are executed. Since named queries are part of the persistence meta-data they can also be
optimized or overridden in the orm.xml without changing the application code.
Named queries are defined through the @NamedQuery10 and @NamedQueries11 anno-
tations, or <named-query> XML element. Named queries are accessed through the
EntityManager.createNamedQuery12 API, and executed through the Query13 interface.
Named queries can be defined on any annotated class, but are typically defined on the
Entity that they query for. The name of the named query must be unique for the entire
persistence unit; the name is not local to the Entity. In the orm.xml named queries can
be defined either on the <entity-mappings> or on any <entity>.
Named queries are typically parametrized, so they can be executed with different parameter
values. Parameters are defined in JPQL using the :<name> syntax for named parameters, or
the ? syntax for positional parameters.
A collection of query hints can also be provided to a named query. Query hints can be
used to optimize or to provide special configuration to a query. Query hints are specific
to the JPA provider. Query hints are defined through the @QueryHint14 annotation or
query-hint XML element.
@NamedQuery(
name="findAllEmployeesInCity",
query="Select emp from Employee emp where emp.address.city = :city",
hints={@QueryHint(name="acme.jpa.batch", value="emp.address")}
)
public class Employee {
...
}
10 https://java.sun.com/javaee/5/docs/api/javax/persistence/NamedQuery.html
11 https://java.sun.com/javaee/5/docs/api/javax/persistence/NamedQueries.html
12 https://java.sun.com/javaee/5/docs/api/javax/persistence/EntityManager.html#createNamedQuery(java.lang.String)
13 https://java.sun.com/javaee/5/docs/api/javax/persistence/Query.html
14 https://java.sun.com/javaee/5/docs/api/javax/persistence/QueryHint.html
<entity-mappings>
<entity name="Employee" class="org.acme.Employee" access="FIELD">
<named-query name="findAllEmployeesInCity">
<query>Select emp from Employee emp where emp.address.city =
:city</query>
<hint name="acme.jpa.batch" value="emp.address"/>
</named-query>
<attributes>
<id name="id"/>
</attributes>
</entity>
</entity-mappings>
EntityManager em = getEntityManager();
Query query = em.createNamedQuery("findAllEmployeesInCity");
query.setParameter("city", "Ottawa");
List<Employee> employees = query.getResultList();
...
44.2 Dynamic Queries
Dynamic queries are normally used when the query depends on the context. For example,
depending on which items in the query form were filled in, the query may have different
parameters. Dynamic queries are also useful for uncommon queries, or prototyping.
JPA provides two main options for dynamic queries, JPQL and the Criteria API.
Dynamic queries can use parameters, and query hints the same as named queries.
Dynamic queries are accessed through the EntityManager.createQuery15 API, and exe-
cuted through the Query16 interface.
EntityManager em = getEntityManager();
Query query = em.createQuery("Select emp from Employee emp where
emp.address.city = :city");
query.setParameter("city", "Ottawa");
query.setHint("acme.jpa.batch", "emp.address");
List<Employee> employees = query.getResultList();
...
15 https://java.sun.com/javaee/5/docs/api/javax/persistence/EntityManager.html#createQuery(java.lang.String)
16 https://java.sun.com/javaee/5/docs/api/javax/persistence/Query.html
44.3 JPQL
See JPQL18 .
44.4 Parameters
Parameters are defined in JPQL using the : syntax, i.e. "Select e from Employee
e where e.id = :id". The parameter values are set on the Query using the
Query.setParameter19 API.
Parameters can also be defined using the ?, mainly for native SQL queries. You can also
use ?<int>. These are positional parameters, not named parameters and are set using the
Query API Query.setParameter20 . The int is the index of the parameter in the SQL.
Positional parameters start at 1 (not 0). Some JPA providers also allow the : syntax for
native queries.
For temporal parameters (Date, Calendar) you can also pass the temporal type, depending
on whether you want the Date, Time or Timestamp from the value.
Parameters are normally basic values, but you can also reference objects if comparing on
their Id, i.e. "Select e from Employee e where e.address = :address" can take the
Address object as a parameter. The parameter values are always at the object level when
comparing to a mapped attribute, for example if comparing a mapped enum the enum value
is used, not the database value.
Parameters are always set on the Query, no matter what type of query it is (JPQL, Criteria,
native SQL, NamedQuery).
Named Parameter:
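For example, with a named parameter (a sketch, using the Employee entity from the earlier examples):

Query query = em.createQuery("Select e from Employee e where e.id = :id");
query.setParameter("id", 1234L);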
Positional Parameter:
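And the same query with a positional parameter:

Query query = em.createQuery("Select e from Employee e where e.id = ?1");
query.setParameter(1, 1234L);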
17 http://en.wikibooks.org/wiki/Java%20Persistence%2FCriteria
18 http://en.wikibooks.org/wiki/Java%20Persistence%2FJPQL
19 https://java.sun.com/javaee/5/docs/api/javax/persistence/Query.html#setParameter(java.lang.String,%20java.lang.Object)
20 https://java.sun.com/javaee/5/docs/api/javax/persistence/Query.html#setParameter(int,%20java.lang.Object)
44.5 Query Results
Normally JPA queries return your persistent Entity objects. The returned objects will be
managed by the persistence context (EntityManager) and changes made to the objects will
be tracked as part of the current transaction. In some cases more complex queries can be
built that just return data instead of Entity objects, or even perform update or deletion
operations.
There are three methods to execute a Query, each returning different results:
• Query.getResultList21
• Query.getSingleResult22
• Query.executeUpdate23
getResultList returns a List of the results. This is normally a List of Entity objects,
but could also be a list of data, or arrays.
getSingleResult returns a single result. This is normally an Entity object, but could also
be data, or an object array. If the query returns nothing, an exception is thrown. This is
unfortunate, as typically just returning null would be desired. Some JPA providers may
have an option to return null instead of throwing an exception if nothing is returned. Also,
if the query returns more than a single row, an exception is thrown. This is also
unfortunate, as typically just returning the first result is desired. Some JPA providers may
have an option to return the first result instead of throwing an exception; otherwise you
need to call getResultList and get the first element.
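For example, the no-result case can be handled directly (a sketch):

Employee employee = null;
try {
    employee = (Employee)query.getSingleResult();
} catch (NoResultException notFound) {
    // no matching row, leave employee as null
}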
21 https://java.sun.com/javaee/5/docs/api/javax/persistence/Query.html#getResultList()
22 https://java.sun.com/javaee/5/docs/api/javax/persistence/Query.html#getSingleResult()
23 https://java.sun.com/javaee/5/docs/api/javax/persistence/Query.html#executeUpdate()
executeUpdate returns the database row count. This can be used for UPDATE and DELETE JPQL
queries, or any native SQL (DML or DDL) query that does not return a result.
44.6 Common Queries
To query all employees with a phone number in the 613 area code, a join is used.
JPQL:
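A sketch, using the same phoneNumbers and areaCode attributes as the Criteria example below:

Select e from Employee e join e.phoneNumbers p where p.areaCode = '613'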
Criteria:
CriteriaBuilder cb = em.getCriteriaBuilder();
CriteriaQuery<Employee> query = cb.createQuery(Employee.class);
Root<Employee> employee = query.from(Employee.class);
Join<Employee, PhoneNumber> phone = employee.join("phoneNumbers");
query.where(cb.equal(phone.get("areaCode"), "613"));
To query all employees whose projects are all in trouble a subselect with a double negation is
used.
JPQL:
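A sketch of one way to express the double negation ("all in trouble" as "no project that is not in trouble"); the status attribute is assumed:

Select e from Employee e where not exists
    (select p from Project p where p member of e.projects and p.status <> 'In trouble')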
Criteria:
CriteriaBuilder cb = em.getCriteriaBuilder();
CriteriaQuery<Employee> query = cb.createQuery(Employee.class);
Root<Employee> employee = query.from(Employee.class);
Join<Employee, Project> project = employee.join("projects");
Subquery<Project> subquery = query.subquery(Project.class);
Root<Project> subProject = subquery.from(Project.class);
subquery.select(subProject);
subquery.where(cb.and(cb.equal(project, subProject),
cb.equal(subProject.get("status"), "In trouble")));
query.where(cb.not(cb.exists(subquery)));
44.6.3 Join fetch, read both employee and address in same query
To query all employees and their address a join fetch is used. This selects both the employee
and address data in the same query. If the join fetch was not used, the employee address
would still be available, but could cause a query for each employee for its address. This
reduces n+1 queries to 1 query.
Join fetch:
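For example, the inner join fetch form:

SELECT e FROM Employee e JOIN FETCH e.address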
Outer joins can be used to avoid null and empty relationships from filtering the results:
SELECT e FROM Employee e LEFT OUTER JOIN FETCH e.address LEFT OUTER
JOIN FETCH e.phones
You can also select multiple objects in a query, but note that this does not instantiate the
relationship, so accessing the relationship could still trigger another query:
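One possible form (a sketch, assuming the address relationship):

SELECT e, a FROM Employee e, Address a WHERE e.address = a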
To query all employees for a given project, where the employee project relationship is a
ManyToMany, the query can traverse the relationship from either side if it is bi-directional,
as sketched below.
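A sketch of both directions (the projects and employees attribute names are assumed):

Select e from Employee e where :project member of e.projects

Select e from Project p join p.employees e where p = :project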
To query all employees who have a large project with a budget greater than 1,000,000 where
the employee only has a relationship to Project, not to the LargeProject subclass. JPA
1.0 JPQL does not define a cast operation (JPA 2.0 may define this), so querying on an
attribute of a subclass is not obvious. This can be done indirectly, however, by adding a
secondary join to the subclass to the query, as sketched below.
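One possible form of the secondary join (a sketch; the budget attribute is assumed on LargeProject):

Select e from Employee e join e.projects p, LargeProject lp
    where lp = p and lp.budget > 1000000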
To query the employee's first project for a particular employee, there are a few different
ways to do this: some use straight JPQL, and some use the Query setMaxResults API. If a
JPA 2.0 indexed list is used to map the collection, then the INDEX function can be used, as
sketched below.
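Two possible forms (sketches; an indexed @OrderColumn list is assumed for the INDEX version):

// Using setMaxResults to return only the first row
Query query = em.createQuery(
    "Select p from Employee e join e.projects p where e.id = :id");
query.setParameter("id", empId);
query.setMaxResults(1);
Project first = (Project)query.getSingleResult();

// Using the JPA 2.0 INDEX function
Query indexed = em.createQuery(
    "Select p from Employee e join e.projects p where e.id = :id and index(p) = 0");
indexed.setParameter("id", empId);
Project firstIndexed = (Project)indexed.getSingleResult();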
To query all employees ordered by the number of projects, there are a few different ways to
do this: some end up using sub-selects in SQL, and some use group by. Depending on your
JPA provider and database, your solution may be limited to one or the other. The SIZE
function can be used, or an explicit GROUP BY with COUNT, as sketched below.
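Possible forms (sketches; the projects attribute is assumed):

Select e, size(e.projects) as projectCount from Employee e order by projectCount

Select e, count(p) as projectCount from Employee e left join e.projects p group by e order by projectCount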
45 Advanced
There are several ways to optimize queries in JPA. The typical query performance issue is
that an object is read first, then its related objects are read one by one. This can be optimized
using JOIN FETCH in JPQL, otherwise by query hints specific to each JPA provider.
See,
• Join Fetching1
• Batch Fetching2
There are several JDBC options that can be used when executing a query. These JDBC
options are not exposed by JPA, but some JPA providers may support query hints for them.
• Fetch size : Configures the number of rows to fetch from the database in each page. A
larger fetch size is more efficient for large queries.
• Timeout : Instructs the database to cancel the query if its execution takes too long.
EclipseLink3 /TopLink4 : Provide many query hints including:
"eclipselink.jdbc.fetch-size" - Fetch size.
"eclipselink.jdbc.timeout" - Timeout.
"eclipselink.read-only" - The objects returned from the query are not managed by the
persistence context, and not tracked for changes.
"eclipselink.query-type" - Defines the native type of query to use for the query.
"eclipselink.sql.hint" - Allows an SQL hint to be included in the SQL for the query.
"eclipselink.jdbc.bind-parameters" - Specifies if parameter binding should be used or not,
(is used by default).
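For example, hints can be set on any query (a sketch using the hint names listed above):

Query query = em.createQuery("Select e from Employee e");
query.setHint("eclipselink.jdbc.fetch-size", "500");
query.setHint("eclipselink.jdbc.timeout", "10000");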
JPQL also allows for UPDATE and DELETE queries to be executed. This is not the recommended
or normal way to modify objects in JPA. Normally in JPA you first read the object, then either
modify it directly using its set methods to update it, or call the EntityManager.remove()
method to delete it.
UPDATE and DELETE queries in JPQL are for performing batch updates or deletions. These
allow a set of objects to be updated or deleted in a single query. These queries are useful for
performing batch operations, or clearing test data.
UPDATE and DELETE queries have a WHERE the same as SELECT queries, and can use the same
functions and operations, and traverse relationships and make use of sub selects. UPDATE
and DELETE queries are executed using the Query.executeUpdate() method, and return
the row count from the database. Note that some caution should be used when executing these
queries in an active persistence context, as the queries may affect objects that have
already been registered in the EntityManager. Normally it is a good idea to clear() the
EntityManager after executing the query, or to execute the query in a new EntityManager
or transaction, as sketched below.
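A minimal sketch of a bulk update (the salary attribute is assumed):

em.getTransaction().begin();
Query query = em.createQuery(
    "Update Employee e set e.salary = e.salary + 1000 where e.address.city = :city");
query.setParameter("city", "Ottawa");
int rowCount = query.executeUpdate();
em.getTransaction().commit();
em.clear(); // avoid stale objects left in the persistence context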
Within a transaction context in JPA, changes made to the managed objects are normally
not flushed (written) to the database until commit. So if a query were executed against
the database directly, it would not see the changes made within the transaction, as these
changes are only made in memory within the Java. This can cause issues if new objects have
been persisted, or objects have been removed or changed, as the application may expect the
query to return these results. Because of this JPA requires that the JPA provider performs
a flush of all changes to the database before any query operation. This however can cause
issues if the application is not expecting a flush as a side effect of a query operation.
If the application changes are not yet in a state to be flushed, a flush may not be desired.
Flushing can also be expensive, as it causes the database transaction, database locks and
other resources to be held for the duration of the transaction, which can affect performance
and concurrency.
JPA allows the flush mode for a query to be configured using the FlushModeType5 enum and
the Query.setFlushMode()6 API. The flush mode is either AUTO the default which means
flush before every query execution, or COMMIT which means only flush on commit. The flush
mode can also be set on an EntityManager using the EntityManager.setFlushMode()7
API, to affect all queries executed with the EntityManager. The EntityManager.flush()8
API can be called directly on the EntityManager anytime that a flush is desired.
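For example (a sketch):

Query query = em.createQuery("Select e from Employee e");
query.setFlushMode(FlushModeType.COMMIT); // do not flush pending changes for this query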
Some JPA providers also let the flush mode be configured through persistence unit properties,
or offer alternatives to flushing, such as performing the query against the in memory objects.
TopLink9 / EclipseLink10 : Allow the auto flush to be disabled using the persistence unit
property "eclipselink.persistence-context.flush-mode"="COMMIT".
A common requirement is to allow the user to page through a large query result. Typically
a web user is given the first page of n results after a query execution, and can click next to
go to the next page, or previous to go back.
If you are not concerned about performance, or the results are not too big, the easiest way
to implement this is to query all of the results, then access the sub-list from the result list
to populate your page. However, you will then have to re-query the entire results on every
page request.
One simple solution is to store the query results in a stateful SessionBean or an http session.
This means the initial query may take a while, but paging will be fast. Some JPA providers
also support the caching of query results, so you can cache the results in your JPA provider's
cache and just re-execute the query to obtain the cached results.
If the query result is quite large, then another solution may be required. JPA provides the
Query API setFirstResult11 , setMaxResults12 to allow paging through a large query
result. The maxResults can also be used as a safeguard to avoid letting users execute queries
that return too many objects.
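A minimal sketch of paging with the Query API (page and pageSize are assumed variables):

Query query = em.createQuery("Select e from Employee e order by e.id");
query.setFirstResult(page * pageSize);
query.setMaxResults(pageSize);
List<Employee> results = query.getResultList();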
How these query properties are implemented depends on the JPA provider and database.
JDBC allows the maxResults to be set, and most JDBC drivers support this, so it will
normally work for most JPA providers and most databases. Support for firstResult can
be less guaranteed to be efficient, as it normally requires database specific SQL. There is no
5 https://java.sun.com/javaee/5/docs/api/javax/persistence/FlushModeType.html
6 https://java.sun.com/javaee/5/docs/api/javax/persistence/Query.html#setFlushMode(javax.persistence.FlushModeType)
7 https://java.sun.com/javaee/5/docs/api/javax/persistence/EntityManager.html#setFlushMode(javax.persistence.FlushModeType)
8 https://java.sun.com/javaee/5/docs/api/javax/persistence/EntityManager.html#flush()
9 Chapter 11 on page 25
10 Chapter 10 on page 23
11 https://java.sun.com/javaee/5/docs/api/javax/persistence/Query.html#setFirstResult(int)
12 https://java.sun.com/javaee/5/docs/api/javax/persistence/Query.html#setMaxResults(int)
standard SQL for pagination, so whether this is supported depends on your database, and
your JPA provider's support.
When performing pagination, it is also important to order the result. If the query does
not order the result, then each subsequent query could potentially return the results in a
different order, and give a different page. Also if rows are inserted or deleted in between the
queries, the results can be slightly different.
An alternative to using firstResult is to filter the first result in the where clause based on
the order by and the value from the previous page.
Another alternative is to only query the Ids, and store this result in a stateful SessionBean
or an http session. Then query for the set of Ids for each page.
Pagination can also be used for server processes, or batch jobs. On the server, it is normally
used to avoid using too much memory upfront, and allow processing each batch one at a
time. Any of these techniques can be used, also some JPA providers support returning a
database cursor for the query results that allows scrolling through the results.
TopLink13 / EclipseLink14 : Support streams and scrollable cursors through the query hints
"eclipselink.cursor.scrollable" and "eclipselink.cursor", and CursoredStream
and ScrollableCursor classes.
13 Chapter 11 on page 25
14 Chapter 10 on page 23
Typically queries in JPA are defined through JPQL. JPQL allows the queries to be defined
in terms of the object model, instead of the data model. Since developers are programming
in Java using the object model, this is normally more intuitive. This also allows for data
abstraction and database schema and database platform independence. JPQL supports much
of the SQL syntax, but some aspects of SQL, or specific database extensions or functions
may not be possible through JPQL, so native SQL queries are sometimes required. Also
some developers have more experience with SQL than JPQL, so may prefer to use SQL
queries. Native queries can also be used for calling some types of stored procedures or
executing DML or DDL operations.
Native queries are defined through the @NamedNativeQuery15 and @NamedNativeQueries16
annotations, or <named-native-query> XML element. Native queries can also be defined
dynamically using the EntityManager.createNativeQuery()17 API.
A native query can be for a query for instances of a class, a query for raw data, an update
or DML or DDL operation, or a query for a complex query result. If the query is for a class,
the resultClass attribute of the query must be set. If the query result is complex, a Result
Set Mapping18 can be used.
Native queries can be parameterized, so they can be executed with different parameter
values. Parameters are defined in SQL using the ? syntax for positional parameters. JPA
does not require native queries to support named parameters, but some JPA providers may.
For positional parameters the position starts at 1 (not 0).
A collection of query hints can also be provided to a native query. Query hints can be
used to optimize or to provide special configuration to a query. Query hints are specific
to the JPA provider. Query hints are defined through the @QueryHint19 annotation or
query-hint XML element.
@NamedNativeQuery(
name="findAllEmployeesInCity",
query="SELECT E.* from EMP E, ADDRESS A WHERE E.EMP_ID = A.EMP_ID
AND A.CITY = ?",
resultClass=Employee.class
)
public class Employee {
...
}
15 https://java.sun.com/javaee/5/docs/api/javax/persistence/NamedNativeQuery.html
16 https://java.sun.com/javaee/5/docs/api/javax/persistence/NamedNativeQueries.html
17 https://java.sun.com/javaee/5/docs/api/javax/persistence/EntityManager.html#createNativeQuery(java.lang.String)
18 Chapter 45.6.1 on page 230
19 https://java.sun.com/javaee/5/docs/api/javax/persistence/QueryHint.html
<entity-mappings>
<entity name="Employee" class="org.acme.Employee" access="FIELD">
<named-native-query name="findAllEmployeesInCity"
result-class="org.acme.Employee">
<query>SELECT E.* from EMP E, ADDRESS A WHERE E.EMP_ID =
A.EMP_ID AND A.CITY = ?</query>
</named-native-query>
<attributes>
<id name="id"/>
</attributes>
</entity>
</entity-mappings>
EntityManager em = getEntityManager();
Query query = em.createNamedQuery("findAllEmployeesInCity");
query.setParameter(1, "Ottawa");
List<Employee> employees = query.getResultList();
...
EntityManager em = getEntityManager();
Query query = em.createNativeQuery("SELECT E.* from EMP E, ADDRESS A
WHERE E.EMP_ID = A.EMP_ID AND A.CITY = ?", Employee.class);
query.setParameter(1, "Ottawa");
List<Employee> employees = query.getResultList();
...
When a native SQL query returns objects, the SQL must ensure it returns the correct data
to build the resultClass using the correct column names as specified in the mappings. If
the SQL is more complex and returns different column names, or returns data for multiple
objects then a @SqlResultSetMapping20 must be used.
@SqlResultSetMapping is a fairly complex annotation containing an array of
@EntityResult21 and @ColumnResult22 . This allows multiple Entity objects in combina-
tion with raw data to be returned. The @EntityResult contains an array of @FieldResult23 ,
which can be used to map the alias name used in the SQL to the column name required
by the mapping. This is required if you need to return two different instances of the same
class, or if the SQL needs to alias the columns differently for some reason. Note that in the
@FieldResult the name is the name of the attribute in the object, not the column name in
20 https://java.sun.com/javaee/5/docs/api/javax/persistence/SqlResultSetMapping.html
21 https://java.sun.com/javaee/5/docs/api/javax/persistence/EntityResult.html
22 https://java.sun.com/javaee/5/docs/api/javax/persistence/ColumnResult.html
23 https://java.sun.com/javaee/5/docs/api/javax/persistence/FieldResult.html
the mapping. This seems odd, because this would make mapping an Embedded or composite
id relationship not possible.
Normally it is easiest to either select raw data or a single object with native SQL queries,
so @SqlResultSetMappings can normally be avoided, as they are quite complex. Also note
that even if you select the Employee and its Address with the SQL, these are two unrelated
objects; the employee's address is not set, and accessing it may trigger a query unless a cache
hit occurs. Some JPA providers may provide a query hint to allow join fetching to be used
with native SQL queries.
@NamedNativeQuery(
name="findAllEmployeesInCity",
query="SELECT E.*, A.* from EMP E, ADDRESS A WHERE E.EMP_ID =
A.EMP_ID AND A.CITY = ?",
resultSetMapping="employee-address"
)
@SqlResultSetMapping(name="employee-address",
entities={
@EntityResult(entityClass=Employee.class),
@EntityResult(entityClass=Address.class)}
)
public class Employee {
...
}
It can sometimes be required to mix JDBC code with JPA code. This may be to access
certain JDBC driver specific features, or to integrate with another application that uses
JDBC instead of JPA.
If you just require a JDBC connection, you could access one from your JEE server's
DataSource, or connect directly to DriverManager or a third party connection pool. If you
need a JDBC connection in the same transaction context as your JPA application, you
could use a JTA DataSource for JPA and your JDBC access to have them share the same
global transaction. If you are not using JEE, or not using JTA, then you may be able to
access the JDBC connection directly from your JPA provider.
Some JPA providers provide an API to access a raw JDBC connection from their internal
connection pool, or from their transaction context. In JPA 2.0 this API is somewhat
standardized by the unwrap API on EntityManager.
To access a JDBC connection from an EntityManager, some JPA 2.0 providers may support:
java.sql.Connection connection =
entityManager.unwrap(java.sql.Connection.class);
This connection could then be used for raw JDBC access. It normally should not be closed
when finished, as the connection is being used by the EntityManager and will be released
when the EntityManager is closed or the transaction committed.
46 JPQL BNF
The following defines the structure of the JPQL query language. For further examples and
usage see Querying1 .
| = or
[] = optional
* = repeatable (zero or more times)
{} = mandatory
46.1 Select
46.1.1 From
association_path_expression ::=
collection_valued_path_expression |
single_valued_association_path_expression
join_association_path_expression ::=
join_collection_valued_path_expression |
join_single_valued_association_path_expression
join_collection_valued_path_expression::=
identification_variable.collection_valued_association_field
join_single_valued_association_path_expression::=
identification_variable.single_valued_association_field
collection_member_declaration ::=
IN (collection_valued_path_expression) [AS] identification_variable
single_valued_path_expression ::=
state_field_path_expression |
single_valued_association_path_expression
state_field_path_expression ::=
{identification_variable |
single_valued_association_path_expression}.state_field
single_valued_association_path_expression ::=
identification_variable.{single_valued_association_field.}*
single_valued_association_field
collection_valued_path_expression ::=
identification_variable.{
single_valued_association_field.}*collection_valued_association_field
46.1.3 Where
46.1.4 Functions
46.1.5 Group By
46.1.6 Order By
46.1.7 Subquery
46.2 Update
46.3 Delete
46.4 Literals
It is normally best to define data values in a query using parameters, using the syntax
:parameter or ?. JPQL also allows for data values to be in-lined in the JPQL query using
literals.
JPQL defines the following syntax for literals:
• String - 'string' - "Select e from Employee e where e.name = 'Bob'"
• To define a ' (quote) character in a string, the quote is double quoted, i.e.
'Baie-D''Urfé'.
• Integer - digits - "Select e from Employee e where e.id = 1234"
• Long - +|-digitsL - "Select e from Employee e where e.id = 1234L"
• Float - +|-digits.decimaleexponentF - "Select s from Stat s where
s.ratio > 3.14F"
• Double - +|-digits.decimaleexponentD - "Select s from Stat s where
s.ratio > 3.14e32D"
• Boolean - TRUE | FALSE - "Select e from Employee e where e.active =
TRUE"
• Date - {d'yyyy-mm-dd'} - "Select e from Employee e where e.startDate =
{d'2012-01-03'}"
• Time - {t'hh:mm:ss'} - "Select e from Employee e where e.startTime =
{t'09:00:00'}"
• Timestamp - {ts'yyyy-mm-dd hh:mm:ss.nnnnnnnnn'} - "Select e from Employee
e where e.version = {ts'2012-01-03 09:00:00.000000001'}"
• Enum - package.class.enum - "Select e from Employee e where e.gender =
org.acme.Gender.MALE"
• null - NULL - "Update Employee e set e.manager = NULL where e.manager =
:manager"
46.5 New in JPA 2.0
SELECT e.name, CASE WHEN (e.salary >= 100000) THEN 1 WHEN (e.salary <
100000) THEN 2 ELSE 0 END from Employee e
47 Persisting
JPA uses the EntityManager1 API for runtime usage. The EntityManager represents the
application session or dialog with the database. Each request, or each client will use its own
EntityManager to access the database. The EntityManager also represents a transaction
context, and in a typical stateless model a new EntityManager is created for each transaction.
In a stateful model, an EntityManager may match the lifecycle of a client's session.
The EntityManager provides an API for all required persistence operations. These include
the following CRUD operations:
• persist2 (INSERT)
• merge3 (UPDATE)
• remove4 (DELETE)
• find5 (SELECT)
The EntityManager is an object-oriented API, so does not map directly onto database SQL
or DML operations. For example to update an object, you just need to read the object and
change its state through its set methods, and then call commit on the transaction. The
EntityManager figures out which objects you changed and performs the correct updates to
the database; there is no explicit update operation in JPA.
47.1 Persist
The EntityManager.persist()6 operation is used to insert a new object into the database.
persist does not directly insert the object into the database, it just registers it as new in the
persistence context (transaction). When the transaction is committed, or if the persistence
context is flushed, then the object will be inserted into the database.
If the object uses a generated Id, the Id will normally be assigned to the object when
persist is called, so persist can also be used to have an object's Id assigned. The one
exception is if IDENTITY sequencing is used, in this case the Id is only assigned on commit
1 https://java.sun.com/javaee/5/docs/api/javax/persistence/EntityManager.html
2 https://java.sun.com/javaee/5/docs/api/javax/persistence/EntityManager.html#persist(java.lang.Object)
3 https://java.sun.com/javaee/5/docs/api/javax/persistence/EntityManager.html#merge(java.lang.Object)
4 https://java.sun.com/javaee/5/docs/api/javax/persistence/EntityManager.html#remove(java.lang.Object)
5 https://java.sun.com/javaee/5/docs/api/javax/persistence/EntityManager.html#find(java.lang.Class,%20java.lang.Object)
6 https://java.sun.com/javaee/5/docs/api/javax/persistence/EntityManager.html#persist(java.lang.Object)
or flush because the database will only assign the Id on INSERT. If the object does not use
a generated Id, you should normally assign its Id before calling persist.
The persist operation can only be called within a transaction, an exception will be thrown
outside of a transaction. The persist operation is in-place, in that the object being persisted
will become part of the persistence context. The state of the object at the point of the
commit of the transaction will be persisted, not its state at the point of the persist call.
persist should normally only be called on new objects. It is allowed to be called on existing
objects if they are part of the persistence context, but only for the purpose of cascading
persist to any related new objects. If persist is called on an existing object that
is not part of the persistence context, then an exception may be thrown, the insert may be
attempted and fail with a database constraint error, or, if no constraints are
defined, duplicate data may be inserted.
persist can only be called on Entity objects, not on Embeddable objects, or collections,
or non-persistent objects. Embeddable objects are automatically persisted as part of their
owning Entity.
Calling persist is not always required. If you relate a new object to an existing object
that is part of the persistence context, and the relationship is cascade persist, then it will be
automatically inserted when the transaction is committed, or when the persistence context
is flushed.
Example persist
EntityManager em = getEntityManager();
em.getTransaction().begin();
em.persist(employee);
em.getTransaction().commit();
Calling persist on an object will also cascade the persist operation across any relationship
that is marked as cascade persist. If a relationship is not cascade persist, and a related
object is new, then an exception may be thrown if you do not first call persist on the
related object. Intuitively you may consider marking every relationship as cascade persist to
avoid having to worry about calling persist on every object, but this can also lead to issues.
One issue with marking all relationships cascade persist is performance. On each persist
call all of the related objects will need to be traversed and checked if they reference any
new objects. This can actually lead to O(n^2) performance issues if you mark all relationships
cascade persist, and persist a large new graph of objects. If you just call persist on the
root object, this is ok. However, if you call persist on each object in the graph, then you
will traverse the entire graph for each object in the graph, and this can lead to a major
performance issue. The JPA spec should probably have defined persist to only apply to new
objects not already part of the persistence context, but it requires that persist apply to all
objects, whether new, existing, or already persisted, so this issue can occur.
A second issue is that if you remove an object to have it deleted, and then call persist
on the same object, the object is resurrected and becomes persistent again. This may be
desirable if it is intentional, but the JPA spec also requires this behavior for cascade persist.
So if you remove an object, but forget to remove a reference to it from a cascade persist
relationship, the remove will be ignored.
I would recommend only marking relationships that are composite or privately owned as
cascade persist.
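A rough sketch of following this recommendation (the Employee and Phone entities are illustrative assumptions; only the privately owned phones relationship cascades persist):

@Entity
public class Employee {
    @Id
    @GeneratedValue
    private long id;

    // Privately owned (composite) relationship, so cascading persist is safe.
    @OneToMany(mappedBy = "owner", cascade = CascadeType.PERSIST)
    private List<Phone> phones = new ArrayList<Phone>();

    // Shared relationship, so persist is not cascaded; the manager must be persisted on its own.
    @ManyToOne
    private Employee manager;
}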
47.2 Merge
7 https://java.sun.com/javaee/5/docs/api/javax/persistence/EntityManager.html#merge(java.lang.Object)
Example merge
EntityManager em = createEntityManager();
Employee detached = em.find(Employee.class, id);
em.close();
...
em = createEntityManager();
em.getTransaction().begin();
Employee managed = em.merge(detached);
em.getTransaction().commit();
Calling merge on an object will also cascade the merge operation across any relationship
that is marked as cascade merge. Even if the relationship is not cascade merge, the reference
will still be merged. If the relationship is cascade merge the relationship and each related
object will be merged. Intuitively you may consider marking every relationship as cascade
merge to avoid having to worry about calling merge on every object, but this is normally a
bad idea.
One issue with marking all relationships cascade merge is performance. If you have an object
with a lot of relationships, then each merge call can require traversing a large graph of
objects.
Another issue arises if your detached object is corrupt in some way. For example, say you
have an Employee who has a manager, but that manager has a different copy of the detached
Employee object as its managedEmployee. This may cause the same object to be merged
twice, or at least it may not be deterministic which copy will be merged, so you may not get the
changes you expect merged. The same is true if you didn't change an object at all, but some
other user did, if merge cascades to this unchanged object, it will revert the other user's
changes, or throw an OptimisticLockException (depending on your locking policy). This
is normally not desirable.
I would recommend only marking relationships that are composite or privately owned as
cascade merge.
Another issue with merge is transient variables. Since merge is normally used with object
serialization, if a relationship was marked as transient (Java transient, not JPA transient),
then the detached object will contain null, and null will be merged into the object, even
though it is not desired. This will occur even if the relationship was not cascade merge, as
merge always merges the references to related objects. Normally transient is required when
using serialization to avoid serializing the entire database when only a single, or small set of
objects are required.
One solution is to avoid marking anything transient, and instead use LAZY relationships in
JPA to limit what is serialized (lazy relationships that have not been accessed, will normally
not be serialized). Another solution is to manually merge in your own code.
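A small sketch of the first solution, using a lazy relationship instead of a Java transient field (the entity names are illustrative; lazy fetching on a ManyToOne is only a hint, and some providers require weaving to honor it):

@Entity
public class Employee {
    @Id
    @GeneratedValue
    private long id;

    // Lazy relationship: not fetched, and normally not serialized, unless accessed,
    // so the field does not need to be marked Java transient.
    @ManyToOne(fetch = FetchType.LAZY)
    private Department department;
}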
Some JPA providers provide extended merge operations, such as allowing a shallow merge
or deep merge, or merging without merging references.
47.3 Remove
Example remove
EntityManager em = getEntityManager();
em.getTransaction().begin();
Employee employee = em.find(Employee.class, id);
em.remove(employee);
em.getTransaction().commit();
Calling remove on an object will also cascade the remove operation across any relationship
that is marked as cascade remove.
Note that cascade remove only affects the remove call. If you have a relationship that is
cascade remove, and remove an object from the collection, or dereference an object, it will
not be removed. You must explicitly call remove on the object to have it deleted. Some
JPA providers provide an extension to provide this behavior, and in JPA 2.0 there will be
an orphanRemoval option on OneToMany and OneToOne mappings to provide this.
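A minimal sketch of the JPA 2.0 orphanRemoval option (the Employee and Phone entities are illustrative assumptions):

@Entity
public class Employee {
    @Id
    @GeneratedValue
    private long id;

    // JPA 2.0: a Phone removed from this collection, or dereferenced, is deleted from the database.
    @OneToMany(mappedBy = "owner", orphanRemoval = true)
    private List<Phone> phones = new ArrayList<Phone>();
}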
47.3.2 Reincarnation
Normally an object that has been removed, stays removed, but in some cases you may need
to bring the object back to life. This normally occurs with natural ids, not generated ones,
8 https://java.sun.com/javaee/5/docs/api/javax/persistence/EntityManager.html#remove(java.lang.Object)
where a new object would always get a new id. Generally the desire to reincarnate an
object occurs from a bad object model design, normally the desire to change the class type
of an object (which cannot be done in Java, so a new object must be created). Normally the
best solution is to change your object model to have your object hold a type object which
defines its type, instead of using inheritance. But sometimes reincarnation is desirable.
When done in two separate transactions, this is normally fine, first you remove the object,
then you persist it back. This can be more complex if you wish to remove and persist an
object with the same Id in the same transaction. If you call remove on an object, then call
persist on the same object, it will simply no longer be removed. If you call remove on an
object, then call persist on a different object with the same Id the behavior may depend
on your JPA provider, and probably will not work. If you call flush after calling remove,
then call persist, then the object should be successfully reincarnated. Note that it will be
a different row, the existing row will have been deleted, and a new row inserted. If you wish
the same row to be updated, you may need to resort to using a native SQL update query.
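A rough sketch of reincarnating an object with the same Id in a single transaction, assuming an Employee entity with an assigned (non-generated) Id:

em.getTransaction().begin();
Employee oldEmployee = em.find(Employee.class, id);
em.remove(oldEmployee);
// Flush so the DELETE is executed before the new INSERT.
em.flush();
Employee newEmployee = new Employee();
newEmployee.setId(id);
em.persist(newEmployee);
em.getTransaction().commit();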
48 Advanced
48.1 Refresh
Example refresh
EntityManager em = getEntityManager();
em.refresh(employee);
48.2 Lock
1 https://java.sun.com/javaee/5/docs/api/javax/persistence/EntityManager.html#refresh(java.lang.Object)
2 Chapter 11 on page 25
3 Chapter 10 on page 23
4 Chapter 51.6.4 on page 268
5 Chapter 27.2.5 on page 106
Example getReference
EntityManager em = getEntityManager();
em.getTransaction().begin();
// getReference(Class, Object) returns a reference (proxy) without necessarily hitting the database.
Employee manager = em.getReference(Employee.class, managerId);
Employee employee = new Employee();
...
em.persist(employee);
employee.setManager(manager);
em.getTransaction().commit();
48.4 Flush
The EntityManager.flush()7 operation can be used to write all changes to the database
before the transaction is committed. By default JPA does not normally write changes to
the database until the transaction is committed. This is normally desirable as it avoids
database access, resources and locks until required. It also allows database writes to be
ordered, and batched for optimal database access, and to maintain integrity constraints and
avoid deadlocks. This means that when you call persist, merge, or remove, the corresponding
database DML (INSERT, UPDATE, DELETE) is not executed until commit, or until a flush is triggered.
Flush has several usages:
• Flush changes before a query execution to enable the query to return new objects and
changes made in the persistence context.
• Insert persisted objects to ensure their Ids are assigned and accessible to the application
if using IDENTITY sequencing.
• Write all changes to the database to allow error handling of any database errors (useful
when using JTA or SessionBeans).
• To flush and clear a batch for batch processing in a single transaction.
• Avoid constraint errors, or reincarnate an object.
6 https://java.sun.com/javaee/5/docs/api/javax/persistence/EntityManager.html#getReference(java.lang.Class,%20java.lang.Object)
7 https://java.sun.com/javaee/5/docs/api/javax/persistence/EntityManager.html#flush()
Example flush
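A minimal sketch of using flush to obtain an IDENTITY assigned Id before commit (the Employee entity, its accessors, and the getEntityManager() helper are assumptions):

EntityManager em = getEntityManager();
em.getTransaction().begin();
Employee employee = new Employee();
employee.setName("Bob");
em.persist(employee);
// Force the INSERT so the IDENTITY generated Id is assigned and accessible.
em.flush();
long newId = employee.getId();
em.getTransaction().commit();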
48.5 Clear
The EntityManager.clear()8 operation can be used to clear the persistence context. This
will clear all objects read, changed, persisted, or removed from the current EntityManager
or transaction. Changes that have already been written to the database through flush,
or any changes made to the database will not be cleared. Any object that was read or
persisted through the EntityManager is detached, meaning any changes made to it will not
be tracked, and it should no longer be used unless merged into the new persistence context.
clear can be used similar to a rollback to abandon changes and restart a persistence
context. If a transaction commit fails, or a rollback is performed the persistence context
will automatically be cleared.
clear is similar to closing the EntityManager and creating a new one, the main difference
being that clear can be called while a transaction is in progress. clear can also be used
to free the objects and memory consumed by the EntityManager. It is important to note
that an EntityManager is responsible for tracking and managing all objects read within its
persistence context. In an application managed EntityManager this includes every object
read since the EntityManager was created, including every transaction the EntityManager
was used for. If a long lived EntityManager is used, this is an intrinsic memory leak,
so calling clear or closing the EntityManager and creating a new one is an important
application design consideration. For JTA managed EntityManagers the persistence context
is automatically cleared across each JTA transaction boundary.
Clearing is also important on large batch jobs, even if they occur in a single transaction.
The batch job can be split into smaller batches within the same transaction, and clear can
be called in between each batch to prevent the persistence context from getting too big.
Example clear
8 https://java.sun.com/javaee/5/docs/api/javax/persistence/EntityManager.html#clear()
EntityManager em = getEntityManager();
em.getTransaction().begin();
try {
    for (int batch = 0; batch < openOrderIds.size(); batch += 100) {
        // Process the orders in batches of 100 within the same transaction.
        for (int index = 0; index < 100 && (batch + index) < openOrderIds.size(); index++) {
            Long id = openOrderIds.get(batch + index);
            Order order = em.find(Order.class, id);
            order.process(em);
        }
        // Write out the batch, then clear the persistence context to free memory.
        em.flush();
        em.clear();
    }
    em.getTransaction().commit();
} catch (RuntimeException error) {
    if (em.getTransaction().isActive()) {
        em.getTransaction().rollback();
    }
}
48.6 Close
Example close
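A minimal sketch of the typical create and close life-cycle for an application managed EntityManager (the getEntityManagerFactory() helper is an assumption):

EntityManager em = getEntityManagerFactory().createEntityManager();
try {
    em.getTransaction().begin();
    Employee employee = em.find(Employee.class, id);
    employee.setSalary(employee.getSalary() + 1000);
    em.getTransaction().commit();
} finally {
    // Always close an application managed EntityManager to release its resources.
    em.close();
}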
9 https://java.sun.com/javaee/5/docs/api/javax/persistence/EntityManager.html#close()
10 https://java.sun.com/javaee/5/docs/api/javax/persistence/EntityManager.html#getDelegate()
Unwrap (JPA 2.0)
If provider specific API is desired, the getDelegate() API allows the JPA implementation to be accessed
to call the API.
In JEE a managed EntityManager will typically create a new EntityManager per JTA
transaction. Also the behavior is somewhat undefined outside of a JTA transaction
context. Outside a JTA transaction context, a JEE managed EntityManager may
create a new EntityManager per method, so getDelegate() may return a temporary
EntityManager or even null. Another way to access the JPA implementation is through
the EntityManagerFactory, which is typically not wrapped with a proxy, but may be in
some servers.
In JPA 2.0 the getDelegate() API has been replaced by the unwrap() API which is more
generic.
Example getDelegate
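A hedged sketch, assuming EclipseLink as the provider (the JpaEntityManager cast and the cache clearing call are EclipseLink specific; other providers expose different delegate types):

// JPA 1.0 style: access the provider implementation through getDelegate().
JpaEntityManager delegate = (JpaEntityManager) em.getDelegate();
delegate.getServerSession().getIdentityMapAccessor().initializeAllIdentityMaps();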
Example unwrap
11 https://java.sun.com/javaee/6/docs/api/javax/persistence/EntityManager.html#unwrap(java.lang.Class)
// EclipseLink specific: unwrap the EntityManager and clear the shared object cache.
JpaEntityManager jpaEntityManager = em.unwrap(JpaEntityManager.class);
jpaEntityManager.getServerSession().getIdentityMapAccessor().initializeAllIdentityMaps();
49 Transactions
A transaction1 is a set of operations that either fail or succeed as a unit. Transactions are
a fundamental part of persistence. A database transaction consists of a set of SQL DML2
(Data Manipulation Language) operations that are committed or rolled back as a single
unit. An object level transaction is one in which a set of changes made to a set of objects
are committed to the database as a single unit.
JPA provides two mechanisms for transactions. When used in JEE3 JPA provides integra-
tion with JTA4 (Java Transaction API). JPA also provides its own EntityTransaction
implementation for JSE5 and for use in a non-managed mode in JEE. Transactions in JPA
are always at the object level, this means that all changes made to all persistent objects in
the persistence context are part of the transaction.
1 http://en.wikipedia.org/wiki/Database%20transaction
2 http://en.wikipedia.org/wiki/Data%20Manipulation%20Language
3 http://en.wikipedia.org/wiki/Java%20Enterprise%20Edition
4 http://en.wikipedia.org/wiki/Java%20Transaction%20API
5 http://en.wikipedia.org/wiki/Java%20Standard%20Edition
6 https://java.sun.com/javaee/5/docs/api/javax/persistence/EntityTransaction.html
to allow garbage collection and avoid stale data. If the commit fails, then the managed
objects are considered detached, and the EntityManager is cleared. This means that commit
failures cannot be caught and retried, if a failure occurs, the entire transaction must be
performed again. The previously managed objects may also be left in an inconsistent state,
meaning some of the objects' locking versions may have been incremented. Commit will
also fail if the transaction has been marked for rollback. This can occur either explicitly
by calling setRollbackOnly or is required to be set if any query or find operation fails.
This can be an issue, as some queries may fail, but may not be desired to cause the entire
transaction to be rolled back.
The rollback operation will rollback the database transaction only. The managed objects
in the persistence context will become detached and the EntityManager is cleared. This
means any object previously read, should no longer be used, and is no longer part of the
persistence context. The changes made to the objects will be left as is, the object changes
will not be reverted.
<persistence xmlns="http://java.sun.com/xml/ns/persistence"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/persistence
        http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd" version="1.0">
    <persistence-unit name="acme" transaction-type="RESOURCE_LOCAL">
        <non-jta-data-source>acme</non-jta-data-source>
    </persistence-unit>
</persistence>
EntityManager em = createEntityManager();
em.getTransaction().begin();
Employee employee = em.find(Employee.class, id);
employee.setSalary(employee.getSalary() + 1000);
em.getTransaction().commit();
em.close();
JTA transactions are used in JEE, in managed mode (EJB7 ). To use JTA transactions the
transaction-type attribute in the persistence.xml is set to JTA. If JTA transactions
are used with a DataSource, the <jta-data-source> element should be used to reference a
server DataSource that has been configured to be JTA managed.
JTA transactions are defined through the JTA UserTransaction8 class, or more likely
implicitly defined through SessionBean usage/methods. In a SessionBean normally each
SessionBean method invocation defines a JTA transaction. UserTransaction can be
obtained through a JNDI lookup in most application servers, or from the EJBContext in
EJB 2.0 style SessionBeans.
JTA transactions can be used in two modes in JEE. In JEE managed mode, such as an
EntityManager injected into a SessionBean, the EntityManager reference, represents a
new persistence context for each transaction. This means objects read in one transaction
become detached after the end of the transaction, and should no longer be used, or need
to be merged into the next transaction. In managed mode, you never create or close an
EntityManager.
The second mode allows the EntityManager to be application managed (normally obtained
from an injected EntityManagerFactory, or directly from JPA Persistence). This
allows the persistence context to survive transaction boundaries, and follow the normal
EntityManager life-cycle similar to resource local. If the EntityManager is created in the
context of an active JTA transaction, it will automatically be part of the JTA transaction
and commit/rollback with the JTA transaction. Otherwise it must join a JTA transaction
to commit/rollback using EntityManager.joinTransaction()9.
<persistence xmlns="http://java.sun.com/xml/ns/persistence"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/persistence
        http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd" version="1.0">
    <persistence-unit name="acme" transaction-type="JTA">
        <jta-data-source>acme</jta-data-source>
    </persistence-unit>
</persistence>
9 https://java.sun.com/javaee/5/docs/api/javax/persistence/EntityManager.html#joinTransaction()
50 Advanced
EntityManager em = getEntityManagerFactory().createEntityManager();
// If the EntityManager was created outside of the active JTA transaction, it must join it.
em.joinTransaction();
Employee employee = em.find(Employee.class, id);
employee.setSalary(employee.getSalary() + 1000);
1 https://java.sun.com/javaee/5/docs/api/javax/persistence/EntityManager.html#joinTransaction()
Sometimes it is desirable to handle persistence errors, and recover and retry transactions.
This normally requires a lot of application knowledge to know what failed, what state the
system is in, and how to fix it.
Unfortunately JPA does not provide a direct way of handling commit failures or error
handling. When a transaction commit fails, the transaction is automatically rolled back,
and the persistence context cleared, and all managed objects detached. Not only is there no
way to handle a commit failure, but if any error occurs in a query before the commit, the
transaction will be marked for rollback, so there is no real way to handle any error. This is
because any query could potentially change the state of the database, so JPA does not know
if the database is in an invalid or inconsistent state so must rollback the transaction. As well
if the commit fails, the state of the objects registered in the persistence context may also
be inconsistent (such as partially committed objects having their optimistic lock versions
incremented), so the persistence context is cleared to avoid further errors.
Some JPA providers may provide extended API to allow handling commit failures, or
handling errors on queries.
There are some methods to generically handle commit failures and other errors. Error
handling in general is normally easier when using RESOURCE_LOCAL transactions, and not
when using JTA transactions.
One method of error handling is to call merge for each managed object after the commit
fails into a new EntityManager, then try to commit the new EntityManager. One issue
may be that any ids that were assigned, or optimistic lock versions that were assigned or
incremented may need to be reset. Also, if the original EntityManager was EXTENDED, any
objects that were in use would still become detached, and need to be reset.
Another more involved method to error handling is to always work with a non-transactional
EntityManager. When it's time to commit, a new EntityManager is created, the non-
transactional objects are merged into it, and the new EntityManager is committed. If the
commit fails, only the state of the new EntityManager may be inconsistent, the original
EntityManager will be unaffected. This can allow the problem to be corrected, and the
EntityManager re-merged into another new EntityManager. If the commit is successful
any commit changes can be merged back into the original EntityManager, which can then
continue to be used as normal. This solution requires a fair bit of overhead, so should only
be used if error handling is really required, and the JPA provider provides no alternatives.
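A rough sketch of the first approach, retrying a failed commit by merging the detached objects into a new EntityManager (RESOURCE_LOCAL transactions and an EntityManagerFactory variable named factory are assumed; real code would also reset assigned Ids and lock versions as noted above):

EntityManager em = factory.createEntityManager();
em.getTransaction().begin();
Employee employee = em.find(Employee.class, id);
employee.setSalary(employee.getSalary() + 1000);
try {
    em.getTransaction().commit();
} catch (RuntimeException commitError) {
    // The persistence context is now cleared and the objects detached; retry once in a new EntityManager.
    EntityManager retryEm = factory.createEntityManager();
    retryEm.getTransaction().begin();
    retryEm.merge(employee);
    retryEm.getTransaction().commit();
    retryEm.close();
} finally {
    em.close();
}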
While the nested (child) transaction is active, the parent transaction may not perform any
operations other than to commit or abort, or to create more child transactions.
Committing a nested transaction has no effect on the state of the parent transaction. The
parent transaction is still uncommitted. However, the parent transaction can now see any
modifications made by the child transaction. Those modifications, of course, are still hidden
to all other transactions until the parent also commits.
Likewise, aborting the nested transaction has no effect on the state of the parent transaction.
The only result of the abort is that neither the parent nor any other transactions will see
any of the database modifications performed under the protection of the nested transaction.
If the parent transaction commits or aborts while it has active children, the child transactions
are resolved in the same way as the parent. That is, if the parent aborts, then the child
transactions abort as well. If the parent commits, then whatever modifications have been
performed by the child transactions are also committed.
The locks held by a nested transaction are not released when that transaction commits.
Rather, they are now held by the parent transaction until such a time as that parent commits.
Any database modifications performed by the nested transaction are not visible outside of the
larger encompassing transaction until such a time as that parent transaction is committed.
The depth of the nesting that you can achieve with nested transactions is limited only by
memory.
51 Caching
Caching is the most important performance optimization technique. There are many things
that can be cached in persistence: objects, data, database connections, database statements,
query results, meta-data, and relationships, to name a few. Caching in object persistence normally
refers to the caching of objects or their data. Caching also influences object identity, that
is that if you read an object, then read the same object again you should get the identical
object back (same reference).
JPA 1.0 does not define a shared object cache, JPA providers can support a shared object
cache or not, however most do. Caching in JPA is required within a transaction or within
an extended persistence context to preserve object identity, but JPA does not require that
caching be supported across transactions or persistence contexts.
JPA 2.0 defines the concept of a shared cache. The @Cacheable1 annotation or cacheable
XML attribute can be used to enable or disable caching on a class.
@Entity
@Cacheable
public class Employee {
...
}
The SharedCacheMode2 enum can also be set in the <shared-cache-mode> XML element
in the persistence.xml to configure the default cache mode for the entire persistence unit.
<persistence-unit name="ACME">
<shared-cache-mode>NONE</shared-cache-mode>
</persistence-unit>
There are two types of caching. You can cache the objects themselves including all of their
structure and relationships, or you can cache their database row data. Both provide a
benefit, however just caching the row data is missing a huge part of the caching benefit as
the retrieval of each relationship typically involves a database query, and the bulk of the
cost of reading an object is spent in retrieving its relationships.
1 https://java.sun.com/javaee/6/docs/api/javax/persistence/Cacheable.html
2 http://download.oracle.com/javaee/6/api/javax/persistence/SharedCacheMode.html
Object identity in Java means if two variables (x, y) refer to the same logical object, then
x == y returns true. Meaning that both reference the same thing (both a pointer to the
same memory location).
In JPA object identity is maintained within a transaction, and (normally) within the same
EntityManager. The exception is a JEE managed EntityManager, where object identity is only
maintained inside of a transaction.
So the following is true in JPA:
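(A minimal illustration, assuming an Employee entity and an EntityManager em used within a single transaction.)

Employee employee1 = em.find(Employee.class, 123L);
Employee employee2 = em.find(Employee.class, 123L);
// Both finds return the identical instance within the same persistence context.
assert employee1 == employee2;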
Object identity is normally a good thing, as it avoids having your application manage
multiple copies of objects, and avoids the application changing one copy, but not the other.
The reason different EntityManagers or transactions (in JEE) don't maintain object identity
is that each transaction must isolate its changes from other users of the system. This is
also normally a good thing, however it does require the application to be aware of copies,
detached objects and merging.
Some JPA products may have a concept of read-only objects, in which object identity may
be maintained across EntityManagers through a shared object cache.
An object cache is where the Java objects (entities) are themselves cached. The advantage
of an object cache is that the data is cached in the same format that it is used in Java.
Everything is stored at the object-level and no conversion is required when obtaining a cache
hit. With JPA the EntityManager must still copy the objects to and from the cache, as it
must maintain its transaction isolation, but that is all that is required. The objects do not
need to be re-built, and the relationships are already available.
With an object cache, transient data may also be cached. This may occur automatically, or
may require some effort. If transient data is not desired, you may also need to clear the
data when the object gets cached.
Some JPA products allow read-only queries to access the object cache directly. Some products
only allow object caching of read-only data. Obtaining a cache hit on read-only data is
extremely efficient as the object does not need to be copied, other than the look-up, no work
is required.
It is possible to create your own object cache for your read-only data by loading objects
from JPA into your own object cache or JCache implementation. The main issue, which is
always the main issue in caching in general, is how to handle updates and stale cached data,
but if the data is read-only, this may not be an issue.
TopLink3 / EclipseLink4 : Support an object cache. The object cache is on by default,
but can be globally or selectively enabled or configured per class. The persistence unit
property "eclipselink.cache.shared.default" can be set to "false" to disable the
cache. Read-only queries are supported through the "eclipselink.read-only" query
hint, entities can also be marked to always be read-only using the @ReadOnly annotation.
A data cache caches the object's data, not the objects themselves. The data is normally
a representation of the object's database row. The advantage of a data cache is that it is
easier to implement as you do not have to worry about relationships, object identity, or
complex memory management. The disadvantage of a data cache is that it does not store
the data as it is used in the application, and does not store relationships. This means that
on a cache hit, the object must still be built from the data, and the relationships fetched
from the database. Some products that support a data cache, also support a relationship
cache5 , or query cache6 to allow caching of relationships.
Hibernate7 : Supports integration with a third party data cache. Caching is not enabled by
default and a third party caching product such as Ehcache must be used to enable caching.
Some products support a separate cache for caching relationships. This is normally required
for OneToMany and ManyToMany relationships. OneToOne and ManyToOne relationships normally
do not need to be cached, as they reference the object's Id. However an inverse
OneToOne will require the relationship to be cached, as it references the foreign key, not
primary key.
3 Chapter 11 on page 25
4 Chapter 10 on page 23
5 Chapter 51.3.1 on page 263
6 Chapter 51.5 on page 265
7 Chapter 12 on page 27
For a relationship cache, the results normally only store the related object's Id, not the
object, or its data (to avoid duplicate and stale data). The key of the relationship cache
is the source object's Id and the relationship name. Sometimes the relationship is cached
as part of the data cache, if the data cache stores a structure instead of a database row.
When a cache hit occurs on a relationship, the related objects are looked up in the data
cache one by one. A potential issue with this, is that if the related object is not in the data
cache, it will need to be selected from the database. This could result in very poor database
performance as the objects can be loaded one by one. Some products that support caching
relationships also support batching the selects to attempt to alleviate this issue.
There are many different caching types. The most common is an LRU cache, or one that
ejects the Least Recently Used objects and maintains a fixed number of MRU (Most
Recently Used) objects.
Some cache types include:
• LRU - Keeps X number of recently used objects in the cache.
• Full - Caches everything read, forever. (not always the best idea if the database is large)
• Soft - Uses Java garbage collection hints to release objects from the cache when memory
is low.
• Weak - Normally relevant with object caches, keeps any objects currently in use in the
cache.
• L1 - This refers to the transactional cache that is part of every EntityManager; this is
not a shared cache.
• L2 - This is a shared cache, conceptually stored in the EntityManagerFactory, so
accessible to all EntityManagers.
• Data cache10 - The data representing the objects is cached (database rows).
• Object cache11 - The objects are cached directly.
• Relationship cache12 - The object's relationships are cached.
• Query cache13 - The result set from a query is cached.
• Read-only - A cache that only stores, or only allows read-only objects.
• Read-write - A cache that can handle insert, updates and deletes (non read-only).
• Transactional - A cache that can handle insert, updates and deletes (non read-only), and
obeys transactional ACID properties.
• Clustered14 - Typically refers to a cache that uses JMS, JGroups or some other mechanism
to broadcast invalidation messages to other servers in the cluster when an object is updated
or deleted.
• Replicated - Typically refers to a cache that uses JMS, JGroups or some other mechanism
to broadcast objects to all servers when read into any of the servers cache.
• Distributed15 - Typically refers to a cache that spreads the cached objects across several
servers in a cluster, and can look-up an object in another server's cache.
TopLink16 / EclipseLink17 : Support an L1 and L2 object cache. LRU, Soft, Full and Weak
cache types are supported. A query cache is supported. The object cache is read-write and
always transactional. Support for cache coordination through RMI and JMS is provided
for clustering. The TopLink18 product includes a Grid component that integrates with
Oracle Coherence to provide a distributed cache.
A query cache caches query results instead of objects. Object caches cache the object by
its Id, so are generally not very useful for queries that are not by Id. Some object caches
support secondary indexes, but even indexed caches are not very useful for queries that can
return multiple objects, as you always need to access the database to ensure you have all
of the objects. This is where query caches are useful, instead of storing objects by Id, the
query results are cached. The cache key is based on the query name and parameters. So if
you have a NamedQuery that is commonly executed, you can cache its results, and only need
to execute the query the first time.
The main issue with query caches, as with caching in general is stale data. Query caches
normally interact with an object cache to ensure the objects are at least as up to date as
in the object cache. Query caches also typically have invalidation options similar to object
caches.
TopLink19 / EclipseLink20 : Support query cache enabled through the query hint
"eclipselink.query-results-cache". Several configuration options including invali-
dation are supported.
The main issue with caching anything is the possibility of the cache version getting out of
synch with the original. This is referred to as stale or out of synch data. For read-only data,
this is not an issue, but for data that changes infrequently or frequently this can be a major
issue. There are many techniques for dealing with stale data and out of synch data.
Caching the object's state for the duration of a transaction or request is normally not an
issue. This is normally called a 1st level cache, or the EntityManager cache, and is required
by JPA for proper transaction semantics. If you read the same object twice, you must get
the identical object, with the same in-memory changes. The only issues occur with querying
and DML.
For queries that access the database, the query may not reflect the un-written state of the
objects. For example you have persisted a new object, but JPA has not yet inserted this object
in the database as it generally only writes to the database on the commit of the transaction.
So your query will not return this new object, as it is querying the database, not the 1st level
cache. This is normally solved in JPA by the user first calling flush(), or the flushMode
automatically triggering a flush. The default flushMode on an EntityManager or Query is
to trigger a flush, but this can be disabled if a write to the database before every query is not
desired (normally it isn't, as it can be expensive and lead to poor concurrency). Some JPA
providers also support conforming the database query results with the object changes in
memory, which can be used to get consistent data without triggering a flush. This can work
for simple queries, but for complex queries this typically becomes very complex, or even impossible.
Applications normally query data at the start of a request before they start making changes,
or don't query for objects they have already found, so this is normally not an issue.
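A small sketch of controlling this with the standard flush mode API (the query shown is illustrative; FlushModeType.COMMIT disables the automatic flush before queries):

// Avoid the automatic flush before this query; changes are only written on commit.
TypedQuery<Employee> query = em.createQuery("Select e from Employee e", Employee.class);
query.setFlushMode(FlushModeType.COMMIT);
List<Employee> employees = query.getResultList();

// Or change the default for the whole EntityManager.
em.setFlushMode(FlushModeType.COMMIT);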
If you bypass JPA and execute DML directly on the database, either through native SQL
queries, JDBC, or JPQL UPDATE or DELETE queries, then the database can be out of synch
with the 1st level cache. If you had accessed objects before executing the DML, they will
have the old state and not include the changes. Depending on what you are doing this may
be ok, otherwise you may want to refresh the affected objects from the database.
The 1st level, or EntityManager cache can also span transaction boundaries in JPA.
A JTA managed EntityManager will only exist for the duration of the JTA transaction
in JEE. Typically the JEE server will inject the application with a proxy to an
EntityManager, and after each JTA transaction a new EntityManager will be created
automatically or the EntityManager will be cleared, clearing the 1st level cache. In an
application managed EntityManager, the 1st level cache will exist for the duration of the
EntityManager. This can lead to stale data, or even memory leaks and poor performance
if the EntityManager is held too long. This is why it is generally a good idea to create
a new EntityManager per request, or per transaction. The 1st level cache can also be
cleared using the EntityManager.clear() method, or an object can be refreshed using the
EntityManager.refresh() method.
The 2nd level cache spans transactions and EntityManagers, and is not required as part of
JPA. Most JPA providers support a 2nd level cache, but the implementation and semantics
vary. Some JPA providers default to enabling a 2nd level cache, and some do not use a 2nd
level cache by default.
If the application is the only application and server accessing the database there is little
issue with the 2nd level cache, as it should always be up to date. The only issue is with
DML, if the application executes DML directly to the database through native SQL queries,
JDBC, or JPQL UPDATE or DELETE queries. JPQL queries should automatically invalidate
the 2nd level cache, but this may depend on the JPA provider. If you use native DML
queries or JDBC directly, you may need to invalidate, refresh or clear the objects affected
by the DML.
If there are other applications, or other application servers accessing the same database,
then stale data can become a bigger issue. Read-only objects, and inserting new objects
should not be an issue. New objects should get picked up by other servers even when using
caching as queries typically still access the database. It is normally only find() operations
and relationships that hit the cache. Updated and deleted objects by other applications or
servers can cause the 2nd level cache to become stale.
For deleted objects, the only issue is with find() operations, as queries that access the
database will not return the deleted objects. A find() by the object's Id could return the
object if it is cached, even if it does not exist. This could lead to constraint issues if you add
relations to this object from other objects, or failed updates, if you try to update the object.
Note that these can both occur without caching, even with a single application and server
accessing the database. During a transaction, another user of the application could always
delete the object being used by another transaction, and the second transaction will fail in
the same way. The difference is the potential for this concurrency issue to occur increases.
For updated objects, any query for the objects can return stale data. This can trigger
optimistic lock exceptions on updates, or cause one user to overwrite another user's changes
if not using locking. Again note that these can both occur without caching, even with a
single application and server accessing the database. This is why it is normally always
important to use optimistic locking. Stale data could also be returned to the user.
51.6.3 Refreshing
Refreshing is the most common solution to stale data. Most application users are familiar
with the concept of a cache, and know when they need fresh data and are willing to click a
refresh button. This is very common in an Internet browser, most browsers have a cache of
web pages that have been accessed, and will avoid loading the same page twice, unless the
user clicks the refresh button. This same concept can be used in building JPA applications.
JPA provides several refreshing options, see refreshing21 .
Some JPA providers also support refreshing options in their 2nd level cache. One option is
to always refresh on any query to the database. This means that find() operations will still
access the cache, but if the query accesses the database and brings back data, the 2nd level
cache will be refreshed with the data. This avoids queries returning stale data, but means
there will be less benefit from caching. The cost is not just in refreshing the objects, but
in refreshing their relationships. Some JPA providers support this option in combination
with optimistic locking. If the version value in the row from the database is newer than
the version value from the object in the cache, then the object is refreshed as it is stale,
otherwise the cache value is returned. This option provides optimal caching, and avoids
stale data on queries. However objects returned through find() or through relationships
can still be stale. Some JPA providers also allow find() operations to be configured to first
check the database, but this generally defeats the purpose of caching, so you are better off
not using a 2nd level cache at all. If you want to use a 2nd level cache, then you must have
some level of tolerance to stale data.
JPA 2.0 provides a set of standard query hints to allow refreshing or bypassing the cache.
The query hints are defined on the two enum classes CacheRetrieveMode22 and CacheStoreMode23.
Query hints:
• javax.persistence.cache.retrieveMode : CacheRetrieveMode
• BYPASS : Ignore the cache, and build the object directly from the database result.
• USE : Allow the query to use the cache. If the object/data is already in the cache, the
cached object/data will be used.
• javax.persistence.cache.storeMode : CacheStoreMode
• BYPASS : Do not cache the database results.
• REFRESH : If the object/data is already in the cache, then refresh/replace it with the
database results.
• USE : Cache the objects/data returned from the query.
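A small sketch of passing the hints listed above on a query or a find (standard JPA 2.0 API; the Employee entity and em variable are assumptions):

// Bypass the shared cache and refresh it with the database results.
TypedQuery<Employee> query = em.createQuery("Select e from Employee e", Employee.class);
query.setHint("javax.persistence.cache.retrieveMode", CacheRetrieveMode.BYPASS);
query.setHint("javax.persistence.cache.storeMode", CacheStoreMode.REFRESH);
List<Employee> employees = query.getResultList();

// The same hints can be passed to find() through a properties Map.
Map<String, Object> hints = new HashMap<String, Object>();
hints.put("javax.persistence.cache.retrieveMode", CacheRetrieveMode.BYPASS);
Employee employee = em.find(Employee.class, id, hints);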
JPA 2.0 also provides a Cache24 interface. The Cache interface can be obtained from the
EntityManagerFactory using the getCache() API. The Cache can be used to manually
evict/invalidate entities in the cache. Either a specific entity, an entire class, or the entire
cache can be evicted. The Cache can also be checked to see if it contains an entity.
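A minimal sketch of the JPA 2.0 Cache API (the Employee entity and employeeId variable are assumptions):

Cache cache = entityManagerFactory.getCache();
// Check whether a specific entity is currently cached.
boolean cached = cache.contains(Employee.class, employeeId);
// Evict a single entity, a whole class, or the entire cache.
cache.evict(Employee.class, employeeId);
cache.evict(Employee.class);
cache.evictAll();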
Some JPA providers may extend the getCache() interface to provide additional API.
TopLink25 / EclipseLink26 : Provide an extended Cache interface JpaCache. It provides
additional API for invalidation, query cache, access and clearing.
22 https://java.sun.com/javaee/6/docs/api/javax/persistence/CacheRetrieveMode.html
23 https://java.sun.com/javaee/6/docs/api/javax/persistence/CacheStoreMode.html
24 https://java.sun.com/javaee/6/docs/api/javax/persistence/Cache.html
25 Chapter 11 on page 25
26 Chapter 10 on page 23
A common way to deal with stale cached data is to use cache invalidation. Cache invalidation
removes or invalidates data or objects in the cache after a certain amount of time, or at a
certain time of day. Time to live invalidation guarantees that the application will never
read cached data that is older than a certain amount of time. The amount of the time
can be configured with respect to the application's requirements. Time of day invalidation
allows the cache to be invalidated at a certain time of day, typically at night; this
ensures data is never older than a day old. This can also be used if it is known that a batch
job updates the database at night; the invalidation time can be set after this batch job is
scheduled to run. Data can also be invalidated manually, such as using the JPA 2.0 evict()
API.
Most cache implementations support some form of invalidation, JPA does not define any
configurable invalidation options, so this depends on the JPA and cache provider.
TopLink27 / EclipseLink28 : Provide support for time to live and time of day cache
invalidation using the @Cache annotation and <cache> orm.xml element. Cache invalidation
is also supported through API, and can be used in a cluster to invalidate objects changed
on other machines.
Caching in a clustered environment is difficult because each machine will update the database
directly, but not update the other machines' caches, so each machine's cache can become
out of date. This does not mean that caching cannot be used in a cluster, but you must be
careful in how it is configured.
For read-only objects caching can still be used. For read mostly objects, caching can be
used, but some mechanism should be used to avoid stale data. If stale data is only an issue
for writes, then using optimistic locking will avoid writes occurring on stale data. When an
optimistic lock exception occurs, some JPA providers will automatically refresh or invalidate
the object in the cache, so if the user or application retries the transaction the next write
will succeed. Your application could also catch the lock exception and refresh or invalidate
the object, and potentially retry the transaction if the user does not need to be notified of
the lock error (be careful doing this though, as normally the user should be aware of the
lock error). Cache invalidation can also be used to decrease the likelihood of stale data by
setting a time to live on the cache. The size of the cache can also affect the occurrence of
stale data.
Although returning stale data to a user may be an issue, normally returning stale data to
a user that just updated the data is a bigger issue. This can normally be solved through
session affinity, by ensuring the user interacts with the same machine in the cluster for the
duration of their session. This can also improve cache usage, as the same user will typically
access the same data. It is normally also useful to add a refresh button to the UI, this will
allow the user to refresh their data if they think their data is stale, or they wish to ensure
27 Chapter 11 on page 25
28 Chapter 10 on page 23
they have data that is up to date. The application can also choose to refresh the objects in
places where up to date data is important, such as using the cache for read-only queries,
but refreshing when entering a transaction to update an object.
For write mostly objects, the best solution may be to disable the cache for those objects.
Caching provides no benefit to inserts, and the cost of avoiding stale data on updates may
mean there is no benefit to caching objects that are always updated. Caching will add some
overhead to writes, as the cache must be updated, having a large cache also affects garbage
collection, so if the cache is not providing any benefit it should be turned off to avoid this
overhead. This can depend on the complexity of the object though, if the object has a lot
of complex relationships, and only part of the object is updated, then caching may still be
worth it.
Cache Coordination
Distributed Caching
A distributed cache is one where the cache is distributed across each of the machines in
the cluster. Each object will only live on one or a set number of the machines. This avoids
stale data, because when the cache is accessed or updated the object is always retrieved
from the same location, so is always up to date. The drawback to this solution is that
cache access now potentially requires a network access. This solution works best when
the machines in the cluster are connected together on the same high speed network, and
when the database machine is not as well connected, or is under load. A distributed cache
reduces database access, so allows the application to be scaled to a larger cluster without
the database becoming a bottleneck. Some distributed cache providers also provide a local
cache, and offer cache coordination between the caches.
TopLink31 : Supports integration with the Oracle Coherence distributed cache.
29 Chapter 11 on page 25
30 Chapter 10 on page 23
31 Chapter 11 on page 25
Cache Transaction Isolation
When caching is used, the consistency and isolation of the cache becomes as important as
the database transaction isolation. For basic cache isolation it is important that changes
are not committed into the cache until after the database transaction has been committed,
otherwise uncommitted data could be accessed by other users.
Caches can be either transactional, or non-transactional. In a transactional cache, the
changes from a transaction are committed to the cache as a single atomic unit. This means
the objects/data are first locked in the cache (preventing other threads/users from accessing
the objects/data), then updated in the cache, then the locks are released. Ideally the locks
are obtained before committing the database transaction, to ensure consistency with the
database. In a non-transactional cache the objects/data are updated one by one without any
locking. This means there will be a brief period where the data in the cache is not consistent
with the database. This may or may not be a problem, and it is a complex issue to think about
and discuss, as it gets into locking and the application's isolation requirements.
Optimistic locking is another important consideration in cache isolation. If optimistic locking
is used, the cache should avoid replacing new data, with older data. This is important when
reading, and when updating the cache.
Some JPA providers may allow for configuration of their cache isolation, or different caches
may define different levels of isolation. Although the defaults should normally be used, it can
be important to understand how the usage of caching is affecting your transaction isolation,
as well as your performance and concurrency.
I can't see changes made directly on the database, or from another server
This means you have either enabled caching in your JPA configuration, or your JPA provider
caches by default. You can either disable the 2nd level cache in your JPA configuration, or
refresh the object or invalidate the cache after changing the database directly. See, Stale
Data32
TopLink33 / EclipseLink34 : Caching is enabled by default. To disable caching set the
persistence property "eclipselink.cache.shared.default" to false in your persis-
tence.xml or persistence properties. You can also configure this on a per class basis if you
want to allow caching in some classes and not in others. See, EclipseLink FAQ35 .
52 Spring
Spring1 is an application framework for Java. Spring is an IoC2 container that allows for
a different programming model. Spring is similar to a JEE server, in that it provides a
transaction service, XML deployment, annotation processing, byte-code weaving, and JPA
integration.
52.1 Persistence
Persistence in Spring in normally done through a DAO (Data Access Object) layer. The
Spring DAO layer is meant to encapsulate the persistence mechanism, so the application
sees the same data access API regardless of whether JDBC, JPA, or a native API is used.
Spring also defines a transaction manager implementation that is similar to JTA. Spring
also supports transactional annotations and beans similar to SessionBeans.
52.2 JPA
Spring has specific support for JPA and can emulate some of the functionality of a JEE
container with respect to JPA. Spring allows a JPA persistence unit to be deployed in
container managed mode. If the spring-agent is used to start the JVM, Spring can deploy
a JPA persistence unit with weaving similar to a JEE server. Spring can also pass
a Spring DataSource and integrate its transaction service with JPA. Spring allows the
@PersistenceUnit and @PersistenceContext annotations to be used in any Spring bean
class, to have an EntityManager or EntityManagerFactory injected. Spring supports a
managed transactional EntityManager similar to JEE, where the EntityManager binds
itself as a new persistence context to each new transaction and commits as part of the
transaction. Spring supports both JTA integration, and its own transaction manager.
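A rough sketch of a Spring managed DAO using this injection (the class, method, and bean names are illustrative assumptions):

@Repository
public class EmployeeDao {

    // Spring injects a shared, transaction bound EntityManager proxy.
    @PersistenceContext
    private EntityManager entityManager;

    @Transactional
    public void giveRaise(long id, double amount) {
        Employee employee = entityManager.find(Employee.class, id);
        employee.setSalary(employee.getSalary() + amount);
        // No explicit commit; the Spring transaction manager commits when the method returns.
    }
}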
1 http://en.wikipedia.org/wiki/Spring_Framework
2 http://en.wikipedia.org/wiki/Inversion_of_Control
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans-2.0.xsd">

    <!-- The entityManagerFactory bean referenced below is assumed; a typical
         definition uses LocalContainerEntityManagerFactoryBean. -->
    <bean id="entityManagerFactory"
        class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
        <property name="dataSource" ref="dataSource" />
    </bean>

    <bean id="dataSource"
        class="org.springframework.jdbc.datasource.DriverManagerDataSource">
        <property name="driverClassName" value="oracle.jdbc.OracleDriver" />
        <property name="url" value="jdbc:oracle:thin:@localhost:1521:orcl" />
        <property name="username" value="scott" />
        <property name="password" value="tiger" />
    </bean>

    <bean id="transactionManager"
        class="org.springframework.orm.jpa.JpaTransactionManager">
        <property name="entityManagerFactory" ref="entityManagerFactory" />
    </bean>

    <bean id="entityManager"
        class="org.springframework.orm.jpa.support.SharedEntityManagerBean">
        <property name="entityManagerFactory" ref="entityManagerFactory" />
    </bean>

</beans>
53 Databases
54 MySQL
Using MySQL with the Java Persistence API is quite straightforward, as the JDBC driver is
available directly from the MySQL web site, http://dev.mysql.com/.
MySQL Connector/J is the official JDBC driver for MySQL and has good documentation
available on the same site.
This chapter covers some aspects of managing MySQL through the Java Persistence API.
54.1 Installing
Installation is straightforward and consists in making the .jar file downloaded from MySQL
web site visible to the JVM. It can be already installed if you are using Apache or JBOSS.
You can learn a lot from the documentation on MySQL web site http://dev.mysql.com/
doc/refman/5.0/en/connector-j-reference-configuration-properties.html.
If you intend to create tables and a database automatically, you will need the correct user
rights, and you must also tell the persistence provider the name of the database you want to
create. For example with TopLink, if you enabled automatic table creation in your
persistence.xml (e.g., the toplink.ddl-generation property set to create-tables), you will
likely also need to create a new database. To create the database "NewDB" automatically, the
JDBC connection URL must ask MySQL to create it if it does not already exist, as sketched
below. Otherwise, the persistence provider will complain that the database does not exist.
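A minimal sketch of passing such a URL programmatically; it assumes a JPA 2.0 provider and
an illustrative persistence unit name "my-unit" and credentials (older TopLink versions use
vendor-specific property names such as toplink.jdbc.url):

import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class CreateDatabaseExample {
    public static void main(String[] args) {
        Map<String, Object> properties = new HashMap<String, Object>();
        // createDatabaseIfNotExist=true asks MySQL Connector/J to create the
        // "NewDB" schema on first connection if it does not already exist.
        properties.put("javax.persistence.jdbc.driver", "com.mysql.jdbc.Driver");
        properties.put("javax.persistence.jdbc.url",
                "jdbc:mysql://localhost:3306/NewDB?createDatabaseIfNotExist=true");
        properties.put("javax.persistence.jdbc.user", "root");
        properties.put("javax.persistence.jdbc.password", "password");

        // "my-unit" is a placeholder for the persistence unit name in persistence.xml.
        EntityManagerFactory factory =
                Persistence.createEntityManagerFactory("my-unit", properties);
        factory.close();
    }
}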
55 References
1 http://jcp.org/aboutJava/communityprocess/final/jsr220/index.html
2 http://jcp.org/en/jsr/detail?id=317
3 http://java.sun.com/xml/ns/persistence/orm_1_0.xsd
4 http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd
5 https://java.sun.com/javaee/5/docs/api/javax/persistence/package-summary.html
6 http://en.wikibooks.org/wiki/%2FJPQL%20BNF
7 http://wiki.eclipse.org/EclipseLink/Development/JPA
8 http://en.wikibooks.org/wiki/Java%20Programming
9 http://wiki.eclipse.org/EclipseLink
10 http://wiki.oracle.com/page/TopLink
11 http://wiki.glassfish.java.net/Wiki.jsp?page=TopLinkEssentials
12 http://www.hibernate.org/37.html
13 http://en.wikipedia.org/wiki/Java_Persistence_API
14 http://wiki.java.net/bin/view/Javapedia/JPA
15 http://www.freebase.com/view/guid/9202a8c04000641f8000000004666d33
http://www.dmoz.org/Computers/Programming/Languages/Java/Databases_and_Persistence/
16
Object_Persistence/JPA/
17 http://forum.java.sun.com/forum.jspa?forumID=13
18 http://saloon.javaranch.com/cgi-bin/ubb/ultimatebb.cgi?ubb=forum&f=78
19 http://www.nabble.com/JPA-f27109.html
20 http://www.nabble.com/EclipseLink-f26430.html
21 http://www.eclipse.org/newsportal/thread.php?group=eclipse.rt.eclipselink
22 http://forums.oracle.com/forums/forum.jspa?forumID=48
23 http://forum.hibernate.org/
24 http://www.nabble.com/java.net---glassfish-persistence-f13455.html
25 http://www.oracle.com/technology/products/ias/toplink/index.html
26 http://www.eclipse.org/eclipselink/
27 https://glassfish.dev.java.net/javaee5/persistence/
28 http://www.hibernate.org/
29 http://openjpa.apache.org/
30 http://objectgeneration.com/eclipse/
31 http://java-persistence.blogspot.com/
32 http://jroller.com/mkeith/
33 http://ontoplink.blogspot.com/
34 http://eclipselink.blogspot.com/
35 http://blog.hibernate.org/
36 http://www.amazon.com/gp/product/1590596455/102-2412923-9620152?v=glance&n=283155
56 Contributors
Edits User
3 Adrignola1
1 AllenZh2
1 Avicennasis3
1 Benjihawaii4
1 Bru5
1 Chrylis6
1 Cinhtau7
4 Cowwoc8
1 DanielSerodio9
2 David.shah10
1 Dirk Hünniger11
1 Djaquay12
1 Djclarke13
1 DragonxXx14
1 Dwersin15
1 Guruwons16
1 Inas6617
1 Izogfif18
62 James.sutherland19
1272 Jamesssss20
1 Jan.derijke21
1 http://en.wikibooks.org/w/index.php?title=User:Adrignola
2 http://en.wikibooks.org/w/index.php?title=User:AllenZh
3 http://en.wikibooks.org/w/index.php?title=User:Avicennasis
4 http://en.wikibooks.org/w/index.php?title=User:Benjihawaii
5 http://en.wikibooks.org/w/index.php?title=User:Bru
6 http://en.wikibooks.org/w/index.php?title=User:Chrylis
7 http://en.wikibooks.org/w/index.php?title=User:Cinhtau
8 http://en.wikibooks.org/w/index.php?title=User:Cowwoc
9 http://en.wikibooks.org/w/index.php?title=User:DanielSerodio
10 http://en.wikibooks.org/w/index.php?title=User:David.shah
11 http://en.wikibooks.org/w/index.php?title=User:Dirk_H%C3%BCnniger
12 http://en.wikibooks.org/w/index.php?title=User:Djaquay
13 http://en.wikibooks.org/w/index.php?title=User:Djclarke
14 http://en.wikibooks.org/w/index.php?title=User:DragonxXx
15 http://en.wikibooks.org/w/index.php?title=User:Dwersin
16 http://en.wikibooks.org/w/index.php?title=User:Guruwons
17 http://en.wikibooks.org/w/index.php?title=User:Inas66
18 http://en.wikibooks.org/w/index.php?title=User:Izogfif
19 http://en.wikibooks.org/w/index.php?title=User:James.sutherland
20 http://en.wikibooks.org/w/index.php?title=User:Jamesssss
21 http://en.wikibooks.org/w/index.php?title=User:Jan.derijke
1 Jomegat22
1 Kencyber23
1 Lawgers24
8 Lmdavid25
1 Moutonbreton26
1 Panic2k427
1 Pengo28
2 QuiteUnusual29
1 Recent Runes30
8 Sandro.roeder31
1 Shaunmsmith32
7 Svendhhh33
3 Tewari.deepak34
2 Theshowmecanuck35
1 Tjoneill36
1 Van der Hoorn37
3 Vernetto38
2 Vishal42339
1 Webaware40
1 Wutsje41
1 YMS42
1 ZeroOne43
22 http://en.wikibooks.org/w/index.php?title=User:Jomegat
23 http://en.wikibooks.org/w/index.php?title=User:Kencyber
24 http://en.wikibooks.org/w/index.php?title=User:Lawgers
25 http://en.wikibooks.org/w/index.php?title=User:Lmdavid
26 http://en.wikibooks.org/w/index.php?title=User:Moutonbreton
27 http://en.wikibooks.org/w/index.php?title=User:Panic2k4
28 http://en.wikibooks.org/w/index.php?title=User:Pengo
29 http://en.wikibooks.org/w/index.php?title=User:QuiteUnusual
30 http://en.wikibooks.org/w/index.php?title=User:Recent_Runes
31 http://en.wikibooks.org/w/index.php?title=User:Sandro.roeder
32 http://en.wikibooks.org/w/index.php?title=User:Shaunmsmith
33 http://en.wikibooks.org/w/index.php?title=User:Svendhhh
34 http://en.wikibooks.org/w/index.php?title=User:Tewari.deepak
35 http://en.wikibooks.org/w/index.php?title=User:Theshowmecanuck
36 http://en.wikibooks.org/w/index.php?title=User:Tjoneill
37 http://en.wikibooks.org/w/index.php?title=User:Van_der_Hoorn
38 http://en.wikibooks.org/w/index.php?title=User:Vernetto
39 http://en.wikibooks.org/w/index.php?title=User:Vishal423
40 http://en.wikibooks.org/w/index.php?title=User:Webaware
41 http://en.wikibooks.org/w/index.php?title=User:Wutsje
42 http://en.wikibooks.org/w/index.php?title=User:YMS
43 http://en.wikibooks.org/w/index.php?title=User:ZeroOne
List of Figures
1 PD
2
3
4 Her Pegship45 LGPL
5 PD
6 PD
7 Jamesssss46 PD
8 Jamesssss47 PD
9 Jamesssss48 PD
10 Jamesssss49 PD
11 Jamesssss50 PD
12 PD
13 PD
14 Jamesssss51 PD
15 Jamesssss52 PD
16 Jamesssss53 PD
17 PD
18
19
20 Traced by User:Stannered54 , original by David Vignoni LGPL
21
45 http://en.wikibooks.org/wiki/User%3APegship
46 http://en.wikibooks.org/wiki/User%3AJamesssss
47 http://en.wikibooks.org/wiki/User%3AJamesssss
48 http://en.wikibooks.org/wiki/User%3AJamesssss
49 http://en.wikibooks.org/wiki/User%3AJamesssss
50 http://en.wikibooks.org/wiki/User%3AJamesssss
51 http://en.wikibooks.org/wiki/User%3AJamesssss
52 http://en.wikibooks.org/wiki/User%3AJamesssss
53 http://en.wikibooks.org/wiki/User%3AJamesssss
54 http://en.wikibooks.org/wiki/User%3AStannered
57 Licenses