
Oracle® Fusion Middleware

User’s Guide for Oracle Data Visualization Desktop

E70158-16
October 2018
Oracle Fusion Middleware User’s Guide for Oracle Data Visualization Desktop, E70158-16

Copyright © 2016, 2018, Oracle and/or its affiliates. All rights reserved.

Primary Author: Nick Fry

Contributing Authors: Arup Roy, Stefanie Rhone

Contributors: Oracle Business Intelligence development, product management, and quality assurance teams

This software and related documentation are provided under a license agreement containing restrictions on
use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your
license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify,
license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means.
Reverse engineering, disassembly, or decompilation of this software, unless required by law for
interoperability, is prohibited.

The information contained herein is subject to change without notice and is not warranted to be error-free. If
you find any errors, please report them to us in writing.

If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on
behalf of the U.S. Government, then the following notice is applicable:

U.S. GOVERNMENT END USERS: Oracle programs, including any operating system, integrated software,
any programs installed on the hardware, and/or documentation, delivered to U.S. Government end users are
"commercial computer software" pursuant to the applicable Federal Acquisition Regulation and agency-
specific supplemental regulations. As such, use, duplication, disclosure, modification, and adaptation of the
programs, including any operating system, integrated software, any programs installed on the hardware,
and/or documentation, shall be subject to license terms and license restrictions applicable to the programs.
No other rights are granted to the U.S. Government.

This software or hardware is developed for general use in a variety of information management applications.
It is not developed or intended for use in any inherently dangerous applications, including applications that
may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you
shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its
safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this
software or hardware in dangerous applications.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of
their respective owners.

Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are
used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron,
the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro
Devices. UNIX is a registered trademark of The Open Group.

This software or hardware and documentation may provide access to or information about content, products,
and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly
disclaim all warranties of any kind with respect to third-party content, products, and services unless otherwise
set forth in an applicable agreement between you and Oracle. Oracle Corporation and its affiliates will not be
responsible for any loss, costs, or damages incurred due to your access to or use of third-party content,
products, or services, except as set forth in an applicable agreement between you and Oracle.
Contents
Preface
Audience x
Documentation Accessibility x
Related Resources x
Conventions xi

1 Get Started with Oracle Data Visualization Desktop


About Oracle Data Visualization Desktop 1-1
Get Started with Samples 1-2

2 Explore, Visualize, and Analyze Data


Typical Workflow to Visualize Data 2-1
Create a Project and Add Data Sets 2-2
Add Data from Data Sets to Visualization Canvases 2-3
Add Data to Blank Canvases 2-3
About Adding Data to the Visualization Grammar Pane 2-3
About Adding Data to the Visualization Assignment Pane 2-4
Customize Tooltip Data 2-6
Add Advanced Analytics to Visualizations 2-6
Create Calculated Data Elements in a Data Set 2-8
Undo and Redo Edits 2-9
Refresh Data in a Project 2-9
Adjust the Visualization Canvas Layout 2-9
Change Visualization Types 2-11
Adjust Visualization Properties 2-12
Assign Color to Visualize Data 2-13
Manage Color Settings 2-13
Format Numeric Data Properties 2-16
Apply Map Layers and Backgrounds to Enhance Visualizations 2-17
Work with Map Backgrounds 2-17
Enhance Visualizations with Map Backgrounds 2-18

Use Different Map Backgrounds in a Project 2-18
Use Color to Interpret Data Values in Map Visualizations 2-19
Add Custom Map Layers 2-20
Update Custom Map Layers 2-21
Apply Multiple Data Layers on a Single Map Visualization 2-22
Create Heatmap Layers on a Map Visualization 2-24
Make Maps Available to Users 2-25
Make Map Backgrounds Available to Users 2-25
Sort and Select Data in Visualization Canvases 2-26
Replace a Data Set in a Project 2-27
Remove a Data Set from a Project 2-28
Analyze Your Data Set Using Machine Learning 2-28
About Using Machine Learning to Discover Data Insights 2-28
Add Data Insights to Visualizations 2-29
About Warnings for Data Issues in Visualizations 2-30

3 Create and Apply Filters to Visualize Data


Typical Workflow to Create and Apply Filters 3-1
About Filters and Filter Types 3-2
How Visualizations and Filters Interact 3-2
Synchronize Visualizations in a Project 3-3
About Automatically Applied Filters 3-4
Create Filters on a Project 3-4
Create Filters on a Visualization 3-5
Move Filter Panels 3-6
Apply Range Filters 3-7
Apply Top Bottom N Filters 3-8
Apply List Filters 3-8
Apply Date Filters 3-9
Build Expression Filters 3-9

4 Use Other Functions to Visualize Data


Typical Workflow to Prepare, Connect and Search Artifacts 4-1
Build Stories 4-2
Capture Insights 4-2
Create Stories 4-2
View Streamlined Content 4-3
Identify Content with Thumbnails 4-4
Manage Custom Plug-ins 4-4

About Composing Expressions 4-5
Use Data Actions to Connect to Canvases and External URLs 4-5
Create Data Actions to Connect Visualization Canvases 4-6
Create Data Actions to Connect to External URLs from Visualization Canvases 4-7
Apply Data Actions to Visualization Canvases 4-7
Search Data, Projects, and Visualizations 4-8
Index Data for Search and BI Ask 4-8
Visualize Data with BI Ask 4-9
Search for Saved Projects and Visualizations 4-10
Search Tips 4-11
Save Your Changes Automatically 4-12

5 Add Data Sources to Analyze and Explore Data


Typical Workflow to Add Data Sources 5-1
About Data Sources 5-2
Connect to Database Data Sources 5-2
Create Database Connections 5-2
Create Data Sets from Databases 5-3
Edit Database Connections 5-4
Delete Database Connections 5-5
Connect to Oracle Applications Data Sources 5-5
Create Oracle Applications Connections 5-5
Compose Data Sets from Subject Areas 5-6
Compose Data Sets from Analyses 5-7
Edit Oracle Applications Connections 5-7
Delete Oracle Applications Connections 5-8
Create Connections to Dropbox 5-8
Create Connections to Google Drive or Google Analytics 5-9
Create Generic JDBC Connections 5-10
Create Generic ODBC Connections 5-11
Create Connections to Oracle Autonomous Data Warehouse Cloud 5-12
Create Connections to Oracle Big Data Cloud 5-13
Create Connections to Oracle Essbase 5-13
Create Connections to Oracle Talent Acquisition Cloud 5-14
Add a Spreadsheet as a Data Source 5-15
About Adding a Spreadsheet as a Data Set 5-15
Add a Spreadsheet from Your Computer 5-16
Add a Spreadsheet from Excel with the Smart View Plug-In 5-17
Add a Spreadsheet from Windows Explorer 5-18

Add a Spreadsheet from Dropbox or Google Drive 5-19

6 Manage Data that You Added


Typical Workflow to Manage Added Data 6-1
Manage Data Sets 6-2
Refresh Data that You Added 6-2
Update Details of Data that You Added 6-3
Delete Data Sets from Data Visualization 6-4
Rename a Data Set 6-4
Duplicate Data Sets 6-4
Blend Data that You Added 6-5
About Changing Data Blending 6-7
Change Data Blending 6-8
View and Edit Object Properties 6-8

7 Prepare Your Data Set for Analysis


Typical Workflow to Prepare Your Data Set for Analysis 7-1
About Data Preparation 7-1
Data Profiles and Semantic Recommendations 7-2
Accept Enrichment Recommendations 7-5
Transform Data Using Column Menu Options 7-5
Convert Text Columns to Date or Time Columns 7-6
Adjust the Display Format of Date or Time Columns 7-7
General Custom Format Strings 7-8
Create a Bin Column When You Prepare Data 7-10
Adjust the Column Properties 7-11
Edit the Data Preparation Script 7-11

8 Use Machine Learning to Analyze Data


Typical Workflow to Analyze Data with Machine Learning 8-1
Create a Train Model for a Data Flow 8-1
Interpret the Effectiveness of the Model 8-2
Score a Model 8-3
Add Scenarios to a Project 8-4

9 Use Data Flows to Create Curated Data Sets


Typical Workflow to Create Curated Data Sets with Data Flows 9-2
About Data Flows 9-2

About Editing a Data Flow 9-3
Create a Data Flow 9-4
Add Filters to a Data Flow 9-5
Add Aggregates to a Data Flow 9-5
Merge Columns in a Data Flow 9-6
Merge Rows in a Data Flow 9-6
Create a Bin Column in a Data Flow 9-7
Create a Sequence of Data Flows 9-8
Create a Group in a Data Flow 9-8
Add Cumulative Values to a Data Flow 9-9
Add a Time Series Forecast to a Data Flow 9-10
Add a Sentiment Analysis to a Data Flow 9-11
Apply Custom Scripts to a Data Flow 9-11
Branch Out a Data Flow into Multiple Connections 9-11
Apply Incremental Processing to a Data Flow 9-13
Customize the Names and Descriptions of Data Flow Steps 9-13
Schedule a Data Flow 9-13
Create an Essbase Cube in a Data Flow 9-14
Execute a Data Flow 9-15
Save Output Data from a Data Flow 9-15
Run a Saved Data Flow 9-17
Apply Parameters to a Data Flow 9-17
Modify Parameter Prompts when You Run or Schedule a Data Flow 9-18

10 Import and Share


Typical Workflow to Import and Share Artifacts 10-1
Import and Share Projects or Folders 10-1
Import an Application or Project 10-2
Share a Project or Folder as an Application 10-2
Email Projects and Folders 10-3
Share a Project or Folder on Oracle Analytics Cloud 10-4
Share Visualizations, Canvases, or Stories 10-5
Share a File of a Visualization, Canvas, or Story 10-5
Email a File of a Visualization, Canvas, or Story 10-6
Print a Visualization, Canvas, or Story 10-6
Write Visualization Data to a CSV or TXT File 10-7
Share a File of a Visualization, Canvas, or Story on Oracle Analytics Cloud 10-7

A Frequently Asked Questions
FAQs to Install Data Visualization Desktop A-1
FAQs for Data Visualization Projects and Data Sources A-2

B Troubleshoot
Troubleshoot Data Visualization Issues B-1

C Accessibility Features and Tips for Data Visualization Desktop


Start Data Visualization Desktop with Accessibility Features Enabled C-1
Keyboard Shortcuts for Data Visualization C-1

D Data Sources and Data Types Reference


Supported Data Sources D-1
Oracle Applications Connector Support D-4
Data Visualization Supported and Unsupported Data Types D-5
Unsupported Data Types D-5
Supported Base Data Types D-5
Supported Data Types by Database D-6

E Data Preparation Reference


Transform Recommendation Options E-1

F Expression Editor Reference


SQL Operators F-1
Conditional Expressions F-1
Functions F-2
Aggregate Functions F-3
Analytics Functions F-3
Calendar Functions F-4
Conversion Functions F-5
Display Functions F-6
Mathematical Functions F-7
String Functions F-8
System Functions F-10
Time Series Functions F-10
Constants F-10

Types F-10

G Data Visualization SDK Reference


About the Oracle Data Visualization SDK G-1
Create the Visualization Plug-in Development Environment G-2
Create a Skeleton Visualization Plug-in G-3
Create a Skeleton Skin or Unclassified Plug-in G-4
Develop a Visualization Plug-in G-4
Run in SDK Mode and Test the Plug-in G-4
Validate the Visualization Plug-in G-5
Build, Package, and Deploy the Visualization Plug-in G-5
Delete Plug-ins from the Development Environment G-6


Preface
Learn how to explore data using Oracle Data Visualization Desktop.

Topics
• Audience
• Documentation Accessibility
• Related Resources
• Conventions

Audience
User's Guide for Oracle Data Visualization Desktop is intended for business users who
use Oracle Data Visualization Desktop to upload and query data, analyze data within
visualizations, work with their favorite projects, and import and export their projects.

Documentation Accessibility
For information about Oracle's commitment to accessibility, visit the Oracle
Accessibility Program website at http://www.oracle.com/pls/topic/lookup?ctx=acc&id=docacc.

Access to Oracle Support


Oracle customers that have purchased support have access to electronic support
through My Oracle Support. For information, visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=info
or visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=trs
if you are hearing impaired.

Related Resources
These related Oracle resources provide more information.
• Oracle Business Analytics Product Information
• Oracle Community Forum
• Oracle Data Visualization Desktop Installation Download
• Oracle Data Visualization Samples


Conventions
Conventions used in this document are described in this topic.

Text Conventions

Convention Meaning
boldface Boldface type indicates graphical user interface elements associated
with an action, or terms defined in text or the glossary.
italic Italic type indicates book titles, emphasis, or placeholder variables for
which you supply particular values.
monospace Monospace type indicates commands within a paragraph, URLs, code
in examples, text that appears on the screen, or text that you enter.

Videos and Images


Your company can use skins and styles to customize the look of the Oracle Business
Intelligence application, dashboards, reports, and other objects. It is possible that the
videos and images included in the product documentation look different than the skins
and styles your company uses.
Even if your skins and styles are different than those shown in the videos and images,
the product behavior and techniques shown and demonstrated are the same.

1 Get Started with Oracle Data Visualization Desktop
This topic describes the benefits of using Data Visualization Desktop and explains how
to get started using the samples provided.

Video

Topics:
• About Oracle Data Visualization Desktop
• Get Started with Samples

About Oracle Data Visualization Desktop


Data Visualization Desktop provides powerful personal data exploration and
visualization in a simple per-user desktop download. Data Visualization Desktop is the
perfect tool for quick exploration of sample data from multiple sources or for rapid
analyses and investigation of your own local data sets.
Data Visualization Desktop makes it easy to visualize your data so you can focus on
exploring interesting data patterns. Just upload data files or connect to Oracle
Applications or a database, select the elements that you’re interested in, and let Data
Visualization Desktop find the best way to visualize it. Choose from a variety of
visualizations to look at data in a specific way.
Data Visualization Desktop also gives you a preview of the self-service
visualization capabilities included in Oracle Analytics Cloud, Oracle's industrial-
strength cloud analytics platform. Oracle Analytics Cloud extends the data exploration
and visualization experience by offering secure sharing and collaboration across the
enterprise, additional data sources, greater scale, and a full mobile experience
including proactive self-learning analytics delivered to your device. Try Data
Visualization Desktop for personal analytics and to get a taste of Oracle’s broader
analytics portfolio.
Data Visualization Desktop’s benefits include:
• A personal, single-user desktop application.
• Offline availability.
• Completely private analysis.
• Full control of data source connections.
• Direct access to on-premises data sources.
• Lightweight single-file download.
• No remote server infrastructure.
• No administration tasks.


Get Started with Samples


Use the samples provided to discover all the capabilities of Oracle Data Visualization
Desktop and to learn best practices.
Because these samples use business functions such as trending, binning, forecasting,
and clustering, you can use them as a quick reference when you create your own
visualization.
The sample data set is based on Sales Orders data and contains meaningful
dimensions, distributions, examples of data wrangling, calculated columns, and more.
You can optionally download the samples during installation. If you didn’t download the
samples during installation, then you can still get them by uninstalling and then
reinstalling Data Visualization Desktop. Your personal data isn’t deleted if you uninstall
and reinstall Data Visualization Desktop.

2 Explore, Visualize, and Analyze Data
This topic describes the many ways that you can explore and analyze your data.

Video

Topics:
• Typical Workflow to Visualize Data
• Create a Project and Add Data Sets
• Add Data from Data Sets to Visualization Canvases
• Add Advanced Analytics to Visualizations
• Create Calculated Data Elements in a Data Set
• Undo and Redo Edits
• Refresh Data in a Project
• Adjust the Visualization Canvas Layout
• Change Visualization Types
• Adjust Visualization Properties
• Assign Color to Visualize Data
• Format Numeric Data Properties
• Apply Map Layers and Backgrounds to Enhance Visualizations
• Sort and Select Data in Visualization Canvases
• Replace a Data Set in a Project
• Remove a Data Set from a Project
• Analyze Your Data Set Using Machine Learning
• About Warnings for Data Issues in Visualizations

Typical Workflow to Visualize Data


Here are the common tasks for visualizing your data.

Task: Create a project and add data sets to it
Description: Create a new Data Visualization project and select one or more data sets to include in the project.
More Information: Create a Project and Add Data Sets

Task: Add data elements
Description: Add data elements (for example, data columns or calculations) from the selected data set to the visualizations on the Visualize canvas.
More Information: Add Data from Data Sets to Visualization Canvases

Task: Adjust the canvas layout
Description: Add, remove, and rearrange visualizations.
More Information: Adjust the Visualization Canvas Layout

Task: Filter content
Description: Specify how many results and which items to include in the visualizations.
More Information: Create and Apply Filters to Visualize Data

Task: Deploy machine learning and Explain
Description: Use Diagnostic Analytics (Explain) to show patterns and uncover insights in your data set, and add the visualizations that Explain provides to your projects.
More Information: Analyze Your Data Set Using Machine Learning

Create a Project and Add Data Sets


Projects contain visualizations that help you to analyze your data in productive and
meaningful ways. When you create a project, you add one or more data sets
containing the data that you want to visualize and explore. Data sets contain data from
Oracle Applications, databases, or uploaded data files such as spreadsheets. You can
also add data sets to your existing projects.
You can use the Data Set page to familiarize yourself with all available data sets. Data
sets have distinct icons to help you quickly identify them by type.
1. To create a new project, go to the Home Page, click Create, then click Project.
The Add Data Set dialog is displayed.
• Alternatively, go to the Data page and click Data Sets. Select a data set you
want to analyze in a project and click the Action menu or right-click. Select
Create Project.
• Alternatively, to open an existing project, on the Home page, click Navigator,
then select Projects. Locate an existing project in the My Folders, Shared
Folders, Projects, or Favorites page. You can also locate an existing project by
using the Home page search or by browsing the project thumbnails shown on
the Home page. Click the project’s Action menu, then click Open.
2. You can add data to a project using one of the following options:
• If you're working with a new project, then in the Add Data Set dialog browse
and select the data sources that you want to analyze, then click Add to
Project.
• If you’re working with an existing project, then in the Data Elements pane click
Add (+), then Add Data Set to display the Add Data Set dialog and add a data
source.
• You can also create a new data source based on a file or connection using the
Create Data Set dialog, then add it to your projects. See Add Data Sources for
Analyzing and Exploring Data.
3. To visualize data from multiple data sets in the same project, in the Data Elements
pane click Add, and then select Add Data Set.
When you’ve multiple data sets in a project, click Data Sets in the properties pane
to change the default data blending options. See Blend Data that You Added.
4. Drag the data elements that you want to visualize from the Data Elements pane
onto the visualization canvas, and start building your project.


• You can transform your data set to improve the quality of your analysis and
visualization using a data preparation script in the Prepare canvas. See Prepare
Your Data Set for Analysis.

Add Data from Data Sets to Visualization Canvases


There are various ways that you can add data elements such as columns and
calculations to your visualizations.

Topics:
• Add Data to Blank Canvases
• About Adding Data to the Visualization Grammar Pane
• About Adding Data to the Visualization Assignment Pane
• Customize Tooltip Data

Add Data to Blank Canvases


You can add data elements directly from the Data Elements pane to a blank canvas.

You must create a project or open an existing project and add one or more data sets
to the project before you can add data elements to a blank canvas.
1. Confirm that you’re working in the Visualize canvas.
2. Drag one or more data elements to the blank canvas or between visualizations on
the canvas.
A visualization is automatically created and the best visualization type and layout
are selected.
For example, if you add time and product attributes and a revenue measure to a
blank canvas, the data elements are placed in the best locations and the Line
visualization type is selected.
If there are visualizations already on the canvas, then you can drag and drop data
elements between them.

About Adding Data to the Visualization Grammar Pane


After you’ve selected the data sets for your project, you can begin to add data
elements such as measures and attributes to visualizations. You can select compatible
data elements from the data sets and drop them onto the Visualization Grammar Pane
in the Visualize canvas. Based on your selections, visualizations are created on the
canvas. The Visualization Grammar Pane contains sections such as Columns, Rows,
Values, and Category.

You must create a project or open an existing project and add one or more data sets
to the project before you can add data elements to the Visualization Grammar Pane.
You can only drop data elements based on attribute and type onto a specific
Visualization Grammar Pane section.
Confirm that you’re working in the Visualize canvas. Use one of the following methods
to add data elements to the Visualization Grammar Pane:
• Drag and drop one or more data elements from the Data Elements pane to the
Visualization Grammar Pane in the Visualize canvas.
The data elements are automatically positioned, and if necessary the visualization
changes to optimize its layout.
• Double-click data elements in the Data Elements pane to add them to the
Visualize canvas.
• Replace a data element by dragging it from the Data Elements pane and dropping
it over an existing data element.
• Swap data elements by dragging a data element already inside the Visualize
canvas and dropping it over another data element.
• Reorder data elements in the Visualization Grammar Pane section (for example,
Columns, Rows, Values) to optimize the visualization, if you’ve multiple data
elements in the Visualization Grammar Pane section.
• Remove a data element by selecting it in the Visualization Grammar Pane and
clicking X.

About Adding Data to the Visualization Assignment Pane


You can use the visualization Assignment pane to help you position data elements in
the optimal locations for exploring content.

You must create a project or open an existing project and add one or more data sets
to the project before you can add data elements to the visualization Assignment pane.


Confirm that you’re working in the Visualize canvas. Use one of the following methods
to add data elements to the visualization Assignment pane:
• When you drag and drop a data element to a visualization (but not to a specific
drop target), you'll see a blue outline around the recommended Assignments (for
example Rows, Columns) in the visualization. In addition, you can identify any
valid visualization Assignment because you'll see a green plus sign icon appear
next to your data element. The sections in the visualization Assignment pane are
the same as in the Visualization Grammar Pane.

After you drop data elements into the visualization Assignment pane or when you
move your cursor outside of the visualization, the Assignment pane disappears.
• To display the Assignment pane again, on the visualization toolbar, click Show
Assignments.
You can also do this to keep the visualization Assignment pane in place while you
work.


Customize Tooltip Data


You can add data to the tooltips in a visualization.
You can select only measure columns for the Tooltip drop target. Tooltip data isn't
available for certain visualization types such as Table, Correlation Matrix, and List.
1. Confirm that you’re working in the Visualize canvas and select a visualization.
2. Drag and drop one or more measure columns from the Data Elements pane to the
Tooltip drop target in the Visualization Grammar Pane.
3. Hover the mouse pointer over the visualization to see the updated tooltip. The
tooltip displays the data intersection values at the top and the additional data at
the bottom.

Add Advanced Analytics to Visualizations


You can easily apply advanced analytics functions to a project to augment its
visualizations. For example, you can use advanced analytics to highlight outliers or
overlay trendlines. Advanced analytics are statistical functions that you apply to
enhance the data displayed in visualizations. Examples of advanced analytics
functions are Clusters, Outliers, and Trend Lines.
The Data Elements pane Analytics area contains standard analytics functions (for
example, Clusters and Trend Line). You can use analytics functions as they are, or
use them to create your own calculated columns that reference statistical scripts. See
Evaluate_Script in Analytics Functions.

Prerequisites
Before you can use analytic functions in Data Visualization:


• Install DVML and related packages ready to be used by Data Visualization Desktop.
For example, on Windows use the Install DVML Start menu option, or on Mac
double-click the application Oracle Data Visualization Desktop Install DVML in
Finder under Applications or in Launchpad.

• Create a project or visualization to which you can apply one or more analytic
functions.

Use Analytic Functions


1. Confirm that you’re working in the Visualize canvas.
2. To display the available advanced analytic functions, click the Analytics icon in
the Data Elements pane.
3. To edit the applied advanced analytics in a visualization, highlight the visualization,
and in the properties pane click the Analytics icon.
4. To add advanced analytic functions to a visualization, do one of the following:
• Drag and drop an advanced analytic function (such as Clusters, Outliers,
Reference Line) from the Analytics pane to a visualization.
• Right-click a visualization, and select an advanced analytic function.
• In the properties pane select the Analytics icon and click Add (+), then select
a function such as Add Clusters or Add Outliers.

Add Reference Lines to Visualizations


You can use advanced analytics reference lines to identify the range of data element
values in a visualization. To add a Reference line function to a visualization, do the
following:
1. Click the Analytics icon in the Data Elements pane.
2. Drag and drop Reference Line into a visualization.
You can also double-click Reference Line to add it to the currently selected
visualization.
3. In the properties pane select Method, and click Line or Band.

Note:
Based on the selected Method or Reference functions, a line is displayed
in the visualization to highlight the value.

4. If you've selected the Line method, select from the following reference functions:

Average: Average value of the data element added to the visualization.
Median: Median (middle) value of the data element added to the visualization.
Minimum: Minimum value (lowest numeric value) of the data element added to the visualization.
Maximum: Maximum value (highest numeric value) of the data element added to the visualization.
Percentile: Percentile rank number ranks the percentile of the data element added to the visualization.
Top N: N value marks the highest values (ranked from highest to lowest) of the data element added to the visualization.
Bottom N: N value marks the lowest values (ranked from lowest to highest) of the data element added to the visualization.
Constant: Constant value highlights the constant value of the data element added to the visualization.

5. If you've selected the Band method, add one or both of the following reference
functions:
• Custom - Select the to and from range of the data element values (such as
Median to Average).
• Standard Deviation - Select a value from 1 to 3 to show the standard
deviation for the selected value of the data element.

Create Calculated Data Elements in a Data Set


You can create a new data element (typically a measure) to add to your visualization.
For example, you can create a new measure called Profit that uses the Revenue and
Discount Amount measures.
The calculated data elements are stored in the data set’s My Calculations folder and
not in the project. In a project with a single data set, only one My Calculations folder is
available and the new calculated data elements are added to it. In a project with
multiple data sets, a My Calculations folder is available for each set of joined and non-
joined data sets. Ensure that you’re creating the calculated data elements for the
required data set or joined data set. The new calculated data elements are added to
the My Calculations folder of the data sets (joined and non-joined) that you create the
calculation for.

Note:
In the Data Elements pane, right-click and select Data Diagram to see joined
and non-joined data sets.

1. In the Visualize canvas navigate to the bottom of the Data Elements pane, right-
click My Calculations, and click Add Calculation to open the New Calculation
dialog.
2. In the expression builder pane, compose and edit an expression. See About
Composing Expressions.


Note:
You can't drag and drop a column into the expression builder pane
unless the column is joined to the data set. If you try to do so, you see an
error message.

3. Click Validate.
4. Specify a name, then click Save.
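For example, the Profit measure mentioned at the start of this topic could be composed in the expression builder along the following lines. This is only a sketch: Revenue and Discount Amount are assumed column names based on the sample Sales Orders data set, so substitute the data elements that exist in your own data set.

    Revenue - Discount Amount

You can also combine such arithmetic with the operators and conditional expressions described in the Expression Editor Reference, for example:

    CASE WHEN Revenue - Discount Amount > 0 THEN 'Profitable' ELSE 'Loss' END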

Undo and Redo Edits


You can quickly undo your last action and then redo it if you change your mind. For
example, you can try a different visualization type when you don’t like the one you’ve
just selected, or you can go back to where you were before you drilled into the data.
The undo and redo options are useful as you experiment with different visualizations.
You can undo all the edits you've made since you last saved a project. However, in
some cases, you can't undo and then redo an edit. For example, in the Create Data
Set page, you've selected an analysis from an Oracle Application data source to use
as a data set in the project. In the next step, if you use the undo option to remove the
data set, you can't redo this change.
• To undo or redo an edit, go to the toolbar for the project or the data set and click
Undo Last Edit or Redo Last Edit. You can use these options only if you haven't
saved the project since making the changes.

• When you’re working on a new project, click Menu on the project toolbar and
select Revert to undo all changes that you've made to the project. If you're
working on an existing project, click Revert to Saved.

Refresh Data in a Project


To see if newer data is available to display in the visualizations of your project, you
can refresh the data and the metadata.
• On the project toolbar of the Visualize canvas, click Menu and select an option:
– Refresh Data - This action clears the data cache and reruns queries that
retrieve the latest data from the data sets.
– Refresh Data Set - This action refreshes the data and any project metadata
such as a column name change in the uploaded data set.

Adjust the Visualization Canvas Layout


You can adjust the look and feel of visualizations on the Visualize canvas to make
them more visually attractive.
You can copy a visualization and paste it within or between canvases in a project. You
can also duplicate canvases and visualizations to create multiple copies of them. After
copying and pasting or duplicating, you can modify the visualization by changing the
data elements, selecting a different visualization type, resizing it, and so on.
Here are the options available to alter or modify the format of the visualization canvas.

Option: Canvas Properties
Location: Project toolbar Menu, or right-click a canvas tab
Description: Change the name, layout, width, and height of the canvas in the Canvas Properties dialog. Use the Synchronize Visualizations setting to specify how the visualizations on your canvas interact.

Option: Add Canvas
Location: Canvas tabs bar
Description: Add a new canvas to the project. You can right-click and drag a canvas to a different position on the canvas tabs bar.

Option: Rename
Location: Right-click a canvas tab
Description: Rename a selected canvas.

Option: Duplicate Canvas
Location: Right-click a canvas tab
Description: Add a copy of a selected canvas to the project’s row of canvas tabs.

Option: Clear Canvas
Location: Right-click a canvas tab
Description: Remove all the visualizations on the canvas.

Option: Delete Canvas
Location: Right-click a canvas tab
Description: Delete a specific canvas of a project.

Option: Duplicate Visual
Location: Visualization Menu, or right-click a visualization
Description: Add a copy of a selected visualization to the canvas.

Option: Copy Visual
Location: Visualization Menu, or right-click a visualization
Description: Copy a visualization on the canvas.

Option: Paste Visual
Location: Visualization Menu, or right-click a visualization or blank canvas
Description: Paste a copied visualization into the current canvas or another canvas.

Option: Delete Visual
Location: Visualization Menu, or right-click a visualization
Description: Delete a visualization from the canvas.

Option: Canvas Layout
Location: Visualization Menu, or right-click a visualization or blank canvas
Description: Select one of the following:
• Freeform – If you select Freeform, you can perform the following functions:
– Click Select All Visualizations to select all the visualizations on a canvas, and then copy them.
– Select one of the following Order Visualization options: Bring to Front, Bring Forward, Send Backward, or Send to Back to move a visualization on a canvas with multiple visualizations.
– Rearrange a visualization on the canvas. Drag and drop the visualization to the location (the space between visualizations) where you want it to be placed. The target drop area is displayed with a blue outline.
– Resize a visualization by dragging its edges to the appropriate dimensions.
• Autofit – Auto-arrange or correctly align the visualizations on a canvas with multiple visualizations.

Change Visualization Types


You can change visualization types to best suit the data you’re exploring.
When you create a project and add a visualization, Data Visualization chooses the
most appropriate visualization type based on the data elements you selected. After a
visualization type is added, dragging additional data elements to it won’t change the
visualization type automatically. If you want to use a different visualization type, then
you need to select it from the visualization type menu.
1. Confirm that you’re working in the Visualize canvas. Select a visualization on the
canvas, and on the visualization toolbar, click Change Visualization Type.

2. Select a visualization type. For example, change the visualization type from Pivot
to Treemap.


When you change the visualization type, the data elements are moved to matching
drop target names. If an equivalent drop target doesn’t exist for the new
visualization type, then the data elements are moved to a Visualization Grammar
Pane section labeled Unused. You can then move them to the Visualization
Grammar Pane section you prefer.

Adjust Visualization Properties


You can change the visualization properties such as legend, type, axis values and
labels, data values, and analytics.
1. Go to the Visualize or Narrate canvas and select the visualization whose
properties you want to change. In the properties pane, you can see the
visualization properties.
Both common and type-specific properties of data elements or visualization are
displayed. The properties you can edit are displayed in tabs and depend on the
type of visualization or data element you’re handling.
2. In the properties pane tabs, adjust the visualization properties as needed:

General: Format title, type, legend, selection effect, and customize descriptions.
Axis: Set horizontal and vertical value axis labels and start and end axis values.
Data Sets: Override the way the system automatically blends data from two data sets.
Edge Labels: Show or hide row or column totals and wrap label text.
Action: Add URLs or links to insights in Tile, Image, and Text Box visualizations. If you use Chrome for Windows or Android, the Description text field displays a Dictate button (microphone) that you can use to record an audio description.
Style: Set the background and border color for Text visualizations.
Values: Specify data value display options including the aggregation method such as sum or average, and number formatting such as percent or currency. You can specify the format for each value data element in the visualization, for example, aggregation method, currency, date or number format.
Date/Time: Specify date and time display options including how you show the date or time (for example, as Year, Quarter, Month, Week) and what format to use (for example, Auto or Custom).
Analytics: Add reference lines, trend lines, and bands to display at the minimum or maximum values of a measure included in the visualization.

Assign Color to Visualize Data


This topic covers how you can work with color to enhance visualizations.
You can work with color to make visualizations more attractive, dynamic, and
informative. You can color a series of measure values (for example, Sales or
Forecasted Sales) or a series of attribute values (for example, Product and Brand).
The Visualize canvas has a Color section in the Visualization Grammar Pane where
you can put a measure column, attribute column, or set of attribute columns. Note
how the canvas assigns color to the columns that are included in the Color section:
• When a measure is in the Color section, then you can select different measure
range types (for example, single color, two color, and three color) and specify
advanced measure range options (for example, reverse, number of steps, and
midpoint).
• When you’ve one attribute in the Color section, then the stretch palette is used by
default. Color palettes contain a set number of colors (for example, 12 colors), and
those colors repeat in the visualization. The stretch palette extends the colors in
the palette so that each value has a unique color shade.
• If you’ve multiple attributes in the Color section, then the hierarchical palette is
used by default, but you can choose to use the stretch palette, instead. The
hierarchical palette assigns colors to groups of related values. For example, if the
attributes in the Color section are Product and Brand and you’ve selected
Hierarchical Palette, then in your visualization, each brand has its own color, and
within that color, each product has its own shade.

Manage Color Settings


Use the Visualize canvas to modify the visualization’s color. Your color choices are
shared across all visualizations on the canvas, so if you change the series or data
point color in one visualization, then it appears on the other visualizations.

Access Color Options


• To edit color options for the whole project, click Menu on the project toolbar and
select Project Properties, then use the General tab to edit the color series or
continuous coloring.


• To edit color options for a visualization, highlight the visualization and click Menu
on the visualization toolbar and select Color. The available color options depend
on how the measures and attributes are set up in your visualization.

Change the Color Palette


Data Visualization includes several color palettes. Each palette contains 12 colors, but
you can use the color stretching option to expand the colors in the visualization.
1. If your project contains multiple visualizations, click the visualization that you want
to change the color palette for. Click Menu on the visualization toolbar and select
Color, then select Manage Assignments. The Manage Color Assignments dialog
is displayed.
2. Locate the Series Color Palette and click the color palette that’s currently used in
the visualization (for example, Default or Alta).

3. From the list, select the color palette that you want to apply to the visualization.

Manage Color Assignments


Instead of using the palette’s default colors, you can use the Manage Color
Assignments feature to choose specific colors to fine-tune the look of your
visualizations.
1. If your project contains multiple visualizations, click the visualization that you want
to manage the colors for. Click Menu on the visualization toolbar and select Color,
then select Manage Assignments. The Manage Color Assignments dialog is displayed.
2. If you’re working with a measure column, you can do the following:
• Click the box containing the color assigned to the measure. From the color
picker dialog, select the color that you want to assign to the measure. Click
OK.
• Specify how you want the color range to be displayed for the measure (for
example, reverse the color range, pick a different color range, and specify how
many shades you want in the color range).

3. If you’re working with an attribute column, then click the box containing the color
assignment that you want to change. From the color picker dialog, select the color
that you want to assign to the value. Click OK.


Reset Colors
You can experiment with visualization colors and then easily revert to the
visualization’s original colors.
• If your project contains multiple visualizations, click the visualization that you want
to reset the colors for. Click Menu on the visualization toolbar and select Color,
then select Reset Visualization Colors.

Apply or Remove the Stretch Palette


Color palettes have a set number of colors, and if your visualization contains more
values than the number of color values, then the palette colors are repeated. Use the
Stretch Palette option to expand the number of colors in the palette. Stretch coloring
adds light and dark shades of the palette colors to give each value a unique color. For
some visualizations, stretch coloring is used by default.
• If your project contains multiple visualizations, click the visualization that you want
to adjust the stretch palette from. Click Menu on the visualization toolbar and
select Color, then select Stretch Palette to turn this option on or off.

Apply a Repeat Color Palette


In some cases, you might want to use a repeating color palette in your visualization. If
your visualization contains more values than colors in the palette, then the colors are
reused and aren’t unique.
• In the Visualize canvas, click Color and click Stretch Palette to turn this option
off.

Format Numeric Data Properties


You can format numeric data in your visualizations using a wide range of ready-to-use
formats. For example, you might change the aggregation type from Sum to Average.

Format Numeric Values of Columns


1. Create or open the project that contains the numeric column whose properties you
want to change.
2. In the Data Elements pane, select the column.
3. In the properties pane for the selected column, use the General or Number
Format tabs to change the numeric properties.
• General - Change the column name, data type, treat as (measure or attribute),
and aggregation type.
For example, to change how a number is aggregated, use the Aggregation
option.
• Number Format - Change the default format of a number column.
4. Click Save.

Format Numeric Values of Visualizations


1. Create or open the project that contains the visualization whose properties you
want to change.


2. In the Visualize canvas, select the visualization.


3. In the properties pane for the selected visualization, use the Values tab to change
the numeric properties.
For example, to change how a number is aggregated, use the Aggregation
Method option.
4. Click Save.

Apply Map Layers and Backgrounds to Enhance Visualizations
You can add and maintain custom map layers to enhance map visualizations.

Topics:
• Work with Map Backgrounds
• Enhance Visualizations with Map Backgrounds
• Use Different Map Backgrounds in a Project
• Use Color to Interpret Data Values in Map Visualizations
• Add Custom Map Layers
• Update Custom Map Layers
• Apply Multiple Data Layers on a Single Map Visualization
• Create Heatmap Layers on a Map Visualization
• Make Maps Available to Users
• Make Map Backgrounds Available to Users

Work with Map Backgrounds


You can enhance map visualizations in projects by adding and maintaining map
backgrounds.
Oracle Data Visualization includes ready-to-use map backgrounds that you can easily
apply to a project. You can also add backgrounds from the available list of Web Map
Service (WMS) providers such as Google Maps and Baidu Maps. Background maps
from these providers offer details and language support (such as city or region name)
that certain geographic regions (such as Asian countries) require. You can enhance
backgrounds in these ways:
• Modify the background parameters such as map type, format, language and API
keys. The parameters are different for each WMS provider.
• Assign or change the default background in a project.
• Reverse the inherited default background settings in a project.
You can add a WMS provider and perform the following types of functions:
• Add the WMS map servers, and make them available as additional map
background options.
• Select one or more map backgrounds available from the WMS provider.


• Assign an added WMS provider’s map as the default map background.

Enhance Visualizations with Map Backgrounds


You can use map backgrounds to enhance visualizations in a project.
1. Create or open the project where you want to include a map background.
Confirm that you’re working in the Visualize canvas.
2. To select a map-related column and render it in a map view, do one of the
following:
• In the Data Elements pane:
a. Right-click the map-related column.
b. Click Pick Visualization.
c. Select Map.
• In the Data Elements pane:
a. Drag and drop the map-related column to the blank canvas, or between
visualizations on the canvas.
You can also double-click the column to add it to the canvas.
b. On the visualization toolbar, click Change Visualization Type.
c. Select Map.
3. In the properties pane, click Map.
4. Specify the visualization properties:

Zoom Control: Enable or disable zoom control.
Scale: Select a scale, such as mile.
Background Map: Select a map background. If you want to see the list of available map backgrounds, click Manage Map Backgrounds to display the Map Backgrounds tab. You can also open the Console page, click Maps, and select the Map Backgrounds tab to see the available backgrounds list.
Map Layer: Select a layer, such as Asian countries.

5. Click Save.

Use Different Map Backgrounds in a Project


As an author you can use different map backgrounds in map visualizations.
Here is an example of how you might use a map background in a project.


1. On the Home page click Create, then click Project.


2. Select a data set in the Add Data Set dialog.
3. Click Add to Project.
The Project pane and list of Data Elements is displayed.
4. Select a map-related data element (for example, click City), and click Pick
Visualization.
5. Select Map from the list of available visualizations.
Data Visualization displays the default map background.

Note:
If no default has been set, you can see an existing Oracle map
background.

6. In the visualization properties pane, select the Map tab.


7. Click the Background Map value and select a map from the drop-down list.
For example, select Google Maps, and Data Visualization displays Google Maps
as the map background.
8. (Optional) Click another value to change the type of map (such as Satellite, Road,
Hybrid, or Terrain).
9. (Optional) Click Manage Map Backgrounds from the Background Map options
to display the Map Backgrounds pane.
Use this option to maintain the map backgrounds that you want to use.

Use Color to Interpret Data Values in Map Visualizations


You can use color features to interpret the measure columns and attribute values in
projects that include map visualizations.
1. Create or open the project with a map visualization where you want to interpret
specific columns and values by color.
Confirm that you’re working in the Visualize canvas.
2. Select a measure column or attribute from the project and render it in a map view,
doing one of the following:
• In the Data Elements pane, right-click the column or attribute, click Pick
Visualization, and select Map.
• Drag and drop the columns or attributes from the Data Elements pane to the
blank canvas or between visualizations on the canvas. You can also double-
click the columns or attributes to add them to the canvas.
– On the visualization toolbar, click Change Visualization Type.
Alternatively, in the Data Elements pane, right-click the column or
attribute, and click Pick Visualization.
– Select Map.
3. Drag and drop one or multiple measure columns or attributes in the following map
color sections of the Visualization Grammar Pane:


Color: Change the color for geometries displayed in the corresponding map layer (for example, polygon fill color, bubble color) based on the values.
Size (Bubble): Change color bubble size based on the measure column values. To change the size of the color bubble, drag and drop measure columns only. The size shows the aggregated measure for a specific geographic location in a map visualization.
Trellis Columns / Rows: Compare multiple map visualizations based on the column values using filters.

In the map visualization, you can also use the following color features to interpret
measure columns and attribute values:
• Legend - If the measure column or attribute has multiple values, then the legend
is displayed. Legends are grouped by Layer.
• Tooltip - Hover the mouse pointer over a color bubble to see the values in a
tooltip. If there are multiple values, then a Plus (+) symbol is displayed.

Add Custom Map Layers


You can add custom map layers to use in map visualizations.
You add a custom map layer to Data Visualization using a geometric data file with
the .json extension that conforms to the GeoJSON schema (https://en.wikipedia.org/wiki/GeoJSON).
You then use the custom map layer to view geometric map data in a
project. For example, you might add a Mexico_States.json file to enable you to
visualize geometric data in a map of Mexico States.
When creating a custom map layer, you must select layer keys that correspond with
data columns that you want to analyze in a map visualization. For example, if you want
to analyze Mexican States data on a map visualization, you might start by adding a
custom map layer for Mexican States, and select the HASC code layer key from the
Mexican_States.json file. An extract from such a file shows the geometric data and the
property keys that are defined for each state, such as Baja California.
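As a rough illustration, a single feature in a GeoJSON file of this kind might look like the following sketch. The property names (NAME_1, HASC_1) and the coordinate values here are assumptions for illustration only, not content taken from the actual file; the keys that your file defines under properties are what appear later in the Layer Keys list.

    {
      "type": "FeatureCollection",
      "features": [
        {
          "type": "Feature",
          "properties": { "NAME_1": "Baja California", "HASC_1": "MX.BN" },
          "geometry": {
            "type": "MultiPolygon",
            "coordinates": [ [ [ [ -117.1, 32.5 ], [ -115.9, 32.6 ], [ -114.7, 32.7 ], [ -117.1, 32.5 ] ] ] ]
          }
        }
      ]
    }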


If you wanted to use the Mexican_States.json file, the layer keys that you select must
match columns that you want to analyze from the Mexican States Data tables. For
example, if you know there is a data cell for the Mexican state Baja California, then
select the corresponding name field in the JSON file to display state names in the Map
visualization. When you create a project and select columns (such as State and
HASC), then Mexican states are displayed on the map. When you hover the mouse
pointer over a state, the HASC code (such as MX BN) for each state is displayed on
the map.
1. Open the Console page and click Maps to display the Map Layers page.
The Map Layers page contains a Custom Map Layers section and a System Map
Layers section. The Custom Map Layers section displays the custom map layers
that you maintain.

Note:
You can disable or enable both a System Map Layer and a Custom Map
Layer, but you can’t add or delete a System Map Layer.

2. To add a custom map layer, click Add Custom Layer or drag and drop a JSON
file from File Explorer to the Custom Maps area.
3. Browse the Open dialog, and select a JSON file (for example,
Mexico_States.json).
The JSON file must be a GeoJSON file that conforms to the standard specified in
https://en.wikipedia.org/wiki/GeoJSON.

Note:
Custom layers with the Line String geometry type aren't fully supported. The
Color (Bubble) and Size (Bubble) fields in map visualization grammar are
not applicable in the case of line geometries.

4. Click Open to display the Map Layer dialog.


5. Enter a Name and an optional Description.
6. Select the layer keys that you want to use from the Layer Keys list.
The layer keys are a set of property attributes for each map feature, such as
different codes for each state in Mexico. The layer keys originate from the JSON
file. Where possible, select only the layer keys that correspond with your data.
7. Click Add to add the selected layer keys to your custom map layer.
A progress indicator shows that the map layers are being saved, and a success
message is displayed when the process is complete and the layer is added.

Update Custom Map Layers


You can maintain custom map layers that you’ve added to Data Visualization.
1. Open the Console page and click Maps to display the Map Layers page.


The Map Layers page contains a Custom Map Layers section and a System Map
Layers section. The Custom Map Layers section displays the custom map layers
that you maintain.
2. Right-click the map layer, click Options, and then take the appropriate action:
• To view or make changes to the map layer settings, select Inspect.
The Map Layer dialog is displayed where you can update the Name,
Description, or the Layer Keys used in this layer.
• To upload the JSON file again, select Reload.
• To save the JSON file locally, select Download.
• To delete the custom map layer, select Delete.
You can disable or enable a System Map Layer and a Custom Map Layer, but
you can’t add or delete a System Map Layer.
3. Click the map layer to enable or disable it. For example, if you want to exclude
the India States layer from the map, click the layer to disable it and remove it from searches.
4. To switch from using one map layer to another:
a. Select the desired columns from the project and select Map as the
visualization.
b. In the properties pane, select the Map tab to display the map properties.
c. Click the current Map Layer (for example, Mexican States). This displays a list
of available custom map layers that you can choose from.
d. Click the map layer that you want to use to match your data points.

Apply Multiple Data Layers on a Single Map Visualization


You can use the data layer feature to display multiple data series (different sets of
dimensions and metrics) on a single map visualization in a project. The data layers are
overlaid on one another in a single map visualization.
1. Create or open the project where you want to display multiple data layer overlays
on a single map visualization.
Confirm that you’re working in the Visualize canvas.
2. To select a map-related attribute column and a measure column and render in a
map view, do one of the following:
• In the Data Elements pane:
a. Right-click the map-related column (for example, State).
b. Click Pick Visualization.
c. Select Map.
d. Right-click a measure column (for example, Population).
e. Click Add to Selected Visualization.
• In the Data Elements pane:
a. Drag and drop the map-related column and measure column to the blank
canvas, or between visualizations on the canvas.
You can also double-click columns to add them to the canvas.


b. On the visualization toolbar, click Change Visualization Type.


c. Select Map.
The selected column or attribute is displayed as a data layer in the Category
(Geography) section of the Visualization Grammar Pane and in the Data Layers
tab of the properties pane. Based on the column or attribute values, a specific set
of dimensions and metrics is displayed on the map visualization.
3. Click Layer options in the Visualization Grammar Pane and click Add Layer.
Alternatively in the Data Layers tab click Add Layer (+). A new data layer, for
example Layer 2, is added to the Category (Geography) section and Data Layers
tab.
4. Drag and drop an attribute from the Data Elements pane to the Category
(Geography) section of the Visualization Grammar Pane. Based on the attribute
values, the map visualization automatically updates with a different set of
dimensions, and it overlays on the previous layer.
5. Repeat steps 3 and 4 to add multiple data layers on the map visualization.
• In the Category (Geography) section of the Visualization Grammar pane,
select an attribute and click X to remove it from the layer.
• Click Layer options and select the following:

Add Layer: Add a new data layer on the map visualization.
Order Layer: Select one of the following to change the order of the overlay layers: Bring to Front, Bring Forward, Send Backward, Send to Back.
Hide Layer: Hide a particular layer.
Delete Layer: Delete a particular layer.
Manage Layers: Display the options for a specific layer in the Data Layers tab of the properties pane.

• In the Data Layers tab of the properties pane, select the option for a specific
layer, as described in the following table:

Name: Select Auto or Custom. Enter a name if you select Custom.
Map Layer: Select a specific map layer, such as world countries or world cities. Click Manage Map Layers to modify the options for the custom map or system map layer on the Console page.
Layer Type: Select Polygon, Point, Heat Map, or Line.
Transparency: Select the transparency value.
Show Layer: Move the slider, or toggle the layer icon in the legend, to show or hide a specific layer in the map visualization.
Reorder Layer: Change the order of a specific layer in the layer series.
Remove Layer: Remove a selected layer.

• You can apply a Filter, such as a Range Filter or List Filter, to the map
visualization to refine the data shown for the attribute and measure columns in
all the data layers in the map. For example, you can select a measure or
attribute for a layer, then apply a filter to reduce the amount of data shown, and
add the same measure or attribute to the Color section of the Visualization
Grammar Pane.

Create Heatmap Layers on a Map Visualization


You can use a heatmap as a data layer type on a map visualization to identify the
density or high concentration of point values or metric values associated with the
points. For example, you can use a heatmap to identify the high profit stores in a
geographic region or country.
You can create two types of heatmap layers:
• Density heatmap - Uses only map-related column data (such as latitude and
longitude columns). Density heatmap layers show the cumulative sum of a point,
where each point carries a specific weight. A point has a radius of influence
around it, such that other points that fall in the same area also contribute to the
total cumulative result of a point.
• Metric heatmap - Uses measure column data in the same layer. For example, if
you add a measure column to the Color section of the Visualization Grammar
Pane, the heatmap is updated to show interpolated metric values.
1. Create or open the project where you want to use a heatmap layer on a map
visualization. Confirm that you’re working in the Visualize canvas.
2. Create an empty map visualization.
3. Drag and drop attribute columns containing map-related data from the Data
Elements pane to the Category (Geography) section of the Visualization Grammar
Pane.
• If you’re creating a project with a map visualization, in the Data Elements
pane, right-click an attribute column and click Pick Visualization then select
Map.
4. Click Layer options in the Visualization Grammar Pane and click Manage
Layers. Alternatively, go to the Data Layers tab of the properties pane.
5. To create a density heatmap, click Layer Type and select Heatmap.
You can also add a map data layer, change the layer type to Heatmap, then add
attribute columns to the Category section of the Visualization Grammar Pane.
6. To create a metric heatmap, drag and drop a metric column from the Data
Elements pane to the Color section in the Visualization Grammar Pane. The
heatmap visualization changes from density to metric.
7. In the Data Layers tab of the properties pane, specify the options for the heatmap
layer:

Name: Select Auto or Custom. Enter a name if you select Custom.
Map Layer: Select the map layer type, such as world cities, or other point layers.
Transparency: Select the transparency value.
Radius: Select the radius value in pixels (px). The radius value is the extent of influence of a measure around a point value on a map.
Color: Select the color type of the heatmap, such as Spectrum Lite or Red-Yellow-Green.
Interpolation: Select the interpolation method, such as Cumulative, Maximum, Minimum, and Average Constant. The default interpolation method is automatically selected based on the aggregation rule of the metric column or value that you've selected for the layer.

The heatmap visualization is automatically updated based on the options selected
in the Data Layers tab of the properties pane.

Make Maps Available to Users


For visualization projects, administrators make maps available to end users or hide
them from end users.
You can include or exclude a map from users.
1. On the Home page, click Console.
2. Click Maps.
3. Use the Include option to make a map layer available to end users or hide it from
end users.
You can hide or display custom map and system map layers.

Make Map Backgrounds Available to Users


Oracle provides two pre-configured map backgrounds with Data Visualization. As an
administrator, you can add map backgrounds for use in map visualizations.
1. On the Home page, click Console, select Maps, and click Map Backgrounds.
A table is displayed listing the currently installed map backgrounds that you can
use in a visualization.

Default: Shows which background map is the default (displays a tick symbol when selected).
Include: Shows whether a map background is included or excluded as an available option to users (displays a tick symbol when selected).
Name: Displays the name of the map background.
Description: Describes the map background. For example, for Oracle Maps the description is "General reference world map from Oracle".
Modified: Shows the most recent date that the map background was modified in Data Visualization.

2. To add a background map, click Add Background.


A list of available map backgrounds is displayed.
Oracle Maps are pre-configured and shipped with the product. Additional
background maps that you can add are Google Maps and Baidu Maps.


Note:
For third-party map providers (other than Oracle), you must obtain Maps
API access keys from the respective provider (for example, Google or
Baidu). Those providers may independently charge you based on your
usage, as described in their respective terms of agreement.

3. Select a map background from the list.


4. Copy and paste in the appropriate Maps API access key.

Note:
You must sign up with the provider to be able to add and use any of
these map types.
• To use the Google Maps tiles, you must obtain a Google Maps API
access key from Google. Google prompts you to enter your Maps
API access key and, when applicable, your Google “Client ID”.
Usage of the tiles must meet the terms of service specified by
Google in the Google Developers Site Terms of Service.
• To use the Baidu Maps tiles, you must obtain a Baidu Maps API
access key from Baidu. Baidu prompts you to enter your Maps API
access key. Usage of the tiles must meet the terms of service
specified by Baidu in the Baidu User Agreement.

5. Select a default map type if applicable and enter a helpful description if needed.
6. Click Add to include this map in the list of currently available map backgrounds.
Data Visualization displays a message when the map background is successfully
added.

Sort and Select Data in Visualization Canvases


While adding filters to visualizations helps you narrow your focus on certain aspects of
your data, you can take a variety of other analytic actions to explore your data (for
example, drilling, sorting, and selecting). When you take any of these analytic actions,
the filters are automatically applied for you.
Select a visualization and click Menu or right-click, then select one of the following
analytics actions:
• Use Sort to sort attributes in a visualization, such as product names from A to Z. If
you’re working with a table view, then the system always sorts the left column first.
In some cases where specific values display in the left column, you can’t sort the
center column. For example, if the left column is Product and the center column is
Product Type, then you can’t sort the Product Type column. To work around this
issue, swap the positions of the columns and try to sort again.
• Use Drill to drill to a data element and drill through hierarchies in data elements,
such as drilling to weeks within a quarter. You can also drill asymmetrically using
multiple data elements. For example, you can select two separate year members
that are columns in a pivot table, and drill into those members to see the details.


• Use Drill to [Attribute Name] to directly drill to a specific attribute within a
visualization.
• Use Keep Selected to keep only the selected members and remove all others
from the visualization and its linked visualizations. For example, you can keep only
the sales that are generated by a specific sales associate.
• Use Remove Selected to remove selected members from the visualization and its
linked visualizations. For example, you can remove the Eastern and Western
regions from the selection.
• Use Add Reference Line to add a reference line to highlight an important fact
depicted in the visualization, such as a minimum or maximum value. For example,
you can add a reference line across the visualization at the height of the maximum
revenue amount.

Replace a Data Set in a Project


You can replace a data set by re-mapping columns used in the data visualization
project to columns from a different data set. As part of replacing a data set, you can
review and re-map only those columns that are used in the project and replace them
with columns of the same data type in the replacement data set. For example, you can
replace a test data set with a production data set, or use a project as a template in
which you can replace the data but maintain the added structures, visualizations, and
calculations.
The Replace Data Set option is available only for projects using a single data set. The
option isn't available for projects that use multiple data sets.
1. Create or open the Data Visualization project in which you want to replace the
data set.
Confirm that you’re working in the Visualize canvas.
2. In the Data Elements pane, right-click the data set and select Replace Data Set.
3. In the Replace Data Set dialog, perform the following tasks:
• Select the data set that replaces the existing data set in the project and click
Select.
• Review the mapping of the data between the existing and the new data sets in
the data-mapping table. The data-mapping table includes all the data
elements used in the project’s visualizations, calculations, and filters. The data
elements with similar type and names in the two data sets are automatically
mapped. In the table, based on data types, the data elements are grouped
and sorted alphabetically.
• In the new data set column, click the drop-down arrow in a cell and select a
specific data element to adjust the mapping of the data.

Note:

– Only data elements of the same type are displayed in the data
element selection dialog.
– You can navigate back to select a different data set.


4. Click Replace.
The new data set replaces the existing data set in the project. You see a notification if
you’ve selected a data set that is joined to other data sets in the project. Review and
adjust the joins in the project’s Data Diagram.
Based on the selections in the data-mapping table, the data is updated throughout the
project. For example, if you map a data element to None, the specific data is removed
from the visualizations, calculations, and filters.

Remove a Data Set from a Project


You can remove a data set from a project.
Removing data from a project differs from deleting the data set from Data
Visualization.
1. Open your project and in the Data Elements pane, select the data set that you
want to remove.
2. Right-click and select Remove from Project. A confirmation dialog is displayed.
3. Click Yes to remove the data set.

Analyze Your Data Set Using Machine Learning


Machine learning analyzes the data in your data set to provide insights that enable you
to explain the various aspects of that data.

Topics:
• About Using Machine Learning to Discover Data Insights
• Add Data Insights to Visualizations

About Using Machine Learning to Discover Data Insights


Machine learning analyzes the data to recognize the patterns and trends in your data
set to provide visual insights and enhanced statistical analysis. You can subsequently
use these visual insights and statistical analysis in your project visualization canvas to
interpret the data in your data set.
Machine learning provides accurate, fast, and powerful data insights because it
handles the technical and statistical complexity and the volume and
variety of the data in your data set. Because of machine learning's accuracy, speed,
and scale, it's cheaper and more powerful than traditional methods of analyzing
data.
To discover data insights, you simply select an attribute in your data set. Machine
learning provides you with narratives, visual insights, and statistical analyses such as
charts. You can select specific charts and include them as visualizations in your
project visualization canvas. You manage these visualizations as you do any other
visualizations in your project. With machine learning, you don't have to waste time
guessing and dropping random data elements on the canvas to create a visualization
for data insight.


Before you start, install machine learning on the Windows or Mac machine where you
installed Data Visualization Desktop. See How do I install Machine Learning for
Desktop?
After you’ve installed machine learning, you can start uncovering insights in your data.
See Add Data Insights to Visualizations.

Add Data Insights to Visualizations


You can select specific data insights charts provided by machine learning and add
them directly as a visualization in your project’s visualization canvas.

Note:
You must install the Data Visualization machine learning component to
display the Explain option.

1. Create or open a data visualization project. Confirm that you’re working in the
Visualize canvas.
2. In the Data Elements pane, right-click a data element (attribute or measure) and
select Explain <Data Element> to display the Explain <Data Element> dialog
tabs:
• Basic Facts about <Data Element> - Shows the basic distribution of the data
element (attribute or measure) values across the data set and its breakdown
against each one of the measures in the data set.
• Key Drivers of <Data Element> - Shows the data elements (attributes or
measures) that are most highly correlated with the outcome for the selected
data element. Charts showing the distribution of the selected attribute value across
each of the correlated attribute values are displayed.
• Segments that Explain <Data Element> - Shows the segments or groups in
the data set that, after all the records are examined, can predict the value of the
selected data element. You can select a particular segment or group and then
continue to analyze it.
• Anomalies of <Data Element> - Shows the groups of anomalies or unusual
values in the data set that you can relate to the selected data element
(attribute or measure). You can review and select a particular group of
anomalies.
3. Use the Explain dialog to help you configure your visualizations.
• When you click a data element (attribute or measure), information for the
selected data element is highlighted in the segments below.
• You can select more than one data element (attribute or measure) at the same
time to see results in the segments.
• You can also sort how the information is displayed in the Segments (High to
Low, or Low to High, group by Color, or sort by data element Value).
• For each Segment in the decision tree, summary rules for the percentage of
the data element and other metadata about the section are displayed. For
example, a certain Segment might show that a particular percentage of the
selected attribute (data element) belongs to a specific group like location, data


point, another attribute, or measure. You can then select a specific group, like
location, to analyze the selected attribute.
• The Anomalies section finds data points that don't fit the expected pattern.
4. Click the check mark when you hover the mouse pointer over any of the data
insight charts to select a specific chart.
5. Click Add Selected to add the charts you’ve selected as different visualizations in
your project’s visualization canvas. You can manage data insight visualizations
like any other visualizations you’ve manually created on the canvas.

About Warnings for Data Issues in Visualizations


You see a data warning icon when the full set of data associated with a visualization
isn't rendered or retrieved properly. If the full set of data can't be rendered or retrieved
properly, then the visualization displays as much data as it can as per the fixed limit,
and the remaining data or values are truncated or not displayed.
The warning icon (an exclamation mark icon) is displayed in two locations:
• Next to the title of a visualization that has a data issue.
When you hover over the warning icon, you see a message that includes text such
as the following:
Data sampling was applied due to the large quantity of data.
Please filter your data. The limit of 500 categories was
exceeded.
You see the warning icon associated with the visualization until the data issue is
resolved. The warning icon is displayed only in the visualization Canvas; it's not
displayed in Presentation Mode or Insights.
• On the Canvas tabs bar if any visualization on the Canvas page has the data
warning.
By default, visualization warning icons aren't displayed. You can show or hide the
warning icon beside the title of the visualization by clicking the icon on the Canvas
tabs bar.
with a data issue. If a visualization with a data issue is in multiple canvases, you
see the icon in all those canvases.

3 Create and Apply Filters to Visualize Data
This topic describes how you can use filters to find and focus on the data you want to
explore.

Topics:
• Typical Workflow to Create and Apply Filters
• About Filters and Filter Types
• How Visualizations and Filters Interact
• Synchronize Visualizations in a Project
• About Automatically Applied Filters
• Create Filters on a Project
• Create Filters on a Visualization
• Move Filter Panels
• Apply Range Filters
• Apply Top Bottom N Filters
• Apply List Filters
• Apply Date Filters
• Build Expression Filters

Typical Workflow to Create and Apply Filters


Here are the common tasks for creating and applying filters to projects, visualizations,
and canvases.

Choose the appropriate filter type: Filter types (Range, Top/Bottom N, List, Date, and Expression) are specific to either a project, visualization, or canvas. See Apply Range Filters, Apply Top Bottom N Filters, Apply List Filters, and Apply Date Filters.
Create filters on projects and visualizations: Create filters on a project or visualization to limit the data displayed and focus on a specific section or category. See Create Filters on a Project and Create Filters on a Visualization.
Build and use expression filters: You can build and use expression filters to define more complex filters using SQL expressions. See Build Expression Filters.
Set visualization interaction properties: Define how you want visualizations to affect each other. See How Visualizations and Filters Interact.


About Filters and Filter Types


Filters reduce the amount of data shown in visualizations, canvases, and projects.
The Range, List, Date, and Expression filter types are specific to either a visualization,
canvas, or project. Filter types are automatically determined based on the data
elements you choose as filters.
• Range filters - Generated for data elements that are number data types and that
have an aggregation rule set to something other than none. Range filters are
applied to data elements that are measures, and that limit data to a range of
contiguous values, such as revenue of $100,000 to $500,000. Or you can create a
range filter that excludes (as opposed to includes) a contiguous range of values.
Such exclusive filters limit data to noncontiguous ranges (for example, revenue
less than $100,000 or greater than $500,000). See Apply Range Filters.
• List filters - Applied to data elements that are text data types and number data
types that aren’t aggregatable. See Apply List Filters.
• Date filters - Use calendar controls to adjust time or date selections. You can
either select a single contiguous range of dates, or you can use a date range filter
to exclude dates within the specified range. See Apply Date Filters.
• Expression filters - Let you define more complex filters using SQL expressions.
See Build Expression Filters.

How Visualizations and Filters Interact


There are several ways that filters can interact with visualizations in a project. For
example, filters might interact differently with visualizations depending on the number
of data sets, whether the data sets are joined, and what the filters are applied to.

Topics:
• How Data Sets Interact with Filters
• How the Number of Data Sets Interact with Filters

How Data Sets Interact with Filters


Various factors affect the interaction of data sets and filters in projects:
• The number of data sets within a project.
• The data sets that are joined (connected) or not-joined (for a project with multiple
data sets).
• The data elements (columns) that are matched between joined data sets.
You can use the Data Diagram in the Prepare canvas of a project to:
• See joined and not-joined data sets.
• Join or connect multiple data sets by matching the data elements in the data sets.
• Disconnect the data sets by removing matched data elements.


How the Number of Data Sets Interact with Filters


You can add filters to the filter bar or to individual visualizations in a project.

Single Data Set Filter Interaction

Add a filter to the filter bar: The filter applies to all visualizations in the project.
Add a filter to a visualization: The filter is applied after the filters on the filter bar are applied.
Add multiple filters: By default, filters restrict each other based on the values that you select.

Multiple Data Sets Filter Interaction

If you add filters to the filter bar:
• The filters apply to all the visualizations that use the joined data sets. For visualizations that use the not-joined data sets, you must add a separate filter to each data set.
• You can't specify data elements of a data set as a filter of other data sets if the two data sets aren't joined.
• If a data element of a data set is specified as a filter but doesn't match the joined data sets, then the filter applies only to the visualization of that particular data set, and doesn't apply to other visualizations of joined or not-joined data sets.
• You can select Pin to All Canvases for a filter to apply the filter to all canvases in the project.
If you hover the mouse pointer over a filter name to see the visualization to which the filter is applied: Any visualizations that don't use the data element of the filter are grayed out.
If you add filters to visualizations:
• If you specify a filter on an individual visualization, that filter applies to that visualization after the filters on the filter bar are applied.
• If you select the Use as Filter option and select the data points that are used as a filter in the visualization, then filters are generated in the other visualizations of joined data sets and matched data elements.

You can use the Limit Values By options to remove or limit how the filters in the filter
bar restrict each other.

Synchronize Visualizations in a Project


You can specify whether or not to synchronize visualizations in a canvas.
You use the Synchronize Visualizations setting to specify how the visualizations on
your canvas interact. By default, visualizations are linked for automatic
synchronization. You can deselect Synchronize Visualizations to unlink your
visualizations and turn automatic synchronization off.
When Synchronize Visualizations is on (selected), then all filters on the filter bar and
actions that create filters (such as Drill) apply to:
• All the visualizations in a project with a single data set.
• All the visualizations of joined data sets with multiple data sets.
If a data element from a data set is specified as a filter but isn't matched with the
joined data sets, then the filter only applies to the visualization of the data set that
it was specified for.


Note:

• When you hover the mouse pointer over a visualization to see the filters
applied to the visualization, any filter that isn't applied to the visualization
is grayed out.
• Any visualization-level filters are applied only to the visualization.
• When Synchronize Visualizations is off (deselected), then analytic
actions such as Drill affect the visualization to which you applied the
action.

About Automatically Applied Filters


By default, the filters in the filter bar and filter drop target are automatically applied.
However, you can turn this behavior off if you want to manually apply the filters.
When Auto-Apply Filters is selected in the filter bar menu, the selections you
make in the filter bar or filter drop target are immediately applied to the visualizations.
When Auto-Apply Filters is off or deselected, the selections you make in the filter bar
or filter drop target aren’t applied to the canvas until you click the Apply button in the
list filter panel.

Create Filters on a Project


You can add filters to limit the data that’s displayed in the visualizations on the
canvases in your project.
If your project contains multiple data sets and some aren’t joined, then there are
restrictions for how you can use filters. Any visualization that doesn't use the data
element of the filter is grayed out.
Instead of or in addition to adding filters to the project or to an individual canvas, you
can add filters to an individual visualization.
1. Click + Add Filter, and select a data element. Alternatively, drag and drop a data
element from the Data Elements pane to the filter bar.
You can't specify data elements of a data set as a filter of other data sets, if the
two data sets aren’t joined.
2. Set the filter values. How you set the values depends upon the data type that
you’re filtering.
• Apply a range filter to filter on columns such as Cost or Quantity Ordered.
• Apply a list filter to filter on columns such as Product Category or Product
Name.
• Apply a date filter to filter on columns such as Ship Date or Order Date.
3. Optionally, click the filter bar menu or right-click, then select Add Expression
Filter.
4. Optionally, click the filter Menu and hover the mouse pointer over the Limit Value
By option to specify how the filter interacts with the other filters in the filter bar.
Note the following:


• By default, the Auto option causes the filter to limit other related filters in the
filter bar.
For example, if you have filters for Product Category and Product Name, and
you set the Product Category filter to Furniture and Office Supplies, then the
options in the Product Name filter value pick list are limited to the product
names of furniture and office supplies. You can select None to turn this limit
functionality off.
• You can specify any individual filter in the filter bar that you don't want to limit.
For example, if you have filters for Product Category, Product Sub Category,
and Product Name, and in the Limit Value By option for the Product Category
filter you click Product Sub Category, then the Product Sub Category filter
shows all values and not a list of values limited by what you select for Product
Category. However, the values shown for Product Name are limited to what you
select for Product Category.
5. Optionally, click the filter bar menu or right-click and select Auto-Apply Filters,
then click Off to turn off the automatic apply. When you turn off the automatic
apply, then each filter’s selection displays an Apply button that you must click to
apply the filter to the visualizations on the canvas.
6. Click the filter bar menu or right-click and select Pin to All Canvases for a filter to
apply the filter to all canvases in the project.

Note:
You can also go to the filter bar and perform the following steps:
• Select a filter and right-click, then select Delete to remove it from the
project.
• Right-click and select Clear All Filter Selections to clear the selection
list of all the filters in the filter bar.
• Right-click and select Remove All Filters to remove all the filters in the
filter bar.

Create Filters on a Visualization


You can add filters to limit the data that’s displayed in a specific visualization on the
canvas.
If a project contains multiple data sets and some aren't joined, then there are
restrictions for how you can use filters. Any visualization that doesn't use the data
element of the filter is grayed out.
Visualization filters can be automatically created by selecting Drill on the
visualization’s Menu when the Synchronize Visualizations option is turned off on the
project toolbar Menu.
Instead of or in addition to adding filters to an individual visualization, you can add
filters to the project or to an individual canvas. Any filters included on the canvas are
applied before the filters that you add to an individual visualization.
1. Confirm that you’re working in the Visualize canvas.


2. Select the visualization that you want to add a filter to.


3. Drag and drop one or more data elements from the Data Elements pane to the Filter
drop target in the Visualization Grammar Pane.

Note:
To use data elements of a data set as a filter in the visualization of
another data set, you have to join both data sets before using the data
elements as filters.

4. Set the filter values. How you set the values depends upon the data type that
you’re filtering.
• To set filters on columns such as Cost or Quantity Ordered, see Apply Range
Filters.
• To set filters on columns such as Product Category or Product Name, see
Apply List Filters.
• To set filters on columns such as Ship Date or Order Date, see Apply Date
Filters.
5. (Optional) Click the filter bar menu or right-click and click Auto-Apply Filters, then
select Off to turn off automatic apply for all filters on the canvas and within the
visualization. When you turn off automatic apply, then each filter’s selection
displays an Apply button that you must click to apply the filter to the visualization.

Move Filter Panels


You can move filter panels from the filter bar to a different spot on the canvas.
When you expand filters in the filter bar, it can block your view of the visualization that
you’re filtering. Moving the panels makes it easy to specify filter values without having
to collapse and reopen the filter selector.
• To detach a filter panel from the filter bar, place the cursor at the top of the filter
panel until it changes to a scissors icon, then click it to detach the panel and drag
it to another location on the canvas.


• To reattach the panel to the filter bar, click the reattach panel icon.

Apply Range Filters


You use Range filters for data elements that are number data types and that have an
aggregation rule set to something other than none.
Range filters are applied to data elements that are measures. Range filters limit data
to a range of contiguous values, such as revenue of $100,000 to $500,000. Or you can
create a range filter that excludes (as opposed to includes) a contiguous range of
values. Such exclusive filters limit data to two noncontiguous ranges (for example,
revenue less than $100,000 or greater than $500,000).
1. In the Visualize canvas, go to the filter bar and click the filter to view the Range
list.
2. In the Range list, click By to view the selected list of Attributes. All members that
are being filtered have check marks next to their names.
You can also optionally perform any of the following steps:
• In the selected list click a member to remove it from the list of selections. The
check mark disappears.
• In the selected list, for any non-selected member that you want to add to the
list of selections, click the member. A check mark appears next to the selected
member.
• Click the Plus (+) icon to add a member to the selected list. The newly added
member is marked as checked.
• Set the range that you want to filter on by moving the sliders in the histogram.
The default range is from minimum to maximum, but as you move the sliders,
the Start field and End field adjust to the range you set.
3. Click outside of the filter to close the filter panel.


Apply Top Bottom N Filters


You use the Top Bottom N filter to filter a measure to a subset of its largest (or
smallest) values.
You apply top or bottom filters to data elements that are measures. When you add a
measure to a filter drop target of a visualization, the default filter type is Range, but
you can change the filter type to Top Bottom N from the Filter Type menu option.
You can apply a Top Bottom N filter to either a project canvas (it applies to all
visualizations in the project), or to a selected visualization. All of the following steps
are optional:
1. To apply the Top Bottom N filter to the canvas and all visualizations in the project:
a. In the Visualize canvas, select a filter in the filter bar.
b. Click the filter menu or right-click and select Filter Type, then click Top
Bottom N. You can only convert a range filter to a Top Bottom N filter.
2. To apply the Top Bottom N filter to a specific visualization in the project and
update the filtered data on the canvas:
a. In the Visualize canvas, select the visualization to which you want to apply the
filter.
b. In the Visualization Grammar Pane go to the Filters drop target.
c. Select a measure, then right-click and select Filter Type, then click Top
Bottom N.
3. To change which filter method is applied, Top or Bottom, in the Top Bottom N list,
click the Method value.
4. To display a particular number of top or bottom rows, in the Top Bottom N list,
click in the Count field and enter the number.
5. To change which columns to group by, in the Top Bottom N list, click in the By
field, or to display the available columns that you can select from, click Plus (+).
6. To deselect any member from the list of attributes, in the Attributes list, click the
member that you want to deselect.
7. To add a member to the list of attributes, in the Attributes list, click any
nonselected member.
8. Click outside of the filter to close the filter panel.

Apply List Filters


List filters are applied to text and non-aggregatable numbers. After you add a list filter,
you can change the selected members that it includes and excludes.
1. In the Visualize canvas, go to the filter bar and select a filter to view the Selections
list.
2. Locate the member you want to include and click it to add it to the Selections list.
Alternatively, use the Search field to find a member you want to add to the filter.
Use the wildcards * and ? for searching.
3. Optionally, you can also perform the following steps:

3-8
Chapter 3
Apply Date Filters

• In the Selections list click a member to remove it from the list of selections.
• In the Selections list, you can click the eye icon next to a member to cause it
to be filtered out but not removed from the selections list.
• In the Selections list, you can click the actions icon at the top, and select
Exclude Selections to exclude the members in the Selections list.
• Click Add All or Remove All at the bottom of the filter panel to add or remove
all members to or from the Selections list at one time.
4. Click outside of the filter to close the filter panel.

Apply Date Filters


Date filters use calendar controls to adjust time or date selections. You can select a
single contiguous range of dates, or use a date range filter to exclude dates within the
specified range.
1. In the Visualize canvas, go to the filter bar and click the filter to view the Calendar
Date list.
2. In Start, select the date that begins the range that you want to filter.
Use the Previous arrow and Next arrow to move backward or forward in time, or
use the drop-down lists to change the month or year.
3. In End, select the date that ends the range that you want to filter.
4. Optionally, to start over and select different dates, right-click the filter in the filter
bar and select Clear Filter Selections.
5. Click outside of the filter to close the filter panel.

Build Expression Filters


Using expression filters, you can define more complex filters using SQL expressions.
Expression filters can reference zero or more data elements.
For example, you can create the expression filter "Sample Sales"."Base
Facts"."Revenue" < "Sample Sales"."Base Facts"."Target Revenue". After applying the
filter, you see the items that didn’t achieve their target revenue.
You build expressions using the Expression Builder. You can drag and drop data
elements to the Expression Builder and then choose operators to apply. Expressions
are validated for you before you apply them. See About Composing Expressions.
1. In the Visualize canvas, mouse-over the filter bar at the top of the pane and click
Menu, then select Add Expression Filter.
2. In the Expression Filter panel, compose an expression.
3. In the Label field, give the expression a name.
4. Click Validate to check if the syntax is correct.
5. When the expression filter is valid, then click Apply. The expression is applied to
the visualizations on the canvas.

4 Use Other Functions to Visualize Data
This topic describes other functions that you can use to visualize your data.

Topics:
• Typical Workflow to Prepare, Connect and Search Artifacts
• Build Stories
• Identify Content with Thumbnails
• Manage Custom Plug-ins
• About Composing Expressions
• Use Data Actions to Connect to Canvases and External URLs
• Search Data, Projects, and Visualizations
• Save Your Changes Automatically

Typical Workflow to Prepare, Connect and Search Artifacts


Here are the common tasks for using available functions to prepare, connect, and
search artifacts.

Build stories: Capture the insights that you discover in your visualizations into a story that you can revisit later, include in a presentation, or share with team members. See Build Stories.
Manage custom plug-ins: Upload, download, search for, and delete custom plug-ins that you can use to customize various objects such as visualization types or projects. See Manage Custom Plug-ins.
Compose expressions: Compose expressions to use in filters or in calculations. See About Composing Expressions.
Create and apply data actions: Create data action links to pass context values from canvases to URLs or project filters. See Use Data Actions to Connect to Canvases and External URLs.
Search artifacts: Search for projects, visualizations, and columns. Use BI Ask to quickly build visualizations. See Search Data, Projects, and Visualizations.


Build Stories
This topic covers how you capture insights and group them into stories.

Topics:
• Capture Insights
• Create Stories
• View Streamlined Content

Capture Insights
As you explore data in visualizations, you can capture memorable information in one
or more insights, which build your story. For example, you might notice before and
after trends in your data that you’d like to add to a story to present to colleagues.
Using insights, you can take a snapshot of any information that you see in a
visualization and keep track of any moments of sudden realization while you work with
the data. You can share insights in the form of a story, but you don't have to. Your
insights can remain a list of personal moments of realization that you can go back to,
and perhaps explore more. You can combine multiple insights in a story. You can also
link insights to visualizations using the Interaction property.

Note:
Insights don't take a snapshot of data. They take a snapshot of the project
definition at a certain point in time.

1. Display the Narrate pane, and build your story:


• Use the Search option in the Canvases pane to locate visualizations to
include in your story. Right-click each canvas to include and click Add To
Story.
• Click Add Note to annotate your canvases with insights, such as notes or web
links.
• Use the tabs on the properties pane to further refine your story. For example,
click Presentation to change the presentation style from compact to film strip.
• To synchronize your story canvases with your visualizations, display the
Visualize pane, click Canvas Settings, then select Synchronize
Visualizations. Alternatively, click Canvas Properties and select this option.
2. Continue adding insights to build a story about your data exploration.
The story builds in the Narrate canvas. Each insight has a tab.

Create Stories
After you begin creating insights within a story, you can cultivate the look and feel of
that story. For example, you can rearrange insights, include another insight, or hide an
insight title. Each project can have one story comprising multiple pages (canvases).


1. In your project, click Narrate.


2. Create the story in the following ways:
• Add one or more canvases to the story and select a canvas to annotate.
• To annotate a story with insights, click Add Note. You can add text and web
links.
• To change the default configuration settings for a story, use the properties
pane on the Canvases panel.
• To edit an insight, click or hover the mouse pointer over the insight, click the
menu icon, and select from the editing options.
• To include or exclude an insight, right-click the insight and use the Display or
Hide options. To display insights, on the canvas property pane, click Notes,
then Show All Notes.
• To show or hide insight titles or descriptions, on the canvas property pane,
click General, and use the Hide Page and Description options.
• To rearrange insights, drag and drop them into position on the same canvas.
• To limit the data displayed in a story, on the canvas property pane, click
Filters. If no filters are displayed, go back to the Visualize pane and add one
or more filters first, then click Save.
• To update filters for a story, on the canvas property pane, click Filters, and
use the options to hide, reset, or selectively display filters.
• To rename a story, click the story title and update.
• To add the same canvas multiple times to a story, right-click a canvas and
click Add to Story. You can also right-click the canvases at the bottom of the
Narrate pane and click Duplicate.
• To display the story at any time click Present.
• To close present mode and return to the Narrate pane click X.
• To toggle insights use the Show Notes option.

Note:
You can modify the content on a canvas for an insight. For example, you
can add a trend line, change the chart type, or add a text visualization.
After changing an insight, you'll notice that its corresponding wedge (in
the Insight pane) or dot (in the Story Navigator) changes from solid blue
to hollow. When you select Update to apply the changes to the insight,
you'll see the wedge or dot return to solid blue.

View Streamlined Content


You can use the presentation mode to view a project and its visualizations without the
visual clutter of the canvas toolbar and authoring options.
1. On the Narrate toolbar, click Present.
The project is displayed in presentation mode.
2. To return to the interaction mode, click X.


Identify Content with Thumbnails


You can quickly visually identify content on the Home page and within projects by
looking at thumbnail representations.
Project thumbnails on the Home page show a miniature visualization of what projects
look like when opened. Project thumbnails are regenerated and refreshed when
projects are saved. If a project uses a Subject Area data set, then the project is
represented with a generic icon instead of a thumbnail.

Manage Custom Plug-ins


You can upload, download, search for, and delete custom plug-ins in Data
Visualization. Plug-ins are custom visualization types that you create externally and
import into Data Visualization.
For example, you can upload a custom plug-in that provides a visualization type that
you can use in projects.
Tutorial
1. Navigate to Console and click Extensions. You use this page to upload, search
for, delete, or download a custom plug-in.
2. To upload a custom plug-in, click Upload Extension and perform one of the
following actions.
• Browse to the required plug-in file in your file system, and click Open to select
the plug-in.
• Drag the required plug-in file to the Upload Custom Plugin object.

Note:
If the uploaded custom plug-in file name is the same as an existing
custom plug-in, then the uploaded file replaces the existing one and is
displayed in visualizations.

3. Perform any of the following tasks.


• If the plug-in provides a visualization type, you can select that type from the list
of available types when you create or switch the type of a visualization.
• To search for a custom plug-in, enter your search criteria in the Search field
and click Return to display search results.
• To delete a custom plug-in, click Options on the custom plug-in and select
Delete, and click Yes to delete the custom plug-in.


Note:
If you delete a custom visualization type that’s used in a project, then
that project displays an error message in place of the visualization.
Either click Delete to remove the visualization, or upload the same
custom plug-in so that the visualization renders correctly.

• To download a custom plug-in from Data Visualization to your local file
system, click Options on the custom plug-in and select Download.

About Composing Expressions


You can use the Expression window to compose expressions to use in expression
filters or in calculations. Expressions that you create for expression filters must be
Boolean (that is, they must evaluate to true or false).
While you compose expressions for both expression filters and calculations, the end
result is different. A calculation becomes a new data element that you can add to your
visualization. An expression filter, on the other hand, appears only in the filter bar and
can’t be added as a data element to a visualization. You can create an expression
filter from a calculation, but you can’t create a calculation from an expression filter.
See Create Calculated Data Elements and Build Expression Filters.
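For example, reusing the example column names from the Build Expression Filters topic, a calculation and an expression filter might look like the following sketch (the calculation name is only an illustration):

   Calculation (becomes a new data element, for example Revenue Shortfall):
   "Sample Sales"."Base Facts"."Target Revenue" - "Sample Sales"."Base Facts"."Revenue"

   Expression filter (must be Boolean):
   "Sample Sales"."Base Facts"."Target Revenue" - "Sample Sales"."Base Facts"."Revenue" > 0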
You can compose an expression in various ways:
• Directly enter text and functions in the Expression window.
• Add data elements from the Data Elements pane (drag and drop, or double-click).
• Add functions from the function panel (drag and drop, or double-click).
See Expression Editor Reference.

Use Data Actions to Connect to Canvases and External URLs

A Data Action link can pass context values from Data Visualization as parameters to
external URLs or filters to other projects.
When a link navigates to a project, the data context is displayed in the form of canvas
scope filters in the filter bar. The link's data context may include attributes associated
with the selections or the cell from which the link was initiated.

Topics:
• Create Data Actions to Connect Visualization Canvases
• Create Data Actions to Connect to External URLs from Visualization Canvases
• Apply Data Actions to Visualization Canvases


Create Data Actions to Connect Visualization Canvases


You can create data actions to navigate to a canvas in the current project or to a
canvas in another project.
You can also use data actions to transfer context-related information (for example, an
order number) where the link displays details about an order number in another
visualization or project.
1. Create or open a project and confirm that you’re working in the Visualize canvas.
2. Click Menu on the project toolbar and click Project Properties, then select the
Data Actions tab.
3. Click Add Action and enter a name for the new navigation link.
• You can use only letters and numbers in the navigation link’s name.
• You can add multiple navigation links.
4. Click the Type field and select Canvas Navigation.
5. Click the Anchor To field and select the columns from the current visualization to
associate with this data action. Don't select measure columns or hidden columns.
If you don't specify a value for the Anchor To field, then the data action applies to
all data elements in the visualizations.
6. Click the Project field and select the project you want to use for the anchor:
• Use This Project - Select if you want to navigate to a canvas in the active
project.
Columns that you select must be in the current visualization.
• Select from Catalog - Select to browse for and select the project that you
want to use.
7. Click the Canvas Navigation field and select the canvas that you want to use.
8. Click the Pass Values field and select which values you want the data action to
pass.
For example, if in the Anchor To field, you specified order number column, then in
the Pass Values field, select Anchor Data to pass the specified column values.
• All - Dynamically determines the intersection of the cell that you click and
passes those values to the target.
• Anchor Data - Ensures that the data action is displayed at runtime, but only if
the required columns specified in the Anchor To field are available in the view
context.
• None - Opens the page (URL or canvas) but doesn't pass any data.
• Custom - Enables you to specify a custom set of columns to pass.
9. Click OK to save.


Create Data Actions to Connect to External URLs from Visualization Canvases

You can use data actions to navigate to an external URL from a canvas so that when
you select an attribute such as the supplier ID, it displays a specific external website.
1. Create or open a project and confirm that you’re working in the Visualize canvas.
2. Click Menu on the project toolbar and click Project Properties, then select the
Data Actions tab.
3. Click Add Action and enter a name for the new navigation link.
• You can use only letters and numbers in the navigation link’s name.
• You can add multiple navigation links.
4. Click the Type field and select URL Navigation.
5. Click the Anchor To field and select the columns that you want the URL to apply
to. Don't select measure columns or hidden columns. If you don't specify a value
for the Anchor To field, then the data action applies to all data elements in the
visualizations.
6. Enter a URL address that starts with http: and optionally include notation and
parameters.
The URL takes the form http://www.address.com?<key>={<value>}, for example,
www.oracle.com?lob={p3 LOB}&org={D3 Organization}&p1=3.14. Data Visualization
displays a list of available matching column names to choose from as you type (for
example, P3 LOB, P3k LOB Key). The column names that you select here are
replaced with values when you pass the URL. So you could select a year, a
person, and a department.
If there are multiple values for a specific data element (for example, p3 LOB), that
parameter appears multiple times in the URL, for example,
www.oracle.com?lob=value1&lob=value2&org=orgvalue&p1=3.14.

7. Click OK to save.
8. In the Canvas, click a cell, or use Ctrl-click to select multiple cells.
9. Right-click and select Navigate to <URL name> to display the result.
Selecting the cells determines the parameters to pass.

Apply Data Actions to Visualization Canvases


You can navigate between canvases and to URLs with links created in data actions.

1. Create or open a project. Confirm that you’re working in the Visualize canvas.
2. On the canvas that contains a Data Action link leading to another canvas or URL,
perform the following steps:
a. Right-click a data element, or select multiple elements (using Ctrl-click).
b. Select Data Actions from the menu.
c. Complete the Project Properties dialog.


The name of the data actions that apply in the current view context are displayed
in the context menu.
All the values defined in the Anchor To field must be available in the view context
in order for a data action to be displayed in the context menu.
Note the following rules on matching data elements passed as values with data
elements on the target canvas:
• If the same data element is matched in the target project's canvas, and if the
target canvas doesn't have an existing canvas filter for the data element, a
new canvas filter is added. If there is an existing canvas filter, it’s replaced by
the value from the source project's canvas.
• If the expected data set is unavailable but a different data set is available, the
match is made by using the column name and data type in the other data set,
and the filter is added to that data set.
• If there are multiple column matches by name and data type, then the filter is
added to all those columns in the target project or canvas.
The data action navigates to the mapped target canvas or URL and filters the
displayed data based on the values specified in the Data Actions dialog.

Note:
Pass Values context consists of data elements used in the visualization
from which the data action is invoked. The Pass Values context doesn't
include data elements in the project, canvas, or visualization level filters.

Search Data, Projects, and Visualizations


This topic describes how you can search for objects, projects, and columns. This topic
also describes how you can use BI Ask to create spontaneous visualizations.

Topics:
• Index Data for Search and BI Ask
• Visualize Data with BI Ask
• Search for Saved Projects and Visualizations
• Search Tips

Index Data for Search and BI Ask


When you search or use BI Ask, the search results are determined by what
information has been indexed.
Every two minutes, the system runs a process to index your saved objects, project
content, and data set column information. The indexing process also updates the
index file to reflect any objects, projects, or data sets that you deleted from your
system so that these items are no longer displayed in your search results.
For all data sets, the column metadata is indexed. For example, column name, the
data type used in the column, aggregation type, and so on. Column data is indexed for
Excel spreadsheet, CSV, and TXT data set columns with 1,000 or fewer distinct rows.


Note that no database column data is indexed and therefore that data isn’t available in
your search results.

Visualize Data with BI Ask


Use BI Ask to enter column names into the search field, select them, and quickly see a
visualization containing those columns. You can use this functionality to perform
impromptu visualizations without having to first build a project.

1. On the Home Page, click the What are you interested in field.
2. Enter your criteria. As you enter the information, the application returns search
results in a drop-down list. If you select an item from this drop-down list, then your
visualized data is displayed.
• What you select determines the data set for the visualization, and all other
criteria that you enter are limited to columns or values in that data set.
The name of the data set you’re choosing from is displayed on the right side of
the What are you interested in field.

• You can search for projects and visualizations or use BI Ask. When you enter
your initial search criteria, the drop-down list contains BI Ask results, which are
displayed in the Visualize data using section of the drop-down list. Your initial
search criteria also builds a search string to find projects and visualizations.
That search string is displayed in the Search results containing section of
the drop-down list and is flagged with the magnifying glass icon. See Search
Tips.


• Excel, CSV, and TXT data set columns with 1,000 or fewer distinct rows are
indexed and available as search results. Database data set values aren’t
indexed or available as search results.
3. Enter additional criteria in the search field, select the item that you want to include,
and the application builds your visualization. You can also optionally perform the
following steps:
• Enter the name of the visualization that you want your results to be displayed
in. For example, enter scatter to show your data in a scatter plot chart, or enter
pie to show your data in a pie chart.
• Click Change Visualization Type to apply a different visualization to your
data.
• Click Open in Data Visualization to further modify and save the visualization.
4. To clear the search criteria, click the X icon next to your search tags.

Search for Saved Projects and Visualizations


On the Home page you can quickly and easily search for saved objects.
Folders and thumbnails for objects that you’ve recently worked with are displayed on
the Home page. Use the search field to locate other content.
Note that in the search field you can also use BI Ask to create spontaneous
visualizations.
1. On the Home Page, click the What are you interested in field.
2. Enter your search criteria by typing either keywords or the full name of an object
such as a folder or project. As you enter your criteria, the system builds the search
string in the drop-down list.
The drop-down list contains results that match saved objects, but also can contain
BI Ask search results. To see object matches (for example, folders or projects),
click the row with the magnifying glass icon (located at the top of the drop-down
list in the Search results containing section). Note that any BI Ask matches are
displayed in the Visualize data using section of the drop-down list and are
flagged with different icons.

3. In the Search results containing section of the drop-down list, click the search
term that you want to use.
The objects that match your search are displayed on the Home page.
4. To clear the search criteria, click the X icon next to your search tags.


Search Tips
These tips explain how the search functionality works and how to enter valid search
criteria.

Wildcard Searches
You can use the asterisk (*) as a wildcard when searching. For example, you can
specify *forecast to find all items that contain the word “forecast”. However, using two
wildcards to further limit a search returns no results (for example, *forecast*).

Meaningful Keywords
When you search, use meaningful keywords. Keywords such as by, the, and in return
no results. For example, entering by in the search field to locate the projects
“Forecasted Monthly Sales by Product Category” and “Forecasted Monthly Sales by
Product Name” returns no results.

Items Containing Commas


If you use a comma in your search criteria, the search returns no results. For example,
if you want to search for quarterly sales equal to $665,399 and you enter 665,399 in
the search field, then no results are returned. However, entering 665399 does return
results.

Date Search
If you want to search for a date attribute, search using the year-month-date
format (for example, 2016-08-06). Searching with the month/date/year format (for
example, 8/6/2016) doesn’t produce any direct matches. Instead, your search results
contain entries containing 8 and entries containing 2016.

Searching in Non-English Locales


When you enter criteria in the search field, what displays in the drop-down list of
suggestions can differ depending upon your locale setting. For example, if you’re using
an English locale and enter sales, then the drop-down list of suggestions contains
items named sale and sales. However, if you’re using a non-English locale such as
Korean and type sales, then the drop-down list of suggestions contains only items that
are named sales and items such as sale aren’t included in the drop-down list of
suggestions.
For non-English locales, Oracle suggests that when needed, you search using stem
words rather than full words. For example, searching for sale rather than sales returns
items containing sale and sales. Or search for custom to see a results list that contains
custom, customer, and customers.

Frequency of Indexing
If you create or save a project or create a data set and then immediately try to search
for the saved project, project content, or column information, then it’s likely that your
search results won’t contain matches for these items. If this happens, then wait a few
minutes for the indexing process to run, and retry your search. The system
automatically runs the indexing process every two minutes.


Searching for Data Values


RPD, Excel, CSV, and TXT data set columns with 1,000 or fewer distinct rows are
indexed and are returned in your search results. Note that database data set values
aren’t indexed and won’t be included in your search results.

Save Your Changes Automatically


You can use the Auto Save option to automatically save your updates to a
visualization project without repeatedly clicking Save.

1. Create or open a project. Confirm that you’re working in the Visualize canvas.
2. From the Save menu, select Auto Save.
3. In the Save Project dialog, enter the Name and Description to identify your
project.
4. Select the folder where you want to save your project.
5. Click Save.
If no error occurs, a success message is displayed stating that your project is
saved and that the Save option is disabled. Any project updates are saved in
real time.

Note:

• If you’ve already saved your project in a specific location, the Save Project
dialog isn’t displayed after you click Auto Save. Updates are saved in real time.
• Suppose that two users are updating the same project and Auto Save is enabled.
The Auto Save option is automatically disabled when different types of updates
are made to the project. A message is displayed that states that another user
has updated the project.

5
Add Data Sources to Analyze and Explore Data
You can add your own data to visualizations for analysis and exploration.

Topics:
• Typical Workflow to Add Data Sources
• About Data Sources
• Connect to Database Data Sources
• Connect to Oracle Applications Data Sources
• Create Connections to Dropbox
• Create Connections to Google Drive or Google Analytics
• Create Generic JDBC Connections
• Create Generic ODBC Connections
• Create Connections to Oracle Autonomous Data Warehouse Cloud
• Create Connections to Oracle Big Data Cloud
• Create Connections to Oracle Essbase
• Create Connections to Oracle Talent Acquisition Cloud
• Add a Spreadsheet as a Data Source

Typical Workflow to Add Data Sources


Here are the common tasks for adding data from data sources.

• Add a connection: Create a connection if the data source that you want to use
is either Oracle Applications or a database. See Create Oracle Applications
Connections and Create Database Connections.
• Create a data source: Upload data from spreadsheets, or retrieve data from
Oracle Applications and databases. Creating a data source from Oracle
Applications or a database requires you to create a new connection or use an
existing connection. See Add a Spreadsheet as a Data Source, Connect to Oracle
Applications Data Sources, and Create Data Sets from Databases.


About Data Sources


A data source is any tabular structure. You see data source values after you
load a file or send a query to a service that returns results (for example, another
Oracle Business Intelligence system or a database).
A data source can contain any of the following:
• Match columns - These contain values that are found in the match column of
another source, which relates this source to the other (for example, Customer ID
or Product ID).
• Attribute columns - These contain text, dates, or numbers that are required
individually and aren’t aggregated (for example, Year, Category, Country, Type, or
Name).
• Measure columns - These contain values that should be aggregated (for
example, Revenue or Miles driven).
See Supported Data Sources.
You can analyze a data source on its own, or you can analyze two or more data
sources together, depending on what the data source contains.

Working with Matching


If you use multiple sources together, then at least one match column must exist in
each source. The requirements for matching are:
• The sources contain common values (for example, Customer ID or Product ID).
• The match must be of the same data type (for example, number with number, date
with date, or text with text).

Connect to Database Data Sources


You can create, edit, and delete database connections and use the connections to
create data sources from databases.

Topics:
• Create Database Connections
• Create Data Sets from Databases
• Edit Database Connections
• Delete Database Connections

Create Database Connections


You can create connections to databases and use those connections to source data in
projects.
1. On the Home page, click Create, then click Connection to display the Create
Connection dialog.


2. In the Create Connection dialog, click the icon for the connection type that you
want to create a connection for (for example Oracle Database).
3. Enter a name for the connection, and then enter the required connection
information, such as host, port, username, password, and service name.

Note:

• If you’re creating an SSL connection to an Oracle Database, in the Client
Credentials field, click Select to browse for the cwallet.sso file. Ask your
administrator for the location of the cwallet.sso file.

4. (Optional) When you connect to some database types (for example, Oracle Talent
Management Cloud), you might have to specify the following authentication
options on the Create Connection and Inspect dialogs:
• Authentication
– Select Always use these credentials, so that the login name and
password you provide for the connection are always used and users aren’t
prompted to log in.
– Select Require users to enter their own credentials when you want to
prompt users to enter their own user name and password for the data
source. Users required to log in see only the data that they have the
permissions, privileges, and role assignments to see.
5. Click Save.
You can now begin creating data sets from the connection.

Create Data Sets from Databases


After you create database connections, you can use those connections to create data
sets.
You must create the database connection before you can create a data set for it.
1. On the Home page click Create and click Data Set to open the Create Data Set
dialog. In the Create Data Set dialog, select Create Connection and use the
Create Connection dialog to create the connection for your data set.
2. In the Data Set editor, first browse or search for and double-click a schema, and
then choose the table that you want to use in the data set. When you double-click
to select a table, a list of its columns is displayed.
You can use breadcrumbs to quickly move back to the table or schema list.
3. In the column list, browse or search for the columns you want to include in the
data set. You can use Shift-click or Ctrl-click to select multiple columns. Click Add
Selected to add the columns you selected, or click Add All to include all of the
table's columns in the data source.
Alternatively, you can select the Enter SQL option to view or modify the data
source’s SQL statement or to write a SQL statement of your own (see the example
after these steps).


4. You can also optionally perform the following steps:


• After you’ve selected columns, you can go to the Step editor at the top of the
Data Set editor and click the Filter step to add filters to limit the data in the
data set. After you’ve added filters, click Get Preview Data to see how the
filters limit the data.
• Go to the Step editor at the top of the Data Set editor and click the last step in
the Step editor to specify a description for the data source.
• Go to the Step editor at the top of the Data Set editor and click the last step in
the Step editor and go to the Refresh field to specify how you want to refresh
the data in the data source. Note the following information:
– Select Live if you want the data source to use data from the database
directly rather than copying the data into the cache. Typically because
database tables are large, they shouldn’t be copied to Data Visualization's
cache.
– If your table is small, then select Auto and the data is copied into Data
Visualization’s cache if possible. If you select Auto, you must refresh the
data when it’s stale.
5. Click Add. The View Data Source page is displayed.
6. In the View Data Source page you can optionally view the column properties and
specify their formatting. The column type determines the available formatting
options.
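For reference, here's the kind of statement you might type if you choose the Enter
SQL option described in step 3. This is only an illustrative sketch; the table and
column names (SALES, PRODUCT_NAME, REVENUE, ORDER_YEAR) are hypothetical and must be
replaced with objects that actually exist in your database:

SELECT PRODUCT_NAME,
       SUM(REVENUE) AS TOTAL_REVENUE
FROM SALES
WHERE ORDER_YEAR = 2018
GROUP BY PRODUCT_NAME

Keeping the query limited to the columns you need makes the resulting data set
easier to work with and faster to preview.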

Edit Database Connections


You can edit the database connection details.
1. In the Data page, click Connections.
2. Select the connection you want to edit and click Action menu or right-click, then
select Inspect.
3. In the Inspect dialog, edit the connection details.
4. Click Save.
If you’re editing an SSL connection to an Oracle Database and you need to use a new
cwallet.sso file, in the Client Credentials field, click Select to browse for the
cwallet.sso file. Ask your administrator for the location of the cwallet.sso file.

You must provide a unique Connection Name. If a connection with the same name
already exists in your system, an error message is displayed. You can’t see or edit the
current password for your connection. If you need to change the password, you must
create a new connection.


Delete Database Connections


You can delete a database connection. For example, you must delete a database
connection and create a new connection when the database's password has changed.

Note:
If the connection contains any data sets, then you must delete the data sets
before you can delete the connection.

1. Go to the Data page and select Connections.


2. Select the connection that you want to delete and click Actions menu or right-
click, then click Delete.
3. Click Yes.

Connect to Oracle Applications Data Sources


You can create Oracle Applications data sources that help you visualize, explore, and
understand the data in your Oracle Fusion Applications with Oracle Transactional
Business Intelligence and Oracle BI EE subject areas and analyses.

Topics:
• Create Oracle Applications Connections
• Compose Data Sets from Subject Areas
• Compose Data Sets from Analyses
• Edit Oracle Applications Connections
• Delete Oracle Applications Connections

Create Oracle Applications Connections


You can create connections to Oracle Applications and use those connections to
create data sets.
You use the Oracle Applications connection type to create connections to Oracle
Fusion Applications with Oracle Transactional Business Intelligence, and to Oracle BI
EE. After you create a connection, you can access and use subject areas and
analyses as data sets for your projects.
1. On the Data page or Home page click Create, then click Connection to display
the Create Connection dialog.
2. Click the Oracle Applications icon.
3. Enter a name for the new connection, enter the Oracle Fusion Applications with
Oracle Transactional Business Intelligence or Oracle BI EE URL, and then enter
the username and password.
4. In the Authentication field, specify if you want the users to be prompted to log in
to access data from the Oracle Applications data source.


• If you select Always use these credentials, then the login name and
password you provide for the connection are always used and users aren’t
prompted to log in.
• If you select Require users to enter their own credentials, then users are
prompted to enter their user names and passwords to use the data from the
Oracle Applications data source. Users required to log in see only the data
that they have the permissions, privileges, and role assignments to see.
5. Click Save.
You can now create data sets from the connection.

Compose Data Sets from Subject Areas


You use the Oracle Applications connection type to access the Oracle Fusion
Applications with Oracle Transactional Business Intelligence and Oracle BI EE subject
areas that you want to use as data sets.
You must create an Oracle Applications connection before you can create a subject
area data set.

1. On the Home, Data, or Projects page, click Create and click Data Set. Click
Connection and use the Create Connection dialog to specify the details for your
data set.
2. In the Data Set editor, choose Select Columns to view, browse, and search the
available subject areas and their columns that you want to include in your data set. You
can use breadcrumbs to quickly move back through the directories.
3. You can also optionally perform the following steps:
• In the breadcrumbs click the Add/Remove Related Subject Areas option to
include or exclude related subject areas. Subject areas are related when they
use the same underlying business or logical model.
• After you’ve selected columns, go to the Step editor at the top of the Data Set
editor and click the Filter step to add filters to limit the data in the data set.
After you’ve added filters, click Get Preview Data to see how the filters limit
the data.
• Click Enter SQL to display the logical SQL statement of the data source. View
or modify the SQL statement in this field (see the example after these steps).

Note:
If you edit the data source’s logical SQL statement, then the SQL
statement determines the data set and any of the column-based
selection or specifications are disregarded.

• Go to the Step editor at the top of the Data Set editor and click the last step in
the Step editor to specify a description for the data set.
4. Before saving the data set, go to the Name field and confirm its name. Click Add.
The Data Set page is displayed.
5. In the Data Set page you can optionally view the column properties and specify
their formatting. The column type determines the available formatting options.
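For reference, a logical SQL statement against a subject area has the following
general shape. The names used here ("Sample Sales", "Products", "Base Facts", and
the column names) are hypothetical placeholders rather than names from your own
system:

SELECT
  "Sample Sales"."Products"."Product Name",
  "Sample Sales"."Base Facts"."Revenue"
FROM "Sample Sales"

As noted above, once you edit the statement, the SQL alone determines the data set
and any column-based selections are disregarded.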


Compose Data Sets from Analyses


You can use analyses created in Oracle Fusion Applications with Oracle Transactional
Business Intelligence and Oracle BI EE subject areas as data sources.
You must create an Oracle Applications connection before you can create an analysis
data set.

1. On the Home page click Create and click Data Set to open the Create Data Set
dialog. In the Create Data Set dialog, select Create Connection and use the
Create Connection dialog to create the connection for your data set.
2. In the Data Set editor, select the Select an Analysis option to view, browse, and
search the available analyses to use in your data set.
You can use breadcrumbs to quickly move back through the directories.
3. Double-click an analysis to use it for your data set. The analysis’ columns are
displayed in the Data Set editor.
4. You can also optionally perform the following steps:
• Click Enter SQL to display the SQL Statement of the data set. View or modify
the SQL statement in this field.
• Click a column’s gear icon to modify its attributes, like data type and whether
to treat the data as a measure or attribute.
• Go to the Step editor at the top of the Data Set editor and click the last step in
the Step editor to specify a description for the data set.
5. Before saving the data set, go to the Name field and confirm its name. Click Add.
The Data Set page is displayed.
6. In the Data Set page you can optionally view the column properties and specify
their formatting. The column type determines the available formatting options.

Edit Oracle Applications Connections


You can edit Oracle Applications connections. For example, you must edit a
connection if your system administrator changed the Oracle Applications login
credentials.
1. In the Data page, click Connections.
2. Locate the connection that you want to edit and click its Actions menu icon and
select Edit.
3. In the Edit Connection dialog, edit the connection details. Note that you can’t see
or edit the password that you entered when you created the connection. If you
need to change the connection’s password, then you must create a new
connection.
4. Click Save.


Delete Oracle Applications Connections


You can delete an Oracle Applications connection. For example, if your list of
connections contains unused connections, then you can delete them to help you keep
your list organized and easy to navigate.

Note:
If any data sets use the connection, then you must delete the data sets
before you can delete the connection.

1. In the Data page, click Connections.


2. To the right of the connection that you want to delete, click Actions menu, and
then select Delete.
3. Click Yes.

Create Connections to Dropbox


You can create connections to Dropbox and use those connections to source data in
projects.
1. On the Data or Home page, click Create, then click Connection to display the
Create Connection dialog.
2. Browse or search for the Dropbox icon. Click the Dropbox icon.
3. In the Add a New Connection dialog, enter a name for the connection, and then
enter the required connection information:

• Redirect URL: Confirm that the Dropbox application is open and its Settings
area is displaying. Copy the URL in the Redirect URL field and paste it into
the Dropbox application’s OAuth 2 Redirect URIs field, and then click Add.
• Client ID: Go to the Dropbox application, locate the App key field, and copy
the key value. Go to Data Visualization and paste this value into the Client
ID field.
• Client Secret: Go to the Dropbox application, locate the App secret field,
click Show to reveal the secret, and copy the secret value. Go to Data
Visualization and paste this value into the Client Secret field.

4. Click Authorize. When prompted by Dropbox to authorize the connection, click
Allow.
The Create Connection dialog refreshes and displays the name of the Dropbox
account and associated email account.
5. Click Save.
You can now create data sets from the Dropbox connection. See Add a
Spreadsheet from Dropbox or Google Drive.


Create Connections to Google Drive or Google Analytics


You can create connections to Google Drive or Google Analytics and use those
connections to source data in projects.
1. Set up a Data Visualization application in Google, if you haven’t done so already.
a. Sign into your Google account, and go to the Developer’s Console.
b. Create a project, then go to the API Manager Developers area of the Google
APIs site and click Create app to create and save a Data Visualization
application.
c. Enable the application and create credentials for the application by accessing
the Analytics API.
d. Open the page displaying the credential information, paste the redirect
URL provided by Data Visualization, and copy the Client ID and Client secret.
Read the Google documentation for more information about how to perform
these tasks.
2. On the Data or Home page, click Create, then click Connection to display the
Create Connection dialog.
3. Browse or search for the Google Drive or the Google Analytics icon, and then click
the icon.
4. In the Add a New Connection dialog, enter a connection name and enter the
required connection information as described in this table.

• Redirect URL: Confirm that the Google application is open and its Credentials
area is displaying. Copy the URL in the Redirect URL field and paste it into
the Google application’s Authorized redirect URIs field.
• Client ID: Go to the Google application’s Credentials area, locate the Client
ID field, and copy the key value. Go to Data Visualization and paste this
value into the Client ID field.
• Client Secret: Go to the Google application’s credential information, locate
the Client secret field and copy the secret value. Go to Data Visualization
and paste this value into the Client Secret field.

5. Click Authorize.
6. When prompted by Google to authorize the connection, click Allow.
The Create Connection dialog refreshes and displays the name of the Google
account, and its associated email account.
7. Click Save.
You can now create data sets from the Google Drive or Google Analytics
connection. See Add a Spreadsheet from Dropbox or Google Drive.


Create Generic JDBC Connections


You can create generic JDBC connections to databases and use those connections to
source data in projects. For example, to connect to databases that aren’t listed with
the default connection types.
This method enables you to use drivers in a JDBC Jar file to connect to specific
databases.
The JDBC driver version must match the database version. A version mismatch can
lead to spurious errors during the data load process. Even if you’re using an Oracle database,
if the version of the JDBC driver doesn’t match that of the database, then you must
download the compatible version of the JDBC driver from Oracle's website and place it
in the \lib directory.
1. Confirm that you’ve copied the required JDBC driver’s JAR file into Data
Visualization Desktop’s \lib directory.
For example, C:\Program Files\Oracle Data Visualization\lib.
2. On the Data or Home page, click Create, then click Connection to display the
Create Connection dialog.
3. In the Create Connection dialog, locate and click the JDBC icon.
4. In the Create Connection dialog, enter the connection criteria:

• New Connection Name: Any name that uniquely identifies the connection. Avoid
using instance-specific names such as host names, because the same connection
can be configured against different databases in different environments (for
example, development and production).
• URL: The URL for your JDBC data source (see the example after these steps).
See the documentation for the driver, and the JAR file, for details on
specifying the URL.
• Driver Class Name: The name of the driver class. You can find the name in the
JAR file, or from wherever you downloaded the JAR file.
• Username: The database username.
• Password: The database user password.

5. Click Save.
You can now create data sets from the connection. See Create Data Sets from
Databases.
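As an illustration, a generic JDBC connection to an Oracle database through the thin
driver typically uses values of the following form. The host, port, and service name
shown here are placeholders for your own environment:

URL: jdbc:oracle:thin:@//dbhost.example.com:1521/ORCLPDB1
Driver Class Name: oracle.jdbc.OracleDriver

Other databases use their own URL prefixes and driver class names, so confirm the
exact format in the documentation that accompanies the driver's JAR file.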

Note:
If you import a project containing a JDBC connection into a Data
Visualization installation where the JDBC driver isn’t installed, the import still
works. However, the connection doesn’t work when you try to run the project
or Data Flow. You must install the JDBC driver and recreate the JDBC connection
to a suitable data source.


Create Generic ODBC Connections


You can create generic ODBC connections to databases and use those connections to
source data in projects. For example, to connect to databases and database versions
that aren’t listed with the default connection types.
You can only use generic ODBC connections to connect on Windows systems.
1. Confirm that the appropriate database driver is installed on your computer.
You must have the required database driver installed on your computer to create
an ODBC Data Source Name (DSN). If you need to install a database driver, use
installation instructions provided by the organization that supplies the database
driver.
2. Create the new ODBC data source in Windows.

a. In Windows, locate and open the ODBC Data Source Administrator dialog.
b. Click the System DSN tab, and then click Add to display the Create New Data
Source dialog.
Windows uses ODBC DSNs to access the data source and for query
execution.
c. Select the driver appropriate for your data source, and then click Finish.
d. The remaining configuration steps are specific to the data source you want to
configure.
Refer to the documentation for your data source.
3. Create the generic ODBC data source in Data Visualization.
a. On the Data or Home page, click Create, then click Connection to display the
Create Connection dialog.
b. In the Create Connection dialog, locate and click the ODBC icon.
c. In the Create Connection dialog, enter the connection criteria:

• Name: Any name that uniquely identifies the connection.
• DSN: The name of the system DSN that you set up on your computer.
• Username: The database username.
• Password: The password for the database user.

d. Click Save.
You can now create data sets from the connection. See Create Data Sets from
Databases.


Note:
If you import a project containing an ODBC connection into a Data
Visualization installation where the ODBC DSN doesn’t exist, and the ODBC
driver isn’t installed, the import still works. However, the connection doesn’t
work when you try to run the project or Data Flow. You must recreate the ODBC DSN,
install the ODBC driver, and recreate the ODBC connection to a suitable data
source.

Create Connections to Oracle Autonomous Data Warehouse Cloud

You can create connections to Oracle Autonomous Data Warehouse Cloud and use
those connections to source data in projects.

1. Before you create connections to Oracle Autonomous Data Warehouse Cloud, you
must have the client credentials zip file containing the trusted certificates that
enable Data Visualization to connect to Oracle Autonomous Data Warehouse
Cloud.
a. Obtain the Client Credentials file from Oracle Autonomous Data Warehouse
Cloud Console.
See Download Client Credentials (Wallets) in Using Oracle Autonomous Data
Warehouse Cloud.
The credentials wallet file secures communication between Oracle Analytics
Cloud and Oracle Autonomous Data Warehouse Cloud. The wallet file that
you upload must contain SSL certificates, to enable SSL on your Oracle
Database Cloud connections.
b. Unzip the Client Credentials wallet file (for example, wallet_ADWC1.zip) to
get the cwallet.sso file.
2. To create a connection to Oracle Autonomous Data Warehouse Cloud:
a. On the Home page, click Create then click Connection to display the Create
Connection dialog.
b. Click Oracle Autonomous Data Warehouse to display the fields for the
connection.
c. Enter the Connection Name, Description, Host, and Port.
d. In the Client Credentials field, click Select to browse for the cwallet.sso
file.
e. Enter the Username, Password, and Service Name.
f. Click Save to create the connection.
You can now create data sets from the connection.


Create Connections to Oracle Big Data Cloud


You can create a connection to access data in Oracle Big Data Cloud Service
Compute Edition and use those connections to source data in projects.
You create an Oracle Big Data Cloud Service Compute Edition connection using these
steps.
1. Before you can create connections to Oracle Big Data Cloud Service Compute
Edition you must ensure that the connections are secure.
a. Download a certificate and generate a Java Key Store file for the
corresponding Oracle Big Data Cloud Service Compute Edition environment.
See About Accessing Thrift in Using Oracle Big Data Cloud.
b. Place the Java Key Store file in:
%AppData%\Local\DVDesktop\components\OBIS\bdcsce
c. Restart Data Visualization Desktop.
2. In Data Visualization, click Create and then click Connection to display the
Create Connection dialog.
3. Click Oracle Big Data Cloud to display the fields for the connection.
4. Enter a connection name in the New Connection Name field.
5. Enter the remaining details as needed.
6. Click Save to create the connection.
You can now create data sets from the connection.

Create Connections to Oracle Essbase


You can connect to Oracle Analytics Cloud – Essbase and Oracle Essbase 11g data
sources and visualize the data in your projects and reports.
1. Click Create, and then click Connection.
2. Click Oracle Essbase.
3. For Connection Name, enter a name that identifies this connection.
4. For DSN (data source name), enter the agent URL for your Oracle Analytics Cloud
– Essbase data source.
Use the format:
https://fully_qualified_domain_name/essbase/agent

For example: https://my-example.analytics.ocp.oraclecloud.com/essbase/agent.

Note:
With this URL, you can connect without having to open any ports or
perform additional configuration. Oracle Analytics Cloud – Essbase
must have a public IP address and use the default port.


If you want to connect to an Oracle Essbase 11g database, enter the hostname
and agent port number on which Oracle Essbase is running. Use the format:
hostname:port

For example: essbase.example.com:1432


The default port is 1432.

Note:
Your Essbase administrator must open ports in the range 32000-34000
to allow the connection.

5. For Username and Password, enter user credentials with access to the Oracle
Essbase data source.
6. Select the Authentication requirements on this connection.
• Always use these credentials: The username and password you provide for
the connection are always used. Users aren’t prompted to sign in to access
the data available through this connection.
• Require users to enter their own credentials: Users are prompted to enter
their own username and password if they want access to this data source.
Users see only the data that they have the permissions, privileges, and role
assignments to see.
• Use the active user’s credentials: Users aren’t prompted to sign in to access
the data. The same credentials they used to sign in to Oracle Analytics Cloud
are also used to access this data source.
7. Click Save to create the connection.
Now you can create data sets from the data accessible through this connection.

Create Connections to Oracle Talent Acquisition Cloud


You can create connections to Oracle Talent Acquisition Cloud (OTAC) to access data
for analysis and use those connections to source data in projects.
1. Click Create and then click Connection to display the Create Connection dialog.
2. Click Oracle Talent Acquisition Cloud to display the fields for the connection.
3. Enter your connection name in the New Connection Name field.
4. Enter the URL for the Oracle Talent Acquisition Cloud connection.
For example, if the Oracle Talent Acquisition Cloud URL is
https://example.taleo.net, then the connection URL that you must enter is
https://example.taleo.net/smartorg/Bics.jss.
5. Enter your username and password in the corresponding fields.
6. Click an Authentication option:
• If you select Always use these credentials, then the login name and
password you provide for the connection are always used and users aren’t
prompted to log in.


• If you select Require users to enter their own credentials, then users are
prompted to enter their user names and passwords to use the data from the
Oracle Applications data source. Users who are required to log in see only the
data that they have the permissions, privileges, and role assignments to see.
7. Click Save to create the connection.
You can now create data sets from the connection.

Add a Spreadsheet as a Data Source


You can add a spreadsheet as a data source. Data Visualization allows you to browse
for and upload spreadsheets from a variety of places, such as your computer, Google
Drive, and Dropbox.

Topics:
• About Adding a Spreadsheet as a Data Set
• Add a Spreadsheet from Your Computer
• Add a Spreadsheet from Excel with the Smart View Plug-In
• Add a Spreadsheet from Windows Explorer
• Add a Spreadsheet from Dropbox or Google Drive

About Adding a Spreadsheet as a Data Set


Microsoft Excel data source files can have the XLSX extension (signifying a
Microsoft Office Open XML Workbook file) or the XLS extension (signifying the Excel
spreadsheet format). You can also add CSV and TXT files.
Before you can upload a Microsoft Excel file as a data set, you must structure the file
in a data-oriented way, and it mustn’t contain pivoted data. Note the following rules
for Excel tables (an example sheet follows this list):
• Tables must start in Row 1 and Column 1 of the Excel file.
• Tables must have a regular layout with no gaps or inline headings. An example of
an inline heading is one that repeats itself on every page of a printed report.
• Row 1 must contain the table’s column names. For example, Customer Given
Name, Customer Surname, Year, Product Name, Amount Purchased, and so on.
In this example:
– Column 1 has customer given names.
– Column 2 has customer surnames.
– Column 3 has year values.
– Column 4 has product names.
– Column 5 has the amount each customer purchased for the named product.
• The names in Row 1 must be unique. Note that if there are two columns that hold
year values, then you must add a second word to one or both of the column
names to make them unique. For example, if you’ve two columns named Year
Lease, then you can rename the columns to Year Lease Starts and Year Lease
Expires.


• Rows 2 onward are the data for the table, and they can’t contain column names.
• Data in a column must be of the same kind because it’s often processed together.
For example, Amount Purchased must have only numbers (and possibly nulls),
enabling it to be summed or averaged. Given Name and Surname must be text as
they might be concatenated, and you may need to split dates into their months,
quarters, or years.
• Data must be at the same granularity. A table can’t contain both aggregations and
details for those aggregations. For example, suppose you have a sales table at the
granularity of Customer, Product, and Year that contains the sum of Amount
Purchased for each Product by each Customer by Year. In this case, you wouldn’t
include Invoice-level details or a Daily Summary in the same table, as the sum of
Amount Purchased wouldn’t be calculated correctly. If you have to analyze at invoice
level, day level, and month level, then you can do either of the following:
– Have a table of invoice details: Invoice Number, Invoice Date, Customer,
Product, and Amount Purchased. You can roll these up to day or month or
quarter.
– Have multiple tables, one at each granular level (invoice, day, month, quarter,
and year).
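As a minimal illustration of these rules, a correctly structured sheet using the
columns described above might look like the following (the values are invented for
this example):

Customer Given Name | Customer Surname | Year | Product Name | Amount Purchased
Maria               | Lopez            | 2017 | Widget       | 1250
Maria               | Lopez            | 2018 | Widget       | 980
Chen                | Wu               | 2018 | Gadget       | 430

Row 1 holds only unique column names, every row from Row 2 onward holds data at the
same granularity, and each column holds a single kind of value.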

Add a Spreadsheet from Your Computer


You can upload an Excel spreadsheet, CSV file, or TXT file data source located on your
computer to use as a data set.
Before you add a spreadsheet as a data set, confirm you’ve done the following:
• Confirm that you’ve either an Excel spreadsheet in .XLSX or .XLS format, or a
CSV or TXT file, to use as the data source for the data set.
• For an Excel spreadsheet, ensure that it contains no pivoted data.
• Understand how the spreadsheet needs to be structured for successful import.
Follow these steps to add a spreadsheet from your computer and use it as a data
source:
1. On the Home page, click Create, then click Data Set to display the Create Data
Set dialog.
2. Click File and browse to select a suitable (unpivoted) XLSX or XLS file, CSV file,
or TXT file.
3. Click Open to upload and open the selected spreadsheet in Data Visualization.
The Data Set editor is displayed.
4. Make any required changes to Name, Description, or to column attributes.
If you’re uploading a CSV or TXT file, then in the Separated By field, confirm or
change the delimiter. If needed, choose Custom and enter the character you want
to use as the delimiter. In the CSV or TXT file, a custom delimiter must be one
character. The following example uses a pipe (|) as a delimiter (see the sample
rows after these steps): Year|Product|Revenue|Quantity|Target Revenue|Target Quantity.
5. Click Add to save your changes and create the data set.
6. If a data set with the same name already exists:
• Click Yes if you want to overwrite the existing data set.


• Click No if you want to update the data set name.
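For example, a pipe-delimited file matching the header shown in step 4 might contain
rows like these (the values are invented for this example):

Year|Product|Revenue|Quantity|Target Revenue|Target Quantity
2017|Widget|10000|250|12000|300
2018|Widget|11500|270|12000|300

Every row, including the header row, must use the same single-character delimiter.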

Add a Spreadsheet from Excel with the Smart View Plug-In


The Oracle Smart View Plug-In enables you to publish an XLSX or XLS spreadsheet,
a CSV, or TXT file from Excel and use it as a data source.
Upon import into Data Visualization and before you add the spreadsheet as a data
source, you can modify column attributes, like data type and whether to treat the data
as a measure or attribute.
Before you use the Smart View Plug-In, confirm you’ve done the following:
• Installed the latest version of Oracle Smart View for Office. To find the download,
go to Oracle Smart View for Office. After you install Oracle Smart View for Office,
be sure to restart all Microsoft Office applications.
• Confirmed that you’ve either an Excel spreadsheet in .XLSX or .XLS format,
a .CSV file, or a .TXT file to use as the data source.
• Understand how the spreadsheet needs to be structured for successful import.
Follow these steps to publish an Excel spreadsheet, CSV, or TXT file to use it as a
data source:
1. Open your Excel (.XLSX or XLS) spreadsheet, CSV, or TXT file in Microsoft Excel.
If you're opening a .TXT file, follow the import steps (for example, to specify the
delimiter).
2. Click the DV Desktop tab.
3. If you’re publishing a .XLSX or XLS file with pivot data, follow these steps:

a. Select the upper-left numeric data cell, or select an area of data cells that you
want to publish.
Don't include grand totals when you select an area of data cells to publish.
b. Click Unpivot.
c. Click OK.
4. If required, format the new sheet content in Excel (for example, edit column
heading names).
5. In the DV Desktop tab, click Publish to publish the new sheet.
If Data Visualization isn't running, it starts automatically. The spreadsheet data is
displayed in the Data Set editor.
6. In the Data Set editor, make any required changes to Name, Description, or to
column attributes.
If you’re uploading a CSV or TXT file, then in the Separated By field, confirm or
change the delimiter. If needed, select Custom and enter the character you want
to use as the delimiter. In the CSV or TXT file, a custom delimiter must be one
character. The following example uses a pipe (|) as a delimiter:
Year|Product|Revenue|Quantity|Target Revenue|Target Quantity.
7. Click Add. If a data set exists with the same name, you're prompted to confirm that
you want to overwrite it.


Data Visualization creates and displays a new data set that you can update, re-
pivot, or apply changes to as needed.

Note:
If you later delete the Excel file created when un-pivoting, the data set
created in Data Visualization is no longer linked to the Excel file.

Add a Spreadsheet from Windows Explorer


You can add a spreadsheet as a data source from within Windows Explorer.
Before you add a spreadsheet as a data source, do the following:
• Install the latest version of Oracle Smart View for Office. To find the download, go
to Oracle Smart View for Office. After you install Oracle Smart View for Office, be
sure to restart all Microsoft Office applications.
• Confirm that you’ve either an Excel spreadsheet in .XLSX or .XLS format or
a .CSV file to use as the data source.
• For an Excel spreadsheet, ensure that it contains no pivoted data.
• Understand how the spreadsheet needs to be structured for successful import.
1. Open Windows Explorer and navigate to the spreadsheet file (.XLSX, .XLS,
or .CSV) that you want to use as a data source.
2. Right-click the spreadsheet file icon.
3. Click Open with from the menu.
4. Select Oracle DV Desktop.
If Data Visualization isn't running, it starts automatically.
5. If a data set with the same name already exists, the Create or Reload Data Set
window is displayed.
• Click Reload and click OK to overwrite the existing data set with the same
name.
If you choose to reload, you don’t need to follow the final step, and the new
data set overwrites the existing data set.
• Click Create New, and complete one of the following options:
– Enter a new name, and click OK.
– To save using an autogenerated data set name, click OK.
6. In the Data Set editor make any required changes to the Name, Description, or to
column attributes.
If you’re working with a CSV file, then in the Separated By field, confirm or
change the delimiter. If needed, select Custom and enter the character you want
to use as the delimiter.
7. Click Add.
Your spreadsheet is added as a data set and is available in Data Visualization.


Add a Spreadsheet from Dropbox or Google Drive


If you’re storing spreadsheets in Dropbox or Google Drive you can add a spreadsheet
to create a data set.
Before you add a spreadsheet from Dropbox or Google Drive, do the following:
• Confirm that a connection exists. See Create Connections to Dropbox and Create
Connections to Google Drive or Google Analytics.
• Confirm that the spreadsheet you want to use is either an Excel spreadsheet
in .XLSX or .XLS format, a CSV file, or a TXT file.
• For an Excel spreadsheet, ensure that it contains no pivoted data.
• Understand how the spreadsheet needs to be structured for successful import.
Use the following steps to add a spreadsheet.
1. In the Data page, click Create and click Data Set.
The Create Data Set dialog is displayed.
2. In the Create Data Set dialog, click the connection to Dropbox or Google Drive.
The Data Set editor is displayed.
3. In the Data Set editor, search or browse the Dropbox or Google Drive directories
and locate the spreadsheet that you want to use.
You can use breadcrumbs to quickly move back through the directories.
4. Double-click a spreadsheet to select it. When you select a spreadsheet, its
columns and data values are displayed.
5. Click Add to create the data set.

6
Manage Data that You Added
This topic describes the functions available to manage the data that you added from
data sources.

Topics:
• Typical Workflow to Manage Added Data
• Manage Data Sets
• Refresh Data that You Added
• Update Details of Data that You Added
• Delete Data Sets from Data Visualization
• Rename a Data Set
• Duplicate Data Sets
• Blend Data that You Added
• About Changing Data Blending
• View and Edit Object Properties

Typical Workflow to Manage Added Data


Here are the common tasks for managing the data added from data sources.

• Refresh data: Refresh data in the data set when newer data is available, or
refresh the cache for Oracle Applications and databases if the data is stale.
See Refresh Data that You Added.
• Update details of added data: Inspect and update the properties of the added
data. See Update Details of Data that You Added.
• Manage data sets: See the available data sets and examine or update a data
set's properties. See Manage Data Sets.
• Rename a data set: Rename a data set listed on the Data Sets page. See Rename
a Data Set.
• Duplicate data sets: Duplicate a data set listed on the Data Sets page. See
Duplicate Data Sets.
• Blend data: Blend data from one data source with data from another data
source. See Blend Data that You Added and About Changing Data Blending.


Manage Data Sets


You can modify, update, and delete the data that you added from various data sources
to Data Visualization.
You can use the Data Sets page to examine data set properties, change column
properties such as the aggregation type, and delete data sets that you no longer need
to free up space. Data storage quota and space usage information is displayed, so
that you can quickly see how much space is free.

Refresh Data that You Added


After you add data, the data might change, so you must refresh the data from its
source.

Note:
Rather than refreshing a data set, you can replace it by loading a new data
set with the same name as the existing one. However, replacing a data set
can be destructive and is discouraged. Don’t replace a data set unless you
understand the consequences:
• Replacing a data set breaks projects that use the existing data set if the
old column names and data types aren’t all present in the new data set.
• Any data wrangling (modified and new columns added in the data stage)
is lost and projects using the data set are likely to break.

You can refresh data from all source types: databases, files, and Oracle Applications.

Databases
For databases, the SQL statement is rerun and the data is refreshed.

CSV or TXT
To refresh a CSV or TXT file, you must ensure that it contains the same columns that
are already matched with the data source. If the file that you reload is missing some
columns, then you’ll see an error message that your data reload has failed due to one
or more missing columns.
You can refresh a CSV or TXT file that contains new columns, but after refreshing, the
new columns are marked as hidden and don’t display in the Data Elements pane for
existing projects using the data set.

Excel
To refresh a Microsoft Excel file, you must ensure that the newer spreadsheet file
contains a sheet with the same name as the original one. In addition, the sheet must
contain the same columns that are already matched with the data source. If the Excel
file that you reload is missing some columns, then you'll see an error message that
your data reload has failed due to one or more missing columns.


You can refresh an Excel file that contains new columns, but after refreshing, the new
columns are marked as hidden and don’t display in the Data Elements pane for
existing projects using the data set. To resolve this issue, use the Inspect option of
the data set to show the new columns and make them available to existing projects.

Oracle Applications
You can reload data and metadata for Oracle Applications data sources, but if the
Oracle Applications data source uses logical SQL, reloading data only reruns the
statement, and any new columns or refreshed data won’t be pulled into the project.
Any new columns come into projects as hidden so that existing projects that use the
data set aren’t affected. To be able to use the new columns in projects, you must
unhide them in data sets after you refresh. This behavior is the same for file-based
data sources.
To refresh data in a data set:
1. Go to the Data page and select Data Sets.
2. Select the data set you want to refresh and click Actions menu or right-click, then
select Reload Data. To refresh data sets in a project:
• Data Elements panel - Select a data set and right-click, then select Reload
Data.
• Visualize and Prepare canvas - Click Menu and select Refresh Data Sets.
You can also right-click a data set in the data sets tabs bar of the Prepare
canvas and select Reload Data.
3. If you’re reloading a spreadsheet and the file is no longer in the same location or
has been deleted, then the Reload Data dialog prompts you to locate and select a
new file to reload into the data source.
4. Click Select File or drag a file to the Reload Data dialog.
5. A success message is displayed after your data is reloaded successfully.
6. Click OK.
The original data is overwritten with new data, which is displayed in visualizations after
they are refreshed.

Update Details of Data that You Added


After you add data, you can inspect its properties and update details such as the name
and description.
1. Go to the Data page and select Data Sets.
2. Select the data set whose properties you want to update and click the Actions
menu or right-click, then select Inspect.
3. View the properties in the following tabs and modify them as appropriate:
• General
• Data Elements
4. (Optional) Change the Data Access query mode for a database table. The default
is Live because database tables are typically large and shouldn’t be copied to the
cache. If your table is small, then select Automatic Caching and the data is
copied into the cache if possible. If you select Automatic Caching, then you’ll
have to refresh the data when it’s stale.
5. Click Save.

Delete Data Sets from Data Visualization


You can delete data sets from Data Visualization when you need to free up space on
your system.
Deleting a data set permanently removes it and breaks any projects that use the
deleted data set. You can’t delete subject areas that you’ve included in projects.
Deleting data differs from removing a data set from a project.
1. Go to the Data page and select Data Sets.
2. Select the data set you want to delete and click the Actions Menu or right-click,
then select Delete.

Rename a Data Set


Renaming a data set helps you to quickly search and identify it in the data set library.
Even if you change the name of a data set, that change doesn't affect the reference for
the project; that is, the project using the specific data set continues to work.
1. Go to the Data page and select Data Sets.
2. Select a data set and click the Actions menu or right-click, then select Open.
3. Click Edit Data Set on the Results toolbar.
4. Select the last step and go to the Name field, then change the value.
5. Click Save.
If a data set with the same name already exists in your system, an error message is
displayed. Click Yes to overwrite the existing data set (with the data set whose name
you're changing) or cancel the name change.

Duplicate Data Sets


You can duplicate an uploaded data set that is listed in the Data Sets page to help you
further curate (organize and integrate from various sources) data in projects.
For example, suppose an accounts team creates a specific preparation of a data set,
and a marketing team wants to prepare the same data set but in a different way. The
marketing team duplicates the data set for their own purposes.
1. Go to the Data page and select Data Sets.
2. Select a data set that you want to duplicate and click the Actions menu or right-
click, then select Duplicate.


Note:

• The duplication happens immediately.


• The default name of the duplicated data set is <Data set>Copy.
• If the data set name already exists, the new name is set to <Data
set>Copy# in sequential order based on available names.
• You can rename the duplicate data set by editing it in the Inspector
dialog.
• The user that duplicates the data set becomes the owner of the new
data set.
• Any user who can view a data set can also duplicate the data set.
• All properties on the new data set, unless specifically stated, are
reset (as if it’s a new data set). For example, ACL, certified, indexed,
custom-attributes.
• Data preparation changes made on the source are retained in the
new data set.
• Conformance rules on the source are retained in the new data set.

Blend Data that You Added


You might have a project where you added multiple data sets. You can blend data
from one data set with data from another data set.

For example, Data Set A might contain new dimensions that extend the attributes of
Data Set B. Or Data Set B might contain new facts that you can use alongside the
measures that already exist in Data Set A.
When you add more than one data set to a project, the system tries to find matches for
the data that’s added. It automatically matches external dimensions where they share
a common name and have a compatible data type with attributes in the existing data
set.
Data sets that aren't joined are divided by a line in the Data Elements pane of the
project. If the project includes multiple data sets and if any aren't joined, then you'll see
restrictions between data elements and visualizations. For example, you can't use the
data elements of a data set in the filters, visualizations, or calculations of another data
set if they're not joined. If you try to do so, you see an error message. You can match
data elements of data sets that aren't joined in the Data Diagram of a project, or you
can create individual filters, visualizations, or calculations for each data set.
You can specify how you want the system to blend your data.
1. Add one or multiple data sets to your project. Confirm that you're working in the
Prepare canvas.
2. Go to the tabs at the bottom of the Prepare canvas and click Data Diagram.
Alternatively, in the Data Elements pane, right-click and select Data Diagram.


3. Click the number along the line that connects the external source to the newly
loaded source to display the Connect Sources dialog.

Note:
Items that were never explicitly matched together may be matched by
the system. For example, Customer.Person_Name is matched to
Employee.Name, and Employee.Name is matched to
Spouse.Given_Name.

4. In the Connect Sources dialog, make changes as necessary.


a. To change the match for a column, click the name of each column to select a
different column from the data sets.

Note:
If columns have the same name and same data type, then they’re
recognized as a possible match. You can customize this and specify
that one column matches another by explicitly selecting it even if its
name isn’t the same. You can select only those columns with a
matching data type.

b. Click Add Another Match, and then select a column from the data sets to
match.

c. For a measure that you’re uploading for the first time, specify the aggregation
type such as Sum or Average.
d. Click the X to delete a match.


5. Click OK to save the matches.

About Changing Data Blending


Sometimes Data Visualization omits rows of data that you expect to see in a data set.
This happens when your project includes data from two data sets that contain a
mixture of attributes and values, and there are match values in one source that don’t
exist in the other. When this happens, you must specify which data set to use for data
blending.
Suppose we have two data sets (Source A and Source B) with slightly different rows,
as shown in the following image. Note that Source A doesn‘t include IN-8 and Source
B doesn’t include IN-7.

The following results are displayed if you select the All Rows data blending option for
Source A and select the Matching Rows data blending option for Source B. Because
IN-7 doesn’t exist in Source B, the results contain null Rep and null Bonus.

The following results are displayed if you select the Matching Rows data blending
option for Source A and select the All Rows data blending option for Source B.
Because IN-8 doesn’t exist in Source A, the results contain null Date and null
Revenue.
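In join terms, the two results behave like left and right outer joins on the match column. The following sketch (in Python with pandas, using invented sample values for the two sources) is only an illustration of that behavior, not how Data Visualization implements blending:

import pandas as pd

# Invented rows matching the example: Source A lacks IN-8, Source B lacks IN-7.
source_a = pd.DataFrame({
    "Inv#": ["IN-6", "IN-7"],
    "Date": ["2017-10-01", "2017-10-05"],
    "Revenue": [1200, 800],
})
source_b = pd.DataFrame({
    "Inv#": ["IN-6", "IN-8"],
    "Rep": ["Ashley", "Brian"],
    "Bonus": [100, 150],
})

# All Rows for Source A, Matching Rows for Source B: every A row is kept,
# so IN-7 appears with null Rep and null Bonus.
a_all_rows = source_a.merge(source_b, on="Inv#", how="left")

# Matching Rows for Source A, All Rows for Source B: every B row is kept,
# so IN-8 appears with null Date and null Revenue.
b_all_rows = source_a.merge(source_b, on="Inv#", how="right")

print(a_all_rows)
print(b_all_rows)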

The visualization for Source A includes Date as an attribute, and Source B includes
Rep as an attribute, and the match column is Inv#. Under dimensional rules, you can’t


use these attributes with a measure from the opposite table unless you also use the
match column.
There are two settings for blending tables that contain both attributes and measures.
These are set independently in each visualization based on what columns are used in
the visualization. The settings are All Rows and Matching Rows and they describe
which source rows the system uses when returning data to be visualized.
The system automatically assigns data blending according to the following rules:
• If the visualization contains a match column, then the system sets sources with the
match column to All Rows.
• If the visualization contains an attribute, then the system sets its source to All
Rows and sets the other sources to Matching Rows.
• If attributes in the visualization come from the same source, then the system sets
the source to All Rows, and sets the other sources to Matching Rows.
• If attributes come from multiple sources, then the system sets the source listed
first in the project's elements panel to All Rows and sets the other sources to
Matching Rows.

Change Data Blending


You can change data blending in a project with multiple data sets. Data blending
specifies which data set takes precedence over the other.
1. Select a visualization on the canvas that uses more than one data set and in the
properties pane click Data Sets.
2. To change the default blending, click Data Blending, and select either Auto or
Custom.
If you choose Custom, you can set the blending to either All Rows or Matching
Rows.
• You must assign at least one source to All Rows.
• If both sources are All Rows, then the system assumes that the tables are
purely dimensional.
• You can’t assign both sources to Matching Rows.

View and Edit Object Properties


You can use inspectors to view and edit the properties of standalone objects in the
Home, Data, Projects and other top-level pages.
The inspectors show the properties of an object. Depending on the object's level, the
properties also provide references to other objects, such as lower-level objects that are
part of the object you're inspecting and other standalone objects that it references or
uses. For example, a project's properties include the list of data sets in the project.
The properties of lower-level objects (such as data set properties) aren't part of the
top-level object's inspector, so they're not displayed as part of a project's properties.
You can inspect the properties of the following objects:
• Projects


• Data Sets
• Connections
• Data Flows
• Sequences
• Schedules
• Folders
1. Go to the Data page and select Data Flows, then locate the data flow whose
   properties you want to view or edit.
2. Click the data flow’s Actions menu and select Inspect.
3. In the Inspector dialog, modify the object properties (such as Name and
Description).
Common and type-specific properties are organized in tabs in the Inspector dialog,
and the following tabs are displayed:
• General - Lists standard life-cycle properties (such as Name, Description,
Created By, and Modified By) that are common to all types of object.
This tab also lists high-level properties (such as Type, File Name, File Size,
and Location), depending on the type of object that you’re inspecting.
• Permissions - Lists each user and their level of permission.
• Schedules - Lists schedules for the object (such as Name, Frequency, and
Next Start Time of the schedule).
• Related - Lists objects that are related, referenced, or used by the object that
you’re inspecting. The objects listed depend on the type of object that you’re
inspecting.
• History - Lists the recent activity for the object.

Note:
The Inspector dialog also displays other specific tabs (such as Data
Elements, Parameters, and Data Flows), depending on the type of object
that you’re inspecting.

4. Click Save.

7
Prepare Your Data Set for Analysis
Data preparation involves cleansing, standardizing, and enriching your data set before
you analyze the data in a visualization canvas.

Topics
• Typical Workflow to Prepare Your Data Set for Analysis
• About Data Preparation
• Data Profiles and Semantic Recommendations
• Accept Enrichment Recommendations
• Transform Data Using Column Menu Options
• Adjust the Column Properties
• Edit the Data Preparation Script

Typical Workflow to Prepare Your Data Set for Analysis


Here are the common tasks for performing data preparation actions in the Prepare
canvas.

• Apply enrichment recommendations - Enhance or add information to column data using the enrichment recommendations. See Accept Enrichment Recommendations.
• Apply transform recommendations - Modify column data using the transformation recommendations or available options. See Transform Data Using Column Menu Options.
• Change column properties - Change the column properties, such as the data type and number format. See Adjust the Column Properties.
• Edit the data preparation script - Select and edit the changes applied to a column. See Edit the Data Preparation Script.

About Data Preparation


The data preparation process enables you to transform and enrich the data that you're
preparing for analysis.
When you create a project and add a data set to it, the data undergoes column level
profiling that runs on a representative sample of the data. After profiling the data, you
can implement transformation and enrichment recommendations provided for the
recognizable columns in the data set. The following types of recommendations are
provided to perform single-click transforms and enrichments on the data:
• Global positioning system enrichments such as latitude and longitude for cities or
zip codes.


• Reference-based enrichments, for example, adding gender using the person's
  first name as the attribute for making the gender decision.
• Column concatenations, for example, adding a column with the person’s first and
last name.
• Part extractions, for example, separating out the house number from the street
name in an address.
• Semantic extractions, for example, separating out information from a recognized
semantic type such as domain from an email address.
• Date part extractions, for example, separating out the day of week from a date that
uses a month, day, year format to make the data more useful in the visualizations.
• Full and partial obfuscation or masking of detected sensitive fields.
• Recommendations to delete columns containing detected sensitive fields.
You can use and configure a wide range of data transformations from the column’s
Options menu. See Transform Data Using Column Menu Options.
When you transform data, a step is automatically added to the Preparation Script
pane. A blue dot indicates that Apply Script has not been executed. After applying the
script, you can make additional changes to the data set, or you can create a project, or
click Visualize to begin your analysis.
As each transformation and enrichment change is applied to the data, you can review
the changes. You can also compare the data changes with the original source data to
verify that the changes are correct.
The data transformation and enrichment changes that you apply to a data set affect
all projects that use the same data set. When you open the project that shares the
data set, a message appears indicating that the project uses updated data. You can
create a data set from the original source that doesn’t contain the data preparation
changes. When you refresh the data in a data set, the preparation script changes are
automatically applied to the refreshed data.

Data Profiles and Semantic Recommendations


After you create a data set, it undergoes column-level profiling to produce a
set of semantic recommendations to repair or enrich your data. These
recommendations are based on the system automatically detecting a specific semantic
type during the profile step.
There are various categories of semantic types such as geographic locations identified
by city names, a specific pattern such as a credit card number or email address, a
specific data type such as a date, or a recurring pattern in the data such as a
hyphenated phrase.

Topics
• Semantic Type Categories
• Semantic Type Recommendations
• Recognized Pattern-Based Semantic Types
• Reference-Based Semantic Types
• Recommended Enrichments


• Required Thresholds

Semantic Type Categories


Semantic types are grouped into categories, which include the following:
• Recognizing geographic locations such as city names.
• Identifying recognizable patterns such as those found in credit card numbers or
email addresses.
• Recurring patterns such as hyphenated phrase data.

Semantic Type Recommendations


Based on the type of data detected, recommendations are made to repair, enhance,
or enrich the data set with additional information. For example:
• Enrichments - Adding a new column to your data that corresponds to a specific
detected type such as a geographic location, for example, adding population data
for a city.
• Column Concatenations - When two columns are detected in the data set, one
  containing first names and the other containing last names, the system
  recommends concatenating the names into a single column, for example, a
  first_name_last_name column.
• Semantic Extractions - When a semantic type is composed of subtypes, such as
  a us_phone number that includes the area code, the system recommends
  extracting the area code into its own column (a small illustrative sketch follows this list).
• Part Extraction - When a generic pattern separator is detected in the data, the
  system recommends extracting parts of that pattern. For example, if the system
  detects a repeating hyphenation in the data, it recommends extracting the parts
  into separate columns to potentially make the data more useful for analysis.
• Date Extractions - When dates are detected, the system recommends extracting
parts of the date that might augment the analysis of the data such as by extracting
the day of week from an invoice or purchase date.
• Full and Partial Obfuscation/Masking - When sensitive fields are detected such
as a credit card number, the system recommends a full or partial masking of the
column.
• Delete - When sensitive fields are detected, such as a credit card number, the
  system recommends deleting the column to prevent exposing the sensitive
  data.
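To make the extraction recommendations concrete, here is a minimal Python sketch with invented sample values; the patterns are simplified stand-ins for the profiler's real semantic-type detection:

import re

email = "pat.lee@example.com"        # invented sample value
phone = "(415) 555-0123"             # invented sample value
part_number = "ABC-2041-XZ"          # invented hyphenated phrase

# Semantic extraction: pull the domain out of a recognized email address.
domain = email.split("@", 1)[1]                          # "example.com"

# Semantic extraction: pull the area code out of a North American phone number.
area_code = re.search(r"\((\d{3})\)", phone).group(1)    # "415"

# Part extraction: split a repeating hyphenated pattern into separate parts.
parts = part_number.split("-")                           # ["ABC", "2041", "XZ"]

print(domain, area_code, parts)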

Recognized Pattern-Based Semantic Types


This list shows the semantic types that are recognized based on patterns in the
data. Recommendations are provided for each of these semantic types:
• Dates (in more than 30 formats)
• US Social Security Numbers (SSN)
• Credit Card Numbers
• Credit Card Attributes (CVV and Expiration Date)


• Email Addresses
• North American Plan Phone Numbers
• First Names (typical first names in the United States)
• Last Names (typical surnames in the United States)
• US Addresses

Reference-Based Semantic Types


The list shows the semantic types that are recognized based on pre-loaded reference
knowledge provided with the service. Recommendations are provided for each of
these semantic types:
• Country names
• Country codes
• State names (Provinces)
• State codes
• County names (Jurisdictions)
• City names (Localized Names)
• Zip codes

Recommended Enrichments
The list shows the recommended enrichments based on the semantic
types. Enrichments are determined based on the geographic location hierarchy.
• Country
• Province (State)
• Jurisdiction (County)
• Longitude
• Latitude
• Population
• Elevation (in Meters)
• Time zone
• ISO country codes
• Federal Information Processing Series (FIPS)
• Country name
• Capital
• Continent
• GeoNames ID
• Languages spoken
• Phone country code
• Postal code format
• Postal code pattern



• Currency name
• Currency abbreviation
• Geographic top-level domain (GeoLTD)
• Square KM

Required Thresholds
There are specific thresholds required for the profiling process to make a decision
about a specific semantic type. As a general rule, 85% of the data values in the
column must meet the criteria for a single semantic type in order for the system to
make the classification determination. As a result, a column that contains 70% first
names and 30% "other" values doesn't meet the threshold requirement, and therefore no
recommendations are made.
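As a rough illustration of the threshold rule, the sketch below checks an invented column against an 85% cutoff; the first-name check is a simplified stand-in for the real profiler:

SAMPLE_FIRST_NAMES = {"Ann", "Bob", "Carlos", "Dana"}   # invented reference list

def is_first_name(value):
    return value in SAMPLE_FIRST_NAMES

def meets_threshold(column_values, matcher, threshold=0.85):
    matches = sum(1 for value in column_values if matcher(value))
    return matches / len(column_values) >= threshold

# 7 of these 10 values (70%) look like first names, so the column falls short
# of the 85% threshold and no recommendation would be made.
column = ["Ann", "Bob", "Carlos", "Dana", "Ann", "Bob", "Carlos", "XJ-9", "N/A", "???"]
print(meets_threshold(column, is_first_name))   # False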

Accept Enrichment Recommendations


You can use the enrichment recommendations to enhance or add information to the
column data.
You can upload or open an existing data set to modify the data using enrichment
recommendations. After making the changes to the data set, you can create a project.
You can also create a project or open an existing project, add one or more data sets to
the project, and then modify the data by using the enrichment recommendations.
If an enrichment recommendation adds information to a column’s data such as
enriching a zip code number column with a state name, a new column is added. When
you click the check mark next to a recommendation, the change is added to the
Preparation script. If you delete or undo the change, the particular recommendation
once again appears as an available option in the Recommendation pane.
If you don’t apply the preparation script and close the project or the data set, you lose
all the data changes you’ve performed.
1. Open a project, and click Prepare. In the Results pane, select a column to enrich.
If the enrichment recommendations are available for that column, you see them
listed in the Recommendation pane.
2. Click a recommendation to see a preview of the change. To add the change to the
Preparation script, click the check mark next to the recommendation.
3. Continue implementing enrichment recommendations on the data set.
4. In the Preparation Script pane, click Apply Script to apply the data changes to the
entire data set. Click Save, enter a name for the project, and then click Visualize
to review the data elements.

Transform Data Using Column Menu Options


You can use column menu options to modify the data’s format.
You can upload or open an existing data set to transform the data using column menu
options. After making the changes to the data set, you can create a project or open an
existing project and add the data set to the project.


The data transform changes update the column data using the selected option or add
a new column to the data set.
The list of available menu options for a column depends on the type of data in that
column.

Note:
If you don’t apply the transformation script and close the project or the data
set, you lose all the data transform changes you’ve performed.

1. Open a project, and click Prepare. In the Results pane, select a column to
transform.
2. Click Options, and select a transformation option.
3. In the step editor, update the fields to configure the changes. You can review the
changes in the data preview table.
4. Click Add Step to apply the data changes, close the step editor, and add a step to
the Preparation Script pane.
5. Continue implementing data transform changes in the data set.
6. Click Apply Script in the Preparation Script pane to apply the data transform
changes to the entire data set.
7. (Optional) Click Save, and then click Visualize to see the transformed columns.
This example shows a Gender column with the data values F, f, M and m. To change
the gender column data to use Female and Male, you select the column, select
Options, and then select Group.
In the Group editor, you create a new column, using the name Gender_Fix. Create two
groups: one for the values that represent women (F and f), and one for the values that
represent men (M and m). In the first group, enter Female as the group name, then
select all of the data values that represent females (f, F). Click the Add icon next to
Group in the editor to add a new group for men. Enter Male as the group name. The
remaining values in the gender column should represent men, so click Add All. To
complete the transformation step change, you must click Add Step to include the new
column and standardized gender groups in the data set.
The Preparation script is updated with a step to add the new column, Gender_Fix,
which uses Female and Male as its values.
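Conceptually, the Group step applies a value-to-group mapping like the one sketched below in Python; this is only an illustration of the grouping described above, not the Group editor itself:

# The explicit Female group covers F and f; Add All sends the remaining
# values (M and m here) to the Male group.
gender_values = ["F", "f", "M", "m", "F"]          # invented sample column

female_group = {"F", "f"}
gender_fix = ["Female" if value in female_group else "Male" for value in gender_values]

print(gender_fix)   # ['Female', 'Female', 'Male', 'Male', 'Female']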

Convert Text Columns to Date or Time Columns


You can convert any text column to a date, time, or timestamp column.
For example, you can convert an attribute text column to a true date column.
1. Open the project or the data set that includes the column you want to convert.
Confirm that you’re working in the Prepare canvas.
2. Mouse-over the column that you want to convert.
3. Click Options, and select a conversion option (for example, Convert to Number,
Convert to Date).


You can also do this from the Data Sets page when you’re editing a data set.
If you’ve selected Convert to Date, then the Convert to Date/Time dialog is
displayed.
4. To further refine the format, select the column, and use the options on the
properties pane.
5. If you want to change the Source Format's default value, then click Source Format
and select a format. For example, 2017.01.23, 01/23/2017, Mon, Jan 23, 2017, or
Mon, Jan 23, 2017 20:00:00.
The Source Format field automatically displays a suggested format based on the
input column text. However, if the Source Format field doesn’t display a
suggested format (for example, for Sat 03/28 2017 20:10:30:222), then you can
enter a custom format.
6. Click Custom if you need to enter your own format into the field at the bottom of
the Convert to Date/Time dialog.
The custom format you enter must be in a format recognized by Oracle Business
Intelligence before conversion. If you enter a custom format that isn’t recognized,
an error message is displayed.
7. The Hide Source Element option is selected by default and hides the original source
column after conversion. If you deselect this option, the original column is
displayed next to the converted column after conversion.
8. Click Convert to convert the text column into a date or time column.
The changes you make apply to all projects using the data source with a modified
date or time column.

Adjust the Display Format of Date or Time Columns


You can adjust the display format of a date or a time column by specifying the format
and the level of granularity.
For example, you might want to change the format of a transaction date column (which
is set by default to show the long date format such as November 1, 2017) to display
instead the International Standards Organization (ISO) date format (such as
2017-11-01). You might want to change the level of granularity (for example year,
month, week, or day).
1. Open the project or the data set that includes the date and time column that you
want to update. If you’re working in a project, then confirm that you’re working in
the project's Prepare canvas.
2. Click the date or time column you want to edit.
For example, click a date in the data elements area of the Data Panel, or click or
hover over a date element on the main editing canvas.
3. If you’re working in the main editing canvas, adjust the format by doing one of the
following:
• Click Options, then Extract to display a portion of the date or time (for
example, the year or quarter only).
• Click Options, then Edit to display an Expression Editor that enables you to
create complex functions (for example, with operators, aggregates, or
conversions).


• In the properties pane, click the Date/Time Format tab, and use the options to
  adjust your dates or times. For example, click Format to select from short,
  medium, or long date formats, or specify your own format by selecting Custom
  and editing the calendar string displayed.
4. If you’re working in the data elements area of the Data Panel, adjust the format by
doing one of the following:
• If you want to display just a portion of a calendar column (for example, the
year or quarter only), then select and expand a calendar column and select
the part of the date that you want to display in your visualization. For example,
to only visualize the year in which orders were taken, you might click Order
Date and select Year.
• In the properties pane, click the Date/Time Format tab, and use the options to
adjust your dates or times.
5. If you’re working in table view, select the column header and click Options, then in
the properties pane click Date/Time Format to display or update the format for that
column.

General Custom Format Strings


You can use these strings to create custom time or date formats.
The table shows the general custom format strings and the results that they display.
These allow the display of date and time fields in the user's locale.

General Format String - Result

[FMT:dateShort] - Formats the date in the locale's short date format. You can also type [FMT:date].
[FMT:dateLong] - Formats the date in the locale's long date format.
[FMT:dateInput] - Formats the date in a format acceptable for input back into the system.
[FMT:time] - Formats the time in the locale's time format.
[FMT:timeHourMin] - Formats the time in the locale's time format but omits the seconds.
[FMT:timeInput] - Formats the time in a format acceptable for input back into the system.
[FMT:timeInputHourMin] - Formats the time in a format acceptable for input back into the system, but omits the seconds.
[FMT:timeStampShort] - Equivalent to typing [FMT:dateShort] [FMT:time]. Formats the date in the locale's short date format and the time in the locale's time format. You can also type [FMT:timeStamp].
[FMT:timeStampLong] - Equivalent to typing [FMT:dateLong] [FMT:time]. Formats the date in the locale's long date format and the time in the locale's time format.
[FMT:timeStampInput] - Equivalent to [FMT:dateInput] [FMT:timeInput]. Formats the date and the time in a format acceptable for input back into the system.
[FMT:timeHour] - Formats the hour field only in the locale's format, such as 8 PM.
YY or yy - Displays the last two digits of the year, for example, 11 for 2011.
YYY or yyy - Displays the last three digits of the year, for example, 011 for 2011.
YYYY or yyyy - Displays the four-digit year, for example, 2011.
M - Displays the numeric month, for example, 2 for February.
MM - Displays the numeric month, padded to the left with zero for single-digit months, for example, 02 for February.
MMM - Displays the abbreviated name of the month in the user's locale, for example, Feb.
MMMM - Displays the full name of the month in the user's locale, for example, February.
D or d - Displays the day of the month, for example, 1.
DD or dd - Displays the day of the month, padded to the left with zero for single-digit days, for example, 01.
DDD or ddd - Displays the abbreviated name of the day of the week in the user's locale, for example, Thu for Thursday.
DDDD or dddd - Displays the full name of the day of the week in the user's locale, for example, Thursday.
DDDDD or ddddd - Displays the first letter of the name of the day of the week in the user's locale, for example, T for Thursday.
r - Displays the day of year, for example, 1.
rr - Displays the day of year, padded to the left with zero for single-digit day of year, for example, 01.
rrr - Displays the day of year, padded to the left with zero for single-digit day of year, for example, 001.
w - Displays the week of year, for example, 1.
ww - Displays the week of year, padded to the left with zero for single-digit weeks, for example, 01.
q - Displays the quarter of year, for example, 4.
h - Displays the hour in 12-hour time, for example, 2.
H - Displays the hour in 24-hour time, for example, 23.
hh - Displays the hour in 12-hour time, padded to the left with zero for single-digit hours, for example, 01.
HH - Displays the hour in 24-hour time, padded to the left with zero for single-digit hours, for example, 23.
m - Displays the minute, for example, 7.
mm - Displays the minute, padded to the left with zero for single-digit minutes, for example, 07.
s - Displays the second, for example, 2. You can also include decimals in the string, such as s.# or s.00 (where # means an optional digit, and 0 means a required digit).
ss - Displays the second, padded to the left with zero for single-digit seconds, for example, 02. You can also include decimals in the string, such as ss.# or ss.00 (where # means an optional digit, and 0 means a required digit).
S - Displays the millisecond, for example, 2.
SS - Displays the millisecond, padded to the left with zero for single-digit milliseconds, for example, 02.
SSS - Displays the millisecond, padded to the left with zero for single-digit milliseconds, for example, 002.
t - Displays the first letter of the abbreviation for ante meridiem or post meridiem in the user's locale, for example, a.
tt - Displays the abbreviation for ante meridiem or post meridiem in the user's locale, for example, pm.
gg - Displays the era in the user's locale.
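To show how a handful of these strings combine, the sketch below renders one sample timestamp in Python; the mapping from these tokens to Python's strftime directives is a hypothetical illustration for this example only, not part of the product:

from datetime import datetime

# Hypothetical translation of a few custom format tokens into strftime directives.
TOKEN_MAP = {
    "yyyy": "%Y",   # four-digit year, for example 2017
    "MMM":  "%b",   # abbreviated month name, for example Nov
    "MM":   "%m",   # zero-padded numeric month, for example 11
    "dd":   "%d",   # zero-padded day of month, for example 01
    "HH":   "%H",   # zero-padded 24-hour hour, for example 20
    "mm":   "%M",   # zero-padded minute, for example 05
    "ss":   "%S",   # zero-padded second, for example 30
}

def render(custom_format, value):
    # Replace longer tokens first so that "MMM" isn't consumed by "MM".
    for token in sorted(TOKEN_MAP, key=len, reverse=True):
        custom_format = custom_format.replace(token, TOKEN_MAP[token])
    return value.strftime(custom_format)

sample = datetime(2017, 11, 1, 20, 5, 30)
print(render("yyyy-MM-dd HH:mm:ss", sample))   # 2017-11-01 20:05:30
print(render("dd MMM yyyy", sample))           # 01 Nov 2017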

Create a Bin Column When You Prepare Data


Binning a measure creates a new column based on the value of the measure. You can
assign a value to the bin dynamically by creating the number of equal size bins (such
as the same number of values in each bin), or by explicitly specifying the range of
values for each bin.
You can create a bin column based on a data element.
1. Open a project, and click Prepare. In the Results pane, select a column you want
to modify using the bin option.
2. Click Options for the selected column, and select Bin.
3. In the Bin step editor, specify the options for the bin column, as described in the
following table:

• New element name - Change the name of the bin column.
• Number of bins - Click to select a different number from the list.
• Method - Based on your selection in the Method field, the range and count of the bins are updated.
  • In the Manual method, you can select the boundary (that is, the range and count) of each bin. You can also change the default name of each bin.
  • In the Equal Width method, the boundary (range) of each bin is the same, but the count differs. Based on your selection in the Bin Labels field, the bin column labels are updated.
  • In the Equal Height method, the count in each bin is the same or very slightly different, but the range of each bin differs.
• Bin By - If you select the Equal Width method, click to select a dimension (that is, a data element) on which to apply the bin.

4. Click Add Step to apply the data changes, close the step editor, and add a step to
the Preparation Script pane.
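As a rough illustration of how the two automatic methods differ, the sketch below bins an invented numeric column in Python; it is not the product's exact algorithm:

# Equal Width: every bin spans the same range of values, so counts differ.
# Equal Height: every bin holds roughly the same number of values, so ranges differ.
values = sorted([3, 4, 5, 7, 9, 12, 20, 35, 80, 100])
number_of_bins = 2

low, high = min(values), max(values)
width = (high - low) / number_of_bins
equal_width_edges = [low + i * width for i in range(number_of_bins + 1)]
print(equal_width_edges)          # [3.0, 51.5, 100.0]

per_bin = len(values) // number_of_bins
equal_height_bins = [values[i:i + per_bin] for i in range(0, len(values), per_bin)]
print(equal_height_bins)          # [[3, 4, 5, 7, 9], [12, 20, 35, 80, 100]]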


Adjust the Column Properties


You can change the properties of each column in the project’s Prepare canvas.
Column property changes aren't affected by data transform changes. For example, if
you update the name of a column after applying a data transform change to the same
column, the name of the column is updated automatically.
1. Confirm that you’re working in the Prepare canvas.
If you’ve added more than one data set to the project, go to the tabs at the bottom
of the window and select the data set.
2. In the Results pane or the Data Elements pane, select the column whose properties you
want to change.
3. In the properties pane of the selected column, use the General or Number
Format tabs to change the properties. For each property change, a step is added
to the Preparation Script pane.
• General - Change the column name, data type, treat as (measure or attribute),
and aggregation type.
• Number Format - Change the default format of a number column.
4. Click Apply Script in the Preparation Script pane to apply the property changes to
the entire data set.

Edit the Data Preparation Script


You can edit the data transformation changes added to the Preparation Script.
Both before and after you’ve executed Apply Script, you can edit the data
transformation steps. If you’re editing the steps after executing Apply Script, you must
re-apply the script to the entire data set. The updates to the columns are applied only
to the data set and not to the visualization. Click Refresh Data to update the
visualization with the new data.
The edit option is available only for specific transform steps. If you don't save the
updates to a step and navigate to another step, a warning message is displayed
indicating that you haven’t saved the changes.
1. Open a project, and click Prepare. If a project has multiple data sets, select the
data set to update.
2. Select a step in the Preparation Script pane, and click the Edit Transform icon.
3. In the step editor, update the fields to edit the data transform changes that are
applied to the column. You can review the changes in the data preview table.
4. Click OK to update the column and close the step editor.
5. Click Apply Script in the Preparation Script pane to apply the data transform
changes to the entire data set.

8
Use Machine Learning to Analyze Data
You can use machine learning to make predictions using your existing data.
Before you start, install the machine learning framework (DVML) on the Windows or Mac
machine on which you've installed Data Visualization Desktop. See How Do I Install
DVML for Data Visualization Desktop?

Topics:
• Typical Workflow to Analyze Data with Machine Learning
• Create a Train Model for a Data Flow
• Interpret the Effectiveness of the Model
• Score a Model
• Add Scenarios to a Project

Typical Workflow to Analyze Data with Machine Learning


Here are the common tasks for analyzing data with machine learning.

• Create train models and use them to interpret data - Use scripts to train data models that you add to other data sets to predict trends and patterns in your data. See Create a Train Model for a Data Flow.
• Use models to generate data sets - Apply models to generate data sets. See Score a Model.
• Add scenarios to a project - Add scenarios to a project to create a blended report. See Add Scenarios to a Project.

Create a Train Model for a Data Flow


As an advanced analyst, you can use scripts to train data models that you then add to
other sets of data to predict trends and patterns in the data.
Scripts define the interface and logic (code) for machine learning tasks. You can use a
training task (classification or numeric prediction), for example, to train a model based
on known (labelled) data. When the model is built, it can be used to score unknown
(that is, unlabelled) data to generate a data set within a data flow, or to provide a
prediction dynamically within a visualization. Machine learning tasks are available as
individual step types (for example, Train Binary, Apply Model).
For example, you could train a model on a set of data that includes employee salary
information and then apply this model to a set of employee data that doesn't include
salary information. Because the model is based on specific factors and is 67%
accurate, it can predict how many and which employees in the data set most likely
have an annual salary of over $50K.


1. In the Data tab, select a data set that you want to use in the data flow.
This can be any data set containing data that you want to use to train a model.
2. In the Data Flows tab, click Create and select Data Flow to display the Add Data
Set pane.

3. Select the data set that you want to use to create your train model, and click Add.
This can be any data set.
4. In the data flow, click the Plus (+) symbol.
This displays all available data flow step options, including train model types
shown as icons across the bottom (for example, Train Numeric Predictions, Train
Multi-Classifier).
5. Click the train model type that you want to apply to the data set.
For example, Train Binary Classifier is a binary train model (a statistical
calculation) that helps predict a binary choice.
6. Select a suitable script from the available scripts for the selected model type (for
example, Binary Classification) and click OK to confirm.
For example, select CART to build a binary classification train model.
The parameters displayed are specific to the script that you select.
7. Refine the field details for the model as required:
a. If you want to change the script, then click Model Training Script.
b. Click Target to select a Data Set column that you want to apply the train
model to.
For example, you might want to model the Income Level column to predict a
person's income. Consider a loan agent who wants to offer loans only to
those who make more than $50,000.
c. Update the remaining fields with values that are appropriate for the script you
selected.
8. Click Save, enter a name and description and click OK to save the data flow with
your choice of parameter values for the current train model script.
9. Click Save Model, enter a name and description, and click Save to save the
model.
You can now run the model script like any other data flow.

Interpret the Effectiveness of the Model


Once you’ve created a model, you can explore information about it and how it
interprets data. You can use that information to modify the model.
When you run a train model data flow, it produces outputs which you can interpret, so
that you can refine the model.
1. Click the Navigator icon and select Machine Learning.
Machine Learning displays the Scripts and Models tabs.
2. To view the train model data flow outputs, display the Models tab.
This displays all models created.


3. Click the menu icon for a model and select the Inspect option.
This displays three tabs: General, Quality, and Related.
4. (Optional) Click General.
This page shows information about the model including:
• Predicts - The name of whatever the model is trying to predict (for example,
something about IncomeLevel).
• Trained On - The name of the data set that you're using to train the model.
• Script - The name of the script used in the model.
• Class - The class of script (for example, Binary Classification).
5. (Optional) Click Quality.
A configurable portion of the training data set is kept aside for validation
purposes. When the model is built, it's applied to the validation data set, whose
labels are known. Metrics such as Accuracy, Precision, and Recall are calculated
by comparing the actual (label) values with the predicted values. The information is
also shown as a matrix that provides a quick summary of what was found during
validation. For example, a certain percentage (X) of people in the validation data set
make more than $50,000, whereas the model predicted that Y% of the people do. A
small worked sketch of these metrics follows this procedure.
The Quality page displays:
• A list of standard metrics, where the metrics displayed are related to the model
selected. Each metric helps you determine how good the model is in terms of
its prediction accuracy for the selected Data Set column to which you apply
the train model.
For example, you might model the Income Level column to predict (based on a
range of other values for each person), when someone’s income level is likely
to be greater than $50000.
• The matrix shows the state of the data used to make the predictions.
The matrix indicates actual values against predicted values to help you
understand if the predicted values are close to the actual values.
You can use this information to return to the model and make changes if
necessary.
6. (Optional) Click Related.
The Related tab captures data sets emitted by the machine learning scripts when they
run to build models. These data sets capture specific information related to the script
logic, so that advanced users (data scientists) can get more insight into the model
that was built.
This page shows the training data including:
• Training Data - The data set being used to train the model.
• Generated Data - The data sets created by the script that you use for the
training model (for example, obiee.CART.train). You may see different data
sets if you select another script to train a model.
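The following sketch uses invented validation counts to show how metrics such as Accuracy, Precision, and Recall are derived from an actual-versus-predicted matrix; the numbers are illustrative only:

# Invented validation results for a binary model predicting "> $50,000".
true_positive = 40     # actual > $50,000, predicted > $50,000
false_positive = 10    # actual <= $50,000, predicted > $50,000
false_negative = 15    # actual > $50,000, predicted <= $50,000
true_negative = 135    # actual <= $50,000, predicted <= $50,000

total = true_positive + false_positive + false_negative + true_negative

accuracy = (true_positive + true_negative) / total             # 0.875
precision = true_positive / (true_positive + false_positive)   # 0.8
recall = true_positive / (true_positive + false_negative)      # about 0.727

print(accuracy, precision, recall)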

Score a Model
You can apply a model within a data flow to generate a data set.


1. In the Data tab, select a data set that you want to use in the data flow.
This can be any data set containing data that you want to apply your model to.
2. In the Data Flows tab, click Create and select Data Flow to display the Add Data
Set pane.
3. Select the data set to which you want to apply the model, and click Add.
Select a data set like the one used to create the model.
4. In the data flow, click the Plus (+) symbol.
5. Click Apply Model from the available options.
6. Select a model from the list of available models and click OK to confirm.
7. Select the Output columns that you want generated by this data flow, and update
Column Name fields if required.
The output columns displayed in the Apply Model pane are created as a data set
when the data flow runs. The output columns are relevant to the model.
8. In the data flow, click the Plus (+) symbol and select Save Data to add a Save
Data step.
9. Click Save, enter a name and description and click OK to save the data flow with
the selected model and output.
You can now run the data flow to create the appropriate output data set columns
using the selected model.

Note:
Any data set created by a scoring data flow can be used within a project
visualization like any other data set.

Add Scenarios to a Project


You can apply scenarios within a project by selecting from a list of available machine
learning models, joining the model to the existing data sets within a project, then using
the resulting model columns within a visualization. A scenario enables you to add a set
of virtual model output columns to create a blended report, which is similar to adding
data directly to a project to create a blended visualization. You can use the predicted
values for the subset of the data of interest within a specific visualization. The virtual
data set columns don't physically exist; they represent the model outputs, and their
values are dynamically generated when they're used in a visualization.
1. Create or open the Data Visualization project in which you want to apply a
scenario.
Confirm that you’re working in the Visualize canvas.
2. To add a scenario, do one of the following:
• Click Add, and select Create Scenario.
• In the Data Elements pane, right-click the data set and select Create
Scenario.


3. In the Create Scenario - Select Model dialog, select the name of the model and
click OK.
4. In the Map Your Data to the Model dialog, specify various options:
• In a project with multiple data sets, click Data Set to select a data set that you
want to map to the model.
• In the table, click Select Column to match a column to a model input.
Each model has inputs (that is, data elements) that must match corresponding
columns from the data set. If the name and data type of a model input match a
column, then the input and column are automatically
matched. If a model input has a data type that doesn't match any column, you
must manually specify the appropriate data element.
Click Show all inputs to display the model inputs and the data elements with
which they match. Alternatively, click Show unmatched inputs to display the
model inputs that aren’t matched with a column.
5. Click OK to add the resulting model columns to the Data Elements pane. You can
now use the model columns with the data set columns.
6. Drag and drop one or more data set and model columns from the Data Elements
pane to drop targets in the Visualize canvas. You can also double-click the
columns to add them to the canvas.
You can add one or more scenarios to the same or different data sets. In the Data
Elements pane right-click the model, and select one of the following options:
• Edit Scenario - Open the Map Your Data to the Model dialog to edit a scenario.
• Reload Data - Update the model columns after you edit the scenario.
• Remove from Project - Open the Remove Scenario dialog to remove a scenario.

9
Use Data Flows to Create Curated Data
Sets
You can use data flows to produce curated (combined, organized, and integrated) data
sets.


Topics:
• Typical Workflow to Create Curated Data Sets with Data Flows
• About Data Flows
• About Editing a Data Flow
• Create a Data Flow
• Add Filters to a Data Flow
• Add Aggregates to a Data Flow
• Merge Columns in a Data Flow
• Merge Rows in a Data Flow
• Create a Bin Column in a Data Flow
• Create a Sequence of Data Flows
• Create a Group in a Data Flow
• Add Cumulative Values to a Data Flow
• Add a Time Series Forecast to a Data Flow
• Add a Sentiment Analysis to a Data Flow
• Branch Out a Data Flow into Multiple Connections
• Apply Incremental Processing to a Data Flow
• Customize the Names and Descriptions of Data Flow Steps
• Schedule a Data Flow
• Create an Essbase Cube in a Data Flow
• Execute a Data Flow
• Save Output Data from a Data Flow
• Run a Saved Data Flow
• Apply Parameters to a Data Flow


Typical Workflow to Create Curated Data Sets with Data Flows


Here are the common tasks for creating curated data sets with data flows.

• Create a data flow - Create data flows from one or more data sets. See Create a Data Flow.
• Add filters - Use filters to limit the data in a data flow output. See Add Filters to a Data Flow.
• Add aggregates - Apply aggregate functions to group data in a data flow. See Add Aggregates to a Data Flow.
• Merge columns and rows of data sets - Combine two or more columns and rows of data sets in a data flow. See Merge Columns in a Data Flow and Merge Rows in a Data Flow.
• Create a binning column - Assign a value to add a binning column to the data set. See Create a Bin Column in a Data Flow.
• Create a sequence of data flows - Create and save a sequential list of data flows. See Create a Sequence of Data Flows.
• Create a group - Create a group column of attribute values in a data set. See Create a Group in a Data Flow.
• Add cumulative values - Group data by applying cumulative aggregate functions in a data flow. See Add Cumulative Values to a Data Flow.
• Add a time series forecast - Apply a time series forecast calculation to a data set to create additional rows. See Add a Time Series Forecast to a Data Flow.
• Add a sentiment analysis - Detect sentiment for a text column by applying a sentiment analysis to the data flow. See Add a Sentiment Analysis to a Data Flow.
• Create an Essbase Cube - Create an Essbase Cube from a data set. See Create an Essbase Cube in a Data Flow.
• Schedule data flows - Schedule a data flow job and set the job's properties. See Schedule a Data Flow.
• Execute a data flow - Execute data flows to create data sets. See Execute a Data Flow.
• Save output data from a data flow - Before running a data flow, modify or select the database name, attribute or measure, and aggregation rules for each column of the output data set. See Save Output Data from a Data Flow.
• Run a data flow - Run a saved data flow to create data sets or to refresh the data in a data set. See Run a Saved Data Flow.

About Data Flows


Data flows enable you to organize and integrate your data to produce a curated set of
data that you use in visualizations.
You use Data Visualization's data flow editor to apply transformations, add joins
and filters, remove unwanted columns, add new derived measures, add derived
columns, and add other operations. The data flow is then run to produce a data set
that you can use to create complex visualizations.
See Create a Data Flow and Run a Saved Data Flow.


About Editing a Data Flow


Build your data flow by adding steps to select, limit, and customize your data.
The following image shows the data flow editor.

The data flow editor is a flexible tool, designed to help you create data flows. You can
also:
• Select, add, and rename columns
• Add or adjust aggregates
• Add filters
• Create a merge column
• Merge rows
• Create a binning column
• Add a sequence
• Create a group
• Add cumulative values
• Add a time series forecast
• Add a sentiment analysis
• Apply custom scripts
• Customize step names
• Schedule a data flow
• Create an Essbase cube
• Add another data set
You add steps in the workflow diagram pane and specify details for those steps in the
Step editor pane.
The following tips should help you to use the Step editor pane:


• You can hide or display the Step editor pane by clicking Step editor at the bottom
  of the Data Flow editor.
• You can hide or display the Preview data columns pane by clicking Preview data
  at the bottom of the Data Flow editor.
• The Preview data columns pane updates automatically as you make changes to
the data flow.
For example, you could add a Select Columns step, remove some columns, and
then add an Aggregate step. While working on the Aggregate step, the Preview
data columns pane already shows the columns and data that you just specified in
the Select Columns step.
• You can specify whether or not to automatically refresh step changes in the
  Preview data columns pane by clicking Auto apply.
• You can add another data set and join it to the existing data sets in your data flow
by selecting Add Data in the Data Flow Steps panel.
Joins are created automatically when you add a data set; however, you can edit
the join details in the Join dialog.
• Oracle Data Visualization validates data flow steps as you add them to or delete
them from the data flow.
• If you’re adding an expression (in an Add Column step or a Filter step), then you
must click Apply to finalize the step.
If you add a new step to the workflow diagram without clicking Apply, then your
expression won’t be applied, and the next step that you add won’t use the correct
data.

Create a Data Flow


You can create a data flow from one or more data sets. With a data flow, you produce
a curated data set that you can use to easily and efficiently create meaningful
visualizations.
1. On the Home Page, click Create, then click Data Flow to display the Add Data Set
dialog.
2. Select a data source.
• Click an existing data source to include it in your project, and click Add.
• Click Create Data Set to display the Create Data Set dialog, where you can
create your own.
3. To add steps to your data flow, in the Data Flow editor, go to the workflow diagram
pane and click Add a step (+) next to the data set step.
4. In the Add step dialog, select the step that you want to add and provide the
required details in the Step editor pane.
5. (Optional) To delete a step from the workflow diagram, click X or right-click the
step and select Delete. Note that deleting a step might make the other steps in the
data flow invalid, as indicated by red X icons displayed for the invalid steps.
6. Click Save to save but not run the data flow. Note that you can save a data flow
that contains validation errors. When you save a data flow, it’s displayed in the
Display pane of the Data page, in the Data Flows area.


When you’ve finished adding steps to the data flow diagram, you can also execute
the data flow without saving it, or save the data flow as a database connection.

Add Filters to a Data Flow


You can use filters to limit the amount of data included in the data flow output. For
example, you can limit the sales revenue data in a column to the years 2010 through 2017.
You can filter a data element by adding the filter step in the Step editor pane.
1. Create or open the data flow that you want to apply a filter to.
2. Click Add a step (+), and select Filter.
3. In the Filter pane, select the data element you want to filter:

• Add Filter (+) - Select the data element you want to filter, in the Available Data dialog. Alternatively, click Data Elements in the Data Panel, and drag and drop a data element to the Filter pane.
• Filter fields - Change the values, data, or selection of the filter (for example, the maximum and minimum range). Based on the data element, specific filter fields are displayed. You can apply multiple filters to a data element.
• Filter menu icon - Select a function to clear the filter selection and disable or delete a filter.
• Filter pane menu icon - Select a function to clear all filter selections, remove all filters, and auto-apply filters. You can select to add an expression filter.
• Add Expression Filter - Select to add an Expression Filter. Click f(x), select a function type, and then double-click to add a function in the Expression field. Click Apply.
• Auto-Apply Filters - Select an auto-apply option for the filters, such as Default (On).

Note:
Based on the applied filter, the data preview (for example, the displayed
sales data in a column) is updated.

4. Click Save.
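For instance, the year-range filter mentioned above keeps only the rows whose year falls inside the range, along the lines of this Python sketch with invented column names and values:

import pandas as pd

# Invented sample of the data flow's input.
rows = pd.DataFrame({
    "Year": [2008, 2010, 2014, 2017, 2019],
    "Revenue": [500, 900, 1200, 1500, 1700],
})

# A range filter such as "Year between 2010 and 2017" keeps only matching rows.
filtered = rows[rows["Year"].between(2010, 2017)]
print(filtered)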

Add Aggregates to a Data Flow


You can group data by applying aggregate functions such as count, sum, and
average.
If the data set already contains aggregates, then they’re displayed when you add an
aggregate step. You can add an aggregate in the Step editor pane.
1. Create or open the data flow that you want to add an aggregate step to.
2. Click Add a step (+), and select Aggregate.
3. In the Aggregate pane, to add a column to the aggregate, click Actions then click
Aggregate.


4. To select an aggregate function to apply to an aggregate column, click the arrow in


the Function field for the selected column and select a value to aggregate by. For
example, for the Profit column you could choose Sum.
5. To remove an aggregate from the selected aggregate list, hover the mouse pointer
over the aggregate’s name, click Actions, and click Group By.
6. To save your changes, click Save Data Flow.
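Conceptually, an Aggregate step groups the rows by the Group By columns and applies a function such as Sum to each group, along these lines (invented sample data, shown in Python):

import pandas as pd

# Invented sample input for the data flow.
sales = pd.DataFrame({
    "Region": ["East", "East", "West", "West"],
    "Profit": [100, 250, 80, 120],
})

# Group By on Region with the Sum function applied to the Profit column.
aggregated = sales.groupby("Region", as_index=False)["Profit"].sum()
print(aggregated)
#   Region  Profit
#     East     350
#     West     200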

Merge Columns in a Data Flow


You can combine two or more columns to display as one. For example, you can merge
the street address, street name, state, and ZIP code columns so that they display as
one item in the visualizations using the data flow’s output.
You create a merged column by adding a merge column step in the Step editor pane.
1. Create or open the data flow that you want to add a merge column to.
2. Click Add a step (+), and select Merge Columns.
3. In the Merge Columns pane, specify the options for combining the columns:

• New column name - Change the name of the merge column.
• Merge column - Select the first column you want to merge.
• With - Select the second column you want to merge.
• (+) Add Column - Select more columns you want to merge.
• Delimiter - Select a delimiter to separate column names (for example, Space, Comma, Dot, or Custom Delimiter).

4. Click Save.

Merge Rows in a Data Flow


You can merge the rows of two data sets. The result can include all the rows from both
data sets, the unique rows from each data set, the overlapping rows from both data
sets, or the rows unique to one data set.
Before you merge the rows, do the following:
• Confirm that each data set has the same number of columns.
• Check that the data types of the corresponding columns of the data sets match.
For example, column 1 of data set 1 must have the same data type as column 1 of
data set 2.
You can add a Merge Rows step in the Step editor pane.
1. Create a data flow and add the data sets you want to merge.
2. Click Add a step (+) and select Merge rows.
3. Select the option for merging the rows, as described in the following table:


• All rows from Input 1 and Input 2 (Union All): All the rows of both data sets are
displayed, including duplicates.
• Unique rows from Input 1 and Input 2 (Union): Duplicate rows are removed, and
each distinct row from both data sets is displayed once.
• Rows common to Input 1 and Input 2 (Intersect): Only the rows that appear in both
data sets are displayed.
• Rows unique to Input 1 (Except): Only the rows that appear in data set 1 but not in
data set 2 are displayed.
• Rows unique to Input 2 (Except): Only the rows that appear in data set 2 but not in
data set 1 are displayed.

4. Click Save.
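The merge options follow the usual set-operation semantics. This minimal Python
sketch, using two hypothetical single-column data sets, shows the rows that each
option would keep:

input_1 = ["East", "West", "North"]
input_2 = ["West", "South"]

union_all = input_1 + input_2                              # every row from both inputs
union = list(dict.fromkeys(input_1 + input_2))             # duplicates removed
intersect = [row for row in input_1 if row in input_2]     # rows common to both inputs
except_1 = [row for row in input_1 if row not in input_2]  # rows unique to Input 1
except_2 = [row for row in input_2 if row not in input_1]  # rows unique to Input 2

print(union_all)  # ['East', 'West', 'North', 'West', 'South']
print(union)      # ['East', 'West', 'North', 'South']
print(intersect)  # ['West']
print(except_1)   # ['East', 'North']
print(except_2)   # ['South']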

Create a Bin Column in a Data Flow


Binning a measure creates a new column based on the value of the measure. You can
assign values to bins dynamically by creating a number of equal-sized bins (for
example, bins with the same number of values in each), or by explicitly specifying the
range of values for each bin.
You can add a Bin step in the Step editor pane.
1. Create or open the data flow in which you want to create a bin column.
2. Click Add a step (+), and select Bin.
Alternatively, you can select Add Columns, and then click (+) Column to select
Bin.
3. In the Bin pane, click Select Column.
4. In the Available Columns dialog, select the data element.
5. In the Bin pane, specify the options for the bin column:

• Bin: Select a different data element.
• New element name: Change the name of the bin column.
• Number of Bins: Enter a number, or use the arrows to increase or decrease the
number of bins.
• Method: Select one of the methods: Manual, Equal Width, or Equal Height.
• Histogram View: Based on your selection in the Method field, the histogram range
(width) and histogram count (height) of the bins are updated.
– In the Manual method, you move the sliders to set the bin boundaries; that is,
the histogram range and count. The number of sliders changes based on the
histogram count. You can switch to the List view and enter the ranges manually
along with the bin names.
– In the Equal Width method, the column values are measured and the histogram
range is divided into intervals of the same size. The edge bins can
accommodate very low or very high values in the column.
– In the Equal Height method, each bin contains approximately the same number
of elements (that is, records), so the height of each bin is the same or nearly
the same while the bin ranges can differ. Equal height (frequency) binning is
preferred for skewed data.
• List View: If you select the Manual method, you can change the names of the bins
and define the range for each bin.

Note:
Based on your changes, the data preview (for example, the bin column
name) is updated.

6. Click Save.
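The difference between the Equal Width and Equal Height methods can be illustrated
with a minimal Python sketch on a small, skewed sample (the values are hypothetical,
and this isn't the algorithm that Data Visualization Desktop uses internally):

values = [1, 2, 2, 3, 4, 5, 9, 15, 40, 100]  # skewed sample data
num_bins = 2

# Equal Width: split the overall range into intervals of the same size.
lo, hi = min(values), max(values)
width = (hi - lo) / num_bins
equal_width = [lo + i * width for i in range(num_bins + 1)]
print(equal_width)   # [1.0, 50.5, 100.0]

# Equal Height: each bin holds roughly the same number of records, so the
# boundaries follow the data distribution instead of the overall range.
ordered = sorted(values)
per_bin = len(ordered) // num_bins
equal_height = [ordered[0]] + [ordered[(i + 1) * per_bin - 1] for i in range(num_bins)]
print(equal_height)  # [1, 4, 100]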

Create a Sequence of Data Flows


A sequence is a saved sequential list of specified data flows and is useful when you
want to run multiple data flows as a single transaction. If any flow within a sequence
fails, then all the changes done in the sequence are rolled back.
1. On the Home page click Create and select Sequence.
2. Drag and drop the data flows and sequences to the Sequence pane.
3. Click the menu icon to move an item up or down in the list, and to remove an item.
4. Click Save. When you save a sequence, it’s displayed in the Sequence area of the
Data page.
5. Go to the Sequence area of the Data page, select the sequence, and click
Execute Sequence.
After you run a sequence, the resulting data sets are displayed in the Data page.
6. Go to the Data page and click Data Sets to see the list of resulting data sets.
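The all-or-nothing behavior of a sequence is analogous to the following Python sketch,
where a failure in any flow rolls back the flows that already completed. The run_flow
and rollback_flow callables are hypothetical placeholders, not Data Visualization
Desktop APIs:

def run_sequence(flows, run_flow, rollback_flow):
    """Run each data flow in order; if one fails, undo the completed ones."""
    completed = []
    try:
        for flow in flows:
            run_flow(flow)
            completed.append(flow)
    except Exception:
        # Undo in reverse order so the sequence behaves as a single transaction.
        for flow in reversed(completed):
            rollback_flow(flow)
        raise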

Create a Group in a Data Flow


You can use binning attributes to define groups of attribute values in a data set.
1. Create or open the data flow in which you want to create a group column.
2. Click Add a step (+), and select Group.


3. Select the data element in the Available Columns dialog. You can't select a
numeric data element.
4. Specify the options for the new group column in the Group pane:

• Group: Change the name of a group (for example, Group1).
• Available values list: Select the values you want to include in a group. The selected
values are displayed in the Selections list. Based on your selection, the histogram
is updated. The height of the horizontal bar is based on the count of a group in the
data set.
• Name: Change the name of the new group column.
• Selections: Contains all values selected for this group.
• (+) Group: Add a new group. You can select a group and click X to delete it.
• Include Others: Group values that haven't been added to any of the other groups
into an Others group.
• Add all: Add all the values in the available list to a group.
• Remove all: Remove all the selected values from a group.

Note:
Based on your changes, the data preview (for example, the group
column name) is updated.

5. Click Save.
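The resulting group column behaves like a lookup from each attribute value to its group
name, with unassigned values optionally falling into an Others group. A minimal
illustrative Python sketch (the values and group names are hypothetical):

groups = {
    "California": "West Coast",
    "Oregon": "West Coast",
    "New York": "East Coast",
}

states = ["California", "New York", "Texas", "Oregon"]

# Include Others: values not assigned to any group fall into "Others".
group_column = [groups.get(state, "Others") for state in states]
print(group_column)  # ['West Coast', 'East Coast', 'Others', 'West Coast']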

Add Cumulative Values to a Data Flow


You can group data by applying cumulative aggregate functions, such as moving and
running aggregates. A moving aggregate aggregates values over the current row and a
specified number of preceding rows. A running aggregate aggregates values over all
the preceding rows. Because both moving and running aggregates are based on the
preceding rows, the sort order of the rows is important. You can specify the order as
part of the aggregate.
You can add a Cumulative Value step in the Step editor pane.
1. Create or open the data flow in which you want to add a cumulative value column.
2. Click Add a step (+), and select Cumulative Value.
3. In the Cumulative Value pane, specify the cumulative aggregate functions for the
new column:

• Aggregate: Select a data column.
• Function: Select a function. The available types of function are based on the data
column. If the column data type is incompatible with the function, an error message
is displayed.
• Rows: Select the value. You can edit this field only for specific functions. If the
value isn't a positive integer, an error message is displayed.
• New column name: Change the aggregate column name. If two columns have the
same name, an error message is displayed.
• (+): Create a new aggregate column.
• Aggregate: Select a column. If no aggregate column is defined, an error message
is displayed.
• (+) Sort Column: Select a sort by column for the data column. Click Options to
move a sort column up or down in the list. Select a sort order. If you add two sort
orders to the same column, an error message is displayed.
• Sort order list: Select the sort order type. The available types of sort order are
based on the selected data element.
• (+) Restart Column: Select a restart column for the data column. Click Options to
move a restart element up or down in the list. Select a restart element. If you add a
duplicate restart column, an error message is displayed.

Note:
Based on your defined values, the data preview (for example, New
column name) is updated.

4. Click Save.
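The difference between a running aggregate and a moving aggregate can be illustrated
with a minimal Python sketch using a Sum function over already sorted rows (the
values and the window size are hypothetical):

values = [10, 20, 30, 40, 50]
window = 3  # moving aggregate: the current row plus the two preceding rows

running_sum = []
moving_sum = []
total = 0
for i, value in enumerate(values):
    total += value
    running_sum.append(total)  # all preceding rows plus the current row
    moving_sum.append(sum(values[max(0, i - window + 1):i + 1]))

print(running_sum)  # [10, 30, 60, 100, 150]
print(moving_sum)   # [10, 30, 60, 90, 120]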

Add a Time Series Forecast to a Data Flow


You can calculate additional rows with forecasted values by applying a Time Series
Forecast calculation.
A forecast takes a time column and a target column from a given data set, calculates
forecasted values for the target column, and puts the values in a new column. All
additional columns are used to create groups. For example, if an additional column
"Department" with the values "Sales", "Finance", and "IT" is present, the forecasted
values of the target column are based on the past values of each group.
Note that multiple columns with diverse values lead to a large number of groups, which
affects the precision of the forecast. Select only columns that are relevant to the
grouping of the forecast.
1. Create or open the data flow in which you want to add a time series forecast.
2. Click Add a step (+), and select Time Series Forecast.
3. In the Time Series Forecast pane and Output section, specify an output column for
the forecasted value. The column is named "forecasted" by default, and you can
rename it.
4. In the Time Series Forecast pane and Parameters section, specify the parameters
for the forecast calculation:

• Target: Select a data column with historical values.
• Time: Select a column with date information. Forecasted values use a daily grain.
• Periods: Select the value that indicates how many periods (days) are forecasted
per group.

5. Click Save.
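The forecast values themselves are computed by the step, but the effect of grouping
can be illustrated with this minimal Python sketch. It uses a naive trend extrapolation
per group; the column names, values, and method are hypothetical and aren't the
forecasting algorithm that Data Visualization Desktop uses:

from collections import defaultdict

# Hypothetical rows: (department, day, revenue); "department" creates the groups.
rows = [
    ("Sales", 1, 100), ("Sales", 2, 110), ("Sales", 3, 121),
    ("IT",    1,  50), ("IT",    2,  52), ("IT",    3,  54),
]
periods = 2  # number of future days to forecast per group

by_group = defaultdict(list)
for dept, day, value in rows:
    by_group[dept].append((day, value))

forecasts = {}
for dept, series in by_group.items():
    series.sort()
    # Naive trend: average day-over-day change, extended forward per group.
    deltas = [b[1] - a[1] for a, b in zip(series, series[1:])]
    trend = sum(deltas) / len(deltas)
    last_day, last_value = series[-1]
    forecasts[dept] = [(last_day + i, last_value + i * trend)
                       for i in range(1, periods + 1)]

print(forecasts)
# {'Sales': [(4, 131.5), (5, 142.0)], 'IT': [(4, 56.0), (5, 58.0)]}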

Add a Sentiment Analysis to a Data Flow


You can detect sentiment for a given text column by applying a sentiment analysis to
your data flow.
Sentiment analysis evaluates text based on words and phrases that indicate a
positive, neutral, or negative emotion. Based on the outcome of the analysis, a new
column contains a “Positive”, “Neutral”, or “Negative” String type result.
1. Create or open the data flow in which you want to add a sentiment analysis step.
2. Click Add a step (+), and select Analyze Sentiment.
3. In the Analyze Sentiment pane and Output section, specify an output column for
the emotion result value. The column is named "emotion" by default, and you can
rename it.
4. In the Analyze Sentiment pane and Parameters section, specify the value for Text
to Analyze.
Select a text column with natural language content to analyze.
5. Click Save.
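Conceptually, the output column works like a classifier that maps each text value to
"Positive", "Neutral", or "Negative". This minimal Python sketch uses a tiny word list
only to illustrate the idea; it isn't the sentiment model that Data Visualization Desktop
uses:

POSITIVE = {"great", "excellent", "love", "happy"}
NEGATIVE = {"bad", "terrible", "hate", "slow"}

def classify(text):
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "Positive"
    if score < 0:
        return "Negative"
    return "Neutral"

comments = ["Great product, I love it",
            "Delivery was slow and terrible",
            "Arrived on Tuesday"]
print([classify(c) for c in comments])  # ['Positive', 'Negative', 'Neutral']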

Apply Custom Scripts to a Data Flow


You can apply your own custom scripts to a data flow. Based on the parameters and
options you’ve defined in the script, specific types of data sets (such as all the input
columns, or only the output columns) can be generated.
To apply a custom script to a data flow:
1. Create or open the data flow in which you want to apply a custom script.
2. Click Add a step (+), and select Apply Script.
3. In the Select Script dialog, select the custom script, and click OK.
4. In the Apply Script pane, select the values in the displayed fields (such as Outputs,
Parameters, and Data Elements).
If you want to select a different script, click the Script name. Based on the custom
script, specific fields and options are displayed.
5. Click Save.

Branch Out a Data Flow into Multiple Connections


You can branch a data flow into multiple connections to downstream nodes, which
creates multiple outputs from a data flow. For example, you can create a data flow
from a sales transactions data set, then branch and save the data into multiple data
sets based on the region of the sales transaction, such as west and east coast
regions.
You add a Branch step in the Step editor pane.
1. Create or open the data flow that you want to branch into multiple subsets.
Alternatively, create a data flow from a data set you want to branch into multiple
subsets.
2. In the Add step dialog, click Add a step (+) and select Branch.
A Branch step and two Save Data steps are added to the data flow.
3. In the Branch into field of the Branch pane, specify the number of connections or
outputs that you want to branch.
• The Save Data steps count is directly related to the number in the Branch
into field.
• In the Branch into field, the minimum number is two and the maximum is five.
You can increase the number of connections or outputs only in the Branch
into field.
• You can delete a connection or output. In the Add step dialog, click X or right-
click the Save Data step and select Delete.
• If you have only two Save Data steps and you delete one, you see a warning
message indicating that the Branch step will also be deleted. Click Yes to
delete the Branch step; only one Save Data step then remains in the data flow.
• You can’t add the following steps after a Branch step:
– Add Data
– Join
– Merge Rows
4. Click each Save Data step and in the Save Data Set pane, specify the properties
for saving the data set nodes:

• Name and Description: Enter the data set name and description to identify your
data set.
• Save data to: Specify the location where you want to store the data set, such as
Data Set Storage or Database Connection. If you select Database Connection,
specify values for the Connection, Table, and When Run options.
• When Run Prompt to specify Data Set: Select the option and specify the following
parameters: Name and Prompt.
• Columns: Specify whether to change a column to a measure or attribute as
appropriate. For measures, specify the aggregation type (such as Sum, Average,
Minimum, Maximum, or Count).

5. Click Run Data Flow to run the data flow. If there’s no validation error, you see a
completion message. Go to the Data page and select Data Sets to see your
resulting data sets in the list.


Alternatively, click Save or Save As. In the Save Data Flow As dialog, enter a
Name and Description to identify your data flow. On the Data page select Data
Flows to see your resulting data flow in the list.

Apply Incremental Processing to a Data Flow


Use incremental processing to determine the last data processed in the data flow and
to process only the newly added data.
1. Select a data element column as an incremental identifier for the data set.
You can select an incremental identifier only for those data sets that are sourced
through database connections.
a. Go to the Data page and select Data Sets.
b. Select a data set and click the Actions menu or right-click, then select Open.
c. Click Edit Data Set on the Results toolbar.
d. Select the data set node in the diagram. From the New Data Indicator list,
select a column, then click Save.
2. Apply incremental processing to the data flow using the data sets for which you’ve
selected the incremental identifier.
a. Create or open the data flow in which you want to apply incremental
processing.
b. In the Data Flow editor select the data set.
c. In the Step editor pane, select Add new data only to mark the data set as
incremental.
d. Click Save.
In a data flow with multiple data sets, you can select only one data set as incremental.
If you try to select a second data set as incremental, you see a warning message.
Click Yes to enable incremental processing for the second data set for which you’ve
selected Add new data only. Incremental processing is deselected for the first data
set.
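Conceptually, the incremental identifier acts as a high-water mark: on each run, only
rows whose identifier value is greater than the last value already processed are picked
up. This minimal Python sketch illustrates the idea; the column names and values are
hypothetical, and this isn't code that Data Visualization Desktop runs:

# Hypothetical source rows: (order_id, amount); order_id is the incremental identifier.
source_rows = [(1, 250.0), (2, 120.0), (3, 310.0), (4, 90.0)]

last_processed_id = 2  # recorded after the previous data flow run

# Only the newly added rows are processed on this run.
new_rows = [row for row in source_rows if row[0] > last_processed_id]
print(new_rows)  # [(3, 310.0), (4, 90.0)]

# After the run, the high-water mark moves forward.
last_processed_id = max(row[0] for row in new_rows)
print(last_processed_id)  # 4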

Customize the Names and Descriptions of Data Flow Steps


You can rename a data flow step and add or edit the description.
1. Create or open a data flow.
2. Click Add a step (+), and select a step.
3. Click the step name (for example, Merge Columns) in the step pane header.
4. Enter a new name or edit the existing name in the Name field, and enter a
description if required.
5. To save your changes, press Enter, or click outside the header fields.

Schedule a Data Flow


You can schedule data flow jobs and set properties such as date, frequency, and end
time. You can view and edit an existing job that's scheduled for a data flow.


1. Go to the Data page and select Data Flows.


2. Select the data flow that you want to add to a scheduled job.
3. Click the Actions menu or right-click, and select Schedule.
4. In the Jobs dialog, specify the properties for a data flow job:

• List of scheduled jobs: Select the scheduled job in the table that you want to
change the properties for.
• Repeat: Select the scheduled job repeat type (such as monthly repeat).
• End: Select the end date of the scheduled job. If you selected Never in the Repeat
field, then this field isn't displayed.
• Frequency: Select the frequency of the scheduled job. This field is displayed if you
selected Custom in the Repeat field. You can also select the day of the week on
which you want to run the job.
• (+) Add Job: Create a new scheduled job.
• Add: Save the newly created scheduled job.
• Update: Save the updates to the scheduled job properties.
• Revert: Click to return to the previously saved properties when editing a scheduled
job.

Create an Essbase Cube in a Data Flow


You can add single input data from a spreadsheet or database into a data flow to
create an Essbase cube.
1. Create or open the data flow in which you want to create an Essbase Cube.
2. Click Add a step (+), and select Create Essbase Cube.
3. In the Create Essbase Cube pane, specify the values for creating the Essbase
Cube:

• Essbase Connection: Click Select Essbase Connection to select a connection in
the Save Data to Database Connection dialog.
• Application Name: Enter a name for the Essbase application.
• Cube Name: Enter a name for the Essbase cube.

4. Click Save.
5. Click Execute Data Flow. After you run the data flow, check the resulting data set
in the Display pane.
6. Go to the Data page and select Data Flows to see your data flow in the list. See
About Using Tabular Data to Create Cubes in Using Oracle Analytics Cloud
Essbase.


Execute a Data Flow


Executing a data flow produces a data set that you can use to create visualizations.
To successfully execute a data flow, it must be free of validation errors.
1. Create or open the data flow that you want to execute and produce a data set
from.
2. Click Add a step (+) and select Save Data.
3. In the Save data to pane enter the output data set Name and Description to
identify your data set.

Note:
Don't change the Save data to field.

4. Click Run Data Flow to execute the data flow. If there is no validation error, a
completion message is displayed.

Note:
When you execute a data flow without saving it, the data flow isn’t saved
and isn't displayed in the Data Flows list. Therefore, the data flow isn’t
available for you to modify or run.

Go to the Data page and select Data Sets to see your resulting data set in the list.
5. Click Save or Save As. In the Save Data Flow As dialog enter a Name and
Description to identify your data flow.
Go to the Data page and select Data Flows to see your resulting data flow in the
list.

Save Output Data from a Data Flow


You can specify how the output of a data flow is saved. Before you run a data flow,
you can specify details such as the storage location for the output data, the
parameters to reuse in the data flow, the name and description that identify the data
set, the data type of each column, and the default aggregation of each column.
1. Create or open the data flow that you want to save with specific values.
2. Click Add a step (+) and select Save Data. Or, if you’ve already saved the data
flow, then click the Save Data step.
• If you want to rename the data flow step, click the step name.
3. In the Save Data Set pane, enter the Name and Description to identify the data
set.
4. Click Save data to list and select a location:


• Data Set Storage: Specify whether you want to save the data set locally.
• Database Connection: Connect to a database and save the output data from
a data flow to a table in that database. The output data is stored securely in the
database, and you can take advantage of the database's managed backup and
recovery facilities. You can transform the data source by overwriting it with data
from the data flow. The data source and data flow tables must be in the same
database and have the same name. To save a data flow to a database
successfully, the data flow must be free of validation errors.
5. If you’ve selected Database Connection, specify the following options:
a. Click Select connection to display the Save Data to Database Connection
dialog.
b. Select a connection for saving the data flow.
You must create a database connection before you can select one. For
example, you can save to an Oracle database, Apache Hive database,
Hortonworks Hive database, or Map R Hive database. See Create Database
Connections.
c. Enter a name in the Table field. The table name must conform to the naming
conventions of the selected database. For example, the name of a table in an
Oracle database can’t begin with numeric characters.
d. Click the When run list and select to replace existing data or add new data to
existing data.
6. Select the When Run Prompt to specify Data Set option to apply parameters to
the data flow and specify its values.
7. In the Columns table, change or select the database name, the attribute or
measure, and the aggregation rules for each column in the output data set:

• Treat As: Select how each output column is treated, as an attribute or a measure.
• Default Aggregation: Select the aggregation rule for each output column (such as
Sum, Average, Minimum, Maximum, Count, or Count Distinct). You can select an
aggregation rule only if the column is treated as a measure in the output data set.
• Database Name: Change the name used for each output column in the database.
You can change the column name only if you're saving the output data from a data
flow to a database.

8. Click Save or Save As. In the Save Data Flow As dialog, enter
a Name and Description to identify the data flow.
• Go to the Data page and select Data Flows to see your resulting data flow in
the list.
• If you don’t save the data flow and try to navigate to another page, a Save
Changes dialog is displayed that prompts you to save the changes to the data
flow.
9. Click Run Data Flow to execute the data flow. If there’s no error, you see a
completion message and the output data is saved to the data set storage or to the
selected database using the table name that you specified.
• If you’ve selected data set storage, go to the Data page and select Data Sets
to see your output data set in the list.


– Click Actions menu or right-click and select Inspect, to open the data set
dialog.
– In the data set dialog, click Data Elements and check the Treat As and
Aggregation rules that you’ve selected for each column in the Save
Data step.
• If you select a database to save the output data, go to the table in that
database and inspect the output data.
• If you select a table in the database with the same name, the data in the table
is overwritten when you save to the database.

Run a Saved Data Flow


You can run a saved data flow to create a corresponding data set or to refresh the
data in the data set created from the data flow.
You run the data flow manually to create or refresh the corresponding data set. For an
existing data set, run the data flow if you know that the columns and data from the
data set that was used to build the data flow have changed.
1. In the Data page, go to the Data Flows section, and locate the data flow that you
want to run.
2. Click the data flow’s Actions menu and select Run.
Notes about running data flows:
• To run a saved data flow, you must specify a Save Data step as its final step. To
add this step to the data flow, click the data flow’s Actions menu and select
Open. After you’ve added the step, save the data flow and try to run it again.
• When creating a new database data source, set the database’s query mode to
Live. Setting the query mode to Live allows the data flow to access data from the
database (versus the data cache) and pushes any expensive operations such as
joins to the database. See Manage Data Sets.
• When you update a data flow that uses data from a database source, the data is
either cached or live depending on the query mode of the source database.
• Complex data flows take longer to run. While the data flow is running, you can go
to and use other parts of the application, and then come back to the Data Flows
pane to check the status of the data flow.
• You can cancel a long-running data flow. To do so, go to the Data Flows section,
click the data flow’s Action menu and select Cancel.
• If it’s the first time you’ve run the data flow, then a new data set is created and you
can find it in the Data Sets section of the Data page. The data set contains the
name that you specify on the data flow’s Save Data step. If you’ve run the data
flow before, then the resulting data source already exists and its data is refreshed.

Apply Parameters to a Data Flow


In a data flow, you can add parameters so that you can reuse the data flow with a
different source data set or use different criteria to process and select data.
Parameters help you identify the type of data that's appropriate for the data flow, and
let you select an alternative data set when you run or schedule the data flow. You can
also apply parameters to modify default values when creating an Essbase cube.
For example, using a parameter you can:
• Process a new data set that has the same format as the default input data set.
• Process and store different aspects of a large data set based on date range,
individual departments, or regions into alternative target data sets.
You can apply parameters for the following steps:
• Add Data
• Save Data
• Create Essbase Cube
In the Step editor pane, specify the values for the parameters:

• Add Data: Select the When Run Prompt to select Data Set option, then provide the
Name and Prompt values for the parameter.
• Save Data: Select the When Run Prompt to specify Data Set option, then provide
the Name and Prompt values for the parameter.
• Create Essbase Cube: Select the When Run Prompt to specify Data Set option,
then provide the Cube name, Application name, and Prompt values for the
parameter.

Modify Parameter Prompts when You Run or Schedule a Data Flow


When you run or schedule a data flow that has parameter prompts, the prompts are
displayed before the job runs. Prompts let you review the default values or settings
and select or define alternate values or settings.

Run a Data Flow


1. Go to the Data page and click Data Flows to select the data flow with parameter
prompts that you want to run.
2. Click the data flow’s Actions menu or right-click and select Run.
3. In the Data Flow Prompt dialog, either use the default values or define alternate
values.
• In the Sources section, click the default Target - existing data set name, then
select a new source data set in the Add Data Set dialog. Click Add.
• In the Targets section, do one of the following:
– Change the default Target - existing data set name.
– For a data flow with Create Essbase Cube step, change the default
Target - Application and Target - Cube names.
4. Click OK.


Schedule a Data Flow


1. Go to the Data page and click Data Flows to select the data flow with parameter
prompts that you want to add to a scheduled job.
2. Click the data flow's Actions menu or right-click and select Schedule.
3. In the Parameters section of the Jobs dialog, either use the default values or
define alternate values for a data flow job.
• In the Sources section, click the default Target - existing data set name, then
select a new source data set in the Add Data Set dialog. Click Add.
• In the Targets section, do one of the following:
– Change the default Target - existing data set name.
– For a data flow with Create Essbase Cube step, change the default
Target - Application and Target - Cube names.

10
Import and Share
You can import and export projects to share them with other users. You can also share
a file of a visualization, canvas, or story that can be used by other users.

Topics:
• Typical Workflow to Import and Share Artifacts
• Import and Share Projects or Folders
• Share Visualizations, Canvases, or Stories

Typical Workflow to Import and Share Artifacts


Here are the common tasks for sharing and importing folders, projects, visualizations,
canvases, and stories with other users.

• Import projects and folders: Import projects and folders as applications. See Import
an Application or Project.
• Share a folder, project, visualization, canvas, or story: Share a project or folder as
an application with users. You can also share your project's visualizations,
canvases, or stories as a file. See Share a Project or Folder as an Application and
Share a File of a Visualization, Canvas, or Story.
• Email a folder, project, visualization, canvas, or story: Export data visualization
artifacts using email. See Email Projects and Folders and Email a File of a
Visualization, Canvas, or Story.
• Share artifacts with Oracle Analytics Cloud: Share data visualization artifacts with
other Oracle Analytics Cloud users. See Share a Project or Folder on Oracle
Analytics Cloud and Share a File of a Visualization, Canvas, or Story on Oracle
Analytics Cloud.

Import and Share Projects or Folders


You can import projects and applications from other users and sources, and export
projects to make them available to other users.

Topics:
• Import an Application or Project
• Share a Project or Folder as an Application
• Email Projects and Folders
• Share a Project or Folder on Oracle Analytics Cloud


Import an Application or Project


You can import an application or project created and exported by another user, or you
can import an application from an external source such as Oracle Fusion Applications.
The import includes everything that you need to use the application or project such as
associated data sets, connection string, connection credentials, and stored data.
1. On the Home page, click Projects.
2. On the Projects page click Page Menu, then select Import Project.
3. In the Import dialog, click Select File or drag a project or application file onto the
dialog, then click Import.
4. If an object with the same name already exists in your system, then choose to
replace the existing object or cancel the import.

Share a Project or Folder as an Application


You can share a project to export it as an application that can be imported and used
by other users.
The export produces a .DVA file that includes the items that you specify (such as
associated data sets, the connection string and credentials, and stored data).
1. On the Home page, click Projects.
2. On the Projects page select the project or folder that you want to share and click
Action Menu, then select Export to open the Export dialog.

Note:
If you select an empty folder that doesn’t contain a project, you see a
notification.

3. Click File, then specify the options for sharing the project or folder:
• Specify the file name.
• Move the slider to enable the Include Data option to include the data when
sharing a project or folder.
• Move the slider to enable the Connection Credentials option, if you want to
include the user name and password of the data source connection with the
exported project.


Note:

• For a project or folder with an Excel, CSV, or TXT data source - Because an
Excel, CSV, or TXT data source doesn't use a data connection, clear the
Include Connection Credentials option.
• For a project or folder with a database data source - If you
enable the Connection Credentials option, then the user must
provide a valid user name and password to load data into the
imported project.
• For a project with an Oracle Applications or Oracle Essbase
data source - Selecting the Connection Credentials option works if
on the connection setup’s Create Connection dialog you specified
the Always use this name and password option in the
Authentication field.
If you clear the Connection Credentials option or specify the
Require users to enter their own username and password option
in the Authentication field, then the user must provide a valid user
name and password to load data into the imported project.

4. If you selected the Include Data option or the Connection Credentials option,
then enter and confirm a password that the user must provide to import the project
or folder and decrypt its connection credentials and data.
5. Click Save.

Email Projects and Folders


You can email the .DVA file of a project or folder to enable other users to work with it.
When you start to email the project or folder, you initiate an export process that
produces a .DVA file that includes everything that you need to use the project or folder
(such as associated data sets, the connection string and credentials, and stored data).
1. On the Home page, click Projects.
2. On the Projects page select the project or folder that you want to share and click
Action Menu, then select Export to open the Export dialog.

Note:
If you select an empty folder that doesn’t contain a project, you see a
notification.

3. Click Email to open the Email dialog.


4. Move the slider to enable the Include Data option, if you’re sharing a project or
folder that uses an Excel data source and you want to include the data with the
export.
5. Move the slider to enable the Connection Credentials option, if retrieving the
data requires connection credentials. Then enter and confirm the password.


6. If your project or folder includes data from an Oracle Applications or a database
data source and the Include Data option is enabled, then you must enter a
password that's sent to the database for authentication when the user opens the
application and accesses the data. Disable the Include Data option if you don't
want to include the password with the project or folder. If you clear this option,
then users must enter the password when opening the application to access the
data.
7. Click Email.
Your email client opens a new partially composed email with the .DVA file
attached.

Note:
When you select the Email option, you don’t obtain a file that you can
save.

Share a Project or Folder on Oracle Analytics Cloud


You can use Oracle Analytics Cloud to share one or more of your projects or folders.
1. On the Home page, click Projects.
2. On the Projects page select the project or folder that you want to share and click
Actions menu, then select Export to open the Export dialog.

Note:
If you select an empty folder that doesn’t contain a project, you see a
notification.

3. Click Cloud, then specify and select the options for sharing the project or folder:
• Enter the file name and the Oracle Analytics Cloud URL, such as
https://cloud.oracle.com/.

• Enter your Oracle Analytics Cloud user account credentials.


• Click the Include Data option to include the data with the project or folder.
• Click the Connection Credentials option if you want to include the data
source connection’s user name and password.
4. Click Publish.
The project or folder is shared to the Oracle Analytics Cloud user account you
specify and is displayed in Oracle Analytics Cloud with other projects and folders.


Share Visualizations, Canvases, or Stories


You can share visualizations, canvases, or stories to make them available to other
users.

Topics:
• Share a File of a Visualization, Canvas, or Story
• Email a File of a Visualization, Canvas, or Story
• Print a Visualization, Canvas, or Story
• Write Visualization Data to a CSV or TXT File
• Share a File of a Visualization, Canvas, or Story on Oracle Analytics Cloud

Share a File of a Visualization, Canvas, or Story


You can share one or more of your project's visualizations, canvases, or stories as a
file.
1. Create or open a Data Visualization project.
2. Click the Share icon on the project toolbar, then click File to open the File dialog.
3. In the File dialog, specify and select the options based on the selected format of
the file.
• Powerpoint (pptx), Acrobat (pdf), and Image (png): Specify the file name,
and paper size and orientation. Based on the pane, do one of the following:
– Visualize - Select either to include the active canvas or visualization, or all
canvases.
– Narrate - Select either to include the active page or visualization, or all
story pages.
• Data (csv) - Specify the file name.
• Package (dva) - Specify the file name. Move the slider to select the Include
Data and Connection Credentials options. If you select the Include Data
and Connection Credentials options, enter a password to retrieve the
packaged data.

Note:
You can only select Package (dva) as the file format for sharing a
project.

4. Click Save.
5. In the Save As dialog, change the file name if you want (making sure to include
the file extension), browse to the location where you want to save the file, and
click Save.


Email a File of a Visualization, Canvas, or Story


You can choose to email one or more of your project's visualizations, canvases, or
stories as a file. You can also email a project as a file.
1. Create or open a Data Visualization project.
2. Click the Share icon on the project toolbar, then click Email to open the Email
dialog.
3. In the Email dialog, specify and select the options based on the selected file
format that you want to send as an email attachment.
• Powerpoint (pptx), Acrobat (pdf), and Image (png): Specify the file name
and paper size and orientation. Based on the pane, do one of the following:
– Visualize - Select either to include the active canvas or visualization, or all
canvases.
– Narrate - Select either to include the active page or visualization, or all
story pages.
• Data (csv) - Specify the file name.
• Package (dva) - Specify the file name. Move the slider to select the Include
Data and Connection Credentials options. If you select the Include Data
and Connection Credentials option, enter a password to retrieve the
packaged data.

Note:
You can only select Package (dva) as the file format for sharing a
project.

4. Click Email.
Your email client opens a new partially composed email with the export file
attached.

Note:
When you select the Email option, you don’t obtain a file that you can
save.

Print a Visualization, Canvas, or Story


You can print one or more of your project's visualizations, canvases, or stories.
1. Create or open a Data Visualization project.
2. Click the Share icon on the project toolbar, then click Print to open the Print
dialog.
3. In the Print dialog, specify the file name, and the paper size and orientation. Also
select whether to include all open visualizations or canvases, or only the active
visualization or canvas, in the file.


4. Click Print. The browser's print dialog is displayed.


5. Specify other printing preferences, such as which printer to use and how many
copies to print, and click Print.

Write Visualization Data to a CSV or TXT File


You can write the data from a visualization to a CSV or TXT file. This lets you open
and update the visualization data in a compatible application such as Excel.
1. Locate the visualization with the data that you want to write to CSV or TXT format,
click Share on the visualization toolbar, select File, select the Format (for
example, CSV), and click Save.
The Save As dialog is displayed.
2. Name the file and browse to the location where you want to save the file. Change
the file extension to .txt, if needed. Click Save.

Share a File of a Visualization, Canvas, or Story on Oracle Analytics Cloud


You can share the visualizations, canvases, and stories that you've created in a
project in Data Visualization Desktop with a user of Oracle Analytics Cloud.
1. Create or open a data visualization project.
2. Click the Share icon on the project toolbar, then click Cloud to open the Oracle
Analytics Cloud dialog.
3. In the Oracle Analytics Cloud dialog, specify and select the options.
• Enter the file name and the Oracle Analytics Cloud URL, such as
https://cloud.oracle.com/.

• Enter your Oracle Analytics Cloud user account credentials.


• Click the Include Data option to include the data with the visualizations,
canvases, or stories.
• Click the Connection Credentials option if you want to include the data
source connection’s user name and password.
4. Click Publish.
The project is shared to the Oracle Analytics Cloud user account you specify and
is displayed in Oracle Analytics Cloud with other projects.

A
Frequently Asked Questions
This reference provides answers to frequently asked questions for Oracle Data
Visualization Desktop.

Topics:
• FAQs to Install Data Visualization Desktop
• FAQs for Data Visualization Projects and Data Sources

FAQs to Install Data Visualization Desktop


This topic contains common questions about installing Data Visualization Desktop and
installing Oracle DVML.

Topics:
• How do I install Machine Learning and Advanced Analytics for Data Visualization
Desktop?
• Why can’t I install Data Visualization Desktop on my computer?
• How can I get the most current version of Data Visualization Desktop?

How do I install Machine Learning and Advanced Analytics for Data Visualization
Desktop?
Machine learning and advanced analytics are optional components and not
automatically installed with Data Visualization. If you want to use the Diagnostics
Analytics (Explain), Machine Learning Studio, or advanced analytics functionality, then
you must install machine learning.
Follow these steps to install the required version of machine learning on Windows.
1. Click Install DVML from the Data Visualization Desktop Windows Start menu.
This installation enables machine learning and advanced analytics for the
corresponding Data Visualization Desktop installation.
2. Click Yes when you see the following message: Do you want to allow the
following program to make changes to this computer?
A terminal window is displayed showing the progress of the installation.
3. The installer starts automatically on completion of the download. Follow the
displayed instructions to install machine learning to the selected install path.
4. Click Finish to close the installer.
5. Press any key when you see the message Press any key to continue, to close
the terminal window.


6. If Data Visualization Desktop was running during the machine learning installation,
then you must restart Data Visualization Desktop before you can use the machine
learning functionality.
Follow these steps to install the required version of the Machine Learning Framework
on Apple Mac.
1. Double-click the application Oracle Data Visualization Desktop Configure
Python in Finder under Applications or in Launchpad.
A terminal window indicates the download progress of the installer.
2. The installer starts automatically on completion of the download. Follow the
displayed instructions to install machine learning to the selected install path.
• To run the installation, you must enter the user name and password for an
administrator.
• Review the license terms and agree.
3. Click Close when the installation is complete.
The Machine Learning Framework is installed in
/Library/Frameworks/DVMLruntime.framework.
4. If Data Visualization Desktop was running during machine learning installation,
then you must restart Data Visualization Desktop before you can use the machine
learning and advanced analytics functionality.

Why can’t I install Data Visualization Desktop on my computer?


To successfully install Data Visualization Desktop on your computer, you must have
administrator privileges. If you try to install Data Visualization Desktop without
administrator privileges, the following error message is displayed: Error in
creating registry key. Permission denied.

To check whether you have the required administrator privileges, go to the Windows
Control Panel and check your user accounts. If you don't have administrator privileges,
then ask your company's technical support person to help you set up the needed
privileges.

How can I get the most current version of Data Visualization Desktop?
If you open Data Visualization Desktop when a newer version is available, a message
is displayed, telling you to go to Oracle Technology Network to download the latest
version of the Data Visualization Desktop installer.
You can find the current version of the installer on Oracle Technology Network. See
Oracle Data Visualization Desktop Installation Download.

FAQs for Data Visualization Projects and Data Sources


This topic identifies and explains common questions about Data Visualization projects
and data sources.

Topics
• What data sources are supported?
• What if I’m using a Teradata version different than the one supported by Data
Visualization?


What data sources are supported?


You can include data only from specific types and versions of sources. See Supported
Data Sources.

What if I’m using a Teradata version different than the one supported by Data
Visualization?
If you're working with a Teradata version different than the one supported by Data
Visualization, then you must update the extdriver.paths configuration file before you
can successfully build a connection to Teradata. This configuration file is located here:
C:\<your directory>\AppData\Local\DVDesktop\extdriver.paths. For example,
C:\Users\jsmith\AppData\Local\DVDesktop\extdriver.paths.

When updating the extdriver.paths configuration file, remove the default Teradata
version number and replace it with the Teradata version number that you're using.
Make sure that you include \bin in the path. For example, if you're using Teradata
14.10, change C:\Program Files\Teradata\Client\15.10\bin to
C:\Program Files\Teradata\Client\14.10\bin.

B
Troubleshoot
This topic provides troubleshooting tips for Data Visualization Desktop.

Topics
• When I import a project, I get an error stating that the project, data source, or
connection already exists
• When I try to build a connection to Teradata, I get an error and the connection is
not saved
• I have issues when I try to refresh data for file-based data sources
• I can’t refresh data from a MongoDB data source
• Oracle Support needs a file to help me diagnose a technical issue
• I need to find more information about a specific issue

Troubleshoot Data Visualization Issues


This topic describes common problems that you might encounter when working with
Data Visualization and explains how to solve them.

When I import a project, I get an error stating that the project, data source, or
connection already exists
When you’re trying to import a project, you might receive the following error message:
“There is already a project, data source or connection with the same name as
something you’re trying to import. Do you want to continue the import and replace the
existing content?”
This error message is displayed because one or more of the components exported
with the project is already on your system. When a project is exported, the
outputted .DVA file includes the project’s associated data sources and connection
string. To resolve this error, you can either click OK to replace the components on
your system, or you can click Cancel and go into your system and manually delete the
components.
This error message is also displayed when the project you’re trying to import contains
no data. When you export a project without data, the project’s and data sources’
metadata are included in the .DVA. To resolve this issue, you can click OK to replace
the components on your system, or you can click Cancel and go into your system and
manually delete the data source or connection that’s causing the error.

When I try to build a connection to Teradata, I get an error and the connection is
not saved
When you’re trying to create a connection to Teradata, you might receive the following
error message:


“Failed to save the connection. Cannot create a connection since there are some
errors. Please fix them and try again.”
This error message is displayed because the version of Teradata that you're using is
different from the version supported by Data Visualization. To resolve this issue,
update the extdriver.paths configuration file. This configuration file is located here:
C:\<your directory>\AppData\Local\DVDesktop\extdriver.paths. For example,
C:\Users\jsmith\AppData\Local\DVDesktop\extdriver.paths.

To update the extdriver.paths configuration file, remove the default Teradata version
number and replace it with the Teradata version number that you're using. Make sure
that you include \bin in the path. For example, if you're using Teradata 14.10, then
change C:\Program Files\Teradata\Client\15.10\bin to
C:\Program Files\Teradata\Client\14.10\bin. See What if I'm using a Teradata version
different than the one supported by Data Visualization?

I have issues when I try to refresh data for file-based data sources
Keep in mind the following requirements when you refresh data for Microsoft Excel,
CSV, or TXT data sources:
• To refresh an Excel file, ensure that the newer spreadsheet file contains a sheet
with the same name as the original file you uploaded. If a sheet is missing, then
you must fix the file to match the sheets in the original uploaded file.
• If the Excel, CSV, or TXT file that you reload is missing some columns, then you’ll
get an error stating that your data reload has failed. If this happens, then you must
fix the file to match the columns in the original uploaded file.
• If the Excel, CSV, or TXT file you used to create the data source was moved or
deleted, then the connection path is crossed out in the Data Source dialog. You
can reconnect the data source to its original source file, or connect it to a
replacement file, by right-clicking the data source in the Display pane and
selecting Reload Data from the Options menu. You can then browse for and
select the file to load.
• If you reloaded an Excel, CSV, or TXT file with new columns, then the new
columns are marked as hidden and don’t display in the Data Elements pane for
existing projects using the data set. To unhide these columns, click the Hidden
option.
Data Visualization requires that Excel spreadsheets have a specific structure. See
About Adding a Spreadsheet as a Data Set.

I can’t refresh data from a MongoDB data source


The first time Data Visualization connects to MongoDB, the MongoDB driver creates a
cache file. If the MongoDB schema was renamed and you try to reload a MongoDB
data source or use the data source in a project, then you might get an error or Data
Visualization doesn’t respond.
To correct this error, you need to clear the MongoDB cache. To clear the cache, delete
the contents of the following directory:
C:\<your directory>\AppData\Local\Progress\DataDirect\MongoDB_Schema. For
example, C:\Users\jsmith\AppData\Local\Progress\DataDirect\MongoDB_Schema.


Oracle Support needs a file to help me diagnose a technical issue


If you’re working with the Oracle Support team to resolve a specific issue, they may
ask you to generate a diagnostic dump file. To generate this file, do the following:
1. Open the command prompt and change the directory to the Data Visualization
Desktop installation directory (for example, C:\Program Files\Oracle Data
Visualization).
2. Type diagnostic_dump.cmd and then provide a name for the .zip output file (for
example, output.zip).
3. Press Enter to execute the command.
You can find the diagnostic output file in your Data Visualization Desktop
installation directory.

I need to find more information about a specific issue


The community forum is another great resource that you can use to find out more
information about the problem you’re having.
You can find the forum here: Oracle Community Forum.

C
Accessibility Features and Tips for Data
Visualization Desktop
This topic describes accessibility features and information for Data Visualization
Desktop.

Topics:
• Start Data Visualization Desktop with Accessibility Features Enabled
• Keyboard Shortcuts for Data Visualization

Start Data Visualization Desktop with Accessibility Features Enabled


You can enable features that make the interface for Data Visualization Desktop more
accessible by improving navigation. To enable these features, you must start Data
Visualization Desktop from the command line. Open a command window and enter the
following:
On Windows:
dvdesktop.exe -sdk

On Mac:
open /Applications/dvdesktop.app --args -sdk

When you run the command, you see Data Visualization Desktop open in a web
browser.

Keyboard Shortcuts for Data Visualization


You can use keyboard shortcuts to navigate and to perform actions.
Use these keyboard shortcuts for working with a project in the Visualize Canvas.

Task Keyboard Shortcut


Save a project with the changes. Ctrl + S
Copy the selected items to the clipboard. Ctrl + C
Save a newly created project with a specific name. Ctrl + Shift + S
Add insights to a project. Ctrl + I
Add data columns to a project. Shift + F10
Undo the last change. Ctrl + Z
Reverse the last undo. Ctrl + Y


Use these keyboard shortcuts while working on a visualization in the Visualize canvas.

Task Keyboard Shortcut


Copy a visualization to paste it to another canvas. Ctrl + C
Paste the visualization in a canvas. Ctrl + V
Duplicate a visualization. Ctrl + D
Delete a visualization. Delete key

Use these keyboard shortcuts while working with a filter in the filter panel on the filter
bar.

Task Keyboard Shortcut


Search items in a filter. Enter key
Add the search string to the selection list. Ctrl + Enter

Use these keyboard shortcuts when you want to open, create, or edit artifacts such as
data sets, projects, data flows, and sequences in a new tab or window.

Task Keyboard Shortcut


Open an artifact in a new browser tab. Ctrl+Click the artifact
Open an artifact in a new browser window. Shift+Click the artifact

D
Data Sources and Data Types Reference
Find out about supported data sources, databases, and data types.

Topics
• Supported Data Sources
• Oracle Applications Connector Support
• Data Visualization Supported and Unsupported Data Types

Supported Data Sources


You can connect to many different data sources.

Data Sources Supported for Use with Data Visualization Desktop on Mac
You can use these as data sources on Mac.
• Oracle Applications
• Oracle Database
• Oracle Essbase
• Oracle Autonomous Data Warehouse Cloud
• Oracle Talent Acquisition Cloud (Beta)
• Microsoft Excel XLSX File
• CSV File

Data Sources Supported for Use with Oracle Data Visualization Desktop

Each entry lists the supported version, whether the data source is available in Data
Visualization Desktop for Windows and for Mac, and any additional information.
• Oracle Applications - Version: 11.1.1.9+, or Oracle Fusion Applications Release 8
and later. Windows: Yes. Mac: Yes. The connector supports several Oracle SaaS
Applications. See Oracle Applications Connector Support. See also Create Oracle
Applications Connections.
• Oracle Autonomous Data Warehouse Cloud - Windows: Yes. Mac: Yes. Connection
to public IP address only. You can connect to multiple Oracle Autonomous Data
Warehouse Cloud data sources. Upload a wallet for each connection. See Create
Connections to Oracle Autonomous Data Warehouse Cloud.
• Oracle Big Data Cloud - Windows: Yes. Oracle Big Data Cloud must be integrated
with Oracle Identity Cloud Service. See Create Connections to Oracle Big Data
Cloud.
• Oracle Database - Version: 11.2.0.4+, 12.1+, 12.2+. Windows: Yes. Mac: Yes. Use
the Oracle Database connection type to connect to Oracle Database Cloud Service.
You can connect to multiple database services. Upload a wallet for each connection.
Ensure that the appropriate security access rules are in place to allow a network
connection to the database service on the database listening port. See Create
Database Connections.
• Oracle Content and Experience Cloud - Windows: Yes.
• Oracle Essbase - Version: Essbase 11.1.2.4.0+, Oracle Analytics Cloud - Essbase.
Windows: Yes. Mac: Yes. See Create Connections to Oracle Essbase.
• Oracle Service Cloud - Windows: Yes.
• Oracle Talent Acquisition Cloud - Version: 15b.9.3+. Windows: Yes. Mac: Yes.
• Actian Ingres - Version: 5.0+. Windows: Yes.
• Actian Matrix - Version: 5.0+. Windows: Yes.
• Actian Vector - Version: 5.0+. Windows: Yes.
• Amazon Aurora - Windows: Yes.
• Amazon EMR - Version: Amazon EMR 4.7.2 running Amazon Hadoop 2.7.2 and
Hive 1.0.0; Amazon EMR (MapR) - Amazon Machine Image (AMI) 3.3.2 running
MapR Hadoop M3 and Hive 0.13.1. Windows: Yes. Complex data types not
supported.
• Amazon Redshift - Version: 1.0.1036+. Windows: Yes.
• Apache Drill - Version: 1.7+. Windows: Yes.
• Apache Hive - Version: 1.2.1+ (supported: Hive 1.0.x, 1.1.x, 1.2.x, 2.0.x, 2.1.x).
Windows: Yes.
• Cassandra - Version: 3.10. Windows: Yes.
• DB2 - Version: 10.1+, 10.5+. Windows: Yes.
• DropBox - Windows: Yes.
• Google Analytics - Windows: Yes.
• Google Cloud - Windows: Yes.
• Google Drive - Windows: Yes.
• GreenPlum - Version: 4.3.8+. Windows: Yes.
• HortonWorks Hive - Version: 1.2+. Windows: Yes.
• HP Vertica - Version: 7+. Windows: Yes.
• IBM BigInsights Hive - Version: 1.2+. Windows: Yes.
• Impala - Version: 2.7+. Windows: Yes.
• Informix - Version: 12+. Windows: Yes.
• MapR Hive - Version: 1.2+.
• Microsoft Access - Version: 2013, 2016. Windows: Yes.
• MonetDB - Version: 5+. Windows: Yes.
• MongoDB - Version: 3.2.5. Windows: Yes.
• MySQL - Version: 5.1+, 5.6+. Windows: Yes. Connections to MySQL Community
Edition aren't supported.
• Netezza - Version: 7. Windows: Yes.
• Pivotal HD Hive - Windows: Yes.
• PostgreSQL - Version: 9.5+. Windows: Yes.
• Presto - Windows: Yes.
• Salesforce - Windows: Yes.
• Spark - Version: 1.6+, 2.0, 2.1+. Windows: Yes.
• SQL Server - Version: 2008, 2012, 2016. Windows: Yes.
• Sybase ASE - Version: 16+. Windows: Yes.
• Sybase IQ - Version: 16+. Windows: Yes.
• Teradata - Version: 14, 15, 16, 16.10. Windows: Yes.
• Teradata Aster - Version: 6.10+. Windows: Yes.
• Elastic Search - Version: 5.6.4+. Windows: Yes.
• JDBC - Generic JDBC driver support.
• OData - Version: 4.0+. Windows: Yes.
• ODBC - Generic ODBC driver support.
• CSV File - Windows: Yes. Mac: Yes.
• Microsoft Excel - Windows: Yes. Mac: Yes. Only XLSX files.
Oracle Applications Connector Support


Oracle Applications Connector supports several Oracle SaaS Applications. You can
also use Oracle Applications Connector to connect to your on-premises Oracle BI
Enterprise Edition deployments (if patched to an appropriate level) and another Oracle
Analytics Cloud service.
Oracle SaaS applications you can connect to:
• Oracle Sales Cloud
• Oracle Financials Cloud
• Oracle Human Capital Management Cloud
• Oracle Supply Chain Cloud
• Oracle Procurement Cloud
• Oracle Project Cloud
• Oracle Loyalty Cloud

Data Visualization Supported and Unsupported Data Types


Read about the data types that Data Visualization supports and doesn’t support.

Topics:
• Unsupported Data Types
• Supported Base Data Types
• Supported Data Types by Database

Unsupported Data Types


Some data types aren’t supported.
You'll see an error message if the data source contains data types that Data
Visualization doesn't support.

Supported Base Data Types


When reading from a data source, Data Visualization attempts to map incoming data
types to the supported data types.
For example, a database column that contains only date values is formatted as a
DATE, a spreadsheet column that contains a mix of numerical and string values is
formatted as a VARCHAR, and a data column that contains numerical data with
fractional values uses DOUBLE or FLOAT.
In some cases Data Visualization can’t convert a source data type. To work around
this data type issue, you can manually convert a data column to a supported type by
entering SQL commands. In other cases, Data Visualization can't represent binary and
complex data types such as BLOB, JSON, and XML.
Data Visualization supports the following base data types:
• Number Types — SMALLINT, SMALLUINT, TINYINT, TINYUINT, UINT, BIT,
FLOAT, INT, NUMERIC, DOUBLE
• Date Types — DATE, DATETIME, TIMESTAMP, TIME
• String Types — LONGVARCHAR, CHAR, VARCHAR
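For example, if a source column arrives in a type that Data Visualization can't map, one workaround is to cast the column to a supported type in the SQL statement that defines the data set. This is a minimal sketch; the table and column names are illustrative only:

SELECT order_id,
       CAST(priority_code AS VARCHAR(20)) AS priority_text
FROM orders

The cast column is then read as the VARCHAR base type listed above.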


Supported Data Types by Database


Data Visualization supports the following data types.

Database Type Supported Data Types


Oracle BINARY DOUBLE, BINARY FLOAT
CHAR, NCHAR
CLOB, NCLOB
DATE
FLOAT
NUMBER, NUMBER (p,s),
NVARCHAR2, VARCHAR2
ROWID
TIMESTAMP, TIMESTAMP WITH LOCAL TIMEZONE, TIMESTAMP WITH
TIMEZONE
DB2 BIGINT
CHAR, CLOB
DATE, DECFLOAT, DECIMAL, DOUBLE
FLOAT
INTEGER
LONGVAR
NUMERIC
REAL
SMALLINT
TIME, TIMESTAMP
VARCHAR
SQL Server BIGINT, BIT
CHAR
DATE, DATETIME, DATETIME2, DATETIMEOFFSET, DECIMAL
FLOAT
INT
MONEY
NCHAR, NTEXT, NUMERIC, NVARCHAR, NVARCHAR(MAX)
REAL
SMALLDATETIME, SMALLINT, SMALLMONEY
TEXT, TIME, TINYINT
VARCHAR, VARCHAR(MAX)
XML

MySQL BIGINT, BIGINT UNSIGNED
CHAR
DATE, DATETIME, DECIMAL, DECIMAL UNSIGNED, DOUBLE, DOUBLE
UNSIGNED
FLOAT, FLOAT UNSIGNED
INTEGER, INTEGER UNSIGNED
LONGTEXT
MEDIUMINT, MEDIUMINT UNSIGNED, MEDIUMTEXT
SMALLINT, SMALLINT UNSIGNED
TEXT, TIME, TIMESTAMP, TINYINT, TINYINT UNSIGNED, TINYTEXT
VARCHAR
YEAR
Apache Spark BIGINT, BOOLEAN
DATE, DECIMAL, DOUBLE
FLOAT
INT
SMALLINT, STRING
TIMESTAMP, TINYINT
VARCHAR
Teradata BIGINT, BYTE, BYTEINT
CHAR, CLOB
DATE, DECIMAL, DOUBLE
FLOAT
INTEGER
NUMERIC
REAL
SMALLINT
TIME, TIMESTAMP
VARCHAR

E
Data Preparation Reference
This topic describes the types of recommendations that you can use to transform the data in a data set.

Topics:
• Transform Recommendation Options

Transform Recommendation Options


You can use the following data transform options in the project’s Prepare canvas.

Transformation Option Description


Edit Edits the current column so that you can reformat a source column without creating a
second column and hiding the original column.
Hide Hides the column in the Data Elements pane and in the visualizations. If you
want to see the hidden columns, click Hidden columns (ghost icon) on the
page footer. You can then unhide individual columns or unhide all the hidden
columns at the same time.
Group, Conditional Group Select Group to create your own custom groups. For example, you can group
States together with custom regions, and you can categorize dollar amounts
into groups indicating small, medium, and large.
Split Splits a specific column value into parts. For example, you can split a column
called, Name, into first and last name.
Uppercase Updates the contents of a column with the values in all uppercase letters.
Lowercase Updates the contents of a column with the values all in lowercase letters.
Sentence Case Updates the contents of a column to make the first letter of the first word of a
sentence uppercase.
Rename Allows you to change the name of any column.
Duplicate Creates a column with identical content of the selected column.
Convert to Text Changes the data type of a column to text.
Replace Changes specific text in the selected column to any value that you
specify. For example, you can change all instances of Mister to Mr. in the
column.
Create Creates a column based on a function.
Convert to Number Changes the data type of the column to number, which deletes any values
that aren't numbers from the column.
Convert to Date Changes the data type of the column to date and deletes any values that
aren’t dates from the column.
Bin Creates your own custom groups for number ranges. For example, you can
create bins for an Age column with age ranges binned into Pre-Teen, Young
Adult, Adult, or Senior based on custom requirements.
Log Calculates the natural logarithm of an expression.
Power Raises the values of a column to the power that you specify. The default
power is 2.

Square Root Creates a column populated with the square root of the value in the column
selected.

F
Expression Editor Reference
This topic describes the expression elements that you can use in the Expression
Editor.

Topics:
• SQL Operators
• Conditional Expressions
• Functions
• Constants
• Types

SQL Operators
SQL operators are used to specify comparisons between expressions.
You can use various types of SQL operators.

Operator Description
BETWEEN Determines if a value is between two non-inclusive bounds. For example:
"COSTS"."UNIT_COST" BETWEEN 100.0 AND 5000.0
BETWEEN can be preceded with NOT to negate the condition.
IN Determines if a value is present in a set of values. For example:
"COSTS"."UNIT_COST" IN(200, 600, 'A')
IS NULL Determines if a value is null. For example:
"PRODUCTS"."PROD_NAME" IS NULL
LIKE Determines if a value matches all or part of a string. Often used with
wildcard characters to indicate any character string match of zero or more
characters (%) or any single character match (_). For example:
"PRODUCTS"."PROD_NAME" LIKE 'prod%'

Conditional Expressions
You use conditional expressions to create expressions that convert values.
The conditional expressions described in this section are building blocks for creating
expressions that convert a value from one form to another.


Note:

• In CASE statements, AND has precedence over OR


• Strings must be in single quotes
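For example, because AND has precedence over OR, the following condition (the column names are illustrative) is evaluated as (region = 'EMEA' AND revenue > 1000) OR units > 50; add parentheses if you want the OR evaluated first:

CASE
WHEN region = 'EMEA' AND revenue > 1000 OR units > 50 THEN 'Review'
ELSE 'OK'
END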

CASE (If)
Example:
CASE
WHEN score-par < 0 THEN 'Under Par'
WHEN score-par = 0 THEN 'Par'
WHEN score-par = 1 THEN 'Bogey'
WHEN score-par = 2 THEN 'Double Bogey'
ELSE 'Triple Bogey or Worse'
END
Description: Evaluates each WHEN condition and, if satisfied, assigns the value in the corresponding THEN expression. If none of the WHEN conditions are satisfied, it assigns the default value specified in the ELSE expression. If no ELSE expression is specified, the system automatically adds an ELSE NULL.

CASE (Switch)
Example:
CASE Score-par
WHEN -5 THEN 'Birdie on Par 6'
WHEN -4 THEN 'Must be Tiger'
WHEN -3 THEN 'Three under par'
WHEN -2 THEN 'Two under par'
WHEN -1 THEN 'Birdie'
WHEN 0 THEN 'Par'
WHEN 1 THEN 'Bogey'
WHEN 2 THEN 'Double Bogey'
ELSE 'Triple Bogey or Worse'
END
Description: Also referred to as CASE (Lookup). The value of the first expression is examined, then the WHEN expressions. If the first expression matches any WHEN expression, it assigns the value in the corresponding THEN expression. If none of the WHEN expressions match, it assigns the default value specified in the ELSE expression. If no ELSE expression is specified, the system automatically adds an ELSE NULL. If the first expression matches an expression in multiple WHEN clauses, only the expression following the first match is assigned.

Functions
There are various types of functions that you can use in expressions.

Topics:
• Aggregate Functions
• Analytics Functions
• Calendar Functions
• Conversion Functions
• Display Functions
• Mathematical Functions
• String Functions
• System Functions
• Time Series Functions


Aggregate Functions
Aggregate functions perform operations on multiple values to create summary results.

Function Example Description


Avg Avg(Sales) Calculates the average (mean) of a numeric set of values.
Bin Bin(UnitPrice BY ProductName) Selects any numeric attribute from a dimension, fact table, or measure containing data values and places them into a discrete number of bins. This function is treated like a new dimension attribute for purposes such as aggregation, filtering, and drilling.
Count Count(Products) Determines the number of items with a non-null value.
First First(Sales) Selects the first non-null returned value of the expression
argument. The First function operates at the most detailed
level specified in your explicitly defined dimension.
Last Last(Sales) Selects the last non-null returned value of the expression.
Max Max(Revenue) Calculates the maximum value (highest numeric value) of the
rows satisfying the numeric expression argument.
Median Median(Sales) Calculates the median (middle) value of the rows satisfying
the numeric expression argument. When there are an even
number of rows, the median is the mean of the two middle
rows. This function always returns a double.
Min Min(Revenue) Calculates the minimum value (lowest numeric value) of the
rows satisfying the numeric expression argument.
StdDev StdDev(Sales) or StdDev(DISTINCT Sales) Returns the standard deviation for a set of values. The return type is always a double.
StdDev_Pop StdDev_Pop(Sales) or StdDev_Pop(DISTINCT Sales) Returns the standard deviation for a set of values using the computational formula for population variance and standard deviation.
Sum Sum(Revenue) Calculates the sum obtained by adding up all values satisfying
the numeric expression argument.

Analytics Functions
Analytics functions allow you to explore data using models such as trendline and
cluster.

Function Example Description


Trendline TRENDLINE(revenue, (calendar_year, calendar_quarter, calendar_month) BY (product), 'LINEAR', 'VALUE') Fits a linear or exponential model and returns the fitted values or model. The numeric_expr represents the Y value for the trend and the series (time columns) represent the X value.
Cluster CLUSTER((product, company), (billed_quantity, revenue), 'clusterName', 'algorithm=k-means;numClusters=%1;maxIter=%2;useRandomSeed=FALSE;enablePartitioning=TRUE', 5, 10) Collects a set of records into groups based on one or more input expressions using K-Means or Hierarchical Clustering.
Outlier OUTLIER((product, company), (billed_quantity, revenue), 'isOutlier', 'algorithm=kmeans') This function classifies a record as Outlier based on one or more input expressions using K-Means or Hierarchical Clustering or Multi-Variate Outlier detection Algorithms.
Regr REGR(revenue, (discount_amount), (product_type, brand), 'fitted', '') Fits a linear model and returns the fitted values or model. This function can be used to fit a linear curve on two measures.
Evaluate_Script EVALUATE_SCRIPT('filerepo://obiee.Outliers.xml', 'isOutlier', 'algorithm=kmeans;id=%1;arg1=%2;arg2=%3;useRandomSeed=False;', customer_number, expected_revenue, customer_age) Executes a Python script as specified in the script_file_path, passing in one or more columns or literal expressions as input. The output of the function is determined by the output_column_name.

Calendar Functions
Calendar functions manipulate data of the data types DATE and DATETIME based on a
calendar year.

Function Example Description


Current_Date Current_Date Returns the current date.
Current_Time Current_Time(3) Returns the current time to the specified number of
digits of precision, for example: HH:MM:SS.SSS
If no argument is specified, the function returns the
default precision.
Current_TimeStamp Current_TimeStamp(3) Returns the current date/timestamp to the specified
number of digits of precision.
DayName DayName(Order_Date) Returns the name of the day of the week for a
specified date expression.
DayOfMonth DayOfMonth(Order_Date) Returns the number corresponding to the day of the
month for a specified date expression.
DayOfWeek DayOfWeek(Order_Date) Returns a number between 1 and 7 corresponding to
the day of the week for a specified date expression.
For example, 1 always corresponds to Sunday, 2
corresponds to Monday, and so on through to
Saturday which returns 7.
DayOfYear DayOfYear(Order_Date) Returns the number (between 1 and 366)
corresponding to the day of the year for a specified
date expression.
Day_Of_Quarter Day_Of_Quarter(Order_Date) Returns a number (between 1 and 92) corresponding
to the day of the quarter for the specified date
expression.

Hour Hour(Order_Time) Returns a number (between 0 and 23) corresponding
to the hour for a specified time expression. For
example, 0 corresponds to 12 a.m. and 23
corresponds to 11 p.m.
Minute Minute(Order_Time) Returns a number (between 0 and 59) corresponding
to the minute for a specified time expression.
Month Month(Order_Time) Returns the number (between 1 and 12)
corresponding to the month for a specified date
expression.
MonthName MonthName(Order_Time) Returns the name of the month for a specified date
expression.
Month_Of_Quarter Month_Of_Quarter(Order_Date) Returns the number (between 1 and 3) corresponding
to the month in the quarter for a specified date
expression.
Now Now() Returns the current timestamp. The Now function is
equivalent to the Current_Timestamp function.
Quarter_Of_Year Quarter_Of_Year(Order_Date) Returns the number (between 1 and 4) corresponding
to the quarter of the year for a specified date
expression.
Second Second(Order_Time) Returns the number (between 0 and 59)
corresponding to the seconds for a specified time
expression.
TimeStampAdd TimeStampAdd(SQL_TSI_MONTH, 12, Time."Order Date") Adds a specified number of intervals to a timestamp, and returns a single timestamp. Interval options are: SQL_TSI_SECOND, SQL_TSI_MINUTE, SQL_TSI_HOUR, SQL_TSI_DAY, SQL_TSI_WEEK, SQL_TSI_MONTH, SQL_TSI_QUARTER, SQL_TSI_YEAR.
TimeStampDiff TimeStampDiff(SQL_TSI_MONTH, Time."Order Date", CURRENT_DATE) Returns the total number of specified intervals between two timestamps. Use the same intervals as TimeStampAdd.
Week_Of_Quarter Week_Of_Quarter(Order_Date) Returns a number (between 1 and 13) corresponding
to the week of the quarter for the specified date
expression.
Week_Of_Year Week_Of_Year(Order_Date) Returns a number (between 1 and 53) corresponding
to the week of the year for the specified date
expression.
Year Year(Order_Date) Returns the year for the specified date expression.

Conversion Functions
Conversion functions convert a value from one form to another.


Function Example Description


Cast Cast(hiredate AS CHAR(40)) FROM employee Changes the data type of an expression or a null literal to another data type. For example, you can cast a customer_name (a data type of Char or Varchar) or birthdate (a datetime literal).
IfNull IfNull(Sales, 0) Tests if an expression evaluates to a null value, and if it does, assigns the specified value to the expression.
IndexCol SELECT IndexCol(VALUEOF(NQ_SESSION.GEOGRAPHY_LEVEL), Country, State, City), Revenue FROM Sales Uses external information to return the appropriate column for the signed-in user to see.
NullIf SELECT e.last_name, NULLIF(e.job_id, j.job_id) "Old Job ID" FROM employees e, job_history j WHERE e.employee_id = j.employee_id ORDER BY last_name, "Old Job ID"; Compares two expressions. If they're equal, then the function returns null. If they're not equal, then the function returns the first expression. You can't specify the literal NULL for the first expression.
To_DateTime SELECT To_DateTime('2009-03-03 01:01:00', 'yyyy-mm-dd hh:mi:ss') FROM sales Converts string literals of dateTime format to a DateTime data type.

Display Functions
Display functions operate on the result set of a query.

Function Example Description


BottomN BottomN(Sales, 10) Returns the n lowest values of expression, ranked from lowest
to highest.
Filter Filter(Sales USING Product = 'widget') Computes the expression using the given preaggregate filter.
Mavg Mavg(Sales, 10) Calculates a moving average (mean) for the last n rows of
data in the result set, inclusive of the current row.
Msum SELECT Month, Revenue, Msum(Revenue, 3) as 3_MO_SUM FROM Sales Calculates a moving sum for the last n rows of data, inclusive of the current row. The sum for the first row is equal to the numeric expression for the first row. The sum for the second row is calculated by taking the sum of the first two rows of data, and so on. When the nth row is reached, the sum is calculated based on the last n rows of data.
NTile Ntile(Sales, 100) Determines the rank of a value in terms of a user-specified
range. It returns integers to represent any range of ranks. The
example shows a range from 1 to 100, with the lowest sale = 1
and the highest sale = 100.
Percentile Percentile(Sales) Calculates a percent rank for each value satisfying the
numeric expression argument. The percentile rank ranges are
from 0 (1st percentile) to 1 (100th percentile), inclusive.

Rank Rank(Sales) Calculates the rank for each value satisfying the numeric
expression argument. The highest number is assigned a rank
of 1, and each successive rank is assigned the next
consecutive integer (2, 3, 4,...). If certain values are equal,
they are assigned the same rank (for example, 1, 1, 1, 4, 5, 5,
7...).
Rcount SELECT month, profit, Rcount(profit) FROM sales WHERE profit > 200 Takes a set of records as input and counts the number of records encountered so far.
Rmax SELECT month, profit, Rmax(profit) FROM sales Takes a set of records as input and shows the maximum value based on records encountered so far. The specified data type must be one that can be ordered.
Rmin SELECT month, profit, Rmin(profit) FROM sales Takes a set of records as input and shows the minimum value based on records encountered so far. The specified data type must be one that can be ordered.
Rsum SELECT month, revenue, Rsum(revenue) as RUNNING_SUM FROM sales Calculates a running sum based on records encountered so far. The sum for the first row is equal to the numeric expression for the first row. The sum for the second row is calculated by taking the sum of the first two rows of data, and so on.
TopN TopN(Sales, 10) Returns the n highest values of expression, ranked from
highest to lowest.

Mathematical Functions
The mathematical functions described in this section perform mathematical operations.

Function Example Description


Abs Abs(Profit) Calculates the absolute value of a numeric expression.
Acos Acos(1) Calculates the arc cosine of a numeric expression.
Asin Asin(1) Calculates the arc sine of a numeric expression.
Atan Atan(1) Calculates the arc tangent of a numeric expression.
Atan2 Atan2(1, 2) Calculates the arc tangent of y/x, where y is the first numeric
expression and x is the second numeric expression.
Ceiling Ceiling(Profit) Rounds a non-integer numeric expression to the next highest
integer. If the numeric expression evaluates to an integer, the
CEILING function returns that integer.
Cos Cos(1) Calculates the cosine of a numeric expression.
Cot Cot(1) Calculates the cotangent of a numeric expression.
Degrees Degrees(1) Converts an expression from radians to degrees.
Exp Exp(4) Sends the value to the power specified. Calculates e raised to
the n-th power, where e is the base of the natural logarithm.
ExtractBit Int ExtractBit(1, 5) Retrieves a bit at a particular position in an integer. It returns
an integer of either 0 or 1 corresponding to the position of the
bit.

Floor Floor(Profit) Rounds a non-integer numeric expression to the next lowest
integer. If the numeric expression evaluates to an integer, the
FLOOR function returns that integer.
Log Log(1) Calculates the natural logarithm of an expression.
Log10 Log10(1) Calculates the base 10 logarithm of an expression.
Mod Mod(10, 3) Divides the first numeric expression by the second numeric
expression and returns the remainder portion of the quotient.
Pi Pi() Returns the constant value of pi.
Power Power(Profit, 2) Takes the first numeric expression and raises it to the power
specified in the second numeric expression.
Radians Radians(30) Converts an expression from degrees to radians.
Rand Rand() Returns a pseudo-random number between 0 and 1.
RandFromSeed Rand(2) Returns a pseudo-random number based on a seed value.
For a given seed value, the same set of random numbers are
generated.
Round Round(2.166000, 2) Rounds a numeric expression to n digits of precision.
Sign Sign(Profit) This function returns the following:
• 1 if the numeric expression evaluates to a positive
number
• -1 if the numeric expression evaluates to a negative
number
• 0 if the numeric expression evaluates to zero
Sin Sin(1) Calculates the sine of a numeric expression.
Sqrt Sqrt(7) Calculates the square root of the numeric expression
argument. The numeric expression must evaluate to a
nonnegative number.
Tan Tan(1) Calculates the tangent of a numeric expression.
Truncate Truncate(45.12345, 2) Truncates a decimal number to return a specified number of
places from the decimal point.

String Functions
String functions perform various character manipulations. They operate on character
strings.

Function Example Description


Ascii Ascii('a') Converts a single character string to its corresponding ASCII
code, between 0 and 255. If the character expression
evaluates to multiple characters, the ASCII code
corresponding to the first character in the expression is
returned.
Bit_Length Bit_Length('abcdef') Returns the length, in bits, of a specified string. Each Unicode
character is 2 bytes in length (equal to 16 bits).
Char Char(35) Converts a numeric value between 0 and 255 to the character
value corresponding to the ASCII code.

Char_Length Char_Length(Customer_Name) Returns the length, in number of characters, of a specified
string. Leading and trailing blanks aren’t counted in the length
of the string.
Concat SELECT DISTINCT Concat('abc', 'def') FROM employee Concatenates two character strings.
Insert SELECT Insert('123456', 2, 3, 'abcd') FROM table Inserts a specified character string into a specified location in another character string.
Left SELECT Left('123456', 3) FROM table Returns a specified number of characters from the left of a string.
Length Length(Customer_Name) Returns the length, in number of characters, of a specified
string. The length is returned excluding any trailing blank
characters.
Locate Locate('d', 'abcdef') Returns the numeric position of a character string in another character string. If the character string isn't found in the string being searched, the function returns a value of 0.
LocateN Locate('d', 'abcdef', 3) Like Locate, returns the numeric position of a character string in another character string. LocateN includes an integer argument that enables you to specify a starting position to begin the search.
Lower Lower(Customer_Name) Converts a character string to lowercase.
Octet_Length Octet_Length('abcdef') Returns the number of bytes of a specified string.
Position Position('d', 'abcdef') Returns the numeric position of strExpr1 in a character
expression. If strExpr1 isn’t found, the function returns 0.
Repeat Repeat('abc', 4) Repeats a specified expression n times.
Replace Replace('abcd1234', '123', 'zz') Replaces one or more characters from a specified character expression with one or more other characters.
Right SELECT Right('123456', 3) FROM table Returns a specified number of characters from the right of a string.
Space Space(2) Inserts blank spaces.
Substring Substring('abcdef' FROM 2) Creates a new string starting from a fixed number of characters into the original string.
SubstringN Substring('abcdef' FROM 2 FOR 3) Like Substring, creates a new string starting from a fixed number of characters into the original string. SubstringN includes an integer argument that enables you to specify the length of the new string, in number of characters.
TrimBoth Trim(BOTH '_' FROM '_abcdef_') Strips specified leading and trailing characters from a character string.
TrimLeading Trim(LEADING '_' FROM '_abcdef') Strips specified leading characters from a character string.
TrimTrailing Trim(TRAILING '_' FROM 'abcdef_') Strips specified trailing characters from a character string.
Upper Upper(Customer_Name) Converts a character string to uppercase.


System Functions
The USER system function returns values relating to the session.

It returns the user name you signed in with.
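For example, a sketch of an expression that varies its result by the signed-in user; the user name shown is illustrative only:

CASE WHEN USER = 'jsmith' THEN 'My Deals' ELSE 'Other Deals' END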

Time Series Functions


Time series functions are aggregate functions that operate on time dimensions.
The time dimension members must be at or below the level of the function. Because of
this, one or more columns that uniquely identify members at or below the given level
must be projected in the query.

Function Example Description


Periodrolling SELECT Month_ID, Periodrolling(monthly_sales, -1, 1) Computes the aggregate of a measure over the period starting x units of time and ending y units of time from the current time. For example, PERIODROLLING can compute sales for a period that starts at a quarter before and ends at a quarter after the current quarter.
Forecast FORECAST(numeric_expr, ([series]), output_column_name, options, [runtime_binded_options]) Creates a time-series model of the specified measure over the series using either Exponential Smoothing or ARIMA and outputs a forecast for a set of periods as specified by numPeriods.

Constants
You can use constants in expressions.
Available constants include Date, Time, and Timestamp.

Constant Example Description


Date DATE [2014-04-09] Inserts a specific date.
Time TIME [12:00:00] Inserts a specific time.
TimeStamp TIMESTAMP [2014-04-09 12:00:00] Inserts a specific timestamp.

Types
You can use data types, such as CHAR, INT, and NUMERIC in expressions.

For example, you use types when creating CAST expressions that change the data type
of an expression or a null literal to another data type.
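For example, a minimal CAST expression that names a target type directly (the column name is illustrative):

CAST(Quantity AS VARCHAR(20))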

G
Data Visualization SDK Reference
This topic describes the software development kit (SDK) that you can use to develop
and deploy visualization plug-ins to your Data Visualization installation.

Topics:
• About the Oracle Data Visualization SDK
• Create the Visualization Plug-in Development Environment
• Create a Skeleton Visualization Plug-in
• Create a Skeleton Skin or Unclassified Plug-in
• Develop a Visualization Plug-in
• Run in SDK Mode and Test the Plug-in
• Validate the Visualization Plug-in
• Build, Package, and Deploy the Visualization Plug-in
• Delete Plug-ins from the Development Environment

About the Oracle Data Visualization SDK


The Oracle Data Visualization SDK provides a development environment where you
can create and develop custom visualization plug-ins and deploy them to your Data
Visualization installation.

Scripts
Your installation of Oracle Data Visualization includes the scripts that you use to
create a development environment and create skeleton visualization plug-ins. The
scripts are located in this directory: <your_installation_directory>/tools/bin

For example, C:\Program Files\Oracle Data Visualization Desktop\tools\bin

Note the following script names and descriptions:


• bicreateenv - Run this script to create the development environment where you
develop your plug-ins.
• bicreateplugin - Run this script to create a skeleton visualization to quickly get
started on developing your custom plug-in.
• bideleteplugin - Run this script to delete a plug-in from your development
environment.
• bivalidate - Run this script by using the gradlew validate command. The bivalidate
script validates whether the JSON configuration files are properly formatted and
contain appropriate visualization configuration.


Other Resources
These resources help you develop your custom visualization plug-ins:
• circlePack sample - The circlePack sample is included in your development
environment. You can deploy and use this sample immediately. However, the
sample is designed for you to use with the provided tutorial to learn how to
develop a visualization plug-in. You can also copy the sample and use it as a
template for the visualization plug-ins that you want to create.
The circlePack sample is located in <your_development_directory>\src
\sampleviz\sample-circlepack
For example, C:\OracleDVDev\src\sampleviz\sample-circlepack
• Other visualization plug-in samples - You can download plug-in examples from
the Oracle Data Visualization Download Page.
• Tutorial - The tutorial contains information and instructions to help you understand
how to create a robust visualization plug-in. This tutorial provides step-by-step
instructions for modifying the circlePack sample included in your plug-in
development environment.

Tutorial
• JS API documentation - This documentation contains JavaScript reference
information that you need to develop a visualization plug-in. See Data
Visualization SDK JavaScript Reference.

Create the Visualization Plug-in Development Environment


You need to set the PATH environment variable and create the development
environment before you can create visualization plug-ins.
1. Using the command prompt, create an empty development directory. For example,
C:\OracleDVDev.
2. Set the PATH environment variable. For example,
set DVDESKTOP_SDK_HOME="C:\Program Files\Oracle Data Visualization Desktop"
set PLUGIN_DEV_DIR=C:\OracleDVDev
REM add tools\bin to path:
set PATH=%DVDESKTOP_SDK_HOME%\tools\bin;%PATH%
3. Run the bicreateenv script included in your installation to create the development
environment in the empty directory. For example,
cd C:\OracleDVDev
bicreateenv

For information about the options available for running this script, see the script's
command-line help. For example,
C:\OracleDVDev>bicreateenv -help
The complete development environment, including build.gradle and gradlew, is
created in the directory that you specified.
4. (Optional) If you’re working behind a web proxy, then you need to set
gradle.properties to point to your proxy. The gradle.properties file is located in your
development environment, for example, C:\OracleDVDev\gradle.properties.
Use the following example to set your gradle.properties:
systemProp.https.proxyHost=www-proxy.somecompany.com
systemProp.https.proxyPort=80
systemProp.https.nonProxyHosts=*.somecompany.com|*.companyaltname.com

Create a Skeleton Visualization Plug-in


After you create a skeleton visualization plug-in in your development environment, you
then develop it into a robust visualization plug-in and deploy it to your Data
Visualization environment.
1. Run the bicreateplugin script included in your installation to create a skeleton
visualization. Use the following syntax:
bicreateplugin viz -subType <subType> -id <id> -name <name>

• <subType> is the type of visualization that you want to create. Your choices are:

– basic - Use this option to create a visualization that doesn’t use any data
from Data Visualization or use any data model mapping. This is like the
Image and Text visualization types delivered with Data Visualization. For
example, you can use this visualization type to show an image or some
text that’s coded into the plug-in or from a configuration. You can use this
type of visualization to improve formatting.
– dataviz - This type renders data from data sources registered with Oracle
Data Visualization into a chart or table or some other representation on
the screen. It also responds to marking events from other visualizations on
the same canvas and publishes interaction events to affect other
visualizations on the same canvas.
– embeddableDataviz - This type renders data from data sources
registered with Oracle Data Visualization into the cells of a trellis
visualization. It also responds to marking events from other visualizations
on the same canvas and publishes interaction events to affect other
visualizations on the same canvas.
• <id> is your domain and the name that you want to give the visualization
directory and components in your development environment. For example,
com-company.basicviz.

• <name> is the name of the visualization plug-in that you test, deploy, and use in
Data Visualization projects.
For example, to create a basic visualization, name its development directory com-
company-basicviz, and name the visualization plug-in helloViz, enter and run the
following command:
C:\OracleDVDev>bicreateplugin viz -subType basic -id com.company.basicviz -name helloViz

2. (Optional) Open the script's command-line help for information about the options
available for running this script. For example, C:\OracleDVDev>bicreateplugin -help

When you run the bicreateplugin -viz command for the first time, the system creates
the customviz directory in the following location.
<your_development_environment>\src\customviz


All custom visualization development directories that you create are added to this
directory.
For example, C:\OracleDVDev\src\customviz\com-company-basicviz

Create a Skeleton Skin or Unclassified Plug-in


The bicreateplugin -unclassified command creates an empty plug-in with a plugin.xml
file and localization bundles; it's a starting point for other Oracle Data Visualization plug-ins.
The bicreateplugin -skin command creates a skeleton skin plug-in.

1. Run the bicreateplugin script included in your installation to create a skeleton plug-
in. Use one of the following syntaxes:
bicreateplugin -skin -id <id>

bicreateplugin -unclassified -id <id>

• <id> is your domain and the name that you want to give the visualization. For
example, com-company.newskin
For example, to create a skin plug-in, enter and run the following command:
C:\OracleDVDev>bicreateplugin -skin -id com.company.newskin

Develop a Visualization Plug-in


After you create the skeleton visualization plug-in, you can use resources provided by
Oracle to help you develop your plug-in.
The directories for dataviz and embeddableDataviz types include the
datamodelhandler.js file, which contains the physical-to-logical data mapping format.
This file also defines how the visualization renders on the screen and how it passes
user interactions to the server.
• Use the tutorial to learn how to perform development tasks such as implement
data mapping.

Tutorial
• Use the .JS API documentation to learn how to add dependencies. See Data
Visualization SDK JavaScript Reference.

Run in SDK Mode and Test the Plug-in


You can run Oracle Data Visualization in SDK mode from your browser when you’re
developing your visualization plug-in or when you want to test your visualization plug-
in.
1. Execute the gradlew run command. For example, C:\OracleDVDev>gradlew run
After you run the command, note the following results:
• Data Visualization opens in SDK mode in your default browser. Use the
browser's JavaScript debugger to test and debug the application.
• The visualization that you created is available in the Visualizations pane of
Data Visualization.


• A system tray is displayed in the operating system's toolbar and includes three
links: Launch Browser, which you use to launch or relaunch your default
browser to display Data Visualization; Copy URL to Clipboard, which you can
use to copy the URL and paste it into a different browser; and Shutdown,
which you use to shut down the development browser.
2. Test your visualization by dragging and dropping it to a project’s canvas and
adding data elements.
3. If necessary, continue developing the visualization plug-in. When working in SDK
mode in the browser, you can update the .JS definition and refresh the browser to
see your changes.

Validate the Visualization Plug-in


After you’ve tested your visualization plug-in and before you can package and deploy
it, you must validate it.
1. Run the gradlew validate command. For example,
cd C:\OracleDVDev
.\gradlew validate

This step validates whether the JSON configuration files are properly formatted
and contain appropriate visualization configuration. If the validation discovers any
errors, then the system displays error messages.
2. To check for errors in the JavaScript source files, use your browser’s development
tools.

Build, Package, and Deploy the Visualization Plug-in


After you validate the visualization plug-in, you have to build and package it, and then
copy the resulting distributions into your Data Visualization installation directory.
The build and package process runs for all of the visualizations in your development
directory, and each plug-in is contained in its own zip file. There's no way to build and
package specific visualizations. If you want to exclude visualizations from the build and
package process, then you have to move the visualizations that you want to exclude out
of your development directory, or delete them from the directory before you perform
the build. See Delete Plug-ins from the Development Environment.

1. Run the gradlew build command. For example,


cd C:\OracleDVDev
.\gradlew clean build

A build directory is added to your development environment, for example,
C:\OracleDVDev\build\distributions. This directory contains a zip file for each
visualization. The zip file’s name is the one that you gave the visualization when
you created its skeleton. For example, basicviz.zip.
2. Copy the zip files to your Data Visualization installation directory. For example,
%localappdata%\DVDesktop\plugins.
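For example, assuming the directories shown above, you might copy a packaged plug-in from the command prompt as follows (the zip file name is illustrative):

copy C:\OracleDVDev\build\distributions\basicviz.zip %localappdata%\DVDesktop\plugins

If the plugins directory doesn't exist yet, create it first; you may also need to restart Data Visualization Desktop before it picks up the new plug-in.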


Delete Plug-ins from the Development Environment


You can use the bideleteplugin script provided with Data Visualization to delete
the unneeded plug-ins from your development environment.
The build and package process includes all of the visualizations contained in your
development directory. There is no way to build and package specific visualizations.
To exclude any unwanted visualizations from the build, you can delete them before
you perform the build and package process.
1. If you want to delete a visualization plug-in, then run the bideleteplugin
command, using the following syntax:
cd C:\<your_development_directory>
bideleteplugin viz -id <name_of_your_domain>.<name_of_viz_plugin>
2. If you want to delete an unclassified plug-in, then run the bideleteplugin
command, using the following syntax:
cd C:\<your_development_directory>
bideleteplugin unclassified -id
<name_of_your_domain>.<name_of_unclassified_plugin>
3. If you want to delete a skin plug-in, then run the bideleteplugin command,
using the following syntax:
cd C:\<your_development_directory>
bideleteplugin skin -id <name_of_your_domain>.<name_of_skin_plugin>
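For example, to delete the basicviz visualization plug-in created earlier in this appendix (assuming the development directory C:\OracleDVDev and the id com.company.basicviz), you might run:

cd C:\OracleDVDev
bideleteplugin viz -id com.company.basicviz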
