Tutorial Topaze
1. Introduction
This tutorial provides a description of the options and workflow in KAPPA-Workstation. This includes the creation of
new documents and analyses, loading of pressure and rate data, extraction of the loaded production data, Decline
Curve Analysis, use of the loglog analysis tools, examples of analytical and numerical modeling, specialized plots,
sensitivity and forecast runs, and the creation of interpretation file templates. The tutorial finishes by pointing to a few
items that users may decide to explore on their own.
Before starting this session, the user is expected to have installed KAPPA-Workstation and started the RTA
(Topaze) module. The tutorial will use the three files (shown below) located in the Examples folder in the
Installation directory.
Topaze starts (below, left) and brings the user to the ‘File’ page. The active option is ‘New and recent’ and a ‘Blank’
icon can be seen towards the top left of the screen (below, right).
- Step 1: initialization of the main document options: reference time and location, general information, units
and general comments. Keep everything as default and click .
- Step 2: main options of the first analysis in this document. Input the main test parameters. Fields
highlighted in red have a significant impact on the results and should not be left at their default values. If
the default happens to be the answer, one may enter the same value or right-click in the field and select
‘Accept default’. If any highlighted field remains at its default, a red warning message will be carried throughout the
interpretation. For this session, set the pay zone (h) to 100 ft and the porosity (ϕ) to 0.08. Click .
- Step 3: definition of the fluid and its physical diffusion in the formation. Define the fluid as single-phase
gas. Advanced PVT is required when the reference phase is gas, in order to compute the pseudo properties
(see the note after this list). To access the advanced PVT definition, click on . In the PVT definition page, change the reference
pressure to 6650 psia and the reference temperature to 300 °F. Validate the PVT definition with and
proceed to Step 4 using .
- Step 5: controls the level of complexity in the numerical model. The options in the left column are standard
in Saphir and Topaze. The options in the right column are Rubis functionalities. Although these models
can be built directly from Saphir or Topaze, they do require a Rubis license to be available. The default
numerical settings will be largely sufficient for now and will be revisited later in this tutorial, so click
on .
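As a reminder of why pseudo properties are needed when the reference phase is gas: the gas viscosity and deviation factor vary strongly with pressure, so the analysis is carried out in terms of the pseudo-pressure. A standard textbook definition is given below; the exact reference conditions used internally by the software may differ:

m(p) = 2 \int_{p_{ref}}^{p} \frac{p'}{\mu(p')\,Z(p')}\,dp'

where \mu is the gas viscosity and Z the gas deviation factor.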
The document and its first analysis are initialized and the main Topaze window appears. The active tab is
‘Analysis’ with an empty workspace. The document is only in the active computer memory and it is named
‘Untitled1’. Save it and call it ‘RTA Tutorial 1’ using the Ctrl+S shortcut or select ‘Save’ in the File menu.
The main Topaze screen is displayed again with a history plot showing the loaded rates.
The ‘Extract’, , icon in the control panel accesses the manual extraction dialog. We will revisit this dialog
later. The ‘Automatic Extraction’, , icon appears in lieu of the ‘Extract’ icon when the shift key is pressed. In
this state, click on the icon and the loaded production history will be extracted from the peak rate onto a DCA plot,
using the default extraction settings.
Double clicking on the DCA plot title bar maximizes it, bringing up additional options in the ribbon. Right-click in
the plot to access the split options in the popup menu. Split horizontally and vertically to end up with three plots.
Click on ‘Filter’, , in the plot options at the top and draw a lasso around the low rate outliers on
the q vs. time (top left) plot:
This will ghost those unwanted points, allowing the model to match the ‘real’ decline trend.
Click on ‘Parameters’, , in the ‘Plot options’ panel at the top. Change the model to ‘Stretched exponential’.
The slider bar at the bottom allows the user to set the relative weight of rates and cumulative production while
regressing on the model parameters. Move the slider to mid-way between q and Q and run the regression on
initial rate, transition time and b exponent (each decline curve will have its own set of parameters):
Several DCA plots may be created with different decline models on each.
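For reference, the stretched exponential decline model is usually written as

q(t) = q_i \, \exp\!\left[ -\left( \frac{t}{\tau} \right)^{n} \right]

where q_i is the initial rate, \tau the characteristic (transition) time and n the stretching exponent; the parameter names used in the dialog may differ slightly from this textbook form. Moving the slider between q and Q changes the relative weight given to the rate mismatch and to the cumulative production mismatch in the regression objective.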
Close the dialog and restore the DCA plot by double clicking on the plot title bar.
Create a new analysis by clicking on ‘New’, , in the ribbon at the top (or using the Ctrl + M keyboard
shortcut). When creating a new analysis, different levels of duplication are offered to the user. For this session,
select the existing ‘Analysis 1’ to duplicate and keep all the selections. Click on . Delete the existing
DCA plot in Analysis 2.
Before proceeding with loading pressure data, access the ‘RTA Settings’ through located at the top right of
the Topaze window and select the automatic plots as indicated below:
Based on the above selection, when an extraction is made only the normalized rate – cumulative plot will be
created (in addition to the loglog plot and Blasingame plot, which are always created on extraction). This saves
the user from overcrowding the workspace with plots they do not intend to use. The unchecked plots
can always be created at any time during the analysis using the ‘New plot’, , option in the ribbon (subject to
plot prerequisites being met). This will be visited later in the tutorial.
Back in the Topaze main workspace, the history plot is displayed with both rate and pressure data.
Since an extraction already existed (even though we deleted the DCA plot), loading the pressure data also
launches the extraction dialog. This time, the extraction will be based on both pressures and rates, unlike before,
when we had rates only.
If no extraction exists, the dialog can be called manually using ‘Extract’, , in the control panel.
Reset the interval to the complete history by clicking on and change ‘From’ to 1 hr. Click on to
proceed with the extraction.
The main Topaze screen (below) has four plots (loglog, Blasingame, normalized rate - cumulative & history)
and a results window where a red warning indicates that some key parameters remain at their default values.
The 2 nodes controlling the distances to the boundaries, , can now be played with to interactively adjust
the component behaviors to the data, until we get something similar to the display above right. It is also
possible to adjust the boundary distances interactively on the 2D Top view or by manually entering the distances
in the tool parameters table.
Depending on the position of the limit markers on the loglog plot, the North-South and East-West distance
values observed in the result window may be slightly different.
Hide the tool parameters and restore the loglog plot by double clicking on the plot title bar.
When plots are not maximized and several plots exist in the workspace, the plot area may be reduced
by the presence of plot scales. In such instances, the scales may be hidden using this option.
The extracted interval can be edited by clicking on ‘Selection’ and interactively changing the time range
in the consequent popup plot. Its impact on the plots is immediate.
The loglog plot uses the equivalent time Te = Q/q. Small rate values can be filtered in order to avoid
excessively large values of Te by clicking on ‘Filter’ and specifying the criteria in the filter dialog (see the sketch after these notes).
Certain data points can be selected with a lasso by clicking ‘Time interval selection’, then holding the left
mouse button down in a plot to draw a lasso around them. The corresponding points are automatically
highlighted in all other plots (below, left).
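As a minimal illustration of the equivalent time and of the rate filter mentioned in the notes above (the arrays and threshold below are made up for the example, not taken from the tutorial data set):

import numpy as np

# gas rates (Mscf/d) and cumulative production (Mscf) at a few time steps
q = np.array([5000.0, 3000.0, 1500.0, 20.0, 900.0])
Q = np.array([1.0e5, 2.5e5, 3.5e5, 3.6e5, 4.0e5])

te = Q / q                # equivalent time Te = Q/q, in days

q_min = 100.0             # hypothetical minimum-rate criterion for the filter
mask = q >= q_min         # very small rates give excessively large Te
print(te)                 # the 20 Mscf/d point yields Te = 18000 days
print(te[mask])           # the outlier is excluded by the filter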
The ‘Analytical’, , icon in the control panel accesses the manual analytical dialog. Model and parameters
have been initialized from the settings and results of the loglog tool. Clicking on the button would
generate the model with these parameters, but we may as well call the automatic model directly. So click on
to exit the manual analytical model dialog.
The ‘Automatic Analytical’, , icon appears in lieu of the ‘Analytical’ icon when the shift key is pressed. In this
state, click on the icon and the model is executed in a single command, with the resulting curves displayed on
the visible plots.
In the ribbon at the top, the single step response can be turned on/off using . Turn the option off to see the
true pressure response on the loglog plot (shown below).
Turn the single step response back on. It will be left on for the remainder of this tutorial.
In this session, the normalized rate – cumulative plot (which is already created) will be used to estimate the
drainage area size. Maximize the normalized rate – cumulative plot and display results in the plot using the
option in the ribbon at the top (below, left). Activate the ‘Line’ plot option from the ribbon at the top and select
a range of data in boundary dominated flow by holding the left mouse button down and highlighting the relevant data
range. A straight line will be drawn based on regression on the data within the selected interval. The volumetric
results will be updated when the line is redrawn (below, right). You may need to reset the zoom to see the
complete line.
Note: the actual values may be different depending on the points selected for regression.
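For context, the straight line drawn on this plot is essentially a flowing material balance construction. In boundary dominated flow, for a slightly compressible fluid, the normalized rate and normalized cumulative are linearly related; the exact normalization used by the Topaze plot may differ in detail, notably through the use of pseudo-pressure and pseudo-time for gas:

\frac{q}{\Delta p} \;=\; \frac{1}{b_{pss}} \;-\; \frac{1}{c_t\,V_p\,b_{pss}}\,\frac{Q}{\Delta p}

where b_{pss} is the pseudo-steady-state constant, c_t the total compressibility and V_p the connected pore volume. The x-intercept of the line is c_t V_p, which is why redrawing the line updates the volumetric results.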
Restore the normalized rate – cumulative plot by double clicking on the plot title bar.
The destination model may be the analytical or numerical model at the bottom of the icons on the left. Clicking
on one of these two buttons will transfer the relevant active ‘transferable’ results to the model and execute
the model at once. If, for example, you select the Analytical model in the top column and send it to the numerical
model, this will be just the same as calling the numerical model with the analytical values. However, the
dashboard establishes a more flexible bridge between the different sources of results.
In this session, select the normalized rate - cumulative plot from the list of specialized analyses and click on
the ‘To analytical’, , icon:
Since the PV has no information on the relative North-South and East-West boundary distances, they are all
reset to the same value, which gives the same PV as calculated from the normalized rate – cumulative plot.
Clicking on the button would run the regression with the defined settings.
Click on the ‘Numerical’, , icon to access the manual numerical model dialog (above right). The numerical
model can be defined automatically based on the diagnostics (analysis tools) or from the analytical model. To
initialize the numerical model from the analytical one, click on and then click on . In
addition to the model response at the well, a 3D plot with the reservoir geometry and the static and dynamic
reservoir properties is also generated.
With a numerical model initialized, it is possible to consider many more complex options, whether geometrical (in
the map ribbon), related to the fluid behavior (PVT), etc. In the following sections, we will look a little at the
PVT. Switch back to the Analysis tab.
Now that the model is generated, it can be used to forecast future production. Click on ‘Forecast’, , to
access the forecast setup. Select the ‘Constant pressure’ forecast option. Set a producing pressure of 1750 psia,
a forecast duration of 3 months, and generate the forecast:
Click on ‘Sensitivity’, , to access the sensitivity dialog. Different types of sensitivity calculations can be run.
Click on ‘F1’ in the sensitivity dialog to read about the different methods available. For this exercise, select the
‘Monte-Carlo’ method and check ‘ϕ’ from the list of ‘Variables’. The default porosity distribution is ‘Normal’
with the model value as the mean. Change the standard deviation to 0.01 (below left). Keep the number of
models at 50 and click on .
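The sketch below illustrates the sampling that these settings imply. It is only a conceptual illustration: the model call is a placeholder, not a Topaze API, and the mean porosity is assumed to still be the Step 2 value.

import numpy as np

rng = np.random.default_rng()

phi_mean = 0.08     # porosity of the current model (Step 2 value), used as the mean
phi_std = 0.01      # standard deviation entered in the dialog
n_models = 50       # number of models to run

# draw 50 porosity values from the Normal distribution
phi_samples = rng.normal(phi_mean, phi_std, n_models)

# each sample is then run through the model, typically in parallel,
# and the responses are collected for the sensitivity plots
# results = [run_model(porosity=phi) for phi in phi_samples]   # placeholder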
From the sensitivity variable distributions defined, 50 samples will be taken to run the model. The computation
of the various responses is executed in parallel on a multicore PC and the results are displayed on the sensitivity
history plot, which is created automatically (above, right). Maximize the sensitivity history plot and click
on ‘Show’, , in the plot options at the top. In the ‘Show’ dialog, hide all the rate curves by unchecking them
(below left). Once validated, the sensitivity history plot will be as shown below, right:
To view the distribution of the ‘goodness of fit’ (which indicates how much the model deviates from the data),
create a ‘Sensitivity: Histogram’ plot from the ‘New plot’ list as shown below, left.
The Y-axis displays the goodness of fit (based on the least squares distance between data and model). A red
threshold line is also shown - this line can be interactively moved by the user. Only the points falling below this line
(shaded in black) will be reported in the sensitivity results. Let’s set the threshold to roughly 2500
Mscf/d (below, left):
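The ‘goodness of fit’ plotted here is a least squares distance between data and model; an illustrative (assumed) definition, consistent with the rate units of the axis, is

GOF = \sqrt{ \frac{1}{N} \sum_{i=1}^{N} \left( q_i^{data} - q_i^{model} \right)^2 }

so that a perfect match gives zero, and the ~2500 Mscf/d threshold keeps only the models whose average mismatch is below that value.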
Close the scatter plot and restore the histogram plot. Click on ‘Results’, , in the ribbon at the top and display
‘Model - Sensitivity results’ (above, right). The ‘lower’ and ‘higher’ values of the sensitivity variable are displayed
in the results, along with the ‘lower’ and ‘higher’ values of the cumulative forecast for that range. If the threshold
line is modified, these values will be affected. Your values may differ from those shown because of the random Monte-Carlo sampling.
Adding desorption to the model leads, quite logically, to an overall higher gas production.
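Desorbed gas is commonly described with a Langmuir isotherm; as a reminder (the actual desorption options offered in the numerical model dialog may differ), the adsorbed gas content at pressure p is

V_E(p) = \frac{V_L\,p}{p_L + p}

where V_L is the Langmuir volume and p_L the Langmuir pressure. As the reservoir pressure drops, gas desorbs from the rock and supplements the free gas, which is why the desorption case produces more.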
Click on the ‘Improve’, , icon in the control panel to access the manual improve dialog. Select the reservoir
surface as the regression parameter (below, left). Keep the default target selection of cumulative production
(below, right) and click on .
The two analyses can be compared on the various plots displayed. Click on ‘Compare’, , in the Analysis
ribbon at the top. In the compare dialog, select ‘Analysis 3’ and ‘Analysis 2’ and click on ‘Apply’. The models,
as well as the forecasts, are displayed with a specific color in the various plots:
The color assigned to each analysis in compare can be changed by right-clicking on the analysis in the compare
dialog (below, left). Alternatively, this may also be done in the Analysis Information dialog.
With the compare mode active, click on ‘Results’, , in the ribbon at the top. Tabulated results from the 2
analyses may be compared in the consequent dialog. Using the ‘Show:’ drop-down list at the bottom of the
‘Results’ dialog, show the ‘Model – Results - Field’ category to compare the change in STGIIP and PV resulting
from the inclusion of desorption in the model (above, right).
The template stores the settings of the document and of the active analysis at the time it was saved. The next time
the user wants to create a new document, they will just have to select the template. Notice that the wizard,
which takes the user through the six initialization steps, has the settings of the current analysis. This can be
checked with the parameter values in Step 2 (below left) or the PVT and diffusion setup (below right).
However, you may want to explore the capabilities of the numerical models a little further. In the session above
we had just initialized the NL numerical model from the analytical model and gone one step further to add
desorption in the NL model. You may go to the 2D-Map tab, load the field bitmap file ‘RTAEX01 Field.jpg’,
scale the field, define the contour, faults, position the producing well properly, add an injection well and display
the resulting grid in 2D or 3D, enter the interference well schedule, create layers, select a more complex PVT,
etc.
KAPPA-Workstation also has a comprehensive contextual online help, including ‘How to’ topics, Examples and
FAQs, to assist users while they use the software. Users are encouraged to consult these resources.