
PRiSM Predictive Asset Analytics:

Deep Dive

AP-15
Dallas, TX
September 13th, 2018

Alex Jenkins
Mike Reed

Copyright © 2018 AVEVA Group plc and its subsidiaries. All rights reserved.
Predictive Asset Analytics
Deep Dive
Introduction
Model Design Process
Monitoring
Fault Diagnostics
Catch Cost-Savings Analysis

Predictive Asset Analytics
Deep Dive – Introduction & Philosophy
Two things are required for a successful predictive asset analytics program:
1. A proper approach to model design
2. A solid monitoring process

“The whole is greater than the sum of its parts”
- Aristotle

Model Design Process

Model Design Process

Determine Scope → Review Point List → Create TDTs (Template Design Tool) → Build Models in PRiSM Client → Test Models in Data Playback → Deploy Models → Monitor and Fine-Tune Models

Model Design Process
Determine Scope
Start with equipment critical to your process first
No redundancy / single point of failure
No spares
Costly to repair
How often it runs
Move to secondary equipment later

Model Design Process
Review Point List
Instrumentation Availability
Tags exist in the historian (not just local readings)
Adequate historical data
Sufficient resolution / quality of data
Analog tags only
Boolean/Digital tags do not typically model well
Exceptions: these tags can be used as model filters (on-off tags, different product recipes, etc.)

Model Design Process
Create TDTs
Use TDTs (Template Design Tool) spreadsheets
Stay organized
Stay consistent when designing sister assets
Find & Replace in Excel
Easy to share

Model Design Process
Create TDTs
Identify how many tags you have in total for the equipment
Use engineering knowledge to try to anticipate groups of related tags
Some tags may be used in multiple models (Gross Load, Motor Current/Speed, Ambient Temperature)
If you’re not sure of good groupings, try experimenting with OPTiCS

Model Design Process
Create TDTs
Complicated equipment may need to be split up into multiple models
Pump: Mechanical, Process
Gas Turbine: Compressor, Combustion, Turbine Cooling, Mechanical (Bearing Temperatures), Mechanical (Bearing Vibrations)
Centrifugal Compressor: Process (one model per stage), Seal System, Mechanical (Bearing Temperatures), Mechanical (Bearing Vibrations)
Create separate models for the “driver” equipment as well
Ex: Motor → Gearbox → Compressor
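As a rough sketch, the equipment-to-model breakdown described above can be captured as a simple mapping before filling in the TDT spreadsheets. All model and tag names here are illustrative assumptions, not a PRiSM API:

```python
# Hypothetical sketch: organizing one asset's tags into several models,
# mirroring the "split complicated equipment into multiple models" guidance.
# Model and tag names are illustrative only.

MODELS = {
    "Compressor_Train/Motor": ["MOTOR_CURRENT", "MOTOR_SPEED", "STATOR_TEMP_A"],
    "Compressor_Train/Gearbox": ["GB_OIL_TEMP", "GB_VIB_X", "GB_VIB_Y"],
    "Compressor_Train/Process_Stage1": [
        "STG1_SUCTION_P", "STG1_DISCHARGE_P", "STG1_FLOW", "MOTOR_SPEED",
    ],
    "Compressor_Train/Bearing_Temps": ["BRG1_TEMP", "BRG2_TEMP"],
}

def shared_tags(models):
    """Return tags that appear in more than one model (e.g. speed, load)."""
    seen, shared = set(), set()
    for tags in models.values():
        for tag in tags:
            (shared if tag in seen else seen).add(tag)
    return shared

print(sorted(shared_tags(MODELS)))  # MOTOR_SPEED is reused across two models
```

Listing the shared tags up front makes it easy to stay consistent when the same point (load, speed, ambient temperature) feeds several models.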

Model Design Process
Create TDTs
Rule of Thumb: Aim for ~10-25 tags per model
Remember, not every tag has to be used in the Operational Profile
Use Actual Value alarms on tags that are important to monitor but don’t “relate” well to anything else
When in doubt, add a few extra tags to the TDT, then remove them when looking at the data

Building Models
PRiSM Client
Remember to create a Template & Metrics first
Needed for Fault Diagnostics, Component Comparison, Synchronized Alarm Thresholds
Add tags from TDT
Copy/paste to “User Point List”
Map tags to metrics
Import historical data
Rule of Thumb: 1 year at 1 hour intervals

Building Models
PRiSM Client – Cleaning Data
Exclude, don’t delete
Use time-based trends and visual comparison (X-Y plot)
Remove flatlined periods and spikes
Check the scale: removing spikes “zooms in” the chart
Important to check each tag for bad data
Show “invalid data” to see data marked as bad from the historian
Recommended data import preference: “Use Anyway, Exclude”
Remove a tag from the Operational Profile or Project if it has too much bad data
Remove periods of bad/abnormal operation (if known)
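The flatline-and-spike screening above can be sketched in a few lines. This is a generic illustration of the idea, not PRiSM's actual cleaning logic; the thresholds and the tag data are assumptions:

```python
# Hypothetical sketch of the "exclude, don't delete" cleaning step: flag
# flatlined runs and spikes for exclusion from training instead of deleting
# them from the history. Thresholds and sample data are illustrative only.
import statistics

def flag_bad_samples(values, flatline_len=6, spike_sigma=3.0):
    """Return one boolean per sample: True = exclude from training."""
    n = len(values)
    exclude = [False] * n
    # Flatlines: runs of identical readings at least flatline_len long.
    run_start = 0
    for i in range(1, n + 1):
        if i == n or values[i] != values[run_start]:
            if i - run_start >= flatline_len:
                for j in range(run_start, i):
                    exclude[j] = True
            run_start = i
    # Spikes: samples far outside the overall spread.
    mean = statistics.fmean(values)
    sd = statistics.pstdev(values)
    for i, v in enumerate(values):
        if sd > 0 and abs(v - mean) > spike_sigma * sd:
            exclude[i] = True
    return exclude

# A flatlined stretch (six identical 50.0s) and one spike (950.0) get flagged.
data = [50.1, 50.3, 50.2, 50.0, 50.0, 50.0, 50.0, 50.0, 50.0, 50.2, 950.0, 50.1]
print(flag_bad_samples(data))
```

In practice you would review flagged periods visually (time trend plus X-Y plot) before excluding them, as the slide recommends.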

Building Models
PRiSM Client – Create Operational Profile
Rule of Thumb:
Use LSH algorithm
Auto-target to 250 +/- 50 clusters
Select which data sets to use
Un-check data sets with very old data
Un-check data sets that will be used for testing in Data Playback
Select which points to use
Un-check points with too much bad data
Un-check points that are in the project for reference, but don’t model well
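PRiSM builds the operational profile with its own LSH clustering, so the following is only a rough analogue: a tiny k-means pass showing the general idea of compressing training history into a set of representative operating states (the rule of thumb above targets ~250 clusters; this toy example uses 2):

```python
# Rough analogue only: PRiSM's operational profile uses LSH clustering.
# This sketch uses plain k-means to illustrate reducing training data to a
# handful of representative operating states. All data here is synthetic.
import math
import random

def kmeans(points, init_centers, iters=10):
    """Tiny k-means: refine the given initial centers over the points."""
    centers = list(init_centers)
    for _ in range(iters):
        buckets = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)), key=lambda c: math.dist(p, centers[c]))
            buckets[i].append(p)
        centers = [
            tuple(sum(x) / len(b) for x in zip(*b)) if b else centers[i]
            for i, b in enumerate(buckets)
        ]
    return centers

# Two synthetic operating regimes, e.g. low-load and high-load running.
rng = random.Random(1)
points = ([(rng.gauss(10, 1), rng.gauss(5, 1)) for _ in range(200)]
          + [(rng.gauss(50, 1), rng.gauss(30, 1)) for _ in range(200)])
# Seed one starting center in each regime so convergence is deterministic.
centers = sorted(kmeans(points, [points[0], points[200]]))
print(centers)  # two cluster centers, one near each operating regime
```

The point of the analogy: the cluster centers stand in for "normal" operating states, which is why un-checking bad data sets and poorly-modeling points before profile creation matters so much.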

Building Models
PRiSM Client – Alarms
Set Alarm Window
Set Alarm Thresholds
Rule of Thumb: OMR Warning = 5%, OMR Alarm = 10%
Use Absolute Signal Deviation, not Relative
Set Actual Value alarms as needed, especially on tags not included in the Operational Profile
Detailed Alarm View allows for further customization
If you’re using a template, alarm thresholds will be automatically inherited
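The rule-of-thumb thresholds above reduce to a simple two-level check. A minimal sketch (not a PRiSM API; the function name and levels are just the slide's 5%/10% guidance):

```python
# Minimal sketch of the rule-of-thumb OMR thresholds above: warn at 5%,
# alarm at 10%. Names are illustrative, not a PRiSM API.
WARNING_PCT = 5.0
ALARM_PCT = 10.0

def omr_state(omr_pct: float) -> str:
    """Classify an OMR percentage as 'ok', 'warning', or 'alarm'."""
    if omr_pct >= ALARM_PCT:
        return "alarm"
    if omr_pct >= WARNING_PCT:
        return "warning"
    return "ok"

for value in (2.0, 6.5, 12.0):
    print(value, omr_state(value))
```

Per the slide, per-signal deviation alarms would use absolute rather than relative deviation, and templated models inherit these thresholds automatically.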

Test Models
Data Playback
Use Data Playback to verify your model tracks well
Not too many false alarms
Does alarm when there is a problem
If you have an example from the past where the equipment failed, test against that time period to make sure the model would alarm
Make tweaks to the training data, point selection, alarm thresholds, and operational profile settings as needed
Remember to re-create the operational profile before re-testing
When testing different training data sets or point selections, you can create multiple operational profiles and switch between them

Deploy Model and Monitor
Fine-Tuning
Once satisfied with the results of Data Playback, deploy the model
Observe how the model performs in “real life”
Make any tweaks as needed
Rule of Thumb: ~1 month “tuning period” before the model is considered in production

Monitoring Process

Monitoring Process
Alarm Management
Develop a reliable and consistent process to monitor your models
Having the perfect model doesn’t matter if it’s not being looked at
Use Alarm Management in PRiSM Web

[Diagram: alarm workflow states - Pending, Acknowledge, Monitor - at the Equipment / Model / Sensor level]

Monitoring Process
Alarm Management
Use a consistent process so that everyone accessing PRiSM knows the status of each alarm
The alarm state icons let everyone know immediately:
Has someone looked at this alarm already or not?
Is this alarm “serious” or not?

Monitoring Process
Developing a Rhythm
A regular process helps set and maintain expectations between the monitoring center and the sites
Goal is to provide early warning of days or weeks
Not a replacement for operations

[Diagram: monitoring cycle - MDSC analysts review PRiSM alarms → Analysts flag potential issues → Analysts review these issues with SMEs → Analysts create cases to track each issue → MDSC sends report to site → Site reviews report → Site provides findings and feedback to MDSC]

Monitoring Process
Applying What You Learn
If you find yourself constantly dismissing nuisance alarms on a certain model, figure out why you’re getting those alarms and fix it
Taking a little longer to correctly fix an issue today can be worth it in the long run
Tie what you learn when monitoring back into your models
See if there are lessons you can apply when designing future models
Think about real catches you’ve had and create fault diagnostics for them

Fault Diagnostics

Fault Diagnostics
Design Philosophy
Create fault diagnostics based on your equipment knowledge
Use them as a way to store expert knowledge in the software
Think about what failures you’ve seen on this equipment in the past
The fault diagnostics you can create are limited by the tags you used in the model

Fault Diagnostics
Design Philosophy
Fault diagnostics don’t need to be added immediately
A well-tuned model with no fault diagnostics is better than a poorly-tuned model with great fault diagnostics
Adding fault diagnostics is a good project once all the critical models have been deployed

Fault Diagnostics
Monitoring Philosophy
Fault diagnostics aren’t always going to be 100% correct, but they can be a good starting point
Use the “Analysis” button in Web to see what might be happening
In addition to the top OMR contributors, also check trends related to any faults that have triggered
Always use common sense to verify the correctness of a fault detection

Catch Cost-Savings Analysis

Catch Analysis
Determine Avoided Costs
The primary goal of performing a cost analysis is to create a business case for customers.
There are two main factors considered when performing a catch analysis:
Cost Savings
Replacement Parts
Labor
Waste Generated
Environmental Impact (Fines)
Lost Opportunity
Downtime
Reduced Efficiency

Cost Savings
Estimated Cost Metric, Normalized by Estimated Probability

Metric                                          | Catastrophic | Moderate    | Mild
Probability (%)                                 | 5%           | 20%         | 75%
Maintenance Costs
  Parts ($)                                     | $10,000.00   | $5,000.00   | $2,500.00
  Labor Hours (hrs)                             | 120          | 36          | 8
  Labor Rate ($/hr)                             | $50.00       | $50.00      | $50.00
  Labor Total ($)                               | $6,000.00    | $1,800.00   | $400.00
  Total Maintenance Costs ($)                   | $16,000.00   | $6,800.00   | $2,900.00
  Total Maintenance Impact ($)                  | $4,335.00
Waste Generated
  Total Pounds Scrapped (lbs)                   | 15,000       | 5,000       | 1,000
  Raw Materials Cost per Pound ($/lb)           | $0.85        | $0.85       | $0.85
  Total Waste Generated Cost ($)                | $12,750.00   | $4,250.00   | $850.00
  Total Waste Generated Impact ($)              | $2,125.00
Environmental Costs (Gov't Fines)
  Total Pounds of Pollutant Released (lbs)      | 5,000        | 2,000       | 0
  Fine per Pound of Pollutant ($/lb)            | $10.00       | $10.00      | $10.00
  Total Environmental Cost ($)                  | $50,000.00   | $20,000.00  | $0.00
  Total Environmental Cost Impact ($)           | $6,500.00
Cost of Repair (subtract actual cost of repair)
  Labor Hours (hrs)                             | 20           | 4           | 1
  Labor Rate ($/hr)                             | $40.00       | $40.00      | $40.00
  Labor Total ($)                               | $800.00      | $160.00     | $40.00
  Parts ($)                                     | $8,000.00    | $4,000.00   | $0.00
  Total Repair Cost ($)                         | $8,800.00    | $4,160.00   | $40.00
Summary Totals
  Total Cost Savings ($)                        | $69,950.00   | $26,890.00  | $3,710.00
  Total Cost Savings Impact ($)                 | $11,660.00
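The "Impact" rows in the table are probability-weighted expected values over the three severity scenarios. A small sketch of that arithmetic, using figures from the maintenance-cost rows (names are illustrative, not a PRiSM API):

```python
# The "Impact" rows above are expected values: each scenario's cost weighted
# by its estimated probability. Figures are from the table's maintenance rows.
PROBS = {"catastrophic": 0.05, "moderate": 0.20, "mild": 0.75}

def impact(costs_by_scenario):
    """Probability-weighted expected cost across severity scenarios."""
    return sum(PROBS[s] * cost for s, cost in costs_by_scenario.items())

maintenance = {"catastrophic": 16_000, "moderate": 6_800, "mild": 2_900}
print(impact(maintenance))  # matches the $4,335.00 Total Maintenance Impact row
```

The bottom-line Total Cost Savings Impact is the same weighting applied to each scenario's savings net of the actual cost of repair.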
Lost Opportunity
Estimated Lost Opportunity Metric, Normalized by Estimated Probability

Metric                                          | Catastrophic  | Moderate    | Mild
Probability (%)                                 | 5%            | 20%         | 75%
Occurrence Assumptions
  Power Reduction (MW)                          | 300           | 300         | 100
  Downtime Hours at Peak (10 hrs/day)           | 25            | 7.5         | 1.7
  Downtime Hours at Off-Peak (14 hrs/day)       | 35            | 10.5        | 2.3
  Fuel Cost and Other Incremental Costs ($/MW)  | $35.00        | $35.00      | $35.00
Loss of Generating Income - Peak
  Total Lost MWh Peak (MWh)                     | 7,500         | 2,250       | 170
  Avg. Replacement Cost Peak ($/MWh)            | $180.00       | $180.00     | $180.00
  Penalty Rate Peak ($/MWh)                     | $145.00       | $145.00     | $145.00
  Total Lost Revenue Peak ($)                   | $1,087,500.00 | $326,250.00 | $24,650.00
  Total Lost Revenue Impact Peak ($)            | $138,112.50
Loss of Generating Income - Off-Peak
  Total Lost MWh Off-Peak (MWh)                 | 10,500        | 3,150       | 230
  Avg. Replacement Cost Off-Peak ($/MWh)        | $135.00       | $135.00     | $135.00
  Penalty Rate Off-Peak ($/MWh)                 | $100.00       | $100.00     | $100.00
  Total Lost Revenue Off-Peak ($)               | $1,050,000.00 | $315,000.00 | $23,000.00
  Total Lost Revenue Impact Off-Peak ($)        | $132,750.00
Lost Opportunity Cost
  Total Lost Opportunity ($)                    | $2,137,500.00 | $641,250.00 | $47,650.00
  Total Lost Opportunity Impact ($)             | $270,862.50
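A worked sketch of the peak rows: lost MWh is power reduction times downtime hours, and the table's penalty rate appears to equal replacement cost minus fuel/incremental cost (e.g. $180 - $35 = $145/MWh). The function and variable names are illustrative:

```python
# Worked sketch of the peak lost-opportunity rows, using the table's figures.
# Lost MWh = power reduction x downtime hours; lost revenue uses the penalty
# rate (replacement cost minus incremental cost); the impact row is the
# probability-weighted sum across the three severity scenarios.
PROBS = {"catastrophic": 0.05, "moderate": 0.20, "mild": 0.75}

def lost_revenue(power_mw, downtime_hrs, replacement_per_mwh, incremental_per_mwh):
    penalty_rate = replacement_per_mwh - incremental_per_mwh
    return power_mw * downtime_hrs * penalty_rate

peak = {
    "catastrophic": lost_revenue(300, 25, 180, 35),   # $1,087,500
    "moderate":     lost_revenue(300, 7.5, 180, 35),  # $326,250
    "mild":         lost_revenue(100, 1.7, 180, 35),  # $24,650
}
peak_impact = sum(PROBS[s] * v for s, v in peak.items())
print(round(peak_impact, 2))  # matches the $138,112.50 impact row
```

The off-peak rows follow the same pattern with the off-peak hours and rates, and the two impacts sum to the Total Lost Opportunity Impact.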
Summary

Summary

Putting the work in upfront to design a good model will help in the long run
Use TDTs to stay organized
The two most important factors in making a good model are proper point selection and proper historical data cleaning
Add Fault Diagnostics once you’ve established a good model
None of the above means anything unless you follow through with a good monitoring process
Designing good models and following through with a good monitoring process improve your chances for catches

Summary

Determine Scope → Review Point List → Create TDTs (Template Design Tool) → Build Models in PRiSM Client → Test Models in Data Playback → Deploy Models → Monitor and Fine-Tune Models → Get Catches!

Add Fault Diagnostics
Apply Monitoring Lessons to Future Models

[email protected]
[email protected]

This presentation may include predictions, estimates, intentions, beliefs and other statements that are or may be construed as being forward-looking. While these forward-looking statements represent our current judgment on what the future holds, they are subject to risks and uncertainties that could result in actual outcomes differing materially from those projected in these statements. No statement contained herein constitutes a commitment by AVEVA to perform any particular action or to deliver any particular product or product features. Readers are cautioned not to place undue reliance on these forward-looking statements, which reflect our opinions only as of the date of this presentation. The Company shall not be obliged to disclose any revision to these forward-looking statements to reflect events or circumstances occurring after the date on which they are made or to reflect the occurrence of future events.

