Performance and Load Testing With Dynatrace: Release Better Software Faster

Rob Jahn
Technical Partner Manager, Dynatrace

Henrik Rexed
Partner Solution Evangelist, Neotys

Brian Wilson
Sales Engineering, Dynatrace

Jeff Yarbrough
Sales Engineering, Dynatrace
Learning Objectives

1. What is “Performance as a Service”?

2. How to use Dynatrace’s out-of-the-box features within performance engineering
   a. Tagging and Tagging rules
   b. Request Attributes & Request Naming rules
   c. Management Zones
   d. API for time series & calculated metrics

3. Test results review and performance issue triage within Dynatrace

4. How to integrate Dynatrace into your CI/CD pipelines with Automated Quality Gates

5. Using the Neotys load testing platform
   a. NeoLoad overview
   b. Set up and run tests using Jenkins and NeoLoad SaaS

Performance as a Service

Performance as a Service defined

As an engineer, I can request performance feedback on demand without any dependency on other teams or environments.

Allow everyone to get performance feedback easily from their latest builds (through performance tests) or from production.

Performance Center of Excellence Teams enable Self-Service to Engineers (more scale)

Elevate every engineer to become a Performance Expert (right tools and guard rails)

Scale through automation, best practices and standards

Performance as a Self-Service @ Panera Bread

1. Test scripts with service level objectives checked into a repo

2. Continuous test execution by engineers

3. Performance feedback from Jenkins and saved reports

4. Test scripts get updated with the latest production workload (future)

Data sources: PE Performance Warehouse, Google Analytics, Dynatrace, vROps, RUM / UEM
Why do we need it and what will it take?

Change of SDLC Methodology

6+  Big Bang – Waterfall: Requirements, Design, Implementation, Testing, Acceptance, Deployment

4   Incremental – Rational Unified Process: Inception, Elaboration, Construction, Transition

2   Agile – Scrum, Kanban: Development, Operations

1   Continuous – DevOps
Change of Architectures – from Monolith

• One application per host, containing the entire business logic
• Layers: Presentation Layer, Business Layer, Data Layer
Change of Architectures – to Microservices

• Every service is managed by a dedicated team
• The front end calls Services A, B, C, D, and E; Services F and G sit further downstream

Continuous Performance Testing Strategy

• API Testing – Test a Component
• Integration Testing – Test a System
• Application Testing – Test the Real World

Continuous Testing Embedded in CI/CD Pipelines

Shift Left Performance

Automate the analysis

• Test failure
• Error rate
• Throughput + response time versus objectives
• Resources check

Quality Gates using performance scoring

1. Collect the specification – files define the quality gate indicators and their objectives
2. Retrieve the data from the datasource
3. Calculate the score (a minimal sketch follows)
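The three steps above can be sketched in a few lines of Python. This is purely illustrative: the spec format, metric names, thresholds, and the stubbed data source are hypothetical placeholders rather than any specific tool's format.

# Illustrative quality-gate scoring sketch. The spec format, metric names,
# thresholds and the stubbed fetch_metric() data source are hypothetical.

SPEC = [
    {"metric": "response_time_p95_ms", "max": 100, "weight": 50},
    {"metric": "error_rate_percent",   "max": 1,   "weight": 30},
    {"metric": "cpu_usage_percent",    "max": 80,  "weight": 20},
]

def fetch_metric(name):
    # Step 2: retrieve the measured value from the datasource (stubbed here).
    measured = {"response_time_p95_ms": 87.0,
                "error_rate_percent": 0.4,
                "cpu_usage_percent": 91.0}
    return measured[name]

def score(spec):
    # Step 3: each indicator contributes its weight when its objective is met.
    total = sum(item["weight"] for item in spec)
    met = sum(item["weight"] for item in spec
              if fetch_metric(item["metric"]) <= item["max"])
    return 100.0 * met / total

result = score(SPEC)
print(f"Quality gate score: {result:.0f}/100 -> {'PASS' if result >= 80 else 'FAIL'}")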
Quality Gate Examples

Check 1 – Is bad coding leading to higher costs?
Metrics:
❑ Memory usage
❑ Bytes sent / received
❑ Overall CPU
❑ CPU per transaction type
❑ Number of instances running on containers

Check 2 – New dependencies? On purpose? Are services connecting accurately? How many container instances are needed?
Metrics:
❑ Number of incoming / outgoing dependencies

Check 3 – Are we jeopardizing our SLAs? Does load balancing work? Difference between canaries?
Metrics:
❑ Response Time (percentiles)
❑ Throughput & performance per instance / canary

Check 4 – Did we introduce new “hidden” exceptions?
Metrics:
❑ Total exceptions
❑ Exceptions by class & service
What We Have Prepared for You Today

Agenda

• Lab 0 - Connect to workshop VM, setup Dynatrace and demo application


• Lab 1 – How Dynatrace helps with performance analysis and automation
• Lab 2 – API
• Lab 3 – Automated performance test and analysis within a CI/CD pipeline
• Lab 4 – Advanced performance test automation using Neoload

Demo Application

• Order processing application

• Web UI with 3 Java Spring Boot microservices with embedded databases

• Components pre-compiled, built as a Docker image, and staged on Docker Hub

• Application is deployed using docker-compose

Demo Application Docker-Compose

• Docker Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services.

• Exposed ports: frontend 80, customer 8081, catalog 8082, order 8083

version: '2'
services:
  frontend:
    image: dtdemos/keptn-orders-front-end:1
    ports:
      - "80:8080"
    environment:
      SERVICES_PORT: "8080"
      DT_APPLICATIONID: "frontend"
  customer:
    image: dtdemos/keptn-orders-customer-service:1
    ports:
      - "8081:8080"
  catalog:
    image: dtdemos/keptn-orders-catalog-service:1
    ports:
      - "8082:8080"
  order:
    image: dtdemos/keptn-orders-order-service:1
    ports:
      - "8083:8080"
    environment:
      CUSTOMER_SERVICE_PORT: "8080"
      CATALOG_SERVICE_PORT: "8080"
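With Docker and Docker Compose installed on the workshop VM, the stack defined above is started from the directory containing this file with "docker-compose up -d" and torn down again with "docker-compose down".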
Workshop Environment Virtual Machine

Diagram: your laptop connects via browser and SSH to a workshop VM that runs the Orders demo app, pulls pre-built app images from a registry, and holds the lab files & scripts from a GitHub org; load generators, test orchestration & test history, and a Dynatrace SaaS cluster (fed by the OneAgent on the VM) complete the environment.
Today’s Learnings and Takeaways

Champion Performance as a self-service in your organization by starting with one app & team

1. Promote Dynatrace as the “source of truth” for SLOs & performance analysis across all teams (Dev/Test/Ops)

2. Try out and establish best practices for all the “out of the box” Dynatrace features
a. Tagging and Tagging rules
b. Request Attributes & Request Naming rules
c. Management Zones
d. API for time series & calculated metrics

3. Integrate Push Events & Automated Quality Gates in your CI/CD pipelines (ensure SLOs)

4. Check out Keptn Quality Gates @ http://keptn.sh

Rob Jahn – Technical Partner Manager, Dynatrace
Henrik Rexed – Partner Solution Evangelist, Neotys
Brian Wilson – Sales Engineering, Dynatrace
Jeff Yarbrough – Sales Engineering, Dynatrace
Appendix – How Dynatrace Helps With PE

It starts with the Dynatrace OneAgent

Dynatrace out-of-the-box Service Data & Metrics

Service metrics, e.g. Response Time, Failure Rate, ... which we can also chart or pull through the API (a sketch follows).
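As an illustration of pulling these service metrics programmatically, the sketch below queries the Dynatrace Metrics API v2. The environment URL, token, and the metric/entity selectors are placeholders to adapt to your own environment (the token needs the metrics.read scope).

# Sketch: pull service response time from the Dynatrace Metrics API v2.
# DT_URL, DT_TOKEN and the selectors are placeholders for your environment.
import requests

DT_URL = "https://{your-environment-id}.live.dynatrace.com"
DT_TOKEN = "dt0c01.XXXX"  # API token with the metrics.read scope

resp = requests.get(
    f"{DT_URL}/api/v2/metrics/query",
    headers={"Authorization": f"Api-Token {DT_TOKEN}"},
    params={
        "metricSelector": "builtin:service.response.time:avg",
        "entitySelector": 'type("SERVICE"),tag("env:perf")',
        "from": "now-2h",
        "resolution": "5m",
    },
)
resp.raise_for_status()
for result in resp.json().get("result", []):
    for series in result.get("data", []):
        print(series["dimensions"], series["values"])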
Dynatrace out-of-the-box – Service Web Request analysis

Which and how many HTTP POST requests to a specific URL pattern (e.g. /orange) take longer than our 100 ms SLA?

Dynatrace out-of-the-box – Front end performance analysis tools

Dynatrace out-of-the-box – Database and SQL analysis

Dynatrace out-of-the-box: Response Time Performance Analysis

Immediate answers for where time is spent

Adding Context to Our Tests

Tags are the “where” clause in API and web UI filtering

Example from the diagram: out of 4,229 Kubernetes pods plus Docker, Apache, and database hosts, filtering on the tags env:perf and service:order, service:frontend, or service:shipping narrows the view down to just the entities under test. A sketch of using those tags through the API follows.
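A sketch of the same tag filter against the Dynatrace Entities API v2; the environment URL, token (entities.read scope), and tag values are placeholders.

# Sketch: list all services tagged env:perf via the Dynatrace Entities API v2.
# DT_URL and DT_TOKEN are placeholders for your environment.
import requests

DT_URL = "https://{your-environment-id}.live.dynatrace.com"
DT_TOKEN = "dt0c01.XXXX"

resp = requests.get(
    f"{DT_URL}/api/v2/entities",
    headers={"Authorization": f"Api-Token {DT_TOKEN}"},
    params={"entitySelector": 'type("SERVICE"),tag("env:perf")', "pageSize": 100},
)
resp.raise_for_status()
for entity in resp.json().get("entities", []):
    print(entity["entityId"], entity["displayName"])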
Automate request naming and attributes used in transaction-level analysis

Request Attributes in Multi-Dimensional Analysis Charts

SELECT Metric SPLIT BY Dimension FILTER BY Criteria

Automate detection on/off for Davis AI root-cause detection

Push test anomaly rules -> Davis detects issues -> full-stack root cause analysis

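One way to automate that on/off switch is through the Dynatrace Configuration API. The sketch below only reads the global service anomaly-detection settings and hints at writing them back; the URL and token are placeholders, and the payload schema is deliberately not reproduced here, so consult the API documentation before PUTting changes.

# Sketch: read the service anomaly-detection configuration that a load-test
# integration may adjust for the duration of a test. DT_URL and DT_TOKEN are
# placeholders; the token needs the configuration read/write scopes.
import requests

DT_URL = "https://{your-environment-id}.live.dynatrace.com"
DT_TOKEN = "dt0c01.XXXX"
HEADERS = {"Authorization": f"Api-Token {DT_TOKEN}"}

cfg = requests.get(f"{DT_URL}/api/config/v1/anomalyDetection/services", headers=HEADERS)
cfg.raise_for_status()
rules = cfg.json()
print(rules)
# Inspect, adjust thresholds as needed, then write the modified object back:
# requests.put(f"{DT_URL}/api/config/v1/anomalyDetection/services",
#              headers=HEADERS, json=rules)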
Continuous Performance testing & automated analysis

Source Code: application code, pipeline definition, and the performance spec

Build Pipeline:
1. Build the code
2. Run unit tests
3. Create an artifact that contains the perf spec

Release Pipeline:
1. Deploy the code to the application under test
2. Run the performance test
3. Send a deployment event to Dynatrace (API call)
4. Automated analysis: pull the full-stack monitoring data collected in Dynatrace from the application / cloud infrastructure back out via API calls
Push Test Context using Dynatrace Events and API

Test details pushed from the load test show up as events on the affected services (e.g. Test Run 185 and Test Run 186), so changes in the charts can be correlated with a specific run. A sketch of the API call follows.

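A sketch of pushing that context through the Dynatrace Events API v1 (POST /api/v1/events). The environment URL, token, tag values, and custom properties are placeholders; the same endpoint also accepts CUSTOM_DEPLOYMENT events for the deployment step of the pipeline.

# Sketch: push load-test context to Dynatrace as a custom annotation on all
# services tagged env:perf. DT_URL, DT_TOKEN and the property values are
# placeholders; the token needs the events ingest scope.
import requests

DT_URL = "https://{your-environment-id}.live.dynatrace.com"
DT_TOKEN = "dt0c01.XXXX"

event = {
    "eventType": "CUSTOM_ANNOTATION",
    "source": "Jenkins",
    "annotationType": "Load Test",
    "annotationDescription": "Load test run 186 started",
    "attachRules": {
        "tagRule": [{
            "meTypes": ["SERVICE"],
            "tags": [{"context": "CONTEXTLESS", "key": "env", "value": "perf"}],
        }]
    },
    "customProperties": {"Test run": "186", "Script": "orders_load_test"},
}

resp = requests.post(
    f"{DT_URL}/api/v1/events",
    headers={"Authorization": f"Api-Token {DT_TOKEN}"},
    json=event,
)
resp.raise_for_status()
print(resp.json())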
From Dynatrace: Performance Signature as Code evaluated through Jenkins

A “Performance Signature” is computed for every build (e.g. the Nov 16 and Nov 17 builds): multiple metrics are compared to the previous timeframe, giving simple regression detection per metric.

https://www.neotys.com/performance-advisory-council/thomas_steinmaurer
Implementations of this automated analysis & scoring using the Dynatrace API

• Performance Signature by T-Systems – quality gates for CI/CD pipelines
  https://github.com/jenkinsci/performance-signature-plugin

• Keptn Quality Gates
  https://keptn.sh

• NeoLoad Service Level Objectives

“Spec” files define the guardrails: regression-test and compare build to build, then triage violations.
Find and address Common Distributed Architectural Anti-Patterns

1. N+1 call
2. N+1 query (sketched below)
3. Payload flood
4. Granularity
5. Tight Coupling
6. Inefficient Service Flow
7. Timeouts, Retries, Backoff
8. Dependencies
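To make the first two anti-patterns concrete, here is a small, hypothetical sketch of an N+1 query and its batched fix using Python's built-in sqlite3 module; the tables and data are invented for illustration.

# Hypothetical N+1 query illustration with an in-memory SQLite database.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT);
    CREATE TABLE order_items (order_id INTEGER, sku TEXT);
    INSERT INTO orders VALUES (1, 'a'), (2, 'b'), (3, 'c');
    INSERT INTO order_items VALUES (1, 'x'), (2, 'y'), (3, 'z');
""")

# Anti-pattern: 1 query for the orders + N queries for their items (N+1 round trips).
orders = db.execute("SELECT id FROM orders").fetchall()
items_per_order = {
    oid: db.execute("SELECT sku FROM order_items WHERE order_id = ?", (oid,)).fetchall()
    for (oid,) in orders
}

# Fix: a single joined query returns the same data in one round trip.
joined = db.execute(
    "SELECT o.id, i.sku FROM orders o JOIN order_items i ON i.order_id = o.id"
).fetchall()
print(items_per_order, joined)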

Example: Inefficient service call flow analysis

Automatic measurement for time spent, requests & throughput by service

Example: Cascading N+1 Query Pattern – a single end-to-end trace

26k database calls in one trace, cascading across the services involved (e.g. 809, 3956, 4347, 4773, 3789, 3915, and 4999 calls).
Appendix – NeoLoad

NeoLoad Platform

• NeoLoad Core – on-prem
• NeoLoad Web – Docker stack
• NeoLoad SaaS – managed / hosted

Plus load generators and RESTful APIs.
Collaboration between teams

• Performance engineers can update the projects built by the developers from the NeoLoad GUI

• Early testing assets can be reused in system-wide testing

One Platform for Component Testing and System Wide Testing

NeoLoad as code

NeoLoad as code is JSON/YAML; it helps you describe your NeoLoad project:

• Variables
• Servers
• SLA profiles
• User paths
• Populations
• Scenarios
• Includes

https://www.neotys.com/documents/doc/modules/as-code/project.html
How to take advantage of NeoLoad as code

1. Define the YAML files
2. Define your constant variables (existing in your template project)
3. Build a template project – all environment-specific values refer to constant variables
4. Automate your test – replace the values in your YAML file during the pipeline (a sketch follows)
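A sketch of that last step in Python, assuming the template keeps environment-specific values in a top-level "variables" section of constant variables; the file names, variable names, and the exact key layout are assumptions, so check the as-code documentation linked above for the real schema.

# Sketch: swap environment-specific values into an as-code template during the
# pipeline. File names, variable names, and the "constant" entry layout are
# assumptions for illustration only.
import yaml  # pip install pyyaml

TARGET_ENV = {"host": "orders-perf.example.com", "users": 25}

with open("neoload-template.yaml") as f:
    project = yaml.safe_load(f)

for var in project.get("variables", []):
    # Each entry is assumed to look like {"constant": {"name": ..., "value": ...}}.
    constant = var.get("constant", {})
    if constant.get("name") in TARGET_ENV:
        constant["value"] = TARGET_ENV[constant["name"]]

with open("neoload-perf.yaml", "w") as f:
    yaml.safe_dump(project, f, sort_keys=False)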

Continuous Integration in the Cloud

GitLab is an open-source, web-based DevOps lifecycle tool by GitLab Inc. that provides a Git repository manager with wiki, issue-tracking, and CI/CD pipeline features.

GitLab = pipeline, GitHub = repository

NeoLoad Web Launcher

1. Deploy a load generator and controller and register them to a NeoLoad Web zone
2. Create a zip file with your NeoLoad project
3. From CI/CD, run the NeoLoad Web launcher: it uploads the project to NeoLoad Web and starts the test on the requested load generator and controller

Dynatrace + NeoLoad: Strongest Out-of-the-Box Bidirectional Integration in the market!

#1 Dynatrace data in NeoLoad
#2 NeoLoad test context in PurePath
#3 NeoLoad metrics in Dynatrace
#4 NeoLoad events in Dynatrace
#5 NeoLoad anomaly detection

APIs: Integrations on multiple dimensions

1. Get the service under test based on tags
2. Follow service flow dependencies
3. Understand the full-stack topology
4. Auto-tag all relevant entities
5. Push testing anomaly rules
6. Push test information
7. Tag virtual user traffic
8. Pull Dynatrace metrics into NeoLoad
9. React to Dynatrace anomaly detection

Diagram: the Sockshop frontend (a Node.js microservice) calling a Golang service with < 100 ms calls tagged via the X-Dynatrace header, running on a microservice cluster (limited instances, < 50% CPU), a web server cluster, Docker containers, and Cloud Foundry hosts.

The Dynatrace User Path

Init (called once at the beginning of the test)
• DynatraceConfiguration will:
  - Create the Request Attribute rules
  - Tag the services and all their dependencies -> creates a tag NeoLoad-[name of the tag]
  - Create the Request Naming rules
• DynatraceSetAnomalieDetection: creates the anomaly detection rules defined in a JSON file

Actions (executed during the entire test)
• DynatraceMonitoring will:
  - Send the NeoLoad Web data to Dynatrace
  - Collect Dynatrace metrics and send them to NeoLoad Web

End (called once at the end of the test)
• DynatraceEvents: creates an event on each service involved in the test, with the URL of the NeoLoad Web test
• DynatraceDeleteAnomalieDetection: deletes the anomaly detection rules created at the beginning of the test

NeoLoad / Dynatrace integrations helping you to deliver

• Web request tagging (sketched below)

• Creation of custom events

• Sending load-testing metrics to Dynatrace (NeoLoad Web -> Dynatrace)

• Data retrieval (Dynatrace -> NeoLoad)
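The web request tagging point is typically just an HTTP header on every virtual-user request. A hedged sketch: the header keys shown below follow the commonly used x-dynatrace-test convention (VU, SI, TSN, LSN, LTN), but the values and the target URL are placeholders for your own script.

# Sketch: tag virtual-user traffic so Dynatrace can separate load-test requests
# from other traffic. Header field values and the target URL are placeholders.
import requests

headers = {
    "x-dynatrace-test": (
        "VU=7;"                     # virtual user id
        "SI=NeoLoad;"               # source of the request
        "TSN=Place Order;"          # test step name
        "LSN=orders_user_path;"     # load script / user path name
        "LTN=orders_load_20240115"  # unique load test name / run id
    )
}

resp = requests.get("http://localhost/order", headers=headers)
print(resp.status_code)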
