Zabbix Documentation 7.0.en
ZABBIX
21.07.2024
Contents
Zabbix Manual
Copyright notice
1 Introduction
1 Manual structure
2 What is Zabbix
3 Zabbix features
4 Zabbix overview
5 What's new in Zabbix 7.0.0
6 What's new in Zabbix 7.0.1
2 Definitions
3 Zabbix processes
1 Server
2 Agent
3 Agent 2
4 Proxy
5 Java gateway
6 Sender
7 Get
8 JS
9 Web service
4 Installation
1 Getting Zabbix
2 Requirements
3 Best practices for secure Zabbix setup
4 Installation from sources
5 Installation from packages
8 Templates and template groups
9 Templates out of the box
10 Notifications upon events
11 Macros
12 Users and user groups
13 Storage of secrets
14 Scheduled reports
15 Data export
8 Service monitoring
1 Service tree
2 SLA
3 Setup example
9 Web monitoring
1 Web monitoring items
2 Real-life scenario
10 Virtual machine monitoring
1 VMware monitoring item keys
2 Virtual machine discovery key fields
3 JSON examples for VMware items
4 VMware monitoring setup example
11 Maintenance
12 Regular expressions
13 Problem acknowledgment
1 Problem suppression
14 Configuration export/import
1 Template groups
2 Host groups
3 Templates
4 Hosts
5 Network maps
6 Media types
15 Discovery
1 Network discovery
2 Active agent autoregistration
3 Low-level discovery
16 Distributed monitoring
1 Proxies
17 Encryption
1 Using certificates
2 Using pre-shared keys
3 Troubleshooting
18 Web interface
1 Menu
2 Frontend sections
3 User settings
4 Global search
5 Frontend maintenance mode
6 Page parameters
7 Definitions
8 Creating your own theme
9 Debug mode
10 Cookies used by Zabbix
11 Time zones
12 Resetting password
13 Time period selector
19 API
Method reference
Appendix 1. Reference commentary
Appendix 2. Changes from 6.4 to 7.0
Appendix 3. Changes in 7.0
20 Extensions
1 Loadable modules
2 Plugins
3 Frontend modules
21 Appendixes
1 Frequently asked questions / Troubleshooting
2 Installation and setup
3 Process configuration
4 Protocols
5 Items
6 Supported functions
7 Macros
8 Unit symbols
9 Time period syntax
10 Command execution
11 Version compatibility
12 Database error handling
13 Zabbix sender dynamic link library for Windows
14 Python library for Zabbix API
15 Service monitoring upgrade
16 Other issues
17 Agent vs agent 2 comparison
18 Escaping examples
22 Quick reference guides
Overview
1 Monitor Linux with Zabbix agent
2 Monitor Windows with Zabbix agent
3 Monitor Apache via HTTP
4 Monitor MySQL with Zabbix agent 2
5 Monitor VMware with Zabbix
6 Monitor network traffic with Zabbix
7 Monitor network traffic using active checks
SYNOPSIS
DESCRIPTION
OPTIONS
EXAMPLES
SEE ALSO
AUTHOR
Index
zabbix_js
NAME
SYNOPSIS
DESCRIPTION
OPTIONS
EXAMPLES
SEE ALSO
Index
zabbix_proxy
NAME
SYNOPSIS
DESCRIPTION
OPTIONS
FILES
SEE ALSO
AUTHOR
Index
zabbix_sender
NAME
SYNOPSIS
DESCRIPTION
OPTIONS
EXIT STATUS
EXAMPLES
SEE ALSO
AUTHOR
Index
zabbix_server
NAME
SYNOPSIS
DESCRIPTION
OPTIONS
FILES
SEE ALSO
AUTHOR
Index
zabbix_web_service
NAME
SYNOPSIS
DESCRIPTION
OPTIONS
FILES
SEE ALSO
AUTHOR
Index
Zabbix Manual
Welcome to the user manual for Zabbix software. These pages are created to help users successfully manage their monitoring
tasks with Zabbix, from the simple to the more complex ones.
Copyright notice
Zabbix documentation is NOT distributed under the AGPL-3.0 license. Use of Zabbix documentation is subject to the following
terms:
You may create a printed copy of this documentation solely for your own personal use. Conversion to other formats is allowed as
long as the actual content is not altered or edited in any way. You shall not publish or distribute this documentation in any form or on
any media, except if you distribute the documentation in a manner similar to how Zabbix disseminates it (that is, electronically for
download on a Zabbix web site) or on a USB or similar medium, provided however that the documentation is disseminated together
with the software on the same medium. Any other use, such as any dissemination of printed copies or use of this documentation,
in whole or in part, in another publication, requires the prior written consent from an authorized representative of Zabbix. Zabbix
reserves any and all rights to this documentation not expressly granted above.
1 Introduction
1 Manual structure
Structure
The content of this manual is divided into sections and subsections to provide easy access to particular subjects of interest.
When you navigate to the respective sections, make sure that you expand section folders to reveal the full content of what is included in subsections and individual pages.
Cross-linking between pages of related content is provided as much as possible to make sure that relevant information is not missed by users.
Sections
Introduction provides general information about current Zabbix software. Reading this section should equip you with some good
reasons to choose Zabbix.
Zabbix concepts explains the terminology used in Zabbix and provides details on Zabbix components.
Installation and Quickstart sections should help you to get started with Zabbix. Zabbix appliance is an alternative for getting a
quick taster of what it is like to use Zabbix.
Configuration is one of the largest and most important sections in this manual. It contains loads of essential advice about how to set
up Zabbix to monitor your environment, from setting up hosts to getting essential data to viewing data to configuring notifications
and remote commands to be executed in case of problems.
Service monitoring section details how to use Zabbix for a high-level overview of your monitoring environment.
Web monitoring should help you learn how to monitor the availability of web sites.
Virtual machine monitoring presents a how-to for configuring VMware environment monitoring.
Maintenance, Regular expressions, Problem acknowledgment and Configuration export/import are further sections that explain how to use these various aspects of Zabbix software.
Discovery contains instructions for setting up automatic discovery of network devices, active agents, file systems, network inter-
faces, etc.
Distributed monitoring deals with the possibilities of using Zabbix in larger and more complex environments.
Encryption explains the possibilities of encrypting communications between Zabbix components.
Web interface contains information specific for using the web interface of Zabbix.
Detailed lists of technical information are included in Appendixes. This is where you will also find a FAQ section.
2 What is Zabbix
Overview
Zabbix was created by Alexei Vladishev and is currently actively developed and supported by Zabbix SIA.
Zabbix is software that monitors numerous parameters of a network and the health and integrity of servers, virtual machines,
applications, services, databases, websites, the cloud and more. Zabbix uses a flexible notification mechanism that allows users
to configure email-based alerts for virtually any event. This allows a fast reaction to server problems. Zabbix offers excellent
reporting and data visualization features based on the stored data. This makes Zabbix ideal for capacity planning.
Zabbix supports both polling and trapping. All Zabbix reports and statistics, as well as configuration parameters, are accessed
through a web-based frontend. A web-based frontend ensures that the status of your network and the health of your servers can be
assessed from any location. Properly configured, Zabbix can play an important role in monitoring IT infrastructure. This is equally
true for small organizations with a few servers and for large companies with a multitude of servers.
Zabbix is free of cost. Zabbix is written and distributed under the AGPL-3.0 license. This means that its source code is freely distributed
and available to the general public.
Commercial support is available and provided by Zabbix Company and its partners around the world.
Users of Zabbix
Many organizations of different sizes around the world rely on Zabbix as a primary monitoring platform.
3 Zabbix features
Overview
Zabbix is a highly integrated network monitoring solution, offering a multiplicity of features in a single package.
Data gathering
Flexible threshold definitions
• you can define very flexible problem thresholds, called triggers, referencing values from the backend database
Highly configurable alerting
• sending notifications can be customized for the escalation schedule, recipient, media type
• notifications can be made meaningful and helpful using macro variables
• automatic actions include remote commands
Real-time graphing
• monitored items are immediately graphed using the built-in graphing functionality
Web monitoring capabilities
• Zabbix can follow a path of simulated mouse clicks on a web site and check for functionality and response time
Extensive visualization options
• ability to create custom graphs that can combine multiple items into a single view
• network maps
• slideshows in a dashboard-style overview
• reports
• high-level (business) view of monitored resources
Historical data storage
• data stored in a database
• configurable history
• built-in housekeeping procedure
Easy configuration
Use of templates
Network discovery
Zabbix API
• Zabbix API provides a programmable interface to Zabbix for mass manipulations, third-party software integration and other purposes.
Permissions system
Binary daemons
4 Zabbix overview
Architecture
Zabbix consists of several major software components. Their responsibilities are outlined below.
Server
Zabbix server is the central component to which agents report availability and integrity information and statistics. The server is
the central repository in which all configuration, statistical and operational data are stored.
Database storage
All configuration information as well as the data gathered by Zabbix is stored in a database.
Web interface
For easy access to Zabbix from anywhere and from any platform, a web-based interface is provided. The interface is part of
Zabbix server, and usually (but not necessarily) runs on the same physical machine as the one running the server.
Proxy
Zabbix proxy can collect performance and availability data on behalf of Zabbix server. A proxy is an optional part of Zabbix
deployment; however, it may be very beneficial to distribute the load of a single Zabbix server.
Agent
Zabbix agents are deployed on monitoring targets to actively monitor local resources and applications and report the gathered
data to Zabbix server. Since Zabbix 4.4, there are two types of agents available: the Zabbix agent (lightweight, supported on
many platforms, written in C) and the Zabbix agent 2 (extra-flexible, easily extendable with plugins, written in Go).
Data flow
In addition, it is important to take a step back and have a look at the overall data flow within Zabbix. In order to create an item that
gathers data you must first create a host. Moving to the other end of the Zabbix spectrum, you must first have an item to create a
trigger, and you must have a trigger to create an action. Thus, if you want to receive an alert that the CPU load is too high on Server X,
you must first create a host entry for Server X, followed by an item for monitoring its CPU, then a trigger which activates if the CPU load
is too high, followed by an action which sends you an email. While that may seem like a lot of steps, with the use of templating it
really isn't. However, due to this design it is possible to create a very flexible setup.
AGPL-3.0 license
Zabbix software is now written and distributed under the AGPL-3.0 license (previously the GPL v2.0 license).
A software update check is now added by default to new and existing installations - Zabbix frontend will communicate with the public
Zabbix endpoint to check for updates.
Information about available Zabbix software updates is displayed in Reports -> System information and (optionally) in the System
Information dashboard widget.
You can disable the software update check by setting AllowSoftwareUpdateCheck=0 in the server configuration.
Asynchronous pollers
New poller processes have been added capable of executing multiple checks at the same time:
• agent poller
• http agent poller
• snmp poller (for walk[OID] and get[OID] items)
These pollers are asynchronous - they are capable of starting new checks without the need to wait for a response, with concurrency
that is configurable up to 1000 concurrent checks.
Asynchronous pollers have been developed because, in comparison, the synchronous poller processes can execute only one check
at a time and spend most of their time waiting for a response. Efficiency can thus be increased by starting new parallel checks
while waiting for a network response, which is what the new pollers do.
You can start asynchronous agent pollers by modifying the value of StartAgentPollers - a new server/proxy parameter. HTTP agent
pollers and SNMP pollers can be started by modifying StartHTTPAgentPollers and StartSNMPPollers, respectively.
The maximum concurrency for asynchronous pollers (agent, HTTP agent and SNMP) is defined by MaxConcurrentChecksPerPoller.
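As a minimal illustration, these parameters could be set in zabbix_server.conf as follows; the parameter names are those described above, while the values are only examples and should be tuned to the environment:

    ### Asynchronous poller processes (example values)
    StartAgentPollers=5
    StartHTTPAgentPollers=5
    StartSNMPPollers=5
    ### Maximum number of concurrent checks per asynchronous poller process
    MaxConcurrentChecksPerPoller=1000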
Note that after the upgrade all agent, HTTP agent and SNMP walk[OID] checks will be moved to asynchronous pollers.
As part of the development, the persistent connections cURL feature has been added to HTTP agent checks.
Browser monitoring
A new item type - Browser item - has been added to Zabbix, enabling the monitoring of complex websites and web applications
using a browser. Browser items allow the execution of user-defined JavaScript code to simulate browser-related actions such as
clicking, entering text, navigating through web pages, etc.
This item collects data over HTTP/HTTPS and partially implements the W3C WebDriver standard with Selenium Server or a plain
WebDriver (for example, ChromeDriver) as a testing endpoint.
Additionally, this feature adds the Website by Browser template and new elements to configuration export/import, Zabbix
server/proxy configuration files, timeouts, and the zabbix_js command-line utility. For more information, see Upgrade notes to
7.0.0.
Proxy load balancing is implemented by introducing proxy groups in Zabbix. Proxy groups provide automatic distribution of hosts
between proxies, proxy load re-balancing and high availability - when a proxy goes offline, its hosts are immediately distributed
among other proxies in the group.
For more information, see proxy load balancing and high availability.
A memory buffer has been developed for Zabbix proxy. The memory buffer allows storing new data (item values, network discovery,
host autoregistration) in memory and uploading it to Zabbix server without accessing the database.
In installations before Zabbix 7.0 the collected data was stored in the database before being uploaded to Zabbix server. For these
installations this remains the default behavior after the upgrade.
For optimized performance, it is recommended to configure the use of the memory buffer on the proxy. This is possible by changing
the value of ProxyBufferMode from "disk" (hardcoded default for existing installations) to "hybrid" (recommended) or "memory".
It is also required to set the memory buffer size (ProxyMemoryBufferSize parameter).
In hybrid mode the buffer is protected from data loss by flushing unsent data to the database if the proxy is stopped, the buffer is
full, or the data is too old. When all values have been flushed to the database, the proxy goes back to using the memory buffer.
In memory mode the memory buffer is used, but there is no protection against data loss: if the proxy is stopped, or the
memory gets overfilled, the unsent data is dropped.
The hybrid mode (ProxyBufferMode=hybrid) is applied to all new installations since Zabbix 7.0.
Additional parameters such as ProxyMemoryBufferSize and ProxyMemoryBufferAge define the memory buffer size and the maxi-
mum age of data in the buffer, respectively.
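A possible zabbix_proxy.conf fragment enabling the memory buffer is sketched below; the parameter names follow this section, and the size and age values (and their units) are assumptions to be adjusted as needed:

    ProxyBufferMode=hybrid
    ### Maximum size of the in-memory buffer
    ProxyMemoryBufferSize=128M
    ### Maximum age of data kept in the buffer, in seconds
    ProxyMemoryBufferAge=600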
New internal items have been added to monitor the proxy memory buffer.
Previously, provisioned users were limited to the media created during provisioning, without the flexibility of editing such
properties as working hours or severities.
Additionally, when configuring user media mapping for provisioning, such fields as When active, Use if severity and Enabled are
now available. Note that changes to the user media type mapping form will take effect only for new media created during provisioning.
Timeout configuration per item is now available for more item types (see supported item types). In addition to setting the timeout
values on the item level, it is possible to define global and proxy timeouts for various item types.
Timeouts configured on the item level have the highest priority. By default, global timeouts are applied to all items; however, if
proxy timeouts are set, they will override the global ones.
Oracle DB deprecated
The support for Oracle as a backend database has been deprecated and is expected to be completely removed in future versions.
For compatibility with older agents, a failover to the old plaintext protocol has been added. If the agent returns "ZBX_NOTSUPPORTED",
Zabbix will cache the interface as using the old protocol and retry the check by sending only the plaintext item key.
Zabbix get can now be run with a new option -P --protocol <value> where ”value” is either:
• auto - connect using JSON protocol, fall back and retry with plaintext protocol (default);
• json - connect using JSON protocol;
• plaintext - connect using plaintext protocol where just the item key is sent.
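For example, a check could be forced over one protocol or the other like this (the address and item key below are placeholders):

    zabbix_get -s 192.0.2.10 -p 10050 -P plaintext -k "agent.version"
    zabbix_get -s 192.0.2.10 -p 10050 -P auto -k "agent.version"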
Zabbix agent and agent 2 protocols have been unified by switching Zabbix agent to the Zabbix agent 2 protocol. The difference
between Zabbix agent and Zabbix agent 2 requests/responses is expressed by the "variant" tag value ("1" - Zabbix agent, "2" -
Zabbix agent 2).
Flexible/scheduling intervals are now supported in active checks by both Zabbix agent and Zabbix agent 2 (previously only Zabbix
agent 2).
Resources that are no longer discovered by low-level discovery can now be disabled automatically. They can be disabled immediately,
after a specified time period, or never (see the new Disable lost resources parameter in the discovery rule configuration).
Lost resources (hosts, items, triggers) are marked by an info icon in the status column. The tooltip text provides details on their
status.
Within the same development the Keep lost resources period parameter was renamed to Delete lost resources with options to
delete immediately, after a specified time period or never.
Manual user input for frontend scripts allows supplying a custom parameter on each execution of the script. This removes the need
to create multiple similar user scripts that differ only by a single parameter.
For example, you may want to supply a different integer or a different URL address to the script during execution.
To use manual input:
• use the {MANUALINPUT} macro in the script (commands, script, script parameter) where required, or in the URL field of URL scripts;
• in advanced script configuration, enable manual user input and configure input options:
With user input enabled, a Manual input popup will appear before script execution, asking the user to supply a custom value.
The supplied value will then replace {MANUALINPUT} in the script.
Depending on the configuration, the user will be asked to enter a string value or select the value from a dropdown of pre-determined
options.
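As an illustrative sketch, a script command could embed the macro like this (the ping count supplied by the user and the use of {HOST.CONN} are example assumptions):

    ping -c {MANUALINPUT} {HOST.CONN}

Before each execution the user would then be prompted for the number of echo requests to send.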
Previously, sending specific data to Zabbix server was possible using the Zabbix sender utility or by implementing a custom JSON-
based communication protocol similar to that used by Zabbix sender.
Now it is also possible to send data to Zabbix server via HTTP protocol using the history.push API method. Note that receiving
sent data requires a configured trapper item or an HTTP agent item (with trapping enabled).
In addition, successful history.push operations are recorded in Reports → Audit log, which has additional filtering options (a new Push
action and History resource), and the history.push API method is also available in the Allow/Deny list of API methods when
configuring a user role.
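A minimal sketch of pushing a single value over HTTP is shown below; the frontend URL, API token, host name, and item key are placeholders, and the target item is assumed to be a trapper item:

    curl --request POST \
      --url 'https://zabbix.example.com/api_jsonrpc.php' \
      --header 'Authorization: Bearer <API_TOKEN>' \
      --header 'Content-Type: application/json-rpc' \
      --data '{"jsonrpc":"2.0","method":"history.push","params":[{"host":"Trapper host","key":"trap.test","value":"42"}],"id":1}'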
Previously, maintenances were recalculated only once per minute, causing a possible latency of up to 60 seconds when starting or stopping
a maintenance period.
Now maintenances are still recalculated every minute or as soon as the configuration cache is reloaded if there are changes to the
maintenance period.
Every second the timer process checks if any maintenances must be started/stopped based on whether there are changes to the
maintenance periods after the configuration update. Thus the speed of starting/stopping maintenance periods depends on the
configuration update interval (10 seconds by default). Note that maintenance period changes do not include Active since/Active
till settings. Also, if a host/host group is added to an existing active maintenance period, the changes will only be activated by the
timer process at the start of next minute.
Permission checks have been made much faster by introducing several intermediary tables for checking non-privileged user per-
missions.
These tables keep hashes (SHA-256) of user group sets and host group sets for each user/host respectively. Additionally, there is
a permission table storing only the accessible combinations of users and hosts, specified by the hash IDs.
This improvement makes the loading of permission-heavy frontend pages (e.g., Hosts, Problems) much faster. Note that hashes
and permissions are not calculated for Super admin users.
Trigger action operation, recovery operation and update operation execution on Zabbix server now occurs immediately (less than
100 milliseconds) after a trigger status change, whereas previously users could experience up to 4 seconds of latency.
The reduction in latency is made possible by implementing interprocess communication (IPC) mechanisms between multiple pro-
cesses (escalator and escalation initiator, escalator and alerter, preprocessing manager and history syncer).
Widgets
Several new widgets have been added in the new version, while the available functionality in others has been enhanced.
Additionally, dashboard widgets can now connect and communicate with each other, making widgets and dashboards more dynamic.
Gauge
A Gauge widget has been added to dashboard widgets, allowing you to display the value of a single item as a gauge. For more
information, see Gauge.
Pie chart
A Pie chart widget has been added to dashboard widgets, allowing you to display values of selected items as:
• pie chart;
• doughnut chart.
As part of this development, a Show aggregation function checkbox has been added to the graph widget configuration (in the Legend
tab).
Honeycomb
A Honeycomb widget has been added to dashboard widgets, which offers a dynamic and vibrant overview of the monitored network
infrastructure and resources, where host groups, such as virtual machines and network devices, along with their respective items,
are visually represented as interactive hexagonal cells. For more information, see Honeycomb.
Top triggers
A Top triggers widget has been added to dashboard widgets, which allows viewing the triggers with the highest number of problems.
The new Item history dashboard widget has replaced the Plain text widget, offering several improvements.
Unlike the Plain text widget, which only displayed the latest item data in plain text, the Item history widget supports various display
options for multiple item types (numeric, character, log, text, and binary). For example, it can show progress bars or indicators,
images for binary data types (useful for browser items), and highlight text values (useful for log file monitoring).
For more information, see Item history. For details about the replacement of the Plain text widget, see Upgrade notes for 7.0.0.
Host navigator and Item navigator
Host navigator and Item navigator widgets have been added to dashboard widgets. These widgets display hosts or items, respectively,
based on various filtering and grouping options, and allow controlling the information displayed in other widgets based on
the selected host or item. For more information, see Host navigator and Item navigator.
Dashboard widgets can now connect and communicate with each other, making widgets and dashboards more dynamic. Multiple
widgets have parameters that enable them to share configuration data between compatible widgets or the dashboard.
• Host groups, Hosts, and Item parameters allow you to select either the respective entities or a data source providing them.
• Enable host selection parameter has been replaced with the Override host parameter, which allows you to select a data
source that provides hosts.
• Time period parameter has been added to multiple widgets, which allows you to select a data source that provides a time
period.
• Map parameter in the Map widget allows selecting either a map or another widget as a data source for maps.
• Graph parameter in the Graph (classic) widget allows selecting either a graph or another widget as a data source for graphs.
Depending on the widget and its parameters, the data source can be either a compatible widget from the same dashboard or the
dashboard itself. For more information, see Dashboard widgets.
For changes to stock templates that are shipped with Zabbix, see Template changes.
Time periods can now be configured in the Item value and Top hosts widgets.
It is also now possible to display an aggregated value in the item value widget for the chosen period. The aggregated value can
be displayed as:
• minimum
• maximum
• average
• count
• sum
• first
• last
These added features are useful for creating data comparison widgets. For example, in one widget you may display the latest
value, while in another the average value for a longer period. Or several widgets can be used for side-by-side comparison of
aggregated values from various periods in the past.
Previously, on a template dashboard, you could create only the following widgets: Clock, Graph (classic), Graph prototype, Item
value, Plain text, URL.
Now, besides sorting by Item value, it is also possible to set Host name or Text column as the order column in Top hosts widget.
Host availability widget now allows displaying hosts with the Zabbix agent (active checks) interface. One more availability status
has been added, i.e., Mixed, which corresponds to the situation when at least one interface is unavailable and at least one is
either available or unknown. Moreover, it is now possible to see only the total number of hosts, without a breakdown by interface.
The Graph widget now supports configuring a variable number of legend rows determined by the number of configured items.
New functions have been added for use in trigger expressions and calculated items.
Updated functions
• Aggregate functions now also support non-numeric types for calculation. This may be useful, for example, with the count
and count_foreach functions.
• The count and count_foreach aggregate functions support optional parameters operator and pattern, which can be used to
fine-tune item filtering and only count values that match given criteria.
• All foreach functions no longer include unsupported items in the count.
• The last_foreach function, which previously ignored the time period argument, now accepts it as an optional parameter.
• The supported range of values returned by prediction functions has been expanded to match the range of the double data type. The
timeleft() function can now return values up to 1.7976931348623158E+308, and the forecast() function can return values ranging
from -1.7976931348623158E+308 to 1.7976931348623158E+308.
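For instance, a calculated item could aggregate the last values collected within the past hour across a host group; the item key and group name below are illustrative only:

    sum(last_foreach(/*/vfs.fs.size[/,used]?[group="Linux servers"],1h))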
Items
Consistent default history storage period
The default period for keeping item history has been made consistent at 31 days in the frontend and in the database. This change
affects item, template item and item prototype configuration forms as well as the history storage period override in low-level
discovery.
Now, if a floating point value is received for an unsigned integer item, the value will be trimmed of its decimal part and saved
as an integer. Previously, a floating point value would make an integer item unsupported.
A new eventlog.count item has been added to Zabbix agent/agent 2 on Windows. This item returns an integer value with the
count of lines in the Windows event log based on the specified parameters.
A new get[OID] SNMP item has been added allowing to query for a single OID value asynchronously.
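For illustration, the two asynchronous SNMP item keys could look like this (standard MIB-2 OIDs are used as examples):

    get[1.3.6.1.2.1.1.3.0]
    walk[1.3.6.1.2.1.2.2.1.10]

The first queries a single OID (sysUpTime), while the second walks the ifInOctets column of the interface table.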
Internal items
Internal items have been added to monitor the proxy memory buffer.
The following Zabbix agent/agent 2 items have been added:
• net.dns.perf - returns the number of seconds spent waiting for a response from a service, timing the net.dns item execution;
• net.dns.get (Zabbix agent 2) - returns detailed DNS record information.
The following Zabbix agent/agent 2 items have been updated:
• net.dns and net.dns.record items now accept the DNS name in reversed and non-reversed format when performing
reverse DNS lookups;
• proc.get items in ”process” and ”summary” mode now also return the PSS (proportional set size) memory on Linux;
• system.sw.packages and system.sw.packages.get items are now supported on Gentoo Linux;
• system.hostname item can now return a Fully Qualified Domain Name, if the new fqdn option is specified in the type
parameter;
• wmi.get and wmi.getall items used with Zabbix agent 2 now return JSON with boolean values represented as strings (for
example, "RealTimeProtectionEnabled": "True" instead of "RealTimeProtectionEnabled": true returned
previously) to match the output format of these items on Zabbix agent;
• oracle.ts.discovery Zabbix agent 2 item now returns a new {#CON_NAME} LLD macro with container name;
• oracle.ts.stats Zabbix agent 2 item has a new conname parameter to specify the target container name. The JSON
format of the returned data has been updated. When no tablespace, type, or conname is specified in the key parameters,
the returned data will include an additional JSON level with the container name, allowing differentiation between containers.
Simple checks
The vmware.eventlog item now supports optional filtering by severity in the third parameter.
The vmware.vm.discovery item now also returns data on virtual machine network interfaces. This data can be used to configure
custom host interfaces.
The vmware.vm.net.if.discovery item now also returns an array of network interface addresses.
A new options parameter has been added to the following items:
• icmpping
• icmppingloss
• icmppingsec
This parameter can be used to specify whether redirected responses should be treated as target host up or target host down. See
simple checks for more details.
Engine IDs in SNMPv3 are used as unique identifiers of the device. Sometimes Engine IDs are the same in several devices because
of misconfiguration or factory settings. As SNMP standards require Engine IDs to be unique, items sharing the same Engine ID
become unsupported in Zabbix leading to availability issues with these devices.
To help troubleshoot such issues, information about SNMPv3 devices sharing the same Engine ID will now be logged periodically
by Zabbix server. Note that duplicate Engine ID detection works in each SNMP poller separately.
Each standard item now has a direct link from the frontend to its documentation page.
The links are placed under the question mark icon, when opening the item helper window from the item configuration form (click
on Select next to the item key field).
Error handling in case of a failure to retrieve an item value (and thus it becoming unsupported) previously lacked the ability to
distinguish the reason or runtime stage where the process failed. All errors had to be handled using one and the same option for
error handling - either to discard the value, set a specified value or set a specified error message.
It is now possible to match the error message to a regular expression. If the error matches (or does not match) it is possible to
specify how the error case should be processed. For example, a specific error message can be ”mapped” to a more general case
to be matched for and handled by a further preprocessing step, or some intermittent (e.g., network connectivity) issue can be
handled differently to a definite failure to acquire the item value.
Multiple Check for not supported value preprocessing steps can now be added. Note that there can be only one "any error"
matching step at the end of the pipeline probing for the unsupported state of the item. If present, it is activated if none of the specific
checks (mis)matched the corresponding pattern, or a (modified) error message was carried over - i.e., no "Discard value" or "Set value
to" override came into effect.
The previous design of the item mass update form did not make it sufficiently clear whether the preprocessing step update would add or replace
preprocessing steps. In the new design, Replace and Remove all radio buttons have been added, making it clear to users what to
expect as the result of a preprocessing step mass update:
Macros
User macros supported in item and item prototype names
User macros are now supported in item names and item prototype names.
Note that user macro support was removed from item/item prototype names in Zabbix 6.0. Now it has been restored. It is also
now possible to search for item names with the macros resolved, which previously was not supported.
The item name with resolved macros is stored in a separate database table (item_rtname), which is an extension of the items
table. For each record in the items table, a corresponding item_rtname record is created (except for item prototypes, discovery
rule items and template items). The name with resolved macros is limited to 2048 characters.
The item name with resolved macros is displayed in all frontend locations except the Data collection section.
A new configuration syncer worker server process has been added that is responsible for resolving and synchronizing the
user macro values in item names.
Macro functions are now supported with the following macro types:
• Built-in macros
• User macros
• Low-level discovery macros
• Expression macros
Macro functions can be used in all locations supporting the listed macros. This applies unless explicitly stated that only a macro is
expected (for example, when configuring host macros or low-level discovery rule filters).
Multi-page reporting
For multi-paged dashboards, reports are now returned with all the pages of the dashboard, with each PDF page corresponding to
one dashboard page. Previously this functionality was limited to returning only the first dashboard page.
It is now possible to execute remote commands on a version 7.0 agent that is operating in active mode. Once the execution of a
remote command is triggered by an action operation or manual script execution, the command will be included in the active checks
configuration and executed once the active agent receives it. Note that older active agents will ignore any remote commands
included in the active checks configuration. For more information, see Passive and active agent checks.
The processing of tags returned by the webhook script is now also supported for internal events.
These changes allow using webhooks to update or close an external issue/support ticket via an internal event recovery notification.
Databases
Auditlog converted to hypertable on TimescaleDB
The auditlog table has been converted to a hypertable on TimescaleDB in new installations to benefit from automatic partitioning
on time (7 days by default) and better performance.
See also:
• TimescaleDB setup
• Supported TimescaleDB versions
Proxy records have been moved out of the hosts table and are now stored in the new proxy table.
Also, operational data of proxies (such as last access, version, compatibility) has been moved out of the host_rtdata table and
is now stored in the new proxy_rtdata table.
Processes
Multithreading
• A new configure parameter has been added: --with-stacksize. This parameter allows overriding the default thread stack size
used by the system (in kilobytes).
• User macro resolving has been moved from the preprocessing manager to preprocessing workers.
It is now possible to restrict some Zabbix functions to harden the security of the server environment:
• global script execution on Zabbix server can be disabled by setting EnableGlobalScripts=0 in server configuration. For new
installations, global script execution on Zabbix server is disabled by default.
• user HTTP authentication can be disabled by setting $ALLOW_HTTP_AUTH=false in the frontend configuration file (zab-
bix.conf.php).
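A brief sketch with the settings named above (file locations vary per installation). In zabbix_server.conf:

    EnableGlobalScripts=0

In the frontend configuration file zabbix.conf.php:

    $ALLOW_HTTP_AUTH = false;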
Configuration file validation has been added to the maintenance commands of Zabbix server, proxy, agent, agent
2, and web service. Validation can be done using the -T --test-config option. On successful validation the exit code is
"0"; otherwise, the component exits with a non-zero exit code and a corresponding error message. Warnings (e.g., in case of
a deprecated parameter) do not affect the successful exit code.
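For example, the server configuration could be validated like this (the configuration file path is an assumption; a zero exit code indicates success):

    zabbix_server -c /etc/zabbix/zabbix_server.conf -T
    echo $?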
Previously, cURL library features were detected at build time of Zabbix server, proxy, or agent; if the cURL library was upgraded,
the respective Zabbix component had to be recompiled to make use of the new features.
Now only a restart is required for upgraded cURL library features to become available in Zabbix server, proxy, or agent;
recompilation is no longer required.
Agent 2 configuration
Buffer size
The default value of the BufferSize configuration parameter for Zabbix agent 2 has been increased from 100 to 1000.
Empty values are now allowed in plugin-related configuration parameters on Zabbix agent 2.
The option to set the startup type of the Zabbix agent/agent 2 Windows service (-S --startup-type) has been added. This
option allows configuring the agent/agent 2 service to start automatically at Windows startup (automatic), after the automatically
started services have completed startup (delayed), when started manually by a user or application (manual) or to disable the
service entirely (disabled).
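One plausible invocation is sketched below; the configuration path and the combination with service installation (--install) are assumptions for the example:

    zabbix_agentd.exe --config c:\zabbix\zabbix_agentd.conf --install --startup-type delayed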
When performing Windows agent installation from MSI, the default startup type on Windows Server 2008/Vista and later versions
is now delayed if not specified otherwise in the STARTUPTYPE command-line parameter. This improves the reliability and perfor-
mance of the Zabbix agent/agent 2 Windows service, particularly during system restarts.
The old style of floating point values, previously deprecated, is no longer supported, as numeric values of extended range are used.
The configuration files zabbix_server.conf and zabbix_proxy.conf have been supplemented with a new optional parameter VaultPrefix; zabbix.conf.php has been supplemented with the optional $DB['VAULT_PREFIX'], and setup.php has been updated accordingly.
The vault paths for CyberArk and HashiCorp are therefore no longer hardcoded, which allows for vault deployments with non-standard paths.
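For example, a HashiCorp Vault deployment with a non-standard mount point might be configured along these lines in zabbix_server.conf (the prefix value is purely illustrative):
Vault=HashiCorp
VaultPrefix=/v1/custom/secret/data/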
Previously, each network discovery rule was processed by one discoverer process, so all service checks within the rule could only be performed sequentially.
In the new version the network discovery process has been reworked to allow concurrency between service checks. A new discovery
manager process has been added along with a configurable number of discovery workers (or threads).
The discovery manager processes discovery rules and creates a discovery job per each rule with tasks (service checks). The service
checks are picked up and performed by the discovery workers. Only those checks that have the same IP and port are scheduled
sequentially because some devices may not allow concurrent connections on the same port.
A new zabbix[discovery_queue] internal item allows monitoring the number of discovery checks in the queue.
The StartDiscoverers parameter now determines the total number of available discovery workers. Its default value has been increased from 1 to 5, and the allowed range from 0-250 to 0-1000. The discoverer processes from previous Zabbix versions have been dropped.
Additionally:
• All service checks are now performed asynchronously, except for LDAP checks;
• The number of simultaneous asynchronous checks per each service check type (or the number of available workers for all
synchronous service checks) is now configurable in the frontend (see Maximum concurrent checks per type). This parameter
is optional.
• The HTTP service check was previously the same as the TCP check. Now HTTP/HTTPS checking is done via libcurl. If Zabbix server/proxy is compiled without libcurl, HTTP checks will work as previously (i.e., as TCP checks), but HTTPS checks will not work.
• Errors in the network discovery process will now be displayed in the frontend (in Data collection -> Discovery), for example: fping errors; incorrect SNMP OID; incorrect macro for item timeout; address range errors.
Additional operations are now available for discovery and autoregistration events:
Low-level discovery rules can now link already discovered and existing host groups to hosts created by the same low-level discovery
rules. This affects host groups previously discovered and created by other low-level discovery rules based on the specified group
prototypes.
When streaming item values from Zabbix to external systems, you can now configure which item values the connector should
stream based on their type of information (numeric (unsigned), numeric (float), character, etc.).
In addition, to avoid unsuccessful attempts to stream item values or events (for example, if the HTTP endpoint is busy or rate-
limited), you can now also configure the attempt interval - how long the connector should wait after an unsuccessful attempt to
stream data.
201, 202, 203, and 204 HTTP response codes are now also accepted by connectors as success (previously only 200).
A new tool for streaming data to external systems - the Kafka connector for Zabbix server - is now available. The Kafka connector
is a lightweight server written in Go, designed to forward item values and events from a Zabbix server to a Kafka broker.
Templates For new templates and changes to existing templates, see Template changes.
Multi-factor authentication (MFA) with Time-Based One-Time Password (TOTP) or Duo Universal Prompt authentication method can
now be used to sign in to Zabbix, providing an additional layer of security beyond just a username and password.
US time format
Time and date displays in the frontend now conform to the US standard time/date display when the default (en_US) frontend
language is used.
Cloning simplified
Previously it was possible to Clone and Full clone hosts, templates and maps.
Now the option Clone has been removed, and the option Full clone has been renamed to Clone while still preserving all of the
previous functionality of Full clone.
All icons in the frontend have been switched from icon image sheets to fonts.
Modal forms
The Advanced configuration checkboxes, responsible for displaying advanced configuration options, have been replaced with collapsible blocks (see, for example, Connector configuration, Service configuration, Clock widget configuration, etc.). This improves
user experience, as collapsing these blocks and saving the configuration will no longer reset the configured advanced options to
their default values.
The menu section for viewing the top triggers is now named Top 100 triggers. The possibility to filter triggers by problem name
and tags has been added. Also, the number of detected problems instead of the number of status changes is now displayed for
each trigger.
URL fields
The character limit for all URL fields is now 2048 characters. This now includes: Tile URL for settings related to geographical maps,
Frontend URL for configuring miscellaneous frontend parameters, URLs for network maps and network map elements, URL A-C for
host inventory fields, and URL for the URL dashboard widget.
Authentication fields
The character limit for authentication fields User/User name and Password is now 255 characters. This applies to configuring HTTP
authentication for HTTP agent items, web scenarios, and connectors, as well as configuring authentication for simple checks, ODBC
monitoring, SSH checks, Telnet checks, and JMX monitoring.
When testing items or testing preprocessing steps, values retrieved from a host and test results are now truncated to a maximum
size of 512KB when sent to the frontend. Note that data larger than 512KB is still processed fully by Zabbix server.
All of the configured host dashboards for the selected host are now displayed as tabs under the host dashboards page header,
replacing the previous dropdown in the upper right corner. This allows easy transition between various host dashboards and
improves navigation through monitoring data.
Audit log
In Administration → Audit log, you can now enable/disable audit logging of low-level discovery, network discovery and autoregistration activities performed by the server (System user).
The default period of storing audit log records before those are deleted by the housekeeper has been changed from 365 days to
31 days.
In Monitoring → Latest data, the subfilter and data are no longer displayed by default if the filter is not set.
If upgrading from previous Zabbix versions, see also: Upgrade notes for 7.0.0.
The minimum required PHP version has been raised from 7.4.0 to 8.0.0.
Renamed elements
• Some dashboard widget parameters with the label Tags have been renamed for more clarity: Item tags (for the Data overview widget); Scenario tags (for the Web monitoring widget); Problem tags (for the Graph, Problem hosts, Problems, Problems by severity, and Trigger overview widgets);
• The action link to editing of the map contents, available from the map list in the Monitoring → Maps section, has been
renamed from Constructor to Edit;
• The fields for setting history and trend storage periods in the item and item prototype configuration forms have been renamed;
• In Top hosts widget configuration, the fields Order column and Host count have been renamed to Order by and Host limit to better describe their functions.
• In Graph widget configuration, the legend field Display min/max/avg has been renamed to Display min/avg/max, and data
set fields host pattern and item pattern have been renamed to host patterns and item patterns.
• In User profile settings, the Messaging tab has been renamed to Frontend notifications, in which the Frontend messaging
option has also been renamed to Frontend notifications.
Miscellaneous
Plugins Ember+
A new plugin for direct monitoring of Ember+ by Zabbix agent 2 has been added.
• Ember+ plugin readme
• Agent 2 items
• Ember+ plugin parameters
• Agent 2 installation
Dedicated installation packages are available for versions 8 and 9 of AlmaLinux, CentOS Stream, Oracle Linux, and Rocky Linux.
Earlier, single installation packages were provided for RHEL and RHEL-based distributions. Now separate packages are used for
RHEL and each of its above-mentioned derivatives to avoid potential problems with binary incompatibility.
ARM64/AArch64 installation packages are now available for Debian, RHEL 8, 9 and its derivatives, as well as SLES/OpenSUSE Leap
15.
The net.dns.perf item now returns a response time instead of 0 when the DNS server responds with an error code (for example,
NXDOMAIN or SERVFAIL).
With the Pie chart widget set to display a doughnut chart, you can now configure the Stroke width parameter - the width of the
chart sector border.
Additionally, when hovering over a sector, it now enlarges outward for a smoother visual effect, replacing the previous pop-out
behavior.
The ”Item list” type data sets for Graph and Pie chart widgets now support selecting compatible widgets as the data source for
items.
Templates For new templates and changes to existing templates, see Template changes.
2 Definitions
Overview In this section you can learn the meaning of some terms commonly used in Zabbix.
Definitions
host
- any physical or virtual device, application, service, or any other logically-related collection of monitored parameters.
host group
- a logical grouping of hosts. Host groups are used when assigning access rights to hosts for different user groups.
item
- a particular piece of data that you want to receive from a host, a metric of data.
value preprocessing
trigger
- a logical expression that defines a problem threshold and is used to ”evaluate” data received in items.
When received data are above the threshold, triggers go from ’Ok’ into a ’Problem’ state. When received data are below the
threshold, triggers stay in/return to an ’Ok’ state.
template
- a set of entities (items, triggers, graphs, low-level discovery rules, web scenarios) ready to be applied to one or several hosts.
The job of templates is to speed up the deployment of monitoring tasks on a host; also to make it easier to apply mass changes to
monitoring tasks. Templates are linked directly to individual hosts.
template group
- a logical grouping of templates. Template groups are used when assigning access rights to templates for different user groups.
event
- a single occurrence of something that deserves attention such as a trigger changing state or a discovery/agent autoregistration
taking place.
event tag
- a pre-defined marker for the event. It may be used in event correlation, permission granulation, etc.
event correlation
For example, you may define that a problem reported by one trigger may be resolved by another trigger, which may even use a
different data collection method.
problem
problem update
- problem management options provided by Zabbix, such as adding comment, acknowledging, changing severity or closing manually.
action
- a reaction to an event; an action consists of operations (e.g., sending a notification) and conditions (when the operation is carried out).
escalation
- a custom scenario for executing operations within an action; a sequence of sending notifications/executing remote commands.
media
notification
- a message about some event sent to a user via the chosen media channel.
remote command
- a pre-defined command that is automatically executed on a monitored host upon some condition.
web scenario
- one or several HTTP requests to check the availability of a web site.
frontend
dashboard
- customizable section of the web interface displaying summaries and visualizations of important information in visual units called
widgets.
widget
- visual unit displaying information of a certain kind and source (a summary, a map, a graph, the clock, etc.), used in the dashboard.
Zabbix API
- Zabbix API allows you to use the JSON RPC protocol to create, update and fetch Zabbix objects (like hosts, items, graphs and
others) or perform any other custom tasks.
Zabbix server
- a central process of Zabbix software that performs monitoring, interacts with Zabbix proxies and agents, calculates triggers, sends
notifications; a central repository of data.
Zabbix proxy
- a process that may collect data on behalf of Zabbix server, taking some processing load from the server.
Zabbix agent
- a process deployed on monitoring targets to actively monitor local resources and applications.
Zabbix agent 2
- a new generation of Zabbix agent to actively monitor local resources and applications, allowing the use of custom plugins for monitoring.
Attention:
Because Zabbix agent 2 shares much functionality with Zabbix agent, the term ”Zabbix agent” in documentation stands
for both - Zabbix agent and Zabbix agent 2, if the functional behavior is the same. Zabbix agent 2 is only specifically
named where its functionality differs.
encryption
- support of encrypted communications between Zabbix components (server, proxy, agent, zabbix_sender and zabbix_get utilities)
using Transport Layer Security (TLS) protocol.
agent autoregistration
- automated process whereby a Zabbix agent itself is registered as a host and monitoring of it is started.
network discovery
low-level discovery
- automated discovery of low-level entities on a particular device (e.g. file systems, network interfaces, etc).
item prototype
- a metric with certain parameters as variables, ready for low-level discovery. After low-level discovery the variables are automatically substituted with the real discovered parameters and the metric automatically starts gathering data.
trigger prototype
- a trigger with certain parameters as variables, ready for low-level discovery. After low-level discovery the variables are automatically substituted with the real discovered parameters and the trigger automatically starts evaluating data.
Prototypes of some other Zabbix entities are also in use in low-level discovery - graph prototypes, host prototypes, host group
prototypes.
3 Zabbix processes
Please use the sidebar to access content in the Zabbix process section.
1 Server
Overview
The server performs the polling and trapping of data; it calculates triggers and sends notifications to users. It is the central component to which Zabbix agents and proxies report data on availability and integrity of systems. The server can itself remotely check networked services (such as web servers and mail servers) using simple service checks.
The server is the central repository in which all configuration, statistical and operational data is stored, and it is the entity in Zabbix
that will actively alert administrators when problems arise in any of the monitored systems.
The functioning of a basic Zabbix server is broken into three distinct components; they are: Zabbix server, web frontend and
database storage.
All of the configuration information for Zabbix is stored in the database, which both the server and the web frontend interact with.
For example, when you create a new item using the web frontend (or API) it is added to the items table in the database. Then,
about once a minute Zabbix server will query the items table for a list of the items which are active that is then stored in a cache
within the Zabbix server. This is why it can take up to two minutes for any changes made in Zabbix frontend to show up in the
latest data section.
Running server
If installed as package
Zabbix server runs as a daemon process. The server can be started by executing:
/etc/init.d/zabbix-server start
Similarly, for stopping/restarting/viewing status, use the following commands:
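For example, with the same init script the stop/restart/status commands would typically be:
/etc/init.d/zabbix-server stop
/etc/init.d/zabbix-server restart
/etc/init.d/zabbix-server status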
If the above does not work you have to start it manually. Find the path to the zabbix_server binary and execute:
zabbix_server
You can use the following command-line parameters with Zabbix server:
zabbix_server -c /usr/local/etc/zabbix_server.conf
zabbix_server --help
zabbix_server -V
Runtime control
prof_enable[=<target>]
Enable profiling. Affects all processes if target is not specified. Enabled profiling provides details of all rwlocks/mutexes by function name.
Target can be specified as:
• process type - all processes of the specified type (e.g., history syncer); supported process types as profiling targets: alerter, alert manager, availability manager, configuration syncer, discovery manager, escalator, history poller, history syncer, housekeeper, http poller, icmp pinger, ipmi manager, ipmi poller, java poller, lld manager, lld worker, odbc poller, poller, preprocessing manager, preprocessing worker, proxy poller, self-monitoring, service manager, snmp trapper, task manager, timer, trapper, unreachable poller, vmware collector
• process type,N - process type and number (e.g., history syncer,1)
• pid - process identifier (1 to 65535); for larger values specify target as 'process type,N'
• scope - rwlock, mutex or processing; can be used with the process type and number (e.g., history syncer,1,processing) or all processes of a type (e.g., history syncer,rwlock)

prof_disable[=<target>]
Disable profiling. Affects all processes if target is not specified.
Target can be specified as:
• process type - all processes of the specified type (e.g., history syncer); supported process types as profiling targets: see prof_enable
• process type,N - process type and number (e.g., history syncer,1)
• pid - process identifier (1 to 65535); for larger values specify target as 'process type,N'
zabbix_server -R snmp_cache_reload
Example of using runtime control to trigger execution of the housekeeper:
zabbix_server -R housekeeper_execute
# Increase log level of process with PID 1234:
zabbix_server -c /usr/local/etc/zabbix_server.conf -R log_level_increase=1234
zabbix_server -R ha_set_failover_delay=10s
Process user
Zabbix server is designed to run as a non-root user. It will run as whatever non-root user it is started as, so you can run the server as any non-root user without any issues.
If you try to run it as 'root', it will switch to a hardcoded 'zabbix' user, which must be present on your system. You can only run the server as 'root' if you modify the 'AllowRoot' parameter in the server configuration file accordingly.
If Zabbix server and agent are run on the same machine it is recommended to use a different user for running the server than for
running the agent. Otherwise, if both are run as the same user, the agent can access the server configuration file and any Admin
level user in Zabbix can quite easily retrieve, for example, the database password.
Configuration file
Start-up scripts
The scripts are used to automatically start/stop Zabbix processes during system start-up/shutdown. The scripts are located in the misc/init.d directory.
• agent poller - asynchronous poller process for passive checks with a worker thread
• alert manager - alert queue manager
• alert syncer - alert DB writer
• alerter - process for sending notifications
• availability manager - process for host availability updates
• configuration syncer - process for managing in-memory cache of configuration data
• configuration syncer worker - process for resolving and synchronizing user macro values in item names
• connector manager - manager process for connectors
• connector worker - process for handling requests from the connector manager
• discovery manager - manager process for discovery of devices
• discovery worker - process for handling discovery tasks from the discovery manager
• escalator - process for escalation of actions
• ha manager - process for managing high availability
• history poller - process for handling calculated checks requiring a database connection
• history syncer - history DB writer
• housekeeper - process for removal of old historical data
• http agent poller - asynchronous poller process for HTTP checks with a worker thread
• http poller - web monitoring poller
• icmp pinger - poller for icmpping checks
• ipmi manager - IPMI poller manager
• ipmi poller - poller for IPMI checks
• java poller - poller for Java checks
• lld manager - manager process of low-level discovery tasks
• lld worker - worker process of low-level discovery tasks
• odbc poller - poller for ODBC checks
• poller - normal poller for passive checks
• preprocessing manager - manager of preprocessing tasks with preprocessing worker threads
• preprocessing worker - thread for data preprocessing
• proxy poller - poller for passive proxies
• proxy group manager - manager of proxy load balancing and high availability
• report manager - manager of scheduled report generation tasks
• report writer - process for generating scheduled reports
• self-monitoring - process for collecting internal server statistics
• service manager - process for managing services by receiving information about problems, problem tags, and problem
recovery from history syncer, task manager, and alert manager
• snmp poller - asynchronous poller process for SNMP checks with a worker thread (walk[OID] and get[OID] items only)
• snmp trapper - trapper for SNMP traps
• task manager - process for remote execution of tasks requested by other components (e.g., close problem, acknowledge
problem, check item value now, remote command functionality)
• timer - timer for processing maintenances
• trapper - trapper for active checks, traps, proxy communication
• trigger housekeeper - process for removing problems generated by triggers that have been deleted
• unreachable poller - poller for unreachable devices
• vmware collector - VMware data collector responsible for data gathering from VMware services
The server log file can be used to observe these process types.
Various types of Zabbix server processes can be monitored using the zabbix[process,<type>,<mode>,<state>] internal item.
Supported platforms
Due to the security requirements and mission-critical nature of server operation, UNIX is the only operating system that can
consistently deliver the necessary performance, fault tolerance and resilience. Zabbix operates on market leading versions.
• Linux
• Solaris
• AIX
• HP-UX
• Mac OS X
• FreeBSD
• OpenBSD
• NetBSD
• SCO Open Server
• Tru64/OSF1
Note:
Zabbix may work on other Unix-like operating systems as well.
Locale
Note that the server requires a UTF-8 locale so that some textual items can be interpreted correctly. Most modern Unix-like systems
have a UTF-8 locale as default, however, there are some systems where that may need to be set specifically.
1 High availability
Overview
High availability (HA) is typically required in critical infrastructures that can afford virtually no downtime. So for any service that
may fail there must be a failover option in place to take over should the current service fail.
Zabbix offers a native high-availability solution that is easy to set up and does not require any previous HA expertise. Native Zabbix
HA may be useful for an extra layer of protection against software/hardware failures of Zabbix server or to have less downtime
due to maintenance.
In the Zabbix high availability mode multiple Zabbix servers are run as nodes in a cluster. While one Zabbix server in the cluster
is active, others are on standby, ready to take over if necessary.
Switching to Zabbix HA is non-committal. You may switch back to standalone operation at any point.
Two parameters are required in the server configuration to start a Zabbix server as cluster node:
• HANodeName parameter must be specified for each Zabbix server that will be an HA cluster node.
This is a unique node identifier (e.g., zabbix-node-01) by which the server will be referred to in agent and proxy configurations. If you do not specify HANodeName, the server will be started in standalone mode.
The NodeAddress parameter (address:port) will be used by Zabbix frontend to connect to the active server node. NodeAddress
must match the IP or FQDN name of the respective Zabbix server.
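A minimal node configuration in zabbix_server.conf might look like this (the node name matches the example above; the address is illustrative):
HANodeName=zabbix-node-01
NodeAddress=192.168.1.10:10051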
Restart all Zabbix servers after making changes to the configuration files. They will now be started as cluster nodes. The new
status of the servers can be seen in Reports → System information and also by running:
zabbix_server -R ha_status
This runtime command will log the current HA cluster status into the Zabbix server log (and to stdout):
Preparing frontend
Make sure that Zabbix server address:port is not defined in the frontend configuration (found in conf/zabbix.conf.php of the
frontend files directory).
Zabbix frontend will autodetect the active node by reading settings from the nodes table in Zabbix database. Node address of the
active node will be used as the Zabbix server address.
Proxy configuration
HA cluster nodes (servers) must be listed in the configuration of either passive or active Zabbix proxy.
For a passive proxy, the node names must be listed in the Server parameter of the proxy, separated by a comma.
Server=zabbix-node-01,zabbix-node-02
For an active proxy, the node names must be listed in the Server parameter of the proxy, separated by a semicolon.
Server=zabbix-node-01;zabbix-node-02
Agent configuration
HA cluster nodes (servers) must be listed in the configuration of Zabbix agent or Zabbix agent 2.
To enable passive checks, the node names must be listed in the Server parameter, separated by a comma.
Server=zabbix-node-01,zabbix-node-02
To enable active checks, the node names must be listed in the ServerActive parameter. Note that for active checks the nodes must
be separated by a comma from any other servers, while the nodes themselves must be separated by a semicolon, e.g.:
ServerActive=zabbix-node-01;zabbix-node-02
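For example, the cluster nodes could be combined with a standalone server in one ServerActive parameter as follows (the standalone server name is illustrative):
ServerActive=zabbix-node-01;zabbix-node-02,zabbix-standalone.example.com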
Failover to standby node
Zabbix will fail over to another node automatically if the active node stops. There must be at least one node in standby status for
the failover to happen.
How fast will the failover be? All nodes update their last access time (and status, if it is changed) every 5 seconds. So:
• If the active node shuts down and manages to report its status as ”stopped”, another node will take over within 5 seconds.
• If the active node shuts down/becomes unavailable without being able to update its status, standby nodes will wait for the failover delay plus 5 seconds before taking over.
The failover delay is configurable, with the supported range between 10 seconds and 15 minutes (one minute by default). To
change the failover delay, you may run:
zabbix_server -R ha_set_failover_delay=5m
Managing HA cluster
The current status of the HA cluster can be managed using the dedicated runtime control options:
• ha_status - log HA cluster status in the Zabbix server log (and to stdout)
• ha_remove_node=target - remove an HA node identified by its <target> - number of the node in the list (the number can
be obtained from the output of running ha_status), e.g.:
zabbix_server -R ha_remove_node=2
Note that active/standby nodes cannot be removed.
• ha_set_failover_delay=delay - set HA failover delay (between 10 seconds and 15 minutes; time suffixes are supported,
e.g. 10s, 1m)
Disabling HA cluster
Upgrading HA cluster
In a minor version upgrade it is sufficient to upgrade the first node, make sure it is upgraded and running, and then start upgrading the next node.
Implementation details
The high availability (HA) cluster is an opt-in solution and it is supported for Zabbix server. The native HA solution is designed to be simple to use; it will work across sites and does not have specific requirements for the databases that Zabbix recognizes. Users are free to use the native Zabbix HA solution, or a third-party HA solution, depending on what best suits the high availability requirements in their environment.
Each HA node:
• is configured separately
• uses the same database
• may have several modes: active, standby, unavailable, stopped
Only one node can be active (working) at a time. A standby node runs only one process - the HA manager. Standby nodes do no data collection, processing or other regular server activities; they do not listen on ports and have only a minimum of database connections.
Both active and standby nodes update their last access time every 5 seconds. Each standby node monitors the last access time
of the active node. If the last access time of the active node is over ’failover delay’ seconds, the standby node switches itself to
be the active node and assigns ’unavailable’ status to the previously active node.
The active node monitors its own database connectivity - if the connection is lost for more than 'failover delay' minus 5 seconds, it must stop all
processing and switch to standby mode. The active node also monitors the status of the standby nodes - if the last access time of
a standby node is over ’failover delay’ seconds, the standby node is assigned the ’unavailable’ status.
2 Agent
Overview
Zabbix agent is deployed on a monitoring target to actively monitor local resources and applications (hard drives, memory, processor statistics, etc.).
The agent gathers operational information locally and reports data to Zabbix server for further processing. In case of failures
(such as a hard disk running full or a crashed service process), Zabbix server can actively alert the administrators of the particular
machine that reported the failure.
Zabbix agents are highly efficient because of the use of native system calls for gathering statistical information.
In a passive check the agent responds to a data request. Zabbix server (or proxy) asks for data, for example, CPU load, and Zabbix
agent sends back the result.
Active checks require more complex processing. The agent must first retrieve a list of items from Zabbix server for independent
processing. Then it will periodically send new values to the server.
Whether to perform passive or active checks is configured by selecting the respective monitoring item type. Zabbix agent processes
items of type ’Zabbix agent’ or ’Zabbix agent (active)’.
Supported platforms
Pre-compiled Zabbix agent binaries are available for the supported platforms:
It is also possible to download legacy Zabbix agent binaries for NetBSD and HP-UX, and those are compatible with current Zabbix
server/proxy version.
Installation
See the package installation section for instructions on how to install Zabbix agent as package.
Alternatively see instructions for manual installation if you do not want to use packages.
Attention:
In general, 32-bit Zabbix agents will work on 64-bit systems, but may fail in some cases.
If installed as package
Zabbix agent runs as a daemon process. The agent can be started by executing:
/etc/init.d/zabbix-agent start
Similarly, for stopping/restarting/viewing status of Zabbix agent, use the following commands:
If the above does not work you have to start it manually. Find the path to the zabbix_agentd binary and execute:
zabbix_agentd
Agent on Windows systems
Preparation
Zabbix agent is distributed as a zip archive. After you download the archive you need to unpack it. Choose any folder to store Zabbix agent and the configuration file, e.g.
C:\zabbix
Copy bin\zabbix_agentd.exe and conf\zabbix_agentd.conf files to c:\zabbix.
Edit the c:\zabbix\zabbix_agentd.conf file to your needs, making sure to specify a correct ”Hostname” parameter.
Installation
After this is done use the following command to install Zabbix agent as Windows service:
C:\> c:\zabbix\zabbix_agentd.exe -c c:\zabbix\zabbix_agentd.conf -i
Now you should be able to configure ”Zabbix agent” service normally as any other Windows service.
Options
It is possible to run multiple instances of the agent on a host. A single instance can use the default configuration file or a configuration file specified in the command line. In case of multiple instances each agent instance must have its own configuration file
(one of the instances can use the default configuration file).
Examples:
zabbix_agentd --print
zabbix_agentd -t "mysql.ping" -c /etc/zabbix/zabbix_agentd.conf
zabbix_agentd.exe -i
zabbix_agentd.exe -i -m -c zabbix_agentd.conf
zabbix_agentd.exe -c zabbix_agentd.conf -S delayed
Runtime control
With runtime control options you may change the log level of agent processes.
log_level_increase[=<target>]
Increase log level. If target is not specified, all processes are affected.
Target can be specified as:
• process type - all processes of the specified type (e.g., listener); see all agent process types
• process type,N - process type and number (e.g., listener,3)
• pid - process identifier (1 to 65535); for larger values specify target as 'process type,N'

log_level_decrease[=<target>]
Decrease log level. If target is not specified, all processes are affected.

userparameter_reload
Reload values of the UserParameter and Include options from the current configuration file.
Examples:
zabbix_agentd -R log_level_increase
zabbix_agentd -R log_level_increase=listener,3
zabbix_agentd -R log_level_increase=1234
zabbix_agentd -R log_level_decrease="active checks"
Note:
Runtime control is not supported on OpenBSD, NetBSD and Windows.
Process user
Zabbix agent on UNIX is designed to run as a non-root user. It will run as whatever non-root user it is started as, so you can run the agent as any non-root user without any issues.
If you try to run it as 'root', it will switch to a hardcoded 'zabbix' user, which must be present on your system. You can only run the agent as 'root' if you modify the 'AllowRoot' parameter in the agent configuration file accordingly.
Configuration file
For details on configuring Zabbix agent see the configuration file options for zabbix_agentd or Windows agent.
Locale
Note that the agent requires a UTF-8 locale so that some textual agent items can return the expected content. Most modern
Unix-like systems have a UTF-8 locale as default, however, there are some systems where that may need to be set specifically.
Exit code
3 Agent 2
Overview
Zabbix agent 2 is a new generation of Zabbix agent and may be used in place of Zabbix agent. Zabbix agent 2 has been developed
to:
Agent 2 is written in Go programming language (with some C code of Zabbix agent reused). A configured Go environment with a
currently supported Go version is required for building Zabbix agent 2.
Agent 2 does not have built-in daemonization support on Linux; it can be run as a Windows service.
Passive checks work similarly to Zabbix agent. Active checks support scheduled/flexible intervals and check concurrency within
one active server.
Note:
By default, after a restart, Zabbix agent 2 will schedule the first data collection for active checks at a conditionally random
time within the item's update interval to prevent spikes in resource usage. To perform active checks that do not have Scheduling update interval immediately after the agent restart, set the ForceActiveChecksOnStart parameter (global-level) or Plugins.<Plugin name>.System.ForceActiveChecksOnStart (affects only specific plugin checks) in the configuration file. The plugin-level parameter, if set, will override the global parameter.
Check concurrency
Checks from different plugins can be executed concurrently. The number of concurrent checks within one plugin is limited by the
plugin capacity setting. Each plugin may have a hardcoded capacity setting (1000 being default) that can be lowered using the
Plugins.<PluginName>.System.Capacity=N setting in the Plugins configuration parameter.
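For example, the capacity of a single plugin could be lowered like this (the Uptime plugin is used purely as an illustration):
Plugins.Uptime.System.Capacity=10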
Supported platforms
• Windows (all desktop and server versions since Windows 10/Server 2016) - available as a pre-compiled binary or in Zabbix
sources
• Linux - available in distribution packages or Zabbix sources
Installation
Windows:
• from a pre-compiled binary - download the binary and follow the instructions on the Windows agent installation from MSI
page
• from sources - see Building Zabbix agent 2 on Windows
Linux:
• from distribution packages - follow the instructions on the Zabbix packages page, available by choosing your distribution
and the Agent 2 component
• from sources - see Installation from sources; note that you must configure the sources by specifying the --enable-agent2
configuration option
Note:
Zabbix agent 2 monitoring capabilities can be extended with plugins. While built-in plugins are available out-of-the-box,
loadable plugins must be installed separately. For more information, see Plugins.
Options
The following command-line parameters can be used with Zabbix agent 2:
Parameter Description
Runtime control
userparameter_reload - Reload values of the UserParameter and Include options from the current configuration file.
help - Display help information on runtime control.
Examples:
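For example, the runtime options above could be invoked as follows (assuming the zabbix_agent2 binary is in the path):
zabbix_agent2 -R userparameter_reload
zabbix_agent2 -R help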
Configuration file
The configuration parameters of agent 2 are mostly compatible with Zabbix agent with some exceptions.
ControlSocket - the runtime control socket path. Agent 2 uses a control socket for runtime commands.
EnablePersistentBuffer, PersistentBufferFile, PersistentBufferPeriod - these parameters are used to configure persistent storage on agent 2 for active items.
ForceActiveChecksOnStart - determines whether the agent should perform active checks immediately after restart or spread them evenly over time.
Plugins - plugins may have their own parameters, in the format Plugins.<Plugin name>.<Parameter>=<value>. A common plugin parameter is System.Capacity, setting the limit of checks that can be executed at the same time.
StatusPort - the port agent 2 will be listening on for HTTP status requests and display of a list of configured plugins and some internal parameters.

Dropped parameters:
AllowRoot, User - not supported because daemonization is not supported.
LoadModule, LoadModulePath - loadable modules are not supported.
StartAgents - this parameter was used in Zabbix agent to increase passive check concurrency or disable passive checks. In Agent 2, the concurrency is configured at a plugin level and can be limited by a capacity setting, whereas disabling passive checks is not currently supported.
HostInterface, HostInterfaceItem - not yet supported.
For more details see the configuration file options for zabbix_agent2.
Exit codes
Zabbix agent 2 can also be compiled with older OpenSSL versions (1.0.1, 1.0.2).
In this case Zabbix provides mutexes for locking in OpenSSL. If a mutex lock or unlock fails then an error message is printed to the
standard error stream (STDERR) and Agent 2 exits with return code 2 or 3, respectively.
4 Proxy
Overview
Zabbix proxy is a process that may collect monitoring data from one or more monitored devices and send the information to the
Zabbix server, essentially working on behalf of the server. All collected data is buffered locally and then transferred to the Zabbix
server the proxy belongs to.
Deploying a proxy is optional, but may be very beneficial to distribute the load of a single Zabbix server. If only proxies collect
data, processing on the server becomes less CPU and disk I/O hungry.
A Zabbix proxy is the ideal solution for centralized monitoring of remote locations, branches and networks with no local administrators.
Attention:
Note that databases supported with Zabbix proxy are SQLite, MySQL and PostgreSQL. Using Oracle is at your own risk and
may contain some limitations as, for example, in return values of low-level discovery rules.
Running proxy
If installed as package
Zabbix proxy runs as a daemon process. The proxy can be started by executing:
/etc/init.d/zabbix-proxy start
Similarly, for stopping/restarting/viewing status of Zabbix proxy, use the following commands:
If the above does not work you have to start it manually. Find the path to the zabbix_proxy binary and execute:
zabbix_proxy
You can use the following command-line parameters with Zabbix proxy:
zabbix_proxy -c /usr/local/etc/zabbix_proxy.conf
zabbix_proxy --help
zabbix_proxy -V
Runtime control
log_level_increase[=<target>]
Increase log level. Affects all processes if target is not specified. Not supported on *BSD systems.
Target can be specified as:
• process type - all processes of the specified type (e.g., poller); see all proxy process types
• process type,N - process type and number (e.g., poller,3)
• pid - process identifier (1 to 65535); for larger values specify target as 'process type,N'

log_level_decrease[=<target>]
Decrease log level. Affects all processes if target is not specified. Not supported on *BSD systems.

prof_enable[=<target>]
Enable profiling. Affects all processes if target is not specified. Enabled profiling provides details of all rwlocks/mutexes by function name.
Target can be specified as:
• process type - all processes of the specified type (e.g., history syncer); see all proxy process types
• process type,N - process type and number (e.g., history syncer,1)
• pid - process identifier (1 to 65535); for larger values specify target as 'process type,N'
• scope - rwlock, mutex or processing; can be used with the process type and number (e.g., history syncer,1,processing) or all processes of a type (e.g., history syncer,rwlock)

prof_disable[=<target>]
Disable profiling. Affects all processes if target is not specified.
Target can be specified as:
• process type - all processes of the specified type (e.g., history syncer); see all proxy process types
• process type,N - process type and number (e.g., history syncer,1)
• pid - process identifier (1 to 65535); for larger values specify target as 'process type,N'
zabbix_proxy -R snmp_cache_reload
Example of using runtime control to trigger execution of the housekeeper:
zabbix_proxy -R housekeeper_execute
Process user
Zabbix proxy is designed to run as a non-root user. It will run as whatever non-root user it is started as, so you can run the proxy as any non-root user without any issues.
If you try to run it as 'root', it will switch to a hardcoded 'zabbix' user, which must be present on your system. You can only run the proxy as 'root' if you modify the 'AllowRoot' parameter in the proxy configuration file accordingly.
Configuration file
• agent poller - asynchronous poller process for passive checks with a worker thread
• availability manager - process for host availability updates
• configuration syncer - process for managing in-memory cache of configuration data
• data sender - proxy data sender
• discovery manager - manager process for discovery of devices
• discovery worker - process for handling discovery tasks from the discovery manager
• history syncer - history DB writer
• housekeeper - process for removal of old historical data
• http agent poller - asynchronous poller process for HTTP checks with a worker thread
• http poller - web monitoring poller
• icmp pinger - poller for icmpping checks
• ipmi manager - IPMI poller manager
• ipmi poller - poller for IPMI checks
• java poller - poller for Java checks
• odbc poller - poller for ODBC checks
• poller - normal poller for passive checks
• preprocessing manager - manager of preprocessing tasks with preprocessing worker threads
• preprocessing worker - thread for data preprocessing
• self-monitoring - process for collecting internal server statistics
• snmp poller - asynchronous poller process for SNMP checks with a worker thread (walk[OID] and get[OID] items only)
• snmp trapper - trapper for SNMP traps
• task manager - process for remote execution of tasks requested by other components (e.g. close problem, acknowledge
problem, check item value now, remote command functionality)
• trapper - trapper for active checks, traps, proxy communication
• unreachable poller - poller for unreachable devices
• vmware collector - VMware data collector responsible for data gathering from VMware services
The proxy log file can be used to observe these process types.
Various types of Zabbix proxy processes can be monitored using the zabbix[process,<type>,<mode>,<state>] internal item.
Supported platforms
Zabbix proxy runs on the same list of supported platforms as Zabbix server.
Memory buffer
The memory buffer allows storing new data (item values, network discovery results, host autoregistration data) in memory and uploading it to Zabbix server without accessing the database. The memory buffer has been introduced for the proxy since Zabbix 7.0.
In installations before Zabbix 7.0 the collected data was stored in the database before uploading to Zabbix server. For these
installations this remains the default behavior after upgrading to Zabbix 7.0.
For optimized performance, it is recommended to configure the use of memory buffer on the proxy. This is possible by modifying
the value of ProxyBufferMode from ”disk” (hardcoded default for existing installations) to ”hybrid” (recommended) or ”memory”.
It is also required to set the memory buffer size (ProxyMemoryBufferSize parameter).
In hybrid mode the buffer is protected from data loss by flushing unsent data to the database if the proxy is stopped, the buffer is full, or the data is too old. When all values have been flushed into the database, the proxy goes back to using the memory buffer.
In memory mode, the memory buffer will be used, however, there is no protection against data loss. If the proxy is stopped, or the
memory gets overfilled, the unsent data will be dropped.
The hybrid mode (ProxyBufferMode=hybrid) is applied to all new installations since Zabbix 7.0.
Additional parameters such as ProxyMemoryBufferSize and ProxyMemoryBufferAge define the memory buffer size and the maximum age of data in the buffer, respectively.
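A hybrid-mode setup in zabbix_proxy.conf might look like this (the size and age values are illustrative only):
ProxyBufferMode=hybrid
ProxyMemoryBufferSize=16M
ProxyMemoryBufferAge=600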
Note that with conflicting configuration the proxy will print an error and fail to start, for example, if:
• ProxyBufferMode is set to ”hybrid” or ”memory” and ProxyMemoryBufferSize is ”0”;
• ProxyBufferMode is set to ”hybrid” or ”memory” and ProxyLocalBuffer is not ”0”.
Locale
Note that the proxy requires a UTF-8 locale so that some textual items can be interpreted correctly. Most modern Unix-like systems
have a UTF-8 locale as default, however, there are some systems where that may need to be set specifically.
5 Java gateway
Overview
Native support for monitoring JMX applications exists in the form of a Zabbix daemon called ”Zabbix Java gateway”. Zabbix Java
gateway is a daemon written in Java. To find out the value of a particular JMX counter on a host, Zabbix server queries Zabbix Java
gateway, which uses the JMX management API to query the application of interest remotely. The application does not need any
additional software installed, it just has to be started with -Dcom.sun.management.jmxremote option on the command line.
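For example, a Java application could be started with remote JMX access enabled roughly as follows (the port number, the disabled authentication/SSL settings, and the application JAR name are placeholders; production setups should secure JMX appropriately):
java -Dcom.sun.management.jmxremote \
     -Dcom.sun.management.jmxremote.port=12345 \
     -Dcom.sun.management.jmxremote.authenticate=false \
     -Dcom.sun.management.jmxremote.ssl=false \
     -jar my-application.jar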
Java gateway accepts incoming connections from Zabbix server or proxy and can only be used as a "passive proxy". Unlike Zabbix proxy, it may also be used from a Zabbix proxy (Zabbix proxies cannot be chained). Access to each Java gateway is
configured directly in Zabbix server or proxy configuration file, thus only one Java gateway may be configured per Zabbix server
or Zabbix proxy. If a host will have items of type JMX agent and items of other type, only the JMX agent items will be passed to
Java gateway for retrieval.
When an item has to be updated over Java gateway, Zabbix server or proxy will connect to the Java gateway and request the value,
which Java gateway in turn retrieves and passes back to the server or proxy. As such, Java gateway does not cache any values.
Zabbix server or proxy has a specific type of processes that connect to Java gateway, controlled by the option StartJavaPollers.
Internally, Java gateway starts multiple threads, controlled by the START_POLLERS option. On the server side, if a connection
takes more than Timeout seconds, it will be terminated, but Java gateway might still be busy retrieving the value from the JMX counter. To solve this, there is the TIMEOUT option in Java gateway that allows setting a timeout for JMX network operations.
Zabbix server or proxy will try to pool requests to a single JMX target together as much as possible (affected by item intervals) and
send them to the Java gateway in a single connection for better performance.
It is suggested to have StartJavaPollers less than or equal to START_POLLERS, otherwise there might be situations when
no threads are available in the Java gateway to service incoming requests; in such a case Java gateway uses ThreadPoolExecutor.CallerRunsPolicy, meaning that the main thread will service the incoming request and will not accept any new requests
temporarily.
If you are trying to monitor Wildfly-based Java applications with Zabbix Java gateway, please install the latest jboss-client.jar
available on the Wildfly download page.
You can install Java gateway either from the sources or packages downloaded from Zabbix website.
Using the links below you can access information how to get and run Zabbix Java gateway, how to configure Zabbix server (or
Zabbix proxy) to use Zabbix Java gateway for JMX monitoring, and how to configure Zabbix items in Zabbix frontend that correspond
to particular JMX counters.
Overview
If installed from sources, the following information will help you in setting up Zabbix Java gateway.
Overview of files
If you obtained Java gateway from sources, you should have ended up with a collection of shell scripts, JAR and configuration files
under $PREFIX/sbin/zabbix_java. The role of these files is summarized below.
bin/zabbix-java-gateway-$VERSION.jar
Java gateway JAR file itself.
lib/logback-core-0.9.27.jar
lib/logback-classic-0.9.27.jar
lib/slf4j-api-1.6.1.jar
lib/android-json-4.3_r3.1.jar
Dependencies of Java gateway: Logback, SLF4J, and Android JSON library.
lib/logback.xml
lib/logback-console.xml
Configuration files for Logback.
shutdown.sh
startup.sh
Convenience scripts for starting and stopping Java gateway.
settings.sh
Configuration file that is sourced by startup and shutdown scripts above.
By default, Java gateway listens on port 10052. If you plan on running Java gateway on a different port, you can specify that in
settings.sh script. See the description of Java gateway configuration file for how to specify this and other options.
Warning:
Port 10052 is not IANA registered.
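For example, assuming the LISTEN_PORT variable provided by the bundled settings.sh, the port could be changed like this (the value is illustrative):
LISTEN_PORT=10053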
Once you are comfortable with the settings, you can start Java gateway by running the startup script:
./startup.sh
Likewise, once you no longer need Java gateway, run the shutdown script to stop it:
./shutdown.sh
Note that unlike server or proxy, Java gateway is lightweight and does not need a database.
With Java gateway up and running, you have to tell Zabbix server where to find Zabbix Java gateway. This is done by specifying
JavaGateway and JavaGatewayPort parameters in the server configuration file. If the host on which JMX application is running is
monitored by Zabbix proxy, then you specify the connection parameters in the proxy configuration file instead.
JavaGateway=192.168.3.14
JavaGatewayPort=10052
By default, server does not start any processes related to JMX monitoring. If you wish to use it, however, you have to specify the
number of pre-forked instances of Java pollers. You do this in the same way you specify regular pollers and trappers.
StartJavaPollers=5
Do not forget to restart server or proxy, once you are done with configuring them.
In case there are any problems with Java gateway or an error message that you see about an item in the frontend is not descriptive
enough, you might wish to take a look at Java gateway log file.
By default, Java gateway logs its activities into /tmp/zabbix_java.log file with log level ”info”. Sometimes that information is not
enough and there is a need for information at log level ”debug”. In order to increase logging level, modify file lib/logback.xml and
change the level attribute of <root> tag to ”debug”:
<root level="debug">
<appender-ref ref="FILE" />
</root>
Note that unlike Zabbix server or Zabbix proxy, there is no need to restart Zabbix Java gateway after changing logback.xml file -
changes in logback.xml will be picked up automatically. When you are done with debugging, you can return the logging level to
”info”.
If you wish to log to a different file or a completely different medium like database, adjust logback.xml file to meet your needs.
See Logback Manual for more details.
Sometimes for debugging purposes it is useful to start Java gateway as a console application rather than a daemon. To do that,
comment out PID_FILE variable in settings.sh. If PID_FILE is omitted, startup.sh script starts Java gateway as a console application
and makes Logback use lib/logback-console.xml file instead, which not only logs to console, but has logging level ”debug” enabled
as well.
Finally, note that since Java gateway uses SLF4J for logging, you can replace Logback with the framework of your choice by placing
an appropriate JAR file in lib directory. See SLF4J Manual for more details.
JMX monitoring
Overview
If installed from RHEL packages, the following information will help you in setting up Zabbix Java gateway.
/etc/zabbix/zabbix_java_gateway.conf
For more details, see Zabbix Java gateway configuration parameters.
With Java gateway up and running, you have to tell Zabbix server where to find Zabbix Java gateway. This is done by specifying
JavaGateway and JavaGatewayPort parameters in the server configuration file. If the host on which JMX application is running is
monitored by Zabbix proxy, then you specify the connection parameters in the proxy configuration file instead.
JavaGateway=192.168.3.14
JavaGatewayPort=10052
By default, server does not start any processes related to JMX monitoring. If you wish to use it, however, you have to specify the
number of pre-forked instances of Java pollers. You do this in the same way you specify regular pollers and trappers.
StartJavaPollers=5
Do not forget to restart server or proxy, once you are done with configuring them.
/var/log/zabbix/zabbix_java_gateway.log
If you would like to increase the logging level, edit the file:
/etc/zabbix/zabbix_java_gateway_logback.xml
and change level="info" to ”debug” or even ”trace” (for deep troubleshooting):
<configuration scan="true" scanPeriod="15 seconds">
[...]
<root level="info">
<appender-ref ref="FILE" />
</root>
</configuration>
JMX monitoring
Overview
If installed from Debian/Ubuntu packages, the following information will help you in setting up Zabbix Java gateway.
/etc/zabbix/zabbix_java_gateway.conf
For more details, see Zabbix Java gateway configuration parameters.
With Java gateway up and running, you have to tell Zabbix server where to find Zabbix Java gateway. This is done by specifying
JavaGateway and JavaGatewayPort parameters in the server configuration file. If the host on which JMX application is running is
monitored by Zabbix proxy, then you specify the connection parameters in the proxy configuration file instead.
JavaGateway=192.168.3.14
JavaGatewayPort=10052
By default, server does not start any processes related to JMX monitoring. If you wish to use it, however, you have to specify the
number of pre-forked instances of Java pollers. You do this in the same way you specify regular pollers and trappers.
StartJavaPollers=5
Do not forget to restart server or proxy, once you are done with configuring them.
/var/log/zabbix/zabbix_java_gateway.log
If you would like to increase the logging level, edit the file:
/etc/zabbix/zabbix_java_gateway_logback.xml
and change level="info" to ”debug” or even ”trace” (for deep troubleshooting):
<configuration scan="true" scanPeriod="15 seconds">
[...]
<root level="info">
<appender-ref ref="FILE" />
</root>
</configuration>
JMX monitoring
6 Sender
Overview
Zabbix sender is a command line utility that may be used to send performance data to Zabbix server for processing.
The utility is usually used in long running user scripts for periodical sending of availability and performance data.
For sending results directly to Zabbix server or proxy, a trapper item type must be configured.
See also zabbix_utils - a Python library that has built-in functionality to act like Zabbix sender.
An example of sending a value:
cd bin
./zabbix_sender -z zabbix -s "Linux DB3" -k db.connections -o 43
where:
• z - Zabbix server host (an IP address can be used as well)
• s - technical name of the monitored host (as registered in Zabbix frontend)
• k - item key
• o - value to send
Attention:
Options that contain whitespaces must be quoted using double quotes.
Zabbix sender can be used to send multiple values from an input file. See the Zabbix sender manpage for more information.
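For example, values prepared in a file could be sent in one run (the file path is illustrative; each line of the file contains whitespace-delimited <hostname> <key> <value>):
cd bin
./zabbix_sender -z zabbix -i /tmp/values.txt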
If a configuration file is specified, Zabbix sender uses all addresses defined in the agent ServerActive configuration parameter for
sending data. If sending to one address fails, the sender tries sending to the other addresses. If sending of batch data fails to one
address, the following batches are not sent to this address.
Zabbix sender accepts strings in UTF-8 encoding (for both UNIX-like systems and Windows) without a byte order mark (BOM) at the beginning of the file.
zabbix_sender.exe [options]
In real-time sending scenarios, zabbix_sender will gather multiple values passed to it in close succession and send them to the server in a single connection. A value that is no more than 0.2 seconds apart from the previous value can be put in the same stack, but the maximum polling time is still 1 second.
Note:
Zabbix sender will terminate if invalid (not following parameter=value notation) parameter entry is present in the specified
configuration file.
7 Get
Overview
Zabbix get is a command line utility which can be used to communicate with Zabbix agent and retrieve required information from
the agent.
See also zabbix_utils - a Python library that has built-in functionality to act like Zabbix get.
An example of running Zabbix get under UNIX to get the processor load value from the agent:
cd bin
./zabbix_get -s 127.0.0.1 -p 10050 -k system.cpu.load[all,avg1]
Another example of running Zabbix get for capturing a string from a website:
cd bin
./zabbix_get -s 192.168.1.1 -p 10050 -k "web.page.regexp[www.example.com,,,\"USA: ([a-zA-Z0-9.-]+)\",,\1]"
Note that the item key here contains a space so quotes are used to mark the item key to the shell. The quotes are not part of the
item key; they will be trimmed by the shell and will not be passed to Zabbix agent.
Zabbix get accepts the following command-line parameters:
-s --host <host name or IP> Specify host name or IP address of a host
-p --port <port number> Specify port number of agent running on the host (default: 10050)
-I --source-address <IP address> Specify source IP address
-t --timeout <seconds> Specify timeout. Valid range: 1-30 seconds (default: 30 seconds)
-k --key <item key> Specify key of item to retrieve value for
-P --protocol <value> Protocol used to communicate with the agent. Values:
auto - connect using JSON protocol, fall back and retry with plaintext protocol
json - connect using JSON protocol
plaintext - connect using plaintext protocol, where just the item key is sent
-h --help Display this help message
-V --version Display version number
On Windows, Zabbix get is run as:
zabbix_get.exe [options]
8 JS
Overview
zabbix_js is a command line utility that can be used for embedded script testing.
This utility will execute a user script with a string parameter and print the result. Scripts are executed using the embedded Zabbix
scripting engine.
In case of compilation or execution errors, zabbix_js will print the error to stderr and exit with code 1.
Usage
-s, --script script-file Specify the file name of the script to execute. If '-' is specified as file name, the script will be read from stdin.
-i, --input input-file Specify the file name of the input parameter. If '-' is specified as file name, the input will be read from stdin.
-p, --param input-param Specify the input parameter.
-l, --loglevel log-level Specify the log level.
-t, --timeout timeout Specify the timeout in seconds. Valid range: 1-60 seconds (default: 10 seconds).
-h, --help Display help information.
-V, --version Display the version number.
-w <webdriver url> Enables browser monitoring.
Example:
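A minimal invocation might look like this (the script file name and parameter are illustrative only):
zabbix_js -s script.js -p example-string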
9 Web service
Overview
Zabbix web service is a process that is used for communication with external web services. Currently, Zabbix web service is used
for generating and sending scheduled reports with plans to add additional functionality in the future.
Zabbix server connects to the web service via HTTP(S). Zabbix web service requires Google Chrome to be installed on the same
host; on some distributions the service may also work with Chromium (see known issues) .
Installation
To compile Zabbix web service from sources, specify the --enable-webservice configure option.
See also:
4 Installation
1 Getting Zabbix
Overview
To download the latest distribution packages, pre-compiled sources or the virtual appliance, go to the Zabbix download page, where
direct links to latest versions are provided.
• You can download the released stable versions from the official Zabbix website
• You can download nightly builds from the official Zabbix website developer page
• You can get the latest development version from the Git source code repository system:
– The primary location of the full repository is at https://git.zabbix.com/scm/zbx/zabbix.git
– Master and supported releases are also mirrored to Github at https://github.com/zabbix/zabbix
A Git client must be installed to clone the repository. The official command-line Git client package is commonly called git in distributions. To install it, for example, on Debian/Ubuntu, run:
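For example (the package name and package tool may vary by distribution):
sudo apt-get update
sudo apt-get install git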
2 Requirements
Hardware
Memory
Zabbix requires both physical memory and disk space. The amount of required disk space obviously depends on the number of hosts and parameters that are being monitored. If you are planning to keep a long history of monitored parameters, you should be thinking of at least a couple of gigabytes to have enough space to store the history in the database. Each Zabbix daemon process requires several connections to a database server. The amount of memory allocated for a connection depends on the configuration of the database engine.
Note:
The more physical memory you have, the faster the database (and therefore Zabbix) works.
CPU
Zabbix, and especially the Zabbix database, may require significant CPU resources depending on the number of monitored parameters and the chosen database engine.
Other hardware
A serial communication port and a serial GSM modem are required for using SMS notification support in Zabbix. A USB-to-serial converter will also work.
These are size and hardware configuration examples to start with. Each Zabbix installation is unique. Make sure to benchmark the
performance of your Zabbix system in a staging or development environment, so that you can fully understand your requirements
before deploying the Zabbix installation to its production environment.
1 1 metric = 1 item + 1 trigger + 1 graph.
2 Example with Amazon general purpose EC2 instances, using ARM64 or x86_64 architecture; a proper instance type (compute/memory/storage optimized) should be selected during Zabbix installation evaluation and testing, before installing in its production environment.
Note:
The actual configuration depends very much on the number of active items and refresh rates (see the database size section of this page for details). It is highly recommended to run the database on a separate server for large installations.
Supported platforms
Due to security requirements and the mission-critical nature of the monitoring server, UNIX is the only operating system that can consistently deliver the necessary performance, fault tolerance, and resilience. Zabbix operates on market-leading UNIX versions.
Zabbix components are available and tested for the following platforms:
Platform Server Agent Agent2
Linux x x x
IBM AIX x x -
FreeBSD x x -
NetBSD x x -
OpenBSD x x -
HP-UX x x -
Mac OS X x x -
Solaris x x -
Windows - x x
Note:
Zabbix server/agent may work on other Unix-like operating systems as well. Zabbix agent is supported on all Windows
desktop and server versions since XP (64-bit version). Zabbix agent will not work on AIX platforms below versions 6.1
TL07 / AIX 7.1 TL01.
To prevent critical security vulnerabilities in Zabbix agent 2, it is compiled only with supported Go releases. As of
Go 1.21, the minimum required Windows versions are raised, therefore, the minimum Windows version for Zabbix agent
2 is Windows 10/Server 2016.
Attention:
Zabbix disables core dumps if compiled with encryption and does not start if the system does not allow disabling of core
dumps.
Required software
Zabbix is built around modern web servers, leading database engines, and the PHP scripting language.
If stated as mandatory, the required software/library is strictly necessary. Optional ones are needed for supporting some specific
function.
Software, mandatory status, supported versions and comments:
• MySQL/Percona - one of the supported backend databases; supported versions: 8.0.30 - 8.4.X. Required if MySQL (or Percona) is used as the Zabbix backend database. The InnoDB engine is required.
Note:
Although Zabbix can work with databases available in the operating systems, for the best experience, we recommend
using databases installed from the official database developer repositories.
Frontend
If stated as mandatory, the required software/library is strictly necessary. Optional ones are needed for supporting some specific
function.
Library, mandatory status, minimum version and comments:
• jQuery JavaScript Library (yes; 3.6.0) - JavaScript library that simplifies the process of cross-browser development.
• jQuery UI (1.12.1) - A set of user interface interactions, effects, widgets, and themes built on top of jQuery.
• OneLogin's SAML PHP Toolkit (4.0.0) - A PHP toolkit that adds SAML 2.0 authentication support to be able to sign in to Zabbix.
• Symfony Yaml Component (5.1.0) - Adds support to export and import Zabbix configuration elements in the YAML format.
Note:
Zabbix may work on previous versions of Apache, MySQL, Oracle, and PostgreSQL as well.
Attention:
For fonts other than the default DejaVu, the PHP function imagerotate might be required. If it is missing, these fonts might be rendered incorrectly when a graph is displayed. This function is only available if PHP is compiled with bundled GD, which is not the case in Debian and other distributions.
The latest stable versions of Google Chrome, Mozilla Firefox, Microsoft Edge, Apple Safari, and Opera are supported.
Warning:
The same-origin policy for IFrames is implemented, which means that Zabbix cannot be placed in frames on a different
domain.
Still, pages placed into a Zabbix frame will have access to Zabbix frontend (through JavaScript) if the page that is placed
in the frame and Zabbix frontend are on the same domain. A page like http://secure-zabbix.com/cms/page.html,
if placed into dashboards on http://secure-zabbix.com/zabbix/, will have full JS access to Zabbix.
Server/proxy
If stated as mandatory, the required software/library is strictly necessary. Optional ones are needed for supporting some specific
function.
Mandatory
Requirement status Description
libpcre/libpcre2 One of PCRE/PCRE2 library is required for Perl Compatible Regular Expression (PCRE)
support.
The naming may differ depending on the GNU/Linux distribution, for example
’libpcre3’ or ’libpcre1’. PCRE v8.x and PCRE2 v10.x are supported.
libevent Yes Required for inter-process communication. Version 1.4 or higher.
libevent-pthreads Required for inter-process communication.
libpthread Required for mutex and read-write lock support (could be part of libc).
libresolv Required for DNS resolution (could be part of libc).
libiconv Required for text encoding/format conversion (could be part of libc). Mandatory for
Zabbix server on Linux.
libz Required for compression support.
libm Math library. Required by Zabbix server only.
libmysqlclient One of Required if MySQL is used.
libmariadb Required if MariaDB is used.
libclntsh Required if Oracle is used; libclntsh version must match or be higher than the
version of the Oracle database used.
libpq5 Required if PostgreSQL is used; libpq5 version must match or be higher than the
version of the PostgreSQL database used.
libsqlite3 Required if Sqlite is used. Required for Zabbix proxy only.
libOpenIPMI No Required for IPMI support. Required for Zabbix server only.
libssh2 or libssh Required for SSH checks. Version 1.0 or higher (libssh2); 0.9.0 or higher (libssh).
libcurl Required for web monitoring, VMware monitoring, SMTP authentication,
web.page.* Zabbix agent items, HTTP agent items and Elasticsearch (if used).
Version 7.19.1 or higher is required (7.28.0 or higher is recommended).
Libcurl version requirements:
- SMTP authentication: version 7.20.0 or higher
- Elasticsearch: version 7.28.0 or higher
To make use of upgraded cURL features, restart Zabbix server/proxy and agent (for
web.page.* items).
libxml2 Required for VMware monitoring and XML XPath preprocessing.
net-snmp Required for SNMP support. Version 5.3.0 or higher.
Support of strong encryption protocols (AES192/AES192C, AES256/AES256C) is
available starting with net-snmp library 5.8; on RHEL 8+ based systems it is
recommended to use net-snmp 5.8.15 or later.
libunixodbc Required for database monitoring.
libgnutls or libopenssl Required when using encryption.
Minimum versions: libgnutls - 3.1.18, libopenssl - 1.0.1
libldap Required for LDAP support.
fping Required for ICMP ping items.
Agent
Mandatory
Requirement status Description
libpcre/libpcre2 One of PCRE/PCRE2 library is required for Perl Compatible Regular Expression (PCRE)
support.
The naming may differ depending on the GNU/Linux distribution, for example
’libpcre3’ or ’libpcre1’. PCRE v8.x and PCRE2 v10.x are supported.
Required for log monitoring. Also required on Windows.
libpthread Yes Required for mutex and read-write lock support (could be part of libc). Not
required on Windows.
libresolv Required for DNS resolution (could be part of libc). Not required on Windows.
libiconv Required for text encoding/format conversion to UTF-8 in log items, file content,
file regex and regmatch items (could be part of libc). Not required on Windows.
libgnutls or libopenssl No Required if using encryption.
Minimum versions: libgnutls - 3.1.18, libopenssl - 1.0.1
On Microsoft Windows OpenSSL 1.1.1 or later is required.
libldap Required if LDAP is used. Not supported on Windows.
libcurl Required for web.page.* Zabbix agent items. Not supported on Windows.
Version 7.19.1 or higher is required (7.28.0 or higher is recommended).
To make use of upgraded cURL features, restart Zabbix agent.
libmodbus Only required if Modbus monitoring is used.
Version 3.0 or higher.
Agent 2
Mandatory
Requirement status Description
libpcre/libpcre2 One of PCRE/PCRE2 library is required for Perl Compatible Regular Expression (PCRE)
support.
The naming may differ depending on the GNU/Linux distribution, for example
’libpcre3’ or ’libpcre1’. PCRE v8.x and PCRE2 v10.x are supported.
Required for log monitoring. Also required on Windows.
libopenssl No Required when using encryption.
OpenSSL 1.0.1 or later is required on UNIX platforms.
The OpenSSL library must have PSK support enabled. LibreSSL is not supported.
On Microsoft Windows systems OpenSSL 1.1.1 or later is required.
Go libraries
Requirement, mandatory status, minimum version and description:
• git.zabbix.com/ap/plugin-support (yes; 1.X.X) - Zabbix own support library. Mostly for plugins.
• github.com/BurntSushi/locker (0.0.0) - Named read/write locks, access sync.
• github.com/chromedp/cdproto (0.0.0) - Generated commands, types, and events for the Chrome DevTools Protocol domains.
• github.com/chromedp/chromedp (0.6.0) - Chrome DevTools Protocol support (report generation).
• github.com/dustin/gomemcached (0.0.0) - A memcached binary protocol toolkit for Go.
• github.com/eclipse/paho.mqtt.golang (1.2.0) - A library to handle MQTT connections.
• github.com/fsnotify/fsnotify (1.4.9) - Cross-platform file system notifications for Go.
• github.com/go-ldap/ldap (3.0.3) - Basic LDAP v3 functionality for the Go programming language.
• github.com/go-ole/go-ole (1.2.4) - Win32 OLE implementation for Go.
• github.com/godbus/dbus (4.1.0) - Native Go bindings for D-Bus.
• github.com/go-sql-driver/mysql (1.5.0) - MySQL driver.
• github.com/godror/godror (0.20.1) - Oracle DB driver.
• github.com/mattn/go-sqlite3 (2.0.3) - Sqlite3 driver.
• github.com/mediocregopher/radix/v3 (3.5.0) - Redis client.
• github.com/memcachier/mc/v3 (3.0.1) - Binary Memcached client.
• github.com/miekg/dns (1.1.43) - DNS library.
• github.com/omeid/go-yarn (0.0.1) - Embeddable filesystem mapped key-string store.
• github.com/goburrow/modbus (0.1.0) - Fault-tolerant implementation of Modbus.
• golang.org/x/sys (0.0.0) - Go packages for low-level interactions with the operating system. Also used in the plugin support lib. Used in MongoDB and PostgreSQL plugins.
• github.com/Microsoft/go-winio (yes, on Windows; indirect 1; 0.6.0) - Windows named pipe implementation. Also used in the plugin support lib. Used in MongoDB and PostgreSQL plugins.
• github.com/goburrow/serial (yes, indirect 1; 0.1.0) - Serial library for Modbus.
1 "Indirect" means that the package is used in one of the libraries that the agent uses. It is required since Zabbix uses the library that uses the package.
• PostgreSQL
• MongoDB
Java gateway
If you obtained Zabbix from the source repository or an archive, then the necessary dependencies are already included in the
source tree.
If you obtained Zabbix from your distribution’s package, then the necessary dependencies are already provided by the packaging
system.
In both cases above, the software is ready to be used and no additional downloads are necessary.
If, however, you wish to provide your versions of these dependencies (for instance, if you are preparing a package for some Linux
distribution), below is the list of library versions that Java gateway is known to work with. Zabbix may work with other versions of
these libraries, too.
Java gateway can be built using either Oracle Java or open-source OpenJDK (version 1.6 or newer). Packages provided by Zabbix
are compiled using OpenJDK. The table below provides information about OpenJDK versions used for building Zabbix packages by
distribution:
Distribution    OpenJDK version
RHEL 8          1.8.0
RHEL 7          1.8.0
SLES 15         11.0.4
Debian 10       11.0.8
Ubuntu 20.04    11.0.8
Ubuntu 18.04    11.0.8
The following list of open ports per component is applicable for default configuration:
Note:
The port numbers should be open in firewall to enable Zabbix communications. Outgoing TCP connections usually do not
require explicit firewall settings.
Database size
Zabbix configuration data require a fixed amount of disk space and do not grow much.
Zabbix database size mainly depends on these variables, which define the amount of stored historical data:
Number of processed values per second: this is the average number of new values Zabbix server receives every second. For example, if we have 3000 items for monitoring with a refresh rate of 60 seconds, the number of values per second is calculated as 3000/60 = 50.
It means that 50 new values are added to the Zabbix database every second.
Housekeeper settings for history: Zabbix keeps values for a fixed period of time, normally several weeks or months. Each new value requires a certain amount of disk space for data and index.
So, if we would like to keep 30 days of history and we receive 50 values per second, the total number of values will be around (30*24*3600)*50 = 129,600,000, or about 130M of values.
Depending on the database engine used and the type of received values (floats, integers, strings, log files, etc.), the disk space for keeping a single value may vary from 40 bytes to hundreds of bytes. Normally it is around 90 bytes per value for numeric items [2]. In our case, it means that 130M of values will require 130M * 90 bytes = 10.9GB of disk space.
Note:
The size of text/log item values is impossible to predict exactly, but you may expect around 500 bytes per value.
Housekeeper settings for trends: Zabbix keeps a 1-hour max/min/avg/count set of values for each item in the table trends. The data is used for trending and long-period graphs. The one-hour period cannot be customized.
The Zabbix database, depending on the database type, requires about 90 bytes for each such record. Suppose we would like to keep trend data for 5 years. Values for 3000 items will require 3000*24*365*90 = 2.2GB per year, or 11GB for 5 years.
Housekeeper settings for events: each Zabbix event requires approximately 250 bytes of disk space. For each recovered event, an event_recovery record is created. Normally most of the events will be recovered, so we can assume one event_recovery record per event. That means an additional 80 bytes per event.
Optionally, events can have tags, each tag record requiring approximately 100 bytes of disk space [1]. The number of tags per event (#tags) depends on configuration, so each event will need an additional #tags * 100 bytes of disk space.
It means that if we want to keep 3 years of events, this would require 3*365*24*3600* (250+80+#tags*100) = ~30GB + #tags*100B of disk space [2].
Note:
[1] More when having non-ASCII event names, tags and values.
[2] The size approximations are based on MySQL and might be different for other databases.
The following formulas can be used to estimate the disk space required for a Zabbix system:
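A rough sketch of the calculation, using the per-record sizes given above (about 90 bytes per numeric history value, about 90 bytes per hourly trend record, and 250 + 80 + #tags*100 bytes per event; the exact figures depend on the database engine):

$$
\begin{aligned}
\text{History} &\approx \text{days}_{hist} \times \text{values/sec} \times 24 \times 3600 \times 90\ \text{bytes} \\
\text{Trends}  &\approx \text{days}_{trend} \times \text{items} \times 24 \times 90\ \text{bytes} \\
\text{Events}  &\approx \text{days}_{event} \times \text{events/sec} \times 24 \times 3600 \times (250 + 80 + \#tags \times 100)\ \text{bytes} \\
\text{Total}   &\approx \text{Configuration (fixed)} + \text{History} + \text{Trends} + \text{Events}
\end{aligned}
$$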
Time synchronization
It is very important to have precise system time on the server that Zabbix is running on. ntpd is the most popular daemon for synchronizing the host's time with the time of other machines. It is strongly recommended to maintain synchronized system time on all systems that Zabbix components are running on.
Network requirements
The following list of open ports per component applies to the default configuration:
Port     Components
10050    Zabbix agent, Zabbix agent 2
10051    Zabbix server, Zabbix proxy
10052    Zabbix Java gateway
10053    Zabbix web service
80, 443  Zabbix frontend (HTTP, HTTPS)
Note:
The port numbers should be opened in the firewall to enable external communications with Zabbix. Outgoing TCP connec-
tions usually do not require explicit firewall settings.
Overview
The required libraries for the PostgreSQL loadable plugin are listed on this page.
Go libraries
Requirement, mandatory status, minimum version and description:
• git.zabbix.com/ap/plugin-support (yes; 1.X.X) - Zabbix own support library. Mostly for plugins.
• github.com/jackc/pgx/v4 (4.17.2) - PostgreSQL driver.
• github.com/omeid/go-yarn (0.0.1) - Embeddable filesystem mapped key-string store.
• github.com/jackc/chunkreader (indirect 1; 2.0.1)
• github.com/jackc/pgconn (1.13.0)
• github.com/jackc/pgio (1.0.0)
• github.com/jackc/pgpassfile (1.0.0)
• github.com/jackc/pgproto3 (2.3.1)
• github.com/jackc/pgservicefile (0.0.0)
• github.com/jackc/pgtype (1.12.0)
• github.com/jackc/puddle (1.3.0)
• github.com/Microsoft/go-winio (0.6.0) - Required package for PostgreSQL plugin on Windows.
• golang.org/x/crypto (0.0.0)
• golang.org/x/sys (0.0.0)
• golang.org/x/text (0.3.7)
1 "Indirect" means that the package is used in one of the libraries that the agent uses. It is required since Zabbix uses the library that uses the package.
Overview
The required libraries for the MongoDB loadable plugin are listed on this page.
Go libraries
Requirement, mandatory status, minimum version and description:
• git.zabbix.com/ap/plugin-support (yes; 1.X.X) - Zabbix own support library. Mostly for plugins.
• go.mongodb.org/mongo-driver (1.7.6) - MongoDB driver.
• github.com/go-stack/stack (indirect 1; 1.8.0) - Required package for MongoDB plugin mongo-driver lib.
• github.com/golang/snappy (0.0.1) - Required package for MongoDB plugin mongo-driver lib.
• github.com/klauspost/compress (1.13.6) - Required package for MongoDB plugin mongo-driver lib.
• github.com/Microsoft/go-winio (0.6.0) - Required package for MongoDB plugin mongo-driver lib on Windows.
• github.com/pkg/errors (0.9.1) - Required package for MongoDB plugin mongo-driver lib.
• github.com/xdg-go/pbkdf2 (1.0.0) - Required package for MongoDB plugin mongo-driver lib.
• github.com/xdg-go/scram (1.0.2) - Required package for MongoDB plugin mongo-driver lib.
• github.com/xdg-go/stringprep (1.0.2) - Required package for MongoDB plugin mongo-driver lib.
• github.com/youmark/pkcs8 (0.0.0) - Required package for MongoDB plugin mongo-driver lib.
• golang.org/x/crypto (0.0.0) - Required package for MongoDB plugin mongo-driver lib.
• golang.org/x/sync (0.0.0) - Required package for MongoDB plugin mongo-driver lib.
• golang.org/x/sys (0.0.0) - Required package for MongoDB plugin mongo-driver lib.
• golang.org/x/text (0.3.7) - Required package for MongoDB plugin mongo-driver lib.
1 "Indirect" means that the package is used in one of the libraries that the agent uses. It is required since Zabbix uses the library that uses the package.
3 Best practices for secure Zabbix setup
Overview
This section contains best practices for setting up Zabbix in a secure way.
The practices in this section are not required for the functioning of Zabbix but are recommended for better system security.
UTF-8 encoding
UTF-8 is the only encoding supported by Zabbix. It is known to work without any security flaws. Users should be aware that there
are known security issues if using some of the other encodings.
When using Windows installers, it is recommended to use the default paths provided by the installer. Using custom paths without
proper permissions could compromise the security of the installation.
1 Access control
Overview
This section contains best practices for setting up access control in a secure way.
User accounts, at all times, should run with as few privileges as possible. This means that user accounts in Zabbix frontend,
database users, or the user for Zabbix server/proxy/agent processes should only have the privileges that are essential for perform-
ing the intended functions.
Attention:
Giving extra privileges to the ’zabbix’ user will allow it to access configuration files and execute operations that can com-
promise the infrastructure security.
When configuring user account privileges, Zabbix frontend user types should be considered. Note that although the Admin user
type has fewer privileges than the Super Admin user type, it can still manage configuration and execute custom scripts.
Note:
Some information is available even for non-privileged users. For example, while Alerts → Scripts is available only for
Super Admin users, scripts can also be retrieved through Zabbix API. In this case, limiting script permissions and excluding
sensitive information from scripts (for example, access credentials) can help avoid exposing sensitive information available
in global scripts.
By default, Zabbix server and Zabbix agent processes share one ’zabbix’ user. To ensure that Zabbix agent cannot access sensitive
details in the server configuration (for example, database login information), the agent should be run as a different user:
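A minimal sketch of doing this on Linux (the account name zabbix-agent is just an example; the agent's User configuration parameter takes effect when the agent is started as root):
# create a dedicated system account for the agent
groupadd --system zabbix-agent
useradd --system -g zabbix-agent -s /usr/sbin/nologin zabbix-agent
# in /etc/zabbix/zabbix_agentd.conf, drop privileges to that account
User=zabbix-agent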
The Zabbix Windows agent compiled with OpenSSL will try to reach the SSL configuration file in c:\openssl-64bit. The openssl-64bit directory on disk C: can be created by non-privileged users.
To improve security, create this directory manually and revoke write access from non-admin users.
Please note that the directory names will differ on 32-bit and 64-bit versions of Windows.
Some functionality can be switched off to harden the security of Zabbix components:
• global script execution on Zabbix server can be disabled by setting EnableGlobalScripts=0 in server configuration;
• global script execution on Zabbix proxy is disabled by default (can be enabled by setting EnableRemoteCommands=1 in
proxy configuration);
• global script execution on Zabbix agents is disabled by default (can be enabled by adding an AllowKey=system.run[<command>,*]
parameter for each allowed command in agent configuration);
• user HTTP authentication can be disabled by setting $ALLOW_HTTP_AUTH=false in the frontend configuration file (zabbix.conf.php). Note that reinstalling the frontend (running setup.php) will remove this parameter.
1 MySQL/MariaDB
Overview
This section contains best practices for setting up a MySQL database in a secure way.
For an easy setup, it is recommended to follow the default MySQL/MariaDB database creation instructions, which include creating
the ’zabbix’ user with full privileges on the Zabbix database. This user is the database owner that also has the necessary privileges
for modifying the database structure when upgrading Zabbix.
To improve security, creating additional database roles and users with minimal privileges is recommended. These roles and users
should be configured based on the principle of least privilege, that is, they should only have privileges that are essential for
performing the intended functions.
Attention:
Table restoration and upgrade should be performed by the database owner.
• zbx_part - role with a reduced set of privileges for database partitioning; note that this role can be created only after the
database has been created, as it grants privileges on specific database tables:
CREATE ROLE 'zbx_part';
GRANT SELECT, ALTER, DROP ON zabbix.history TO 'zbx_part';
GRANT SELECT, ALTER, DROP ON zabbix.history_uint TO 'zbx_part';
GRANT SELECT, ALTER, DROP ON zabbix.history_str TO 'zbx_part';
GRANT SELECT, ALTER, DROP ON zabbix.history_text TO 'zbx_part';
GRANT SELECT, ALTER, DROP ON zabbix.history_log TO 'zbx_part';
GRANT SELECT, ALTER, DROP ON zabbix.trends TO 'zbx_part';
GRANT SELECT, ALTER, DROP ON zabbix.trends_uint TO 'zbx_part';
-- For MariaDB: skip the next line (GRANT session_variables_admin ON *.* TO 'zbx_part';)
GRANT session_variables_admin ON *.* TO 'zbx_part';
GRANT SELECT ON zabbix.dbversion TO 'zbx_part';
GRANT SELECT, DELETE ON zabbix.housekeeper TO 'zbx_part';
FLUSH PRIVILEGES;
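Roles for the Zabbix server/proxy and the frontend (referred to as 'zbx_srv' in the examples below) can be created along the same least-privilege lines; a minimal sketch (role names and the exact grant list are illustrative, adjust them to your needs):
CREATE ROLE 'zbx_srv', 'zbx_web';
GRANT SELECT, UPDATE, DELETE, INSERT ON zabbix.* TO 'zbx_srv';
GRANT SELECT, UPDATE, DELETE, INSERT ON zabbix.* TO 'zbx_web';
FLUSH PRIVILEGES;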
To assign the created user roles, create users and assign the relevant roles to them. Replace <user>, <host>, <role>, and
<password> as necessary.
CREATE USER '<user>'@'<host>' IDENTIFIED BY '<password>';
GRANT '<role>' TO '<user>'@'<host>';
SET DEFAULT ROLE '<role>' TO '<user>'@'<host>';
-- For MariaDB: SET DEFAULT ROLE '<role>' FOR '<user>'@'<host>'
FLUSH PRIVILEGES;
For example, to create and assign the role for running Zabbix server and proxy:
CREATE USER 'usr_srv'@'localhost' IDENTIFIED BY 'password';
GRANT 'zbx_srv' TO 'usr_srv'@'localhost';
SET DEFAULT ROLE ALL TO 'usr_srv'@'localhost';
FLUSH PRIVILEGES;
2 PostgreSQL/TimescaleDB
Overview
This section contains best practices for setting up a PostgreSQL database in a secure way.
For an easy setup, it is recommended to follow the default PostgreSQL database creation instructions, which include creating the
’zabbix’ user with full privileges on the Zabbix database. This user is the database owner that also has the necessary privileges
for modifying the database structure when upgrading Zabbix.
To improve security, configuring a secure schema usage pattern, as well as creating additional database roles and users with
minimal privileges is recommended. These roles and users should be configured based on the principle of least privilege, that is,
they should only have privileges that are essential for performing the intended functions.
Database setup
Create the user that will be the database owner, and create the Zabbix database; the database owner is the user that is specified
on database creation:
createuser -U postgres -h localhost --pwprompt usr_owner
createdb -U postgres -h localhost -O usr_owner -E Unicode -T template0 zabbix
Attention:
A clean install or upgrade of the database has to be performed by the database owner. This is because the right to drop
a database object or alter its definition is a privilege that is inherent to the database owner and cannot be granted or
revoked.
Attention:
The following commands on this page must be executed while the connection to PostgreSQL is made specifically to the
zabbix database.
Create the zabbix schema and set the database owner (usr_owner) to be the owner of this schema:
CREATE SCHEMA zabbix AUTHORIZATION usr_owner;
ALTER DEFAULT PRIVILEGES FOR ROLE usr_owner IN SCHEMA zabbix GRANT DELETE, INSERT, SELECT, UPDATE ON TABLES TO zbx_srv;
ALTER DEFAULT PRIVILEGES FOR ROLE usr_owner IN SCHEMA zabbix GRANT SELECT, UPDATE, USAGE ON SEQUENCES TO zbx_srv;
Attention:
Table restoration is possible only by the database owner.
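A least-privilege role for the server/proxy (the zbx_srv role that is granted to users below) can be defined, for example, like this (a sketch; a role for the frontend would be analogous):
CREATE ROLE zbx_srv;
GRANT USAGE ON SCHEMA zabbix TO zbx_srv;
GRANT SELECT, UPDATE, DELETE, INSERT ON ALL TABLES IN SCHEMA zabbix TO zbx_srv;
GRANT SELECT, UPDATE, USAGE ON ALL SEQUENCES IN SCHEMA zabbix TO zbx_srv;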
To assign the created user roles, create users and assign the relevant roles to them. Replace <user>, <role>, and <password>
as necessary.
CREATE USER <user> WITH ENCRYPTED password '<password>';
GRANT <role> TO <user>;
For example, to create and assign the role for running Zabbix server and proxy:
CREATE USER usr_srv WITH ENCRYPTED password 'password';
GRANT zbx_srv TO usr_srv;
Database partitioning is facilitated by TimescaleDB. To utilize TimescaleDB, Zabbix server requires database owner privileges.
If the PostgreSQL zabbix schema has already been created in the zabbix database, you can enable TimescaleDB with the
following command:
echo "CREATE EXTENSION IF NOT EXISTS timescaledb WITH SCHEMA zabbix CASCADE;" | sudo -u postgres psql zabbix
3 Oracle
Overview
This section contains best practices for setting up an Oracle database in a secure way.
For a typical setup, it is recommended to follow the default Oracle database creation instructions, which include creating the
’zabbix’ user with full privileges on the Zabbix database. This user is the database owner that also has the necessary privileges
for modifying the database structure when upgrading Zabbix.
To improve security, creating additional database users with minimal privileges is recommended. These users should be configured
based on the principle of least privilege, that is, they should only have privileges that are essential for performing the intended
functions.
Attention:
The support for Oracle DB is deprecated since Zabbix 7.0.
Creating users
Assuming that the pluggable database (PDB) owner is usr_owner, creating two additional users with the corresponding privileges (for daily operations) is recommended:
• usr_srv - user for running Zabbix server and proxy.
• usr_web - user for running Zabbix frontend and API.
These users must be created by the PDB owner (usr_owner) using the following commands:
CREATE USER usr_srv IDENTIFIED BY "usr_srv" DEFAULT TABLESPACE "usr_owner" TEMPORARY TABLESPACE temp;
CREATE USER usr_web IDENTIFIED BY "usr_web" DEFAULT TABLESPACE "usr_owner" TEMPORARY TABLESPACE temp;
GRANT CREATE SESSION, DELETE ANY TABLE, INSERT ANY TABLE, SELECT ANY TABLE, UPDATE ANY TABLE, SELECT ANY SEQUENCE TO usr_srv;
GRANT CREATE SESSION, DELETE ANY TABLE, INSERT ANY TABLE, SELECT ANY TABLE, UPDATE ANY TABLE, SELECT ANY SEQUENCE TO usr_web;
Attention:
Table restoration and upgrade should be performed by the database owner.
Generating synonyms
The script below creates synonyms, so that usr_srv and usr_web can access tables in the usr_owner schema without specifying
the schema explicitly.
BEGIN
FOR x IN (SELECT owner,table_name FROM all_tables WHERE owner ='usr_owner')
LOOP
EXECUTE IMMEDIATE 'CREATE OR REPLACE SYNONYM usr_srv.'|| x.table_name ||' FOR '||x.owner||'.'|| x.table_
EXECUTE IMMEDIATE 'CREATE OR REPLACE SYNONYM usr_web.'|| x.table_name ||' FOR '||x.owner||'.'|| x.table_
END LOOP;
END;
/
Attention:
This script should be run each time after the Zabbix database structure is created or changed (for example, after upgrading
Zabbix, if some tables were created or renamed).
2 Cryptography
Overview
This section contains best practices for setting up cryptography in a secure way.
For example, to create a self-signed certificate for the Apache web server serving Zabbix frontend on a RHEL-based system, create a directory for the private key and generate the key and certificate:
mkdir -p /etc/httpd/ssl/private
chmod 700 /etc/httpd/ssl/private
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/httpd/ssl/private/apache-selfsigned.key -out /etc/httpd/ssl/apache-selfsigned.crt
Fill out the prompts appropriately. The most important line is the one that requests the Common Name. You must enter the domain
name you want to be associated with your server. You can enter the public IP address instead if you do not have a domain name.
Then point the SSL-enabled virtual host at the Zabbix frontend and the generated certificate files:
DocumentRoot "/usr/share/zabbix"
ServerName example.com:443
SSLCertificateFile /etc/httpd/ssl/apache-selfsigned.crt
SSLCertificateKeyFile /etc/httpd/ssl/private/apache-selfsigned.key
3 Web server
Overview
This section contains best practices for setting up the web server in a secure way.
On RHEL-based systems, add a virtual host to Apache configuration (/etc/httpd/conf/httpd.conf) and set a permanent
redirect for document root to Zabbix SSL URL. Note that example.com should be replaced with the actual name of the server.
#### Add lines:
<VirtualHost *:*>
ServerName example.com
Redirect permanent / https://example.com
</VirtualHost>
To protect Zabbix frontend against protocol downgrade attacks, we recommend enabling the HSTS policy on the web server.
To enable the HSTS policy for your Zabbix frontend in Apache configuration, follow these steps:
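A minimal sketch (it assumes mod_headers is enabled and that the frontend is served from a dedicated HTTPS virtual host); add the header to that virtual host, then restart Apache as shown below:
<VirtualHost *:443>
    # instruct browsers to use HTTPS for this site for the next year
    Header always set Strict-Transport-Security "max-age=31536000"
</VirtualHost>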
#### On Debian/Ubuntu
systemctl restart apache2.service
To protect Zabbix frontend against Cross Site Scripting (XSS), data injection, and similar attacks, we recommend enabling Content
Security Policy on the web server. To do so, configure the web server to return the HTTP header.
Attention:
The following CSP header configuration is only for the default Zabbix frontend installation and for cases when all content
originates from the site’s domain (excluding subdomains). A different CSP header configuration may be required if you
are, for example, configuring the URL widget to display content from the site’s subdomains or external domains, switching
from OpenStreetMap to another map engine, or adding external CSS or widgets. If you’re using the Duo Universal Prompt
multi-factor authentication method, make sure to add ”duo.com” to the CSP directive in your virtual host’s configuration
file.
To enable CSP for your Zabbix frontend in Apache configuration, follow these steps:
1. Locate your virtual host's configuration file:
• /etc/httpd/conf/httpd.conf on RHEL-based systems
• /etc/apache2/sites-available/000-default.conf on Debian/Ubuntu
2. Add the following directive to your virtual host's configuration file:
<VirtualHost *:*>
Header set Content-Security-Policy: "default-src 'self' *.openstreetmap.org; script-src 'self' 'unsafe
</VirtualHost>
3. Restart the Apache web server to apply the changes:
#### On Debian/Ubuntu
systemctl restart apache2.service
The web server signature can be disabled by adding the following parameters to the Apache configuration file:
ServerSignature Off
ServerTokens Prod
PHP signature (X-Powered-By HTTP header) can be disabled by changing the php.ini configuration file (by default, the signature
is disabled):
expose_php = Off
For additional security, you can use the mod_security tool with Apache (package libapache2-mod-security2). This tool allows
removing the server signature instead of removing only the version from the server signature. The server signature can be
changed to any value by setting ”SecServerSignature” to any desired value after installing mod_security.
Please refer to the documentation of your web server to find help on how to remove/change software signatures.
The web server's default error pages should be replaced or removed. For example, the "ErrorDocument" directive can be used to define a custom error page/text for the Apache web server.
Please refer to the documentation of your web server to find help on how to replace/remove default error pages.
To avoid information exposure, removing the web server test page is recommended.
By default, the Apache web server webroot contains the index.html test page (for example, /var/www/html/index.html on Debian/Ubuntu).
Please refer to the documentation of your web server to find help on how to remove default test pages.
By default, Zabbix is configured with the X-Frame-Options HTTP header set to SAMEORIGIN. This means that content can only be loaded in a frame that has the same origin as the page itself.
Zabbix frontend elements that pull content from external URLs (namely, the URL dashboard widget) display retrieved content in a
sandbox with all sandboxing restrictions enabled.
These settings enhance the security of the Zabbix frontend and provide protection against XSS and clickjacking attacks. Super
admin users can modify the Use iframe sandboxing and Use X-Frame-Options HTTP header parameters as needed. Please carefully
weigh the risks and benefits before changing default settings. Turning iframe sandboxing or X-Frame-Options HTTP header off
completely is not recommended.
To increase the complexity of password brute force attacks, limiting access to the ui/data/top_passwords.txt file is rec-
ommended. This file contains a list of the most common and context-specific passwords and prevents users from setting such
passwords (if the Avoid easy-to-guess passwords parameter is enabled in the password policy).
To limit access to the top_passwords.txt file, modify your web server configuration.
On Apache, file access can be limited using the .htaccess file:
<Files "top_passwords.txt">
Order Allow,Deny
Deny from all
</Files>
On Nginx, access can be limited in the server configuration:
location = /data/top_passwords.txt {
deny all;
return 404;
}
4 Installation from sources
You can get the very latest version of Zabbix by compiling it from the sources.
A step-by-step tutorial for installing Zabbix from the sources is provided here.
Go to the Zabbix download page and download the source archive. Once downloaded, extract the sources by running:
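For example, for the 7.0.0 archive:
tar -zxvf zabbix-7.0.0.tar.gz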
Note:
Enter the correct Zabbix version in the command. It must match the name of the downloaded archive.
For all of the Zabbix daemon processes, an unprivileged user is required. If a Zabbix daemon is started from an unprivileged user account, it will run as that user.
However, if a daemon is started from a 'root' account, it will switch to a 'zabbix' user account, which must be present. To create such a user account (in its own group, "zabbix"), run:
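On most Linux systems this can be done, for example, as follows (a sketch; the home directory and shell are illustrative):
groupadd --system zabbix
useradd --system -g zabbix -d /usr/lib/zabbix -s /usr/sbin/nologin -c "Zabbix Monitoring System" zabbix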
Attention:
Zabbix processes do not need a home directory, which is why we do not recommend creating it. However, if you are using some functionality that requires it (e.g. storing MySQL credentials in $HOME/.my.cnf), you are free to create it.
If Zabbix server and agent are run on the same machine it is recommended to use a different user for running the server than for
running the agent. Otherwise, if both are run as the same user, the agent can access the server configuration file and any Admin
level user in Zabbix can quite easily retrieve, for example, the database password.
Attention:
Running Zabbix as root, bin, or any other account with special rights is a security risk.
For Zabbix server and proxy daemons, as well as Zabbix frontend, a database is required. It is not needed for running Zabbix agent.
SQL scripts are provided for creating the database schema and inserting the dataset. The Zabbix proxy database needs only the schema, while the Zabbix server database also requires the dataset on top of the schema.
Having created a Zabbix database, proceed to the following steps of compiling Zabbix.
C99 with GNU extensions is required for building Zabbix server, Zabbix proxy or Zabbix agent. This version can be explicitly
specified by setting CFLAGS=”-std=gnu99”:
export CFLAGS="-std=gnu99"
Note:
If installing from Zabbix Git repository, it is required to run first:
./bootstrap.sh
When configuring the sources for a Zabbix server or proxy, you must specify the database type to be used. Only one database
type can be compiled with a server or proxy process at a time.
To see all of the supported configuration options, inside the extracted Zabbix source directory run:
./configure --help
To configure the sources for a Zabbix agent, you may run something like:
./configure --enable-agent
or, for Zabbix agent 2:
./configure --enable-agent2
Note:
A configured Go environment with a currently supported Go version is required for building Zabbix agent 2. See go.dev for
installation instructions.
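When configuring the sources for a Zabbix server, additional options are typically needed for the database and optional features; a sketch (the option set is illustrative, pick what matches your environment):
./configure --enable-server --enable-agent --with-mysql --with-net-snmp --with-libcurl --with-libxml2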
• Command-line utilities zabbix_get and zabbix_sender are compiled if --enable-agent option is used.
• --with-libcurl and --with-libxml2 configuration options are required for virtual machine monitoring; --with-libcurl is also re-
quired for SMTP authentication and web.page.* Zabbix agent items. Note that cURL 7.20.0 or higher is required with the
--with-libcurl configuration option.
• Zabbix always compiles with the PCRE library; installing it is not optional. --with-libpcre=[DIR] only allows pointing to a
specific base install directory, instead of searching through a number of common places for the libpcre files.
• You may use the --enable-static flag to statically link libraries. If you plan to distribute compiled binaries among different servers, you must use this flag to make these binaries work without the required libraries. Note that --enable-static does not work on Solaris.
• Using --enable-static option is not recommended when building server. In order to build the server statically, you must have
a static version of every external library needed. There is no strict check for that in configure script.
• Add optional path to the MySQL configuration file --with-mysql=/<path_to_the_file>/mysql_config to select the desired
MySQL client library when there is a need to use one that is not located in the default location. It is useful when there
are several versions of MySQL installed or MariaDB installed alongside MySQL on the same system.
• Use --with-oracle flag to specify location of the OCI API.
Attention:
If ./configure fails due to missing libraries or some other circumstance, please see the config.log file for more details on the error. For example, if libssl is missing, the immediate error message may be misleading:
checking for main in -lmysqlclient... no
configure: error: Not found mysqlclient library
While config.log has a more detailed description:
/usr/bin/ld: cannot find -lssl
/usr/bin/ld: cannot find -lcrypto
See also:
Note:
If installing from Zabbix Git repository, it is required to run first:
$ make dbschema
make install
This step should be run as a user with sufficient permissions (commonly ’root’, or by using sudo).
Running make install will by default install the daemon binaries (zabbix_server, zabbix_agentd, zabbix_proxy) in /usr/local/sbin
and the client binaries (zabbix_get, zabbix_sender) in /usr/local/bin.
Note:
To specify a different location than /usr/local, use a --prefix key in the previous step of configuring sources, for example --
prefix=/home/zabbix. In this case daemon binaries will be installed under <prefix>/sbin, while utilities under <prefix>/bin.
Man pages will be installed under <prefix>/share.
You need to configure the agent configuration file /usr/local/etc/zabbix_agentd.conf for every host with zabbix_agentd installed.
You must specify the Zabbix server IP address in the file. Connections from other hosts will be denied.
If you have installed a Zabbix server, edit /usr/local/etc/zabbix_server.conf: you must specify the database name, user and password (if using any).
The rest of the parameters will suit you with their defaults if you have a small installation (up to ten monitored hosts). You should change the default parameters if you want to maximize the performance of Zabbix server (or proxy), though.
• if you have installed a Zabbix proxy, edit the proxy configuration file /usr/local/etc/zabbix_proxy.conf
You must specify the server IP address and proxy hostname (must be known to the server), as well as the database name, user
and password (if using any).
Note:
With SQLite the full path to database file must be specified; DB user and password are not required.
Run zabbix_server on the server side:
zabbix_server
Note:
Make sure that your system allows allocation of 36MB (or a bit more) of shared memory, otherwise the server may not
start and you will see ”Cannot allocate shared memory for <type of cache>.” in the server log file. This may happen on
FreeBSD, Solaris 8.
Run zabbix_agentd on all the monitored machines.
zabbix_agentd
Note:
Make sure that your system allows allocation of 2MB of shared memory, otherwise the agent may not start and you will
see ”Cannot allocate shared memory for collector.” in the agent log file. This may happen on Solaris 8.
Run zabbix_proxy if you have installed a proxy:
zabbix_proxy
2 Installing Zabbix web interface
Zabbix frontend is written in PHP, so to run it a web server with PHP support is needed. Installation is done by simply copying the PHP files from the ui directory to the web server HTML documents directory.
Common locations of HTML documents directories for Apache web servers include:
• /usr/local/apache2/htdocs (default directory when installing Apache from source)
• /srv/www/htdocs (OpenSUSE, SLES)
• /var/www/html (Debian, Ubuntu, Fedora, RHEL)
It is suggested to use a subdirectory instead of the HTML root. To create a subdirectory and copy Zabbix frontend files into it,
execute the following commands, replacing the actual directory:
mkdir <htdocs>/zabbix
cd ui
cp -a . <htdocs>/zabbix
If planning to use any other language than English, see Installation of additional frontend languages for instructions.
Installing frontend
Please see Web interface installation page for information about Zabbix frontend installation wizard.
It is required to install Java gateway only if you want to monitor JMX applications. Java gateway is lightweight and does not require
a database.
To install from sources, first download and extract the source archive.
To compile Java gateway, run the ./configure script with --enable-java option. It is advisable that you specify the --prefix
option to request installation path other than the default /usr/local, because installing Java gateway will create a whole directory
tree, not just a single executable.
make
Now you have a zabbix-java-gateway-$VERSION.jar file in src/zabbix_java/bin. If you are comfortable with running Java gateway
from src/zabbix_java in the distribution directory, then you can proceed to instructions for configuring and running Java gateway.
Otherwise, make sure you have enough privileges and run make install.
make install
Proceed to setup for more details on configuring and running Java gateway.
Installing Zabbix web service is only required if you want to use scheduled reports.
To install from sources, first download and extract the source archive.
To compile Zabbix web service, run the ./configure script with --enable-webservice option.
Note:
A configured Go version 1.13+ environment is required for building Zabbix web service.
Run zabbix_web_service on the machine, where the web service is installed:
zabbix_web_service
Proceed to setup for more details on configuring Scheduled reports generation.
Overview
This section demonstrates how to build Zabbix Windows agent binaries from sources with or without TLS.
Compiling OpenSSL
The following steps will help you to compile OpenSSL from sources on MS Windows 10 (64-bit).
Compiling PCRE
15. Open a command-line window, e.g. the x64 Native Tools Command Prompt for VS 2017, and navigate to the directory with the Makefile mentioned above.
16. Run NMake command: E:\pcre2-10.39\build> nmake install
Compiling Zabbix
The following steps will help you to compile Zabbix from sources on MS Windows 10 (64-bit). When compiling Zabbix with/without
TLS support the only significant difference is in step 4.
1. On a Linux machine, check out the source from git:
$ git clone https://git.zabbix.com/scm/zbx/zabbix.git
$ cd zabbix
$ ./bootstrap.sh
$ ./configure --enable-agent --enable-ipv6 --prefix=`pwd`
$ make dbschema
$ make dist
2. Copy and unpack the archive, e.g. zabbix-7.0.0.tar.gz, on a Windows machine.
3. Let's assume that the sources are in e:\zabbix-7.0.0. Open a command-line window, e.g. the x64 Native Tools Command Prompt for VS 2017 RC. Go to E:\zabbix-7.0.0\build\win32\project.
4. Compile zabbix_get, zabbix_sender and zabbix_agent.
• without TLS: E:\zabbix-7.0.0\build\win32\project> nmake /K PCREINCDIR=E:\pcre2-10.39-install\include
PCRELIBDIR=E:\pcre2-10.39-install\lib
• with TLS: E:\zabbix-7.0.0\build\win32\project> nmake /K -f Makefile_get TLS=openssl TLSINCDIR=C:\Ope
TLSLIBDIR=C:\OpenSSL-Win64-111-static\lib PCREINCDIR=E:\pcre2-10.39-install\include
PCRELIBDIR=E:\pcre2-10.39-install\lib E:\zabbix-7.0.0\build\win32\project> nmake /K
-f Makefile_sender TLS=openssl TLSINCDIR="C:\OpenSSL-Win64-111-static\include TLSLIBDIR="C:\OpenS
PCREINCDIR=E:\pcre2-10.39-install\include PCRELIBDIR=E:\pcre2-10.39-install\lib E:\zabbix-7.0
nmake /K -f Makefile_agent TLS=openssl TLSINCDIR=C:\OpenSSL-Win64-111-static\include
TLSLIBDIR=C:\OpenSSL-Win64-111-static\lib PCREINCDIR=E:\pcre2-10.39-install\include
PCRELIBDIR=E:\pcre2-10.39-install\lib
5. New binaries are located in e:\zabbix-7.0.0\bin\win64. Since OpenSSL was compiled with ’no-shared’ option, Zabbix binaries
contain OpenSSL within themselves and can be copied to other machines that do not have OpenSSL.
The process is similar to compiling with OpenSSL, but you need to make small changes in files located in the build\win32\project
directory:
Overview
This section demonstrates how to build Zabbix agent 2 (Windows) from sources.
1. Download MinGW-w64 with SJLJ (set jump/long jump) Exception Handling and Windows threads (for example x86_64-8.1.0-
release-win32-sjlj-rt_v6-rev0.7z)
2. Extract and move to c:\mingw
3. Set up the environment variable:
@echo off
set PATH=%PATH%;c:\mingw\bin
cmd
When compiling, use the Windows command prompt instead of the MSYS terminal provided by MinGW.
The following instructions will compile and install 64-bit PCRE libraries in c:\dev\pcre and 32-bit libraries in c:\dev\pcre32:
1. Download the PCRE or PCRE2 library (https://pcre.org/) and extract
2. Open cmd and navigate to the extracted sources
del CMakeCache.txt
rmdir /q /s CMakeFiles
2. Run cmake (CMake can be installed from https://cmake.org/download/):
mingw32-make clean
mingw32-make install
Build 32bit PCRE
1. Run:
mingw32-make clean
2. Delete CMakeCache.txt:
del CMakeCache.txt
rmdir /q /s CMakeFiles
3. Run cmake:
mingw32-make install
Installing OpenSSL development libraries
32 bit
Open MinGW environment (Windows command prompt) and navigate to build/mingw directory in the Zabbix source tree.
Run:
mingw32-make clean
mingw32-make ARCH=x86 PCRE=c:\dev\pcre32 OPENSSL=c:\dev\openssl32
64 bit
Open MinGW environment (Windows command prompt) and navigate to build/mingw directory in the Zabbix source tree.
Run:
mingw32-make clean
mingw32-make PCRE=c:\dev\pcre OPENSSL=c:\dev\openssl
Note:
Both 32- and 64-bit versions can be built on a 64-bit platform, but only a 32-bit version can be built on a 32-bit platform. When working on the 32-bit platform, follow the same steps as for the 64-bit version on the 64-bit platform.
Overview
This section demonstrates how to build Zabbix macOS agent binaries from sources with or without TLS.
Prerequisites
You will need command line developer tools (Xcode is not required), Automake, pkg-config and PCRE (v8.x) or PCRE2 (v10.x). If you want to build agent binaries with TLS, you will also need OpenSSL or GnuTLS.
To install Automake and pkg-config, you will need the Homebrew package manager from https://brew.sh/ (see that page for the current installation command).
If you intend to run agent binaries on a macOS machine that already has these libraries, you can use precompiled libraries that are
provided by Homebrew. These are typically macOS machines that use Homebrew for building Zabbix agent binaries or for other
purposes.
If agent binaries will be used on macOS machines that don’t have the shared version of libraries, you should compile static libraries
from sources and link Zabbix agent with them.
Install PCRE2 (replace pcre2 with pcre in the commands below, if needed):
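For example, with Homebrew (package names as published in Homebrew's repository):
brew install automake pkg-config pcre2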
cd zabbix
./bootstrap.sh
./configure --sysconfdir=/usr/local/etc/zabbix --enable-agent --enable-ipv6
make
make install
Build agent with OpenSSL:
cd zabbix
./bootstrap.sh
./configure --sysconfdir=/usr/local/etc/zabbix --enable-agent --enable-ipv6 --with-openssl=/usr/local/opt/
make
make install
Build agent with GnuTLS:
cd zabbix-source/
./bootstrap.sh
./configure --sysconfdir=/usr/local/etc/zabbix --enable-agent --enable-ipv6 --with-gnutls=/usr/local/opt/g
make
make install
Building agent binaries with static libraries without TLS
Let’s assume that PCRE static libraries will be installed in $HOME/static-libs. We will use PCRE2 10.39.
PCRE_PREFIX="$HOME/static-libs/pcre2-10.39"
Download and build PCRE with Unicode properties support:
mkdir static-libs-source
cd static-libs-source
curl --remote-name https://github.com/PhilipHazel/pcre2/releases/download/pcre2-10.39/pcre2-10.39.tar.gz
tar xf pcre2-10.39.tar.gz
cd pcre2-10.39
./configure --prefix="$PCRE_PREFIX" --disable-shared --enable-static --enable-unicode-properties
make
make check
make install
Download Zabbix source and build agent:
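A sketch of this step (it assumes the PCRE2 static libraries were installed into $PCRE_PREFIX as above and uses the --with-libpcre2 option to point configure at that prefix):
git clone https://git.zabbix.com/scm/zbx/zabbix.git
cd zabbix
./bootstrap.sh
./configure --sysconfdir=/usr/local/etc/zabbix --enable-agent --enable-ipv6 --with-libpcre2="$PCRE_PREFIX"
make
make install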
When building OpenSSL, it’s recommended to run make test after successful building. Even if building was successful, tests
sometimes fail. If this is the case, problems should be researched and resolved before continuing.
Let’s assume that PCRE and OpenSSL static libraries will be installed in $HOME/static-libs. We will use PCRE2 10.39 and
OpenSSL 1.1.1a.
PCRE_PREFIX="$HOME/static-libs/pcre2-10.39"
OPENSSL_PREFIX="$HOME/static-libs/openssl-1.1.1a"
Let’s build static libraries in static-libs-source:
mkdir static-libs-source
cd static-libs-source
Download and build PCRE with Unicode properties support:
GnuTLS depends on the Nettle crypto backend and GMP arithmetic library. Instead of using full GMP library, this guide will use
mini-gmp which is included in Nettle.
When building GnuTLS and Nettle, it’s recommended to run make check after successful building. Even if building was successful,
tests sometimes fail. If this is the case, problems should be researched and resolved before continuing.
Let’s assume that PCRE, Nettle and GnuTLS static libraries will be installed in $HOME/static-libs. We will use PCRE2 10.39,
Nettle 3.4.1 and GnuTLS 3.6.5.
PCRE_PREFIX="$HOME/static-libs/pcre2-10.39"
NETTLE_PREFIX="$HOME/static-libs/nettle-3.4.1"
GNUTLS_PREFIX="$HOME/static-libs/gnutls-3.6.5"
Let’s build static libraries in static-libs-source:
mkdir static-libs-source
cd static-libs-source
Download and build Nettle:
5 Installation from packages
Package files for yum/dnf, apt and zypper repositories for various OS distributions are available at repo.zabbix.com.
Some OS distributions (in particular, Debian-based distributions) provide their own Zabbix packages. Note that these packages
are not supported by Zabbix. Third-party Zabbix packages can be out of date and may lack the latest features and bug fixes. It
is recommended to use only the official packages from repo.zabbix.com. If you have previously used unofficial Zabbix packages,
see notes about upgrading the Zabbix packages from OS repositories.
Overview
Official Zabbix 7.0 packages for Red Hat Enterprise Linux versions 6, 7, 8, and 9, as well as for versions 8 and 9 of AlmaLinux,
CentOS Stream, Oracle Linux, and Rocky Linux are available on Zabbix website.
Attention:
Zabbix packages for Red Hat Enterprise Linux systems are intended only for RHEL systems. Alternative environments,
such as Red Hat Universal Base Image, may lack the necessary dependencies and repository access requirements for
successful installation. To address such issues, verify compatibility with the target environment and ensure access to
required repositories and dependencies before proceeding with Zabbix installation from packages. For more information,
see Known issues.
Zabbix agent packages, as well as Zabbix get and Zabbix sender utilities are also available in Zabbix Official Repository for the
following OS:
• RHEL 6, 7, 8, and 9
• AlmaLinux 8 and 9
• CentOS Stream 8 and 9
• Oracle Linux 8 and 9
• Rocky Linux 8 and 9
The official Zabbix repository provides fping, iksemel and libssh2 packages as well. These packages are located in the non-
supported directory.
Attention:
The EPEL repository for EL9 also provides Zabbix packages. If both the official Zabbix repository and EPEL repositories are
installed, then the Zabbix packages in EPEL must be excluded by adding the following clause to the EPEL repo configuration
file under /etc/yum.repos.d/:
[epel]
...
excludepkgs=zabbix*
See also: Accidental installation of EPEL Zabbix packages
Notes on installation
If you want to run Zabbix agent as root, see Running agent as root.
Zabbix web service process, which is used for scheduled report generation, requires Google Chrome browser. The browser is not
included into packages and has to be installed manually.
With TimescaleDB, in addition to the import command for PostgreSQL, also run:
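With the 7.0 packages this is expected to look roughly as follows (verify the exact script path shipped in the zabbix-sql-scripts package):
cat /usr/share/zabbix-sql-scripts/postgresql/timescaledb/schema.sql | sudo -u zabbix psql zabbix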
Warning:
TimescaleDB is supported with Zabbix server only.
SELinux configuration
Zabbix uses socket-based inter-process communication. On systems where SELinux is enabled, it may be required to add SELinux rules to allow Zabbix to create and use UNIX domain sockets in the SocketDir directory. Currently, socket files are used by the server (alerter, preprocessing, IPMI) and proxy (IPMI). Socket files are persistent, meaning they are present while the process is running.
With SELinux enabled in enforcing mode, you need to execute the following commands to enable communication between the Zabbix frontend and server:
RHEL 7 and later or AlmaLinux, CentOS Stream, Oracle Linux, Rocky Linux 8 and later:
# setsebool -P httpd_can_connect_zabbix on
If the database is accessible over the network (including 'localhost' in the case of PostgreSQL), you also need to allow the Zabbix frontend to connect to the database:
# setsebool -P httpd_can_network_connect_db on
RHEL prior to 7:
setsebool -P httpd_can_network_connect on
setsebool -P zabbix_can_network on
After the frontend and SELinux configuration is done, restart the Apache web server:
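For example, on RHEL and derivatives with systemd:
systemctl restart httpd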
Zabbix also provides a dedicated SELinux policy package for:
• RHEL 7, 8, and 9
• AlmaLinux 8 and 9
• CentOS Stream 8 and 9
• Oracle Linux 8 and 9
• Rocky Linux 8 and 9
This package provides a basic default policy for SELinux and makes Zabbix components work out-of-the-box by allowing Zabbix to create and use sockets and enabling httpd connection to PostgreSQL (used by the frontend).
require {
type zabbix_t;
type zabbix_port_t;
type zabbix_var_run_t;
type postgresql_port_t;
type httpd_t;
class tcp_socket name_connect;
class sock_file { create unlink };
class unix_stream_socket connectto;
}
Proxy installation
Once the required repository is added, you can install Zabbix proxy by running:
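For example, for a proxy with MySQL support:
dnf install zabbix-proxy-mysql zabbix-sql-scripts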
The package ’zabbix-sql-scripts’ contains database schemas for all supported database management systems for both Zabbix
server and Zabbix proxy and will be used for data import.
Creating database
Zabbix server and Zabbix proxy cannot use the same database. If they are installed on the same host, the proxy database must
have a different name.
Importing data
Import initial schema:
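For example, for MySQL, assuming the proxy database is named zabbix_proxy (the script is shipped in the zabbix-sql-scripts package):
cat /usr/share/zabbix-sql-scripts/mysql/proxy.sql | mysql --default-character-set=utf8mb4 -uzabbix -p zabbix_proxy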
Edit zabbix_proxy.conf:
vi /etc/zabbix/zabbix_proxy.conf
DBHost=localhost
DBName=zabbix
DBUser=zabbix
DBPassword=<password>
In DBName for Zabbix proxy use a separate database from Zabbix server.
In DBPassword use Zabbix database password for MySQL; PostgreSQL user password for PostgreSQL.
Use DBHost= with PostgreSQL. You might want to keep the default setting DBHost=localhost (or an IP address), but this would
make PostgreSQL use a network socket for connecting to Zabbix. See SELinux configuration for instructions.
A Zabbix proxy does not have a frontend; it communicates with Zabbix server only.
It is required to install Java gateway only if you want to monitor JMX applications. Java gateway is lightweight and does not require
a database.
Once the required repository is added, you can install Zabbix Java gateway by running:
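For example:
dnf install zabbix-java-gateway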
Note:
Debuginfo packages are currently available for RHEL versions 7 and 6.
To enable the debuginfo repository, edit the /etc/yum.repos.d/zabbix.repo file. Change enabled=0 to enabled=1 for the zabbix-debuginfo repository.
[zabbix-debuginfo]
name=Zabbix Official Repository debuginfo - $basearch
baseurl=http://repo.zabbix.com/zabbix/7.0/rhel/7/$basearch/debuginfo/
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-ZABBIX-A14FE591
gpgcheck=1
This will allow you to install the zabbix-debuginfo package.
2 Debian/Ubuntu/Raspbian
Overview
Official Zabbix 7.0 packages for Debian, Ubuntu, and Raspberry Pi OS (Raspbian) are available on Zabbix website.
Packages are available with either MySQL or PostgreSQL database support and either Apache or Nginx web server support.
Notes on installation
See the installation instructions per platform on the download page.
If you want to run Zabbix agent as root, see running agent as root.
Zabbix web service process, which is used for scheduled report generation, requires Google Chrome browser. The browser is not
included into packages and has to be installed manually.
With TimescaleDB, in addition to the import command for PostgreSQL, also run:
Warning:
TimescaleDB is supported with Zabbix server only.
SELinux configuration
After the frontend and SELinux configuration is done, restart the Apache web server:
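For example, on Debian/Ubuntu:
systemctl restart apache2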
Once the required repository is added, you can install Zabbix proxy by running:
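For example, for a proxy with MySQL support:
apt install zabbix-proxy-mysql zabbix-sql-scripts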
The package ’zabbix-sql-scripts’ contains database schemas for all supported database management systems for both Zabbix
server and Zabbix proxy and will be used for data import.
Creating database
Zabbix server and Zabbix proxy cannot use the same database. If they are installed on the same host, the proxy database must
have a different name.
Importing data
Edit zabbix_proxy.conf:
vi /etc/zabbix/zabbix_proxy.conf
DBHost=localhost
DBName=zabbix
DBUser=zabbix
DBPassword=<password>
In DBName for Zabbix proxy use a separate database from Zabbix server.
In DBPassword use Zabbix database password for MySQL; PostgreSQL user password for PostgreSQL.
Use DBHost= with PostgreSQL. You might want to keep the default setting DBHost=localhost (or an IP address), but this would
make PostgreSQL use a network socket for connecting to Zabbix. Refer to the respective section for RHEL for instructions.
A Zabbix proxy does not have a frontend; it communicates with Zabbix server only.
It is required to install Java gateway only if you want to monitor JMX applications. Java gateway is lightweight and does not require
a database.
Once the required repository is added, you can install Zabbix Java gateway by running:
Overview
Official Zabbix 7.0 packages for SUSE Linux Enterprise Server are available on Zabbix website.
Zabbix agent packages and utilities Zabbix get and Zabbix sender are available in Zabbix Official Repository for SLES 15 (SP4 and
newer) and SLES 12 (SP4 and newer).
Please note that SLES 12 can be used only for Zabbix proxy and the following features are not available:
• Verify CA encryption mode with MySQL does not work due to older MySQL libraries.
• SSH checks - due to the older libssh version.
Install the repository configuration package. This package contains zypper (software package manager) repository configuration files.
SLES 15:
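A hedged sketch (take the exact zabbix-release package URL for your service pack from repo.zabbix.com):
rpm -Uvh --nosignature https://repo.zabbix.com/zabbix/7.0/sles/15/x86_64/zabbix-release-latest.sles15.noarch.rpm
zypper --gpg-auto-import-keys refresh 'Zabbix Official Repository'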
Server/frontend/agent installation
To install Zabbix server/frontend/agent with PHP 8, Apache and MySQL support, run:
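For example (adjust the package selection per the notes below):
zypper install zabbix-server-mysql zabbix-web-mysql zabbix-apache-conf-php8 zabbix-sql-scripts zabbix-agent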
• For Nginx: use zabbix-nginx-conf-php8 instead of zabbix-apache-conf-php8. See also: Nginx setup for Zabbix
on SLES 15.
• For PostgreSQL: use zabbix-server-pgsql instead of zabbix-server-mysql; use zabbix-web-pgsql instead of
zabbix-web-mysql.
• For Zabbix agent 2 (only SLES 15): use zabbix-agent2 instead of or in addition to zabbix-agent.
To install Zabbix proxy with MySQL support:
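For example:
zypper install zabbix-proxy-mysql zabbix-sql-scripts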
For PostgreSQL, use zabbix-proxy-pgsql instead of zabbix-proxy-mysql.
For SQLite3, use zabbix-proxy-sqlite3 instead of zabbix-proxy-mysql.
The package ’zabbix-sql-scripts’ contains database schemas for all supported database management systems for both Zabbix
server and Zabbix proxy and will be used for data import.
Creating database
Zabbix server and proxy daemons require a database. Zabbix agent does not need a database.
To create a database, follow the instructions for MySQL or PostgreSQL. An SQLite3 database (supported for Zabbix proxy only) will
be created automatically and does not require additional installation steps.
Warning:
Separate databases are required for Zabbix server and Zabbix proxy; they cannot share the same database. If a server
and a proxy are installed on the same host, their databases must be created with different names!
Importing data
Now import initial schema and data for the server with MySQL:
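For example, assuming a database named zabbix and a database user zabbix created as described for MySQL (the script is shipped in the zabbix-sql-scripts package):
zcat /usr/share/zabbix-sql-scripts/mysql/server.sql.gz | mysql --default-character-set=utf8mb4 -uzabbix -p zabbix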
With PostgreSQL:
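For example:
zcat /usr/share/zabbix-sql-scripts/postgresql/server.sql.gz | sudo -u zabbix psql zabbix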
Warning:
TimescaleDB is supported with Zabbix server only.
Edit /etc/zabbix/zabbix_server.conf (and zabbix_proxy.conf) to use their respective databases. For example:
vi /etc/zabbix/zabbix_server.conf
DBHost=localhost
DBName=zabbix
DBUser=zabbix
DBPassword=<password>
In DBPassword use Zabbix database password for MySQL; PostgreSQL user password for PostgreSQL.
Use DBHost= with PostgreSQL. You might want to keep the default setting DBHost=localhost (or an IP address), but this would
make PostgreSQL use a network socket for connecting to Zabbix.
Depending on the web server used (Apache/Nginx), edit the corresponding configuration file for Zabbix frontend. While some PHP
settings may already be configured, it’s essential that you uncomment the date.timezone setting and specify the appropriate
timezone setting that suits your requirements.
php_value always_populate_raw_post_data -1
# php_value date.timezone Europe/Riga
• The zabbix-nginx-conf package installs a separate Nginx server for Zabbix frontend. Its configuration file is located in
/etc/nginx/conf.d/zabbix.conf. For Zabbix frontend to work, it’s necessary to uncomment and set listen and/or
server_name directives.
# listen 80;
# server_name example.com;
• Zabbix uses its own dedicated php-fpm connection pool with Nginx:
Its configuration file is located in /etc/php8/fpm/php-fpm.d/zabbix.conf (the path may vary slightly depending on the
service pack).
php_value[max_execution_time] = 300
php_value[memory_limit] = 128M
php_value[post_max_size] = 16M
php_value[upload_max_filesize] = 2M
php_value[max_input_time] = 300
php_value[max_input_vars] = 10000
; php_value[date.timezone] = Europe/Riga
Now you are ready to proceed with frontend installation steps that will allow you to access your newly installed Zabbix.
Note that a Zabbix proxy does not have a frontend; it communicates with Zabbix server only.
Start Zabbix server and agent processes and make them start at system boot.
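For example, with Apache and PHP-FPM (service names may differ slightly between service packs):
systemctl restart zabbix-server zabbix-agent apache2 php-fpm
systemctl enable zabbix-server zabbix-agent apache2 php-fpm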
To enable the debuginfo repository, edit the /etc/zypp/repos.d/zabbix.repo file. Change enabled=0 to enabled=1 for the zabbix-debuginfo repository.
[zabbix-debuginfo]
name=Zabbix Official Repository debuginfo
type=rpm-md
baseurl=http://repo.zabbix.com/zabbix/7.0/sles/15/x86_64/debuginfo/
gpgcheck=1
gpgkey=http://repo.zabbix.com/zabbix/7.0/sles/15/x86_64/debuginfo/repodata/repomd.xml.key
enabled=0
update=1
Overview
Zabbix Windows agent can be installed from Windows MSI installer packages (32-bit or 64-bit) available for download.
The Zabbix get and sender utilities can also be installed, either together with Zabbix agent/agent 2 or separately.
All packages come with TLS support, however, configuring TLS is optional.
Note:
Although Zabbix installation from MSI installer packages is fully supported, it is recommended to install at least Microsoft
.NET Framework 2 for proper error handling. See Microsoft Download .NET Framework.
Attention:
It is recommended to use default paths provided by the installer as using custom paths without proper permissions could
compromise the security of the installation.
Installation steps
Accept the license to proceed to the next step.
Enter pre-shared key identity and value. This step is only available if you checked Enable PSK in the previous step.
Select Zabbix components to install - Zabbix agent daemon, Zabbix sender, Zabbix get.
Zabbix components along with the configuration file will be installed in a Zabbix Agent folder in Program Files. zabbix_agentd.exe
will be set up as Windows service with delayed automatic startup (or automatic startup on Windows versions before Windows
Server 2008/Vista).
Command-line based installation
Supported parameters
Parameter Description
LISTENPORT The agent will listen on this port for connections from the server.
LOGFILE The name of the log file.
LOGTYPE The type of the log output.
PERSISTENTBUFFERFILE Zabbix agent 2 only. The file where Zabbix agent 2 should keep the SQLite database.
PERSISTENTBUFFERPERIOD Zabbix agent 2 only. The time period for which data should be stored when there is no
connection to the server or proxy.
SERVER A list of comma-delimited IP addresses, optionally in CIDR notation, or DNS names of Zabbix
servers and Zabbix proxies.
SERVERACTIVE The Zabbix server/proxy address or cluster configuration to get active checks from.
SKIP SKIP=fw - do not install the firewall exception rule.
STARTUPTYPE Startup type of the Zabbix Windows agent/agent 2 service. Possible values:
automatic - start the service automatically at Windows startup;
delayed - (default) delay starting the service after the automatically started services have
completed startup (available on Windows Server 2008/Vista and later versions);
manual - start the service manually (by a user or application);
disabled - disable the service, so that it cannot be started by a user or application.
Example: STARTUPTYPE=disabled
STATUSPORT Zabbix agent 2 only. If set, the agent will listen on this port for HTTP status requests
(http://localhost:<port>/status).
TIMEOUT Specifies timeout for communications (in seconds).
TLSACCEPT What incoming connections to accept.
TLSCAFILE The full pathname of a file containing the top-level CA(s) certificates for peer certificate
verification, used for encrypted communications between Zabbix components.
TLSCERTFILE The full pathname of a file containing the agent certificate or certificate chain, used for
encrypted communications between Zabbix components.
TLSCONNECT How the agent should connect to Zabbix server or proxy.
TLSCRLFILE The full pathname of a file containing revoked certificates. This parameter is used for encrypted
communications between Zabbix components.
TLSKEYFILE The full pathname of a file containing the agent private key, used for encrypted communications
between Zabbix components.
TLSPSKFILE The full pathname of a file containing the agent pre-shared key, used for encrypted
communications with Zabbix server.
TLSPSKIDENTITY The pre-shared key identity string, used for encrypted communications with Zabbix server.
TLSPSKVALUE The pre-shared key string value, used for encrypted communications with Zabbix server.
TLSSERVERCERTISSUER The allowed server (proxy) certificate issuer.
TLSSERVERCERTSUBJECT The allowed server (proxy) certificate subject.
Examples
To install Zabbix Windows agent from the command-line, you may run, for example:
SKIP=fw^
ALLOWDENYKEY="DenyKey=vfs.file.contents[/etc/passwd]"
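The two lines above are the tail of a longer msiexec invocation; a hedged reconstruction of a typical full command, where the MSI file name, server address, log path and PSK file path are placeholders:
msiexec /l*v log.txt /i zabbix_agent-7.0.0-windows-amd64-openssl.msi /qn^
 LOGTYPE=file^
 LOGFILE="C:\zabbix_agentd.log"^
 SERVER=192.0.2.10^
 SERVERACTIVE=192.0.2.10^
 TLSCONNECT=psk^
 TLSACCEPT=psk^
 TLSPSKIDENTITY=MyPSKID^
 TLSPSKFILE="C:\zabbix\agent.psk"^
 SKIP=fw^
 ALLOWDENYKEY="DenyKey=vfs.file.contents[/etc/passwd]"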
You may also run, for example:
Note:
If both TLSPSKFILE and TLSPSKVALUE are passed, then TLSPSKVALUE will be written to TLSPSKFILE.
Overview
Zabbix Mac OS agent can be installed from PKG installer packages available for download. Versions with or without encryption are
available.
Installing agent
The agent can be installed using the graphical user interface or from the command line, for example:
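For example, from the command line (the .pkg file name depends on the downloaded version and encryption variant):
sudo installer -pkg zabbix_agent-7.0.0-macos-amd64-openssl.pkg -target /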
Running agent
This section lists some useful commands that can be used for troubleshooting and removing Zabbix agent installation.
sudo launchctl unload /Library/LaunchDaemons/com.zabbix.zabbix_agentd.plist
Remove files (including configuration and logs) that were installed with installer package:
sudo rm -f /Library/LaunchDaemons/com.zabbix.zabbix_agentd.plist
sudo rm -f /usr/local/sbin/zabbix_agentd
sudo rm -f /usr/local/bin/zabbix_get
sudo rm -f /usr/local/bin/zabbix_sender
sudo rm -rf /usr/local/etc/zabbix
sudo rm -rf /var/log/zabbix
Forget that Zabbix agent has been installed:
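For example (the package identifier below is an assumption; list the installed identifiers with pkgutil --pkgs if unsure):
sudo pkgutil --forget com.zabbix.pkg.ZabbixAgent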
6 Unstable releases
Overview
The instructions below are for enabling unstable Zabbix release repositories (disabled by default) used for minor Zabbix version
release candidates.
First, install or update to the latest zabbix-release package. To enable rc packages on your system do the following:
Open the /etc/yum.repos.d/zabbix.repo file and set enabled=1 for the zabbix-unstable repo.
[zabbix-unstable]
name=Zabbix Official Repository (unstable) - $basearch
baseurl=https://repo.zabbix.com/zabbix/7.0/rhel/8/$basearch/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-ZABBIX-A14FE591
Debian/Ubuntu
SUSE
Open the /etc/zypp/repos.d/zabbix.repo file and set enable=1 for the zabbix-unstable repo.
[zabbix-unstable]
name=Zabbix Official Repository
type=rpm-md
baseurl=https://repo.zabbix.com/zabbix/7.0/sles/15/x86_64/
gpgcheck=1
gpgkey=https://repo.zabbix.com/zabbix/7.0/sles/15/x86_64/repodata/repomd.xml.key
enabled=1
update=1
Overview
This section describes how to deploy Zabbix with Docker or Docker Compose.
• Separate Docker images for each Zabbix component to run as portable and self-sufficient containers.
• Compose files for defining and running multi-container Zabbix components in Docker.
Attention:
Since Zabbix 6.0, deterministic triggers need to be created during the installation. If binary logging is
enabled for MySQL/MariaDB, this requires superuser privileges or setting the variable/configuration parameter
log_bin_trust_function_creators = 1. See Database creation scripts for instructions how to set the variable.
Note that if executing from a console, the variable will only be set temporarily and will be dropped when Docker is restarted. In this case, keep your SQL service running and only stop the zabbix-server service by running 'docker compose down zabbix-server' and then 'docker compose up -d zabbix-server'.
Alternatively, you can set this variable in the configuration file.
Source files
Docker file sources are stored in the Zabbix official repository on GitHub, where you can follow latest file changes or fork the project
to make your own images.
Docker
Zabbix provides images based on a variety of OS base images. To get the list of supported base operating system images for a specific Zabbix component, see the component's description in Docker Hub. All Zabbix images are configured to rebuild latest images if base images are updated.
Installation
• Zabbix agent: zabbix/zabbix-agent
• Zabbix server with MySQL support: zabbix/zabbix-server-mysql
• Zabbix server with PostgreSQL support: zabbix/zabbix-server-pgsql
• Zabbix web interface based on Apache2 web server with MySQL support: zabbix/zabbix-web-apache-mysql
• Zabbix web interface based on Apache2 web server with PostgreSQL support: zabbix/zabbix-web-apache-pgsql
• Zabbix web interface based on Nginx web server with MySQL support: zabbix/zabbix-web-nginx-mysql
• Zabbix web interface based on Nginx web server with PostgreSQL support: zabbix/zabbix-web-nginx-pgsql
• Zabbix proxy with SQLite3 support: zabbix/zabbix-proxy-sqlite3
• Zabbix proxy with MySQL support: zabbix/zabbix-proxy-mysql
• Zabbix Java gateway: zabbix/zabbix-java-gateway
Note:
SNMP trap support is provided in a separate repository zabbix/zabbix-snmptraps. It can be linked with Zabbix server and
Zabbix proxy.
Tags
Tag Description Example
latest The latest stable version of a Zabbix component based on Alpine Linux image. zabbix-agent:latest
<OS>-trunk The latest nightly build of the Zabbix version that is currently being developed on a specific operating system. zabbix-agent:ubuntu-trunk
Initial configuration
After downloading the images, start the containers by executing docker run command followed by additional arguments to
specify required environment variables and/or mount points. Some configuration examples are provided below.
Attention:
Zabbix must not be run as PID1/as an init process in containers.
Note:
To enable communication between Zabbix components, some ports, such as 10051/TCP for Zabbix server (trapper),
10050/TCP for Zabbix agent, 162/UDP for SNMP traps and 80/TCP for Zabbix web interface will be exposed to a host
machine. Full list of default ports used by Zabbix components is available on the Requirements page. For Zabbix server
and agent the default port can be changed by setting ZBX_LISTENPORT environment variable.
Environment variables
All Zabbix component images provide environment variables to control configuration. Supported environment variables are listed
in the component repository.
These environment variables are options from Zabbix configuration files, but with a different naming method. For example, ZBX_LOGSLOWQUERIES is equal to LogSlowQueries from the Zabbix server and Zabbix proxy configuration files.
Attention:
Some of the configuration options cannot be changed. For example, PIDFile and LogType.
The following environment variables are specific to Docker components and do not exist in Zabbix configuration files:
Volumes
The images allow to mount volumes using the following mount points:
Volume Description
Zabbix agent
/etc/zabbix/zabbix_agentd.d The volume allows to include *.conf files and extend Zabbix agent using the UserParameter feature
/var/lib/zabbix/modules The volume allows to load additional modules and extend Zabbix agent using the LoadModule feature
/var/lib/zabbix/enc The volume is used to store TLS-related files. These file names are specified using ZBX_TLSCAFILE, ZBX_TLSCRLFILE, ZBX_TLSKEY_FILE and ZBX_TLSPSKFILE environment variables
Zabbix server
/usr/lib/zabbix/alertscripts The volume is used for custom alert scripts. It is the AlertScriptsPath parameter in zabbix_server.conf
/usr/lib/zabbix/externalscripts The volume is used by external checks. It is the ExternalScripts parameter in zabbix_server.conf
/var/lib/zabbix/modules The volume allows to load additional modules and extend Zabbix server using the LoadModule feature
/var/lib/zabbix/enc The volume is used to store TLS-related files. These file names are specified using ZBX_TLSCAFILE, ZBX_TLSCRLFILE, ZBX_TLSKEY_FILE and ZBX_TLSPSKFILE environment variables
/var/lib/zabbix/ssl/certs The volume is used as location of SSL client certificate files for client authentication. It is the SSLCertLocation parameter in zabbix_server.conf
/var/lib/zabbix/ssl/keys The volume is used as location of SSL private key files for client authentication. It is the SSLKeyLocation parameter in zabbix_server.conf
/var/lib/zabbix/ssl/ssl_ca The volume is used as location of certificate authority (CA) files for SSL server certificate verification. It is the SSLCALocation parameter in zabbix_server.conf
/var/lib/zabbix/snmptraps The volume is used as location of the snmptraps.log file. It could be shared by the zabbix-snmptraps container and inherited using the volumes_from Docker option while creating a new instance of Zabbix server. SNMP trap processing feature could be enabled by using shared volume and switching the ZBX_ENABLE_SNMP_TRAPS environment variable to 'true'
/var/lib/zabbix/mibs The volume allows to add new MIB files. It does not support subdirectories, all MIBs must be placed in /var/lib/zabbix/mibs
Zabbix proxy
/usr/lib/zabbix/externalscripts The volume is used by external checks. It is the ExternalScripts parameter in zabbix_proxy.conf
/var/lib/zabbix/db_data/ The volume allows to store database files on external devices. Supported only for Zabbix proxy with SQLite3
/var/lib/zabbix/modules The volume allows to load additional modules and extend Zabbix proxy using the LoadModule feature
/var/lib/zabbix/enc The volume is used to store TLS-related files. These file names are specified using ZBX_TLSCAFILE, ZBX_TLSCRLFILE, ZBX_TLSKEY_FILE and ZBX_TLSPSKFILE environment variables
/var/lib/zabbix/ssl/certs The volume is used as location of SSL client certificate files for client authentication. It is the SSLCertLocation parameter in zabbix_proxy.conf
/var/lib/zabbix/ssl/keys The volume is used as location of SSL private key files for client authentication. It is the SSLKeyLocation parameter in zabbix_proxy.conf
/var/lib/zabbix/ssl/ssl_ca The volume is used as location of certificate authority (CA) files for SSL server certificate verification. It is the SSLCALocation parameter in zabbix_proxy.conf
/var/lib/zabbix/snmptraps The volume is used as location of the snmptraps.log file. It could be shared by the zabbix-snmptraps container and inherited using the volumes_from Docker option while creating a new instance of Zabbix proxy. SNMP trap processing feature could be enabled by using shared volume and switching the ZBX_ENABLE_SNMP_TRAPS environment variable to 'true'
/var/lib/zabbix/mibs The volume allows to add new MIB files. It does not support subdirectories, all MIBs must be placed in /var/lib/zabbix/mibs
Zabbix web interface based on Apache2 web server
/etc/ssl/apache2 The volume allows to enable HTTPS for Zabbix web interface. The volume must contain the two files ssl.crt and ssl.key prepared for Apache2 SSL connections
Zabbix web interface based on Nginx web server
/etc/ssl/nginx The volume allows to enable HTTPS for Zabbix web interface. The volume must contain the ssl.crt, ssl.key and dhparam.pem files prepared for Nginx SSL connections
Zabbix snmptraps
/var/lib/zabbix/snmptraps The volume contains the snmptraps.log log file with received SNMP traps
/var/lib/zabbix/mibs The volume allows to add new MIB files. It does not support subdirectories, all MIBs must be placed in /var/lib/zabbix/mibs
Examples
Example 1
The example demonstrates how to run Zabbix server with MySQL database support, Zabbix web interface based on the Nginx web
server and Zabbix Java gateway.
5. Start Zabbix web interface and link the instance with created MySQL server and Zabbix server instances
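A hedged sketch of such a command; the container, network and credential names below are assumptions reused from the MySQL and Zabbix server steps of this example:
docker run --name zabbix-web-nginx-mysql -t \
      -e ZBX_SERVER_HOST="zabbix-server-mysql" \
      -e DB_SERVER_HOST="mysql-server" \
      -e MYSQL_DATABASE="zabbix" \
      -e MYSQL_USER="zabbix" \
      -e MYSQL_PASSWORD="zabbix_pwd" \
      -e MYSQL_ROOT_PASSWORD="root_pwd" \
      --network=zabbix-net \
      -p 80:8080 \
      --restart unless-stopped \
      -d zabbix/zabbix-web-nginx-mysql:alpine-7.0-latest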
Example 2
The example demonstrates how to run Zabbix server with PostgreSQL database support, Zabbix web interface based on the Nginx web server and SNMP trap feature.
--restart unless-stopped \
-d zabbix/zabbix-web-nginx-pgsql:alpine-7.0-latest
Example 3
The example demonstrates how to run Zabbix server with MySQL database support, Zabbix web interface based on the Nginx web
server and Zabbix Java gateway using podman on Red Hat 8.
1. Create new pod with name zabbix and exposed ports (web-interface, Zabbix server trapper):
podman pod create --name zabbix -p 80:8080 -p 10051:10051
2. (optional) Start Zabbix agent container in zabbix pod location:
podman run --name zabbix-agent \
-e ZBX_SERVER_HOST="127.0.0.1,localhost" \
--restart=always \
--pod=zabbix \
-d registry.connect.redhat.com/zabbix/zabbix-agent-70:latest
3. Create ./mysql/ directory on host and start Oracle MySQL server 8.0:
podman run --name mysql-server -t \
-e MYSQL_DATABASE="zabbix" \
-e MYSQL_USER="zabbix" \
-e MYSQL_PASSWORD="zabbix_pwd" \
-e MYSQL_ROOT_PASSWORD="root_pwd" \
-v ./mysql/:/var/lib/mysql/:Z \
--restart=always \
--pod=zabbix \
-d mysql:8.0 \
--character-set-server=utf8 --collation-server=utf8_bin \
--default-authentication-plugin=mysql_native_password
4. Start Zabbix server container:
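A hedged sketch, following the same pod and credentials as the previous steps (the registry path mirrors the agent image above and is an assumption):
podman run --name zabbix-server-mysql -t \
      -e DB_SERVER_HOST="127.0.0.1" \
      -e MYSQL_DATABASE="zabbix" \
      -e MYSQL_USER="zabbix" \
      -e MYSQL_PASSWORD="zabbix_pwd" \
      -e MYSQL_ROOT_PASSWORD="root_pwd" \
      --restart=always \
      --pod=zabbix \
      -d registry.connect.redhat.com/zabbix/zabbix-server-mysql-70:latest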
Note:
Pod zabbix exposes 80/TCP port (HTTP) to host machine from 8080/TCP of zabbix-web-mysql container.
Docker Compose
Alternatively, Zabbix can be installed using Docker Compose plugin. Compose files for defining and running multi-container Zabbix components are available in the official Zabbix Docker repository on GitHub.
Attention:
Official Zabbix compose files support version 3 of Docker Compose.
These compose files are added as examples; they are overloaded. For example, they contain proxies with both MySQL and SQLite3
support.
docker-compose_v3_alpine_mysql_latest.yaml
The compose file runs the latest version of Zabbix 7.0 components on Alpine Linux with MySQL
database support.
docker-compose_v3_alpine_mysql_local.yaml
The compose file locally builds the latest version of Zabbix 7.0 and runs Zabbix components on
Alpine Linux with MySQL database support.
docker-compose_v3_alpine_pgsql_latest.yaml
The compose file runs the latest version of Zabbix 7.0 components on Alpine Linux with
PostgreSQL database support.
docker-compose_v3_alpine_pgsql_local.yaml
The compose file locally builds the latest version of Zabbix 7.0 and runs Zabbix components on
Alpine Linux with PostgreSQL database support.
docker-compose_v3_ol_mysql_latest.yaml
The compose file runs the latest version of Zabbix 7.0 components on Oracle Linux with MySQL
database support.
docker-compose_v3_ol_mysql_local.yaml
The compose file locally builds the latest version of Zabbix 7.0 and runs Zabbix components on
Oracle Linux with MySQL database support.
docker-compose_v3_ol_pgsql_latest.yaml
The compose file runs the latest version of Zabbix 7.0 components on Oracle Linux with
PostgreSQL database support.
docker-compose_v3_ol_pgsql_local.yaml
The compose file locally builds the latest version of Zabbix 7.0 and runs Zabbix components on
Oracle Linux with PostgreSQL database support.
docker-compose_v3_ubuntu_mysql_latest.yaml
The compose file runs the latest version of Zabbix 7.0 components on Ubuntu 20.04 with MySQL
database support.
docker-compose_v3_ubuntu_mysql_local.yaml
The compose file locally builds the latest version of Zabbix 7.0 and runs Zabbix components on
Ubuntu 20.04 with MySQL database support.
docker-compose_v3_ubuntu_pgsql_latest.yaml
The compose file runs the latest version of Zabbix 7.0 components on Ubuntu 20.04 with
PostgreSQL database support.
docker-compose_v3_ubuntu_pgsql_local.yaml
The compose file locally builds the latest version of Zabbix 7.0 and runs Zabbix components on
Ubuntu 20.04 with PostgreSQL database support.
Storage
Compose files are configured to support local storage on a host machine. Docker Compose will create a zbx_env directory in
the folder with the compose file when you run Zabbix components using the compose file. The directory will contain the same
structure as described in the Volumes section and directory for database storage.
There are also volumes in read-only mode for /etc/localtime and /etc/timezone files.
Environment variables
The variable files have the following naming structure: .env_<type of component> and are located in the env_vars directory.
See environment variables for details about variable naming and available selection.
Examples
Example 1
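The steps below assume the official zabbix-docker repository has already been cloned; a minimal sketch:
git clone https://github.com/zabbix/zabbix-docker.git
cd zabbix-docker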
git checkout 7.0
docker compose -f ./docker-compose_v3_alpine_mysql_latest.yaml up -d
The command will download the latest Zabbix 7.0 images for each Zabbix component and run them in detach mode.
Attention:
Do not forget to download .env_<type of component> files from github.com official Zabbix repository with compose
files.
Example 2
This section provides step-by-step instructions for installing Zabbix web interface. Zabbix frontend is written in PHP, so a webserver with PHP support is required to run it.
Note:
You can find out more about setting up SSL for Zabbix frontend by referring to these best practices.
Welcome screen
Open Zabbix frontend URL in the browser. If you have installed Zabbix from packages, the URL is http://<server_ip_or_name>/zabbix.
You should see the first screen of the frontend installation wizard.
Use the Default language drop-down menu to change system default language and continue the installation process in the selected
language (optional). For more information, see Installation of additional frontend languages.
Check of pre-requisites
Pre-requisite Minimum value Description
Optional pre-requisites may also be present in the list. A failed optional prerequisite is displayed in orange and has a Warning
status. With a failed optional pre-requisite, the setup may continue.
Attention:
If there is a need to change the Apache user or user group, permissions to the session folder must be verified. Otherwise
Zabbix setup may be unable to continue.
Configure DB connection
Enter details for connecting to the database. Zabbix database must already be created.
If the Database TLS encryption option is checked, then additional fields for configuring the TLS connection to the database appear
in the form (MySQL or PostgreSQL only).
If Store credentials in is set to HashiCorp Vault or CyberArk Vault, additional parameters will become available:
• for HashiCorp Vault: Vault API endpoint, vault prefix, secret path, and authentication token;
• for CyberArk Vault: Vault API endpoint, vault prefix, secret query string, and certificates. Upon marking Vault certificates
checkbox, two new fields for specifying paths to SSL certificate file and SSL key file will appear.
Settings
Entering a name for Zabbix server is optional, however, if submitted, it will be displayed in the menu bar and page titles.
Set the default time zone and theme for the frontend.
Pre-installation summary
Install
If installing Zabbix from sources, download the configuration file and place it under conf/ in the webserver HTML documents
subdirectory where you copied Zabbix PHP files to.
Note:
Provided the webserver user has write access to the conf/ directory, the configuration file will be saved automatically and it will be possible to proceed to the next step right away.
Log in
Zabbix frontend is ready! The default user name is Admin, password zabbix.
8 Upgrade procedure
Overview
• using packages:
– for Red Hat Enterprise Linux
– for Debian/Ubuntu
• using sources
See also upgrade instructions for servers in a high-availability (HA) cluster.
Upgrading Zabbix proxies is highly recommended. Zabbix server fully supports proxies that are of the same major version as the
server. Zabbix server also supports proxies that are no older than Zabbix server previous LTS release version, but with limited
functionality (data collection, execution of remote commands, immediate item value checks). Configuration update is also disabled,
and outdated proxies will only work with old configuration.
Attention:
Proxies that are older than Zabbix server previous LTS release version or newer than Zabbix server major version are not
supported. Zabbix server will ignore data from unsupported proxies and all communication with Zabbix server will fail with
a warning. For more information, see Version compatibility.
To minimize downtime and data loss during the upgrade, it is recommended to stop, upgrade, and start Zabbix server and then
stop, upgrade, and start Zabbix proxies one after another. During server downtime, running proxies will continue data collection.
Once the server is up and running, outdated proxies will send the data to the newer server (proxy configuration will not be updated
though) and will remain partly functional. Any notifications for problems during Zabbix server downtime will be generated only
after the upgraded server is started.
If Zabbix proxy is started for the first time and the SQLite database file is missing, proxy creates it automatically.
Note that if Zabbix proxy uses SQLite3 and on startup detects that existing database file version is older than required, it will
delete the database file automatically and create a new one. Therefore, history data stored in the SQLite database file will be
lost. If Zabbix proxy’s version is older than the database file version, Zabbix will log an error and exit.
Depending on the database size, the database upgrade to version 7.0 may take a long time.
Direct upgrade to Zabbix 7.0.x is possible from Zabbix 6.4.x, 6.2.x, 6.0.x, 5.4.x, 5.2.x, 5.0.x, 4.4.x, 4.2.x, 4.0.x, 3.4.x, 3.2.x,
3.0.x, 2.4.x, 2.2.x and 2.0.x. For upgrading from earlier versions consult Zabbix documentation for 2.0 and earlier.
Note:
Please be aware that after upgrading some third-party software integrations in Zabbix might be affected, if the external
software is not compatible with the upgraded Zabbix version.
Upgrade from 6.4.x (read full upgrade notes for: Zabbix 7.0). Most important changes between versions:
• Minimum required PHP version upped from 7.4.0 to 8.0.0.
• Asynchronous pollers for agent, HTTP agent and SNMP walk[oid] checks.
• Separate database table for proxies.
• Default location for Windows agent configuration file changed.
• Oracle DB deprecated.

Upgrade from 6.2.x (read full upgrade notes for: Zabbix 6.4, Zabbix 7.0). Most important changes between versions:
• Minimum required MySQL version raised from 8.0.0 to 8.0.30.
• 'libevent_pthreads' library is required for Zabbix server/proxy.
• Upon the first launch after an upgrade, Zabbix proxy with SQLite3 automatically drops the old version of the database (with all the history) and creates a new one.

Upgrade from 6.0.x LTS (read full upgrade notes for: Zabbix 6.2, Zabbix 6.4, Zabbix 7.0). Most important changes between versions:
• Minimum required PHP version upped from 7.2.5 to 7.4.0.
• Service monitoring reworked significantly.
• Deterministic triggers need to be created during the upgrade. If binary logging is enabled for MySQL/MariaDB, this requires superuser privileges or setting the variable/configuration parameter log_bin_trust_function_creators = 1. See Database creation scripts for instructions how to set the variable.

Upgrade from 5.4.x (read full upgrade notes for: Zabbix 6.0, Zabbix 6.2, Zabbix 6.4, Zabbix 7.0). Most important changes between versions:
• Minimum required database versions upped.
• Server/proxy will not start if outdated database.
• Audit log records lost because of database structure change.
Upgrade from 2.0.x (read full upgrade notes for: Zabbix 2.2, 2.4, 3.0, 3.2, 3.4, 4.0, 4.2, 4.4, 5.0, 5.2, 5.4, 6.0, 6.2, 6.4, 7.0). Most important changes between versions:
• Minimum required PHP version upped from 5.1.6 to 5.3.0.
• Case-sensitive MySQL database required for proper server work; character set utf8 and utf8_bin collation is required for Zabbix server to work properly with MySQL database. See database creation scripts.
• 'mysqli' PHP extension required instead of 'mysql'.
Overview
This section provides the steps required for a successful upgrade from Zabbix 6.4.x to Zabbix 7.0.x using official Zabbix sources.
Warning:
Before the upgrade make sure to read the relevant upgrade notes!
Note:
It may be handy to run two parallel SSH sessions during the upgrade, executing the upgrade steps in one and monitoring
the server/proxy logs in another. For example, run tail -f zabbix_server.log or tail -f zabbix_proxy.log
in the second SSH session showing you the latest log file entries and possible errors in real time. This can be critical for
production instances.
1 Stop server
Stop Zabbix server to make sure that no new data is inserted into database.
This is a very important step. Make sure that you have a backup of your database. It will help if the upgrade procedure fails (lack
of disk space, power off, any unexpected problem).
Make a backup copy of Zabbix binaries, configuration files and the PHP file directory.
Make sure to review Upgrade notes to check if any changes in the configuration parameters are required.
Start new binaries. Check log files to see if the binaries have started successfully.
Zabbix server will automatically upgrade the database. When starting up, Zabbix server reports the current (mandatory and
optional) and required database versions. If the current mandatory version is older than the required version, Zabbix server
automatically executes the required database upgrade patches. The start and progress level (percentage) of the database upgrade
is written to the Zabbix server log file. When the upgrade is completed, a ”database upgrade fully completed” message is written
to the log file. If any of the upgrade patches fail, Zabbix server will not start. Zabbix server will also not start if the current
mandatory database version is newer than the required one. Zabbix server will only start if the current mandatory database
version corresponds to the required mandatory version.
• Make sure the database user has enough permissions (create table, drop table, create index, drop index)
• Make sure you have enough free disk space.
The minimum required PHP version is 8.0.0. Update if needed and follow installation instructions.
After the upgrade you may need to clear web browser cookies and web browser cache for the Zabbix web interface to work properly.
1 Stop proxy
Make a backup copy of the Zabbix proxy binary and configuration file.
Start the new Zabbix proxy. Check log files to see if the proxy has started successfully.
Zabbix proxy will automatically upgrade the database. Database upgrade takes place similarly as when starting Zabbix server.
Attention:
Upgrading agents is not mandatory. You only need to upgrade agents if it is required to access the new functionality.
The upgrade procedure described in this section may be used for upgrading both the Zabbix agent and the Zabbix agent 2.
1 Stop agent
Make a backup copy of the Zabbix agent binary and configuration file.
Alternatively, you may download pre-compiled Zabbix agents from the Zabbix download page.
There are no mandatory changes to either agent or agent 2 parameters in this version.
Start the new Zabbix agent. Check log files to see if the agent has started successfully.
When upgrading between minor versions of 7.0.x (for example from 7.0.1 to 7.0.3) it is required to execute the same actions for
server/proxy/agent as during the upgrade between major versions. The only difference is that when upgrading between minor
versions no changes to the database are made.
2 Upgrade from packages
Overview
This section provides the steps required for a successful upgrade using official RPM and DEB packages provided by Zabbix for:
Often, OS distributions (in particular, Debian-based distributions) provide their own Zabbix packages.
Note that these packages are not supported by Zabbix, they are typically out of date and lack the latest features and bug fixes.
Only the packages from repo.zabbix.com are officially supported.
If you are upgrading from packages provided by OS distributions (or had them installed at some point), follow this procedure to
switch to official Zabbix packages:
Overview
This section provides the steps required for a successful upgrade from Zabbix 6.4.x to Zabbix 7.0.x using official Zabbix packages
for Red Hat Enterprise Linux or its derivatives - AlmaLinux, CentOS Stream, Oracle Linux, and Rocky Linux.
Note:
Prior to Zabbix 7.0, single installation packages were provided for RHEL and RHEL-based distributions. Starting from 7.0,
separate packages are used for RHEL and each of its above-mentioned derivatives to avoid potential problems with binary
incompatibility.
Warning:
Before the upgrade make sure to read the relevant upgrade notes!
Note:
It may be handy to run two parallel SSH sessions during the upgrade, executing the upgrade steps in one and monitoring
the server/proxy logs in another. For example, run tail -f zabbix_server.log or tail -f zabbix_proxy.log
in the second SSH session showing you the latest log file entries and possible errors in real time. This can be critical for
production instances.
Upgrade procedure
Stop Zabbix server to make sure that no new data is inserted into database.
This is a very important step. Make sure that you have a backup of your database. It will help if the upgrade procedure fails (lack
of disk space, power off, any unexpected problem).
Make a backup copy of Zabbix binaries, configuration files and the PHP file directory.
Configuration files:
mkdir /opt/zabbix-backup/
cp /etc/zabbix/zabbix_server.conf /opt/zabbix-backup/
cp /etc/httpd/conf.d/zabbix.conf /opt/zabbix-backup/
PHP files and Zabbix binaries:
cp -R /usr/share/zabbix/ /opt/zabbix-backup/
cp -R /usr/share/zabbix-* /opt/zabbix-backup/
4 Update repository configuration package
Before proceeding with the upgrade, update your current repository package to the latest version.
On RHEL 9, run:
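A hedged sketch (take the exact zabbix-release package URL for your RHEL version from repo.zabbix.com):
rpm -Uvh https://repo.zabbix.com/zabbix/7.0/rhel/9/x86_64/zabbix-release-latest.el9.noarch.rpm
dnf clean all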
Attention:
Upgrading Zabbix agent 2 with the dnf install zabbix-agent2 command could lead to an error. For more information,
see Known issues.
To upgrade the web frontend with Apache on RHEL 8 correctly, also run:
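For example (the frontend configuration package name is assumed from the Apache/Nginx configuration packages referenced elsewhere in this manual):
dnf install zabbix-apache-conf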
Make sure to review Upgrade notes to check if any changes in the configuration parameters are required.
You may need to clear web browser cookies and web browser cache after the upgrade for Zabbix web interface to work properly.
It is possible to upgrade between minor versions of 7.0.x (for example, from 7.0.1 to 7.0.3). Upgrading between minor versions is
easy.
sudo dnf upgrade 'zabbix-agent2-*'
2 Debian/Ubuntu
Overview
This section provides the steps required for a successful upgrade from Zabbix 6.4.x to Zabbix 7.0.x using official Zabbix packages
for Debian/Ubuntu.
Warning:
Before the upgrade make sure to read the relevant upgrade notes!
Note:
It may be handy to run two parallel SSH sessions during the upgrade, executing the upgrade steps in one and monitoring
the server/proxy logs in another. For example, run tail -f zabbix_server.log or tail -f zabbix_proxy.log
in the second SSH session showing you the latest log file entries and possible errors in real time. This can be critical for
production instances.
Upgrade procedure
Stop Zabbix server to make sure that no new data is inserted into database.
This is a very important step. Make sure that you have a backup of your database. It will help if the upgrade procedure fails (lack
of disk space, power off, any unexpected problem).
Make a backup copy of Zabbix binaries, configuration files and the PHP file directory.
Configuration files:
mkdir /opt/zabbix-backup/
cp /etc/zabbix/zabbix_server.conf /opt/zabbix-backup/
cp /etc/apache2/conf-enabled/zabbix.conf /opt/zabbix-backup/
PHP files and Zabbix binaries:
cp -R /usr/share/zabbix/ /opt/zabbix-backup/
cp -R /usr/share/zabbix-* /opt/zabbix-backup/
4 Update repository configuration package
To proceed with the update, your current repository package has to be uninstalled.
rm -Rf /etc/apt/sources.list.d/zabbix.list
Then, install the latest repository configuration package.
On Debian 12, run:
wget https://repo.zabbix.com/zabbix/7.0/debian/pool/main/z/zabbix-release/zabbix-release_latest+debian12_all.deb
dpkg -i zabbix-release_latest+debian12_all.deb
For older Debian versions, replace the link above with the correct one from Zabbix repository. Note that for older Debian versions,
the packages may not include all components. For Zabbix components included in the packages, see Zabbix packages.
On Ubuntu 24.04, run:
wget https://repo.zabbix.com/zabbix/7.0/ubuntu/pool/main/z/zabbix-release/zabbix-release_latest+ubuntu24.04_all.deb
dpkg -i zabbix-release_latest+ubuntu24.04_all.deb
On Ubuntu 22.04, run:
wget https://repo.zabbix.com/zabbix/7.0/ubuntu/pool/main/z/zabbix-release/zabbix-release_latest+ubuntu22.04_all.deb
dpkg -i zabbix-release_latest+ubuntu22.04_all.deb
For older Ubuntu versions, replace the link above with the correct one from Zabbix repository. Note that for older Ubuntu versions,
the packages may not include all components. For Zabbix components included in the packages, see Zabbix packages.
apt-get update
5 Upgrade Zabbix components
Attention:
Upgrading Zabbix agent 2 with the apt install zabbix-agent2 command could lead to an error. For more information,
see Known issues.
Then, to upgrade the web frontend with Apache correctly, also run:
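For example:
apt install zabbix-frontend-php zabbix-apache-conf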
Make sure to review Upgrade notes to check if any changes in the configuration parameters are required.
After the upgrade you may need to clear web browser cookies and web browser cache for the Zabbix web interface to work properly.
It is possible to upgrade between minor versions of 7.0.x (for example, from 7.0.1 to 7.0.3), and doing so is easy.
Overview
This section describes steps required for a successful upgrade to Zabbix 7.0.x containers.
Separate sets of instructions are available for upgrading individual Zabbix component images and Docker compose files.
Warning:
Before the upgrade make sure to read the relevant upgrade notes!
Attention:
Before starting the upgrade, verify that users have the necessary permissions to the database for upgrading purposes.
For upgrades from Zabbix 6.0 or older, deterministic triggers will need to be created during the upgrade. If binary log-
ging is enabled for MySQL/MariaDB, this requires superuser privileges or setting the variable/configuration parameter
log_bin_trust_function_creators = 1. See Database creation scripts for instructions how to set the variable.
Note that if executing from a console, the variable will only be set temporarily and will be dropped when Docker is restarted. In this case, keep your SQL service running and only stop the zabbix-server service by running 'docker compose down zabbix-server' and then 'docker compose up -d zabbix-server'.
Alternatively, you can set this variable in the configuration file.
Depending on the database size, the upgrade to version 7.0 may take quite a long time.
The steps listed below can be used to upgrade any Zabbix component. Replace zabbix-server-mysql with the required com-
ponent image name.
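A hedged sketch of the preceding steps (pull the new image, then stop the running container) before removing it as shown below:
docker pull zabbix/zabbix-server-mysql:alpine-7.0-latest
docker stop zabbix-server-mysql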
docker rm zabbix-server-mysql
5. Launch the updated container by executing docker run command followed by additional arguments to specify required
environment variables and/or mount points.
Configuration examples
--volumes-from zabbix-snmptraps \
--restart unless-stopped \
-d zabbix/zabbix-server-pgsql:alpine-7.0-latest
More configuration examples, including examples for other Zabbix components, are available on the Installation from containers
page.
Follow upgrade instructions in this section, if you installed Zabbix using compose file.
git pull
git checkout 7.0
3. Start Zabbix components using new compose file:
docker-compose -f ./docker-compose_v3_alpine_mysql_latest.yaml up -d
4. Verify the update:
9 Known issues
zabbix_proxy on MySQL versions 8.0.0-8.0.17 fails with the following ”access denied” error:
[Z3001] connection to database 'zabbix' failed: [1227] Access denied; you need (at least one of) the SUPER
That is due to MySQL 8.0.0 starting to enforce special permissions for setting session variables. However, in 8.0.18 this behavior
was removed: As of MySQL 8.0.18, setting the session value of this system variable is no longer a restricted operation.
The sql_mode setting in MySQL/MariaDB must have the ”STRICT_TRANS_TABLES” mode set. If it is absent, the Zabbix database
upgrade will fail (see also ZBX-19435).
Upgrading Zabbix may fail if database tables were created with MariaDB 10.2.1 and before, because in those versions the default
row format is compact. This can be fixed by changing the row format to dynamic (see also ZBX-17690).
Templates
In dual-stack environments (systems configured to support both IPv4 and IPv6), the hostname localhost typically resolves to
both IPv4 and IPv6 addresses. Due to the common prioritization of IPv6 over IPv4 by many operating systems and DNS resolvers,
Zabbix templates may fail to work correctly if the service being monitored is configured to listen only on IPv4.
Services that are not configured to listen on IPv6 addresses may become inaccessible, leading to monitoring failures. Users might
configure access correctly for IPv4 but still face connectivity issues due to the default behavior of prioritizing IPv6.
A workaround for this is to ensure that the services (Nginx, Apache, PostgreSQL, etc.) are configured to listen on both IPv4 and
IPv6 addresses, and Zabbix server/agent is allowed access via IPv6. Additionally, in Zabbix templates and configurations, use
localhost explicitly instead of 127.0.0.1 to ensure compatibility with both IPv4 and IPv6.
For example, when monitoring PostgreSQL with the PostgreSQL by Zabbix agent 2 template, you may need to edit the
pg_hba.conf file to allow connections for the zbx_monitor user. If the dual-stack environment prioritizes IPv6 (system resolves
localhost to ::1) and you configure localhost but only add an IPv4 entry (127.0.0.1/32), the connection will fail because
there is no matching IPv6 entry.
The following pg_hba.conf file example ensures that the zbx_monitor user can connect to any database from the local machine
using both IPv4 and IPv6 addresses with different authentication methods:
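An illustrative sketch of such entries (the authentication methods shown are examples only and should match your security policy):
# TYPE  DATABASE  USER         ADDRESS         METHOD
host    all       zbx_monitor  localhost       trust
host    all       zbx_monitor  127.0.0.1/32    md5
host    all       zbx_monitor  ::1/128         scram-sha-256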
With EPEL repository installed and enabled, installing Zabbix from packages will lead to EPEL Zabbix packages being installed
rather than official Zabbix packages.
When installing Zabbix from Red Hat Enterprise Linux packages on Red Hat Universal Base Image environments, ensure access to
required repositories and dependencies. Zabbix packages depend on libOpenIPMI.so and libOpenIPMIposix.so libraries,
which are not provided by any package in the default package manager repositories enabled on UBI systems and will result in
installation failures.
The libOpenIPMI.so and libOpenIPMIposix.so libraries are available in the OpenIPMI-libs package, which is provided
by the redhat-#-for-<arch>-appstream-rpms repository. Access to this repository is curated by subscriptions, which, in the
case of UBI environments, get propagated by mounting repository configuration and secrets directories of the RHEL host into the
container file-system namespace.
Database TLS connection is not supported with the ’verify_ca’ option for the DBTLSConnect parameter if MariaDB is used.
When running under high load, and with more than one LLD worker involved, it is possible to run into a deadlock caused by an
InnoDB error related to the row-locking strategy (see upstream bug). The error has been fixed in MySQL since 8.0.29, but not in
MariaDB. For more details, see ZBX-21506.
Events may not get correlated correctly if the time interval between the first and second event is very small, i.e. half a second and
less.
PostgreSQL 11 and earlier versions only support floating point value range of approximately -1.34E-154 to 1.34E+154.
NetBSD 8.0 and newer
Various Zabbix processes may randomly crash on startup on the NetBSD versions 8.X and 9.X. That is due to the too small default
stack size (4MB), which must be increased by running:
ulimit -s 10240
For more information, please see the related problem report: ZBX-18275.
Zabbix agent 2 does not support lookaheads and lookbehinds in regular expressions due to the standard Go regexp library limita-
tions.
IPMI checks
IPMI checks will not work with the standard OpenIPMI library package on Debian prior to 9 (stretch) and Ubuntu prior to 16.04
(xenial). To fix that, recompile OpenIPMI library with OpenSSL enabled as discussed in ZBX-6139.
SSH checks
• Some Linux distributions like Debian, Ubuntu do not support encrypted private keys (with passphrase) if the libssh2 library
is installed from packages. Please see ZBX-4850 for more details.
• When using libssh 0.9.x on some Linux distributions with OpenSSH 8, SSH checks may occasionally report ”Cannot read data
from SSH server”. This is caused by a libssh issue (more detailed report). The error is expected to have been fixed by a
stable libssh 0.9.5 release. See also ZBX-17756 for details.
• Using the pipe ”|” in the SSH script may lead to a ”Cannot read data from SSH server” error. In this case it is recommended
to upgrade the libssh library version. See also ZBX-21337 for details.
ODBC checks
• MySQL unixODBC driver should not be used with Zabbix server or Zabbix proxy compiled against MariaDB connector library
and vice versa, if possible it is also better to avoid using the same connector as the driver due to an upstream bug. Suggested
setup:
PostgreSQL, SQLite or Oracle connector → MariaDB or MySQL unixODBC driver
MariaDB connector → MariaDB unixODBC driver
MySQL connector → MySQL unixODBC driver
• XML data queried from Microsoft SQL Server may get truncated in various ways on Linux and UNIX systems.
• It has been observed that using ODBC checks for monitoring Oracle databases using various versions of Oracle Instant Client
for Linux causes Zabbix server to crash. See also: ZBX-18402, ZBX-20803.
• If using FreeTDS UnixODBC driver, you need to prepend a ’SET NOCOUNT ON’ statement to an SQL query (for example,
SET NOCOUNT ON DECLARE @strsql NVARCHAR(max) SET @strsql = ....). Otherwise, database monitor item in
Zabbix will fail to retrieve the information with an error ”SQL query returned empty result”.
See ZBX-19917 for more information.
The request method parameter, used only in HTTP checks, may be incorrectly set to ’1’, a non-default value for all items as a result
of upgrade from a pre-4.0 Zabbix version. For details on how to fix this situation, see ZBX-19308.
Zabbix server leaks memory on some Linux distributions due to an upstream bug when ”SSL verify peer” is enabled in web scenarios
or HTTP agent. Please see ZBX-10486 for more information and available workarounds.
Simple checks
There is a bug in fping versions earlier than v3.10 that mishandles duplicate echo reply packets. This may cause unexpected results for icmpping, icmppingloss, icmppingsec items. It is recommended to use the latest version of fping. Please see ZBX-11726 for more details.
When containers are running in rootless mode or in an environment with specific restrictions, you may face errors related to fping execution when performing ICMP checks, such as fping: Operation not permitted or all packets to all resources lost.
To fix this problem, add --cap-add=net_raw to ”docker run” or ”podman run” commands.
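For example, a minimal sketch (the container image and name below are illustrative placeholders, not values from this documentation):
# grant the raw-socket capability to the container that runs the ICMP checks
docker run --cap-add=net_raw --name zabbix-proxy -d zabbix/zabbix-proxy-sqlite3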
Additionally, fping execution in non-root environments may require a sysctl modification, for example:
sudo sysctl -w "net.ipv4.ping_group_range=0 1995"
where ”1995” is the zabbix GID. For more details, see ZBX-22833.
SNMP checks
If the OpenBSD operating system is used, a use-after-free bug in the Net-SNMP library up to the 5.7.3 version can cause a crash
of Zabbix server if the SourceIP parameter is set in the Zabbix server configuration file. As a workaround, please do not set the
SourceIP parameter. The same problem also applies to Linux, but it does not cause Zabbix server to stop working. A local patch for the net-snmp package on OpenBSD was applied and will be released with OpenBSD 6.3.
Spikes in SNMP data have been observed that may be related to certain physical factors like voltage spikes in the mains. See ZBX-14318 for more details.
SNMP traps
The ”net-snmp-perl” package, needed for SNMP traps, has been removed in RHEL 8.0-8.2; re-added in RHEL 8.3.
So if you are using RHEL 8.0-8.2, the best solution is to upgrade to RHEL 8.3.
Instances of a Zabbix server alerter process crash have been encountered in RHEL 7. Please see ZBX-10461 for details.
When upgrading Zabbix agent 2 (version 6.0.5 or older) from packages, a plugin-related file conflict error may occur. To fix the
error, back up your agent 2 configuration (if necessary), uninstall agent 2 and install it anew.
It has been observed that frontend locales may flip without apparent logic, i.e., some pages (or parts of pages) are displayed in one language, while other pages (or parts of pages) are displayed in a different language. Typically the problem may appear when there are several users, some of whom use one locale, while others use another.
The problem is related to how setting the locale works in PHP: locale information is maintained per process, not per thread. So, in a multi-thread environment, when several projects are run by the same Apache process, it is possible that the locale gets changed in another thread, which changes how data can be processed in the Zabbix thread.
Changes to Daylight Saving Time (DST) result in irregularities when displaying X axis labels (date duplication, date missing, etc).
Sum aggregation
When using sum aggregation in a graph for a period that is less than one hour, the graph displays incorrect (multiplied) values when data comes from trends.
Text overlapping
For some frontend languages (e.g., Japanese), local fonts can cause text overlapping in graph legend. To avoid this, use version
2.3.0 (or later) of PHP GD extension.
log[] and logrt[] items repeatedly reread log file from the beginning if file system is 100% full and the log file is being appended
(see ZBX-10884 for more information).
Zabbix server generates slow SELECT queries in case of non-existing values for items. This issue is known to occur in MySQL
5.6/5.7 versions (for an extended discussion, see ZBX-10652), and, in specific cases, may also occur in later MySQL versions.
A workaround to this is disabling the index_condition_pushdown or prefer_ordering_index optimizer in MySQL. Note,
however, that this workaround may not fix all issues related to slow queries.
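For example, a minimal sketch of disabling these optimizer features at runtime (verify the setting names against your MySQL version before applying):
# MySQL 5.6/5.7: disable index condition pushdown
mysql -uroot -p -e "SET GLOBAL optimizer_switch='index_condition_pushdown=off';"
# MySQL 8.0.21 and newer: alternatively, disable the ordering-index preference
mysql -uroot -p -e "SET GLOBAL optimizer_switch='prefer_ordering_index=off';"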
Configuration sync might be slow in Zabbix installations with Oracle DB that have a high number of items and item preprocessing steps. This is caused by the speed at which the Oracle database engine processes nclob type fields.
To improve performance, you can convert the field types from nclob to nvarchar2 by manually applying the database patch
items_nvarchar_prepare.sql. Note that this conversion will reduce the maximum field size limit from 65535 bytes to 4000 bytes
for item preprocessing parameters and item parameters such as Description, Script item’s field Script, HTTP agent item’s fields
Request body and Headers, Database monitor item’s field SQL query. Queries to determine template names that need to be
deleted before applying the patch are provided in the patch as a comment. Alternatively, if MAX_STRING_SIZE is set you can
change nvarchar2(4000) to nvarchar2(32767) in the patch queries to set the 32767 bytes field size limit.
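As an illustration only (credentials, connection string and file path are placeholders), the patch could be applied with sqlplus:
# placeholder credentials and path; adjust to your environment
sqlplus zabbix/password@zabbix @/path/to/items_nvarchar_prepare.sql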
API login
A large number of open user sessions can be created when using custom scripts with the user.login method without a following
user.logout.
IPv6 address issue in SNMPv3 traps
Due to a net-snmp bug, IPv6 address may not be correctly displayed when using SNMPv3 in SNMP traps. For more details and a
possible workaround, see ZBX-14541.
A failed login attempt message will display only the first 39 characters of a stored IP address as that’s the character limit in the
database field. That means that IPv6 IP addresses longer than 39 characters will be shown incompletely.
Non-existing DNS entries in a Server parameter of Zabbix agent configuration file (zabbix_agentd.conf) may increase Zabbix
agent response time on Windows. This happens because Windows DNS caching daemon doesn’t cache negative responses for
IPv4 addresses. However, for IPv6 addresses negative responses are cached, so a possible workaround to this is disabling IPv4 on
the host.
YAML export/import
Frontend setup wizard cannot save the configuration file on SUSE with NGINX + php-fpm. This is caused by a setting in the /usr/lib/systemd/system/php-fpm.service unit (introduced in PHP 7.4), which prevents Zabbix from writing to /etc. Two workaround options are available:
• Set the ProtectSystem option to ’true’ instead of ’full’ in the php-fpm systemd unit.
• Manually save the /etc/zabbix/web/zabbix.conf.php file.
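A minimal sketch of the first workaround, using a systemd drop-in override (the unit name may differ between distributions):
# create a drop-in override for php-fpm and relax the ProtectSystem setting
sudo systemctl edit php-fpm.service
# in the editor, add the following two lines, then save and exit:
#   [Service]
#   ProtectSystem=true
sudo systemctl daemon-reload
sudo systemctl restart php-fpm.service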
Though in most cases, Zabbix web service can run with Chromium, on Ubuntu 20.04 using Chromium causes the following error:
If Zabbix is used with MySQL installation on Azure, an unclear error message [9002] Some errors occurred may appear in Zabbix
logs. This generic error text is sent to Zabbix server or proxy by the database. To get more information about the cause of the
error, check Azure logs.
In Zabbix 6.0, support for PCRE2 was added. Even though PCRE is still supported, Zabbix installation packages for RHEL 7 and newer, SLES (all versions), Debian 9 and newer, and Ubuntu 16.04 and newer have been updated to use PCRE2. While providing many benefits, switching to PCRE2 may cause certain existing PCRE regexp patterns to become invalid or to behave differently. In particular, this affects the pattern ^[\w-\.]. To make this regexp valid again without affecting semantics, change the expression to ^[-\w\.]. This happens because PCRE2 treats the dash sign inside a character class as a range delimiter.
The maps in the Geomap widget may not load correctly, if you have upgraded from an older Zabbix version with NGINX and didn’t
switch to the new NGINX configuration file during the upgrade.
To fix the issue, you can discard the old configuration file, use the configuration file from the current version package and reconfigure
it as described in the download instructions in section e. Configure PHP for Zabbix frontend.
Alternatively, you can manually edit an existing NGINX configuration file (typically, /etc/zabbix/nginx.conf). To do so, open the file
and locate the following block:
location ~ /(api\/|conf[^\.]|include|locale|vendor) {
deny all;
return 404;
}
Then, replace this block with:
location ~ /(api\/|conf[^\.]|include|locale) {
deny all;
return 404;
}
location /vendor {
deny all;
return 404;
}
1 Compilation issues
These are the known issues regarding Zabbix compilation from sources. For all other cases, see the Known issues page.
If you install the PCRE library from the popular HP-UX package site http://hpux.connect.org.uk (for example, from file
pcre-8.42-ia64_64-11.31.depot), only the 64-bit version of the library will be installed in the /usr/local/lib/hpux64
directory.
In this case, for successful agent compilation, a customized option is needed for the configure script, for example:
CFLAGS="+DD64" ./configure --enable-agent --with-libpcre-include=/usr/local/include --with-libpcre-lib=/us
Library in a non-standard location
Zabbix allows you to specify a library located in a non-standard location. In the example below, Zabbix will run curl-config from
the specified non-standard location and use its output to determine the correct libcurl to use.
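For illustration, a hedged sketch of such a configure invocation (paths and the selected options are assumptions, not values from this documentation):
# point configure at a curl-config binary in a non-standard location
./configure --enable-server --with-mysql --with-libcurl=/usr/local/bin/curl-config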
Therefore, specifying a component in a non-standard location will not always work when the same component also exists in a
standard location.
For example, if you use a newer libcurl installed in /usr/local with the libcurl package still installed, Zabbix might pick up the
wrong one and compilation will fail:
/usr/bin/ld: ../../src/libs/zbxhttp/libzbxhttp.a(http.o): in function 'zbx_http_convert_to_utf8':
/tmp/zabbix-master/src/libs/zbxhttp/http.c:957: undefined reference to 'curl_easy_header'
collect2: error: ld returned 1 exit status
Here, the function curl_easy_header() is not available in the older /usr/lib/x86_64-linux-gnu/libcurl.so, but is
available in the newer /usr/local/lib/libcurl.so.
The problem lies with the order of linker flags, and one solution is to specify the full path to the library in an LDFLAGS variable:
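For example, a sketch following this approach (the library path and configure options are illustrative assumptions):
# link explicitly against the newer libcurl by passing its full path via LDFLAGS
LDFLAGS="/usr/local/lib/libcurl.so" ./configure --enable-server --with-mysql --with-libcurl=/usr/local/bin/curl-config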
10 Template changes
This page lists all changes to the stock templates that are shipped with Zabbix.
Note:
Upgrading to the latest Zabbix version will not automatically upgrade the templates used. It is suggested to modify the
templates in existing installations by downloading the latest templates from the Zabbix Git repository and importing them
manually into Zabbix. If templates with the same names already exist, the Delete missing options should be checked when
importing to achieve a clean import. This way the old items that are no longer in the updated template will be removed
(note that it will mean losing history of these old items).
• Website by Browser template has been added for monitoring complex websites and web applications.
Updated templates
• Zabbix server health, Remote Zabbix server health, Zabbix proxy health, and Remote Zabbix proxy health templates have been updated according to the changes in network discovery. The item/trigger for monitoring the discoverer process (now removed) has been replaced by items/triggers that measure the process utilization of the discovery manager and discovery worker respectively. A new internal item has been added for monitoring the discovery queue.
• MongoDB node by Zabbix agent 2 template item type of mongodb.version has been changed from Dependent item to
Zabbix agent.
• Oracle by Zabbix agent 2 template item type of oracle.version has been changed from Dependent item to Zabbix agent.
• All templates containing dashboard widgets have been updated according to the Communication framework for widgets
feature. Template dashboard widget elements have been updated to reflect widget field changes in template dashboards.
• Azure by HTTP template has been updated according to the changes to the Plain text widget. The template dashboard widget
name and fields elements have been updated to reflect the replacement of the Plain text widget with the Item history
widget.
• Azure VM Scale Set by HTTP, a supplement to the Azure by HTTP template set.
• Jira Data Center by JMX, a template for monitoring Jira Data Center health.
Updated templates
The templates Zabbix server health, Remote Zabbix server health, Zabbix proxy health, and Remote Zabbix proxy health have
been updated for improved visual representation by transferring item data visualizations from graphs to dashboard widgets and
regrouping the displayed metrics.
These notes are for upgrading from Zabbix 6.4.x to Zabbix 7.0.0.
• Breaking changes - changes that may break existing installations and other critical information related to the upgrade
process
• Other - all remaining information describing the changes in Zabbix functionality
See also:
• Upgrade procedure for all relevant information about upgrading from versions before Zabbix 6.4.0;
• Upgrading HA cluster for instructions on upgrading servers in a high-availability (HA) cluster.
Upgrade process
To complete a Zabbix server upgrade on MySQL/MariaDB successfully, you may need to set GLOBAL log_bin_trust_function_creators = 1 in MySQL if binary logging is enabled, superuser privileges are not available, and log_bin_trust_function_creators = 1 is not set in the MySQL configuration file.
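For example, a minimal sketch (assuming a MySQL account with sufficient privileges):
# allow non-superuser function creation while the upgrade runs
mysql -uroot -p -e "SET GLOBAL log_bin_trust_function_creators = 1;"
# once the upgrade has completed, the setting can be reverted
mysql -uroot -p -e "SET GLOBAL log_bin_trust_function_creators = 0;"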
The minimum required PHP version has been raised from 7.4.0 to 8.0.0.
The default location where Zabbix agent on Windows looks for the configuration file has been changed. Now the agent searches for it in the directory where the agent binary zabbix_agentd.exe is located (instead of C:\zabbix_agentd.conf, as previously).
Zabbix agent 2 on Windows already searched for the default configuration file in the directory where the binary zabbix_agent2.exe is located. However, in the new version agent 2 expects the configuration file to be named zabbix_agent2.conf (instead of zabbix_agent2.win.conf).
See also: Installing Zabbix agent on Windows.
Empty values are now allowed in plugin-related configuration parameters on Zabbix agent 2.
Prior to upgrading to Zabbix 7.0.0, it is necessary to manually upgrade TimescaleDB to use double precision data types if
TimescaleDB is used with compression. You can tell if TimescaleDB is not using double precision data types by the warning in the
System information frontend section or Zabbix server log: ”Database is not upgraded to use double precision values. Support for
the old numeric type will be removed in future versions.”
For more information, see Old numeric (float) value type dropped.
The auditlog table has been converted to hypertable on TimescaleDB in new installations to benefit from automatic partitioning
on time (7 days by default) and better performance.
See also:
• TimescaleDB setup
• Supported TimescaleDB versions
Proxy records have been moved out of the hosts table and are now stored in the new proxy table.
Also, the operational data of proxies (such as last access, version, compatibility) has been moved out of the host_rtdata table and is now stored in the new proxy_rtdata table.
There is also a new proxy object in API. All operations with proxies should be updated to be done via this new proxy object.
Based on the changes to item timeout configuration, both ODBC login timeout and query execution timeout for database monitor
items are now limited to the Timeout parameter value set in the item configuration form.
• wmi.get and wmi.getall, when used with Zabbix agent 2, now return a JSON with boolean values represented as strings (for example, "RealTimeProtectionEnabled": "True" instead of "RealTimeProtectionEnabled": true returned previously) to match the output format of these items on Zabbix agent;
• oracle.ts.stats has a new conname parameter to specify the target container name. The JSON format of the returned
data has been updated. When no tablespace, type, or conname is specified in the key parameters, the returned data will
include an additional JSON level with the container name, allowing differentiation between containers.
• net.dns.* items can no longer be configured without the name parameter. Although always listed as mandatory, the
name parameter, if omitted, would previously resolve to a default value (zabbix.com), which is no longer the case.
For the list of item changes that do not break compatibility, see What’s new in Zabbix 7.0.0.
Zabbix can now read SNMP trap files from the correct place in case the active node is switched in a high-availability setup. However, for this functionality to work, it is required to update the time format in any bash, perl and SNMPTT scripts to ”%Y-%m-%dT%H:%M:%S%z” (i.e. 2024-01-10T11:56:14+0300).
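For example, in a bash trap-handling script the timestamp in the required format could be produced as follows (a sketch only):
# prints a timestamp such as 2024-01-10T11:56:14+0300
date +%Y-%m-%dT%H:%M:%S%z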
Increased maximum size and number of dashboard widgets
The default width has been increased 3 times for all widgets. Note that if you are using custom widgets, you may have to update
the respective parameters of the manifest.json file (for example, when configuring a customized Clock widget, width is to be
changed from 4 to 12).
A widget may now be up to 72 columns in width (previously 24) and 1 to 64 rows in height (previously 2 to 32). The dashboard can
therefore now hold up to 72 widgets horizontally.
The new Item history dashboard widget has replaced the Plain text widget, offering several improvements.
Unlike the Plain Text widget, which only displayed the latest item data in plain text, the Item History widget supports various display
options for multiple item types (numeric, character, log, text, and binary). For example, it can show progress bars or indicators,
images for binary data types (useful for browser items), and highlight text values (useful for log file monitoring).
After the upgrade, all previously configured Plain text widgets will be automatically replaced with Item history widgets, retaining
the same configuration settings. However, any API scripts referencing the Plain Text widget must be updated manually.
API changes
The support for Oracle as a backend database has been deprecated since Zabbix 7.0 and is expected to be completely removed
in future versions.
A software update check is now added by default to new and existing installations - Zabbix frontend will communicate to the public
Zabbix endpoint to check for updates.
Now, if a floating point value is received for an unsigned integer item, the value will be trimmed from the decimal part and saved
as an integer. Previously a floating point value would make an integer item unsupported.
US time format
Time and date displays in the frontend now conform to the US standard time/date display when the default (en_US) frontend
language is used.
Asynchronous pollers
After the upgrade all agent, HTTP agent and walk[OID] SNMP checks will be moved to asynchronous pollers.
Detecting cURL library features at runtime
Previously cURL library features were detected at build time of Zabbix server, proxy or agent. If cURL features were upgraded, to
make use of them the respective Zabbix component had to be recompiled.
Now only a restart is required for upgraded cURL library features to become available in Zabbix. Recompilation is no longer required.
This is true for Zabbix server, proxy or agent.
In addition:
Upon upgrade, global timeouts for all supported item types will be set based on the Timeout parameter value from the server
configuration file. If a proxy is configured, then, by default, it will use the server’s global timeout settings.
When using an upgraded server (version 7.0.0 or newer) with an older proxy or agent, the proxy or agent will work as before:
• the proxy will use the Timeout parameter from the proxy configuration file;
• the agent will use the Timeout parameter from the agent configuration file.
The timeout parameters have been removed from the configuration files of Modbus and MQTT plugins. The request execution
timeouts can now be set using the item configuration form.
Browser items
A new item type - Browser item - has been added to Zabbix, enabling the monitoring of complex websites and web applications
using a browser. Browser items allow the execution of user-defined JavaScript code to simulate browser-related actions such as
clicking, entering text, navigating through web pages, etc.
• Website by Browser template has been added to templates out of the box;
• ITEM_TYPE_BROWSER (22) item type has been added to template or host item, low-level discovery rule, and item prototype
configuration export/import;
• StartBrowserPollers and WebDriverURL Zabbix server/proxy configuration file parameters have been added;
• Browser item timeout has been added to proxy timeouts or global timeouts (if a proxy is not used);
• -w <webdriver url> command-line parameter for enabling browser monitoring has been added to the zabbix_js
command-line utility.
In the new version, the network discovery process has been reworked to allow concurrency between service checks. A new discovery manager process has been added along with a configurable number of discovery workers (or threads). The discovery manager processes discovery rules and creates a discovery job for each rule with tasks (service checks). The service checks are picked up and performed by the discovery workers.
The StartDiscoverers parameter now determines the total number of available discovery workers for discovery. The default number of StartDiscoverers has been increased from 1 to 5, and the range from 0-250 to 0-1000. The discoverer processes from previous Zabbix versions have been dropped.
Additionally, the number of available workers per rule is now configurable in the frontend. This parameter is optional. During the upgrade it will be set to ”One”, as in previous Zabbix versions.
All icons in the frontend have been switched from icon image sheets to fonts.
In Monitoring → Latest data, the subfilter and data are no longer displayed by default if the filter is not set. Note, however, that
previously saved filters that were set using only the subfilter remain unaffected. In such cases, the subfilter will remain visible,
and data will be displayed even without the main filter being set.
Configuration parameters
The default value for several configuration parameters has been changed:
• BufferSize configuration parameter for Zabbix agent 2 has been increased from 100 to 1000;
• Plugins.<PluginName>.System.Capacity configuration parameter for Zabbix agent 2 has been increased from 100 to 1000
(maximum). Note that the parameter Plugins.<PluginName>.Capacity, deprecated in Zabbix 6.0, has been removed
completely;
• StartAgents configuration parameter for Zabbix agent has been increased from 3 to 10. Note that in packages for smaller systems (e.g., Raspberry Pi) the default value may remain 3.
These changes do not affect existing installations where these parameters are explicitly set.
Aggregate calculations
• Aggregate functions now also support non-numeric types for calculation. This may be useful, for example, with the count
and count_foreach functions.
• The count and count_foreach aggregate functions support optional parameters operator and pattern, which can be used to
fine-tune item filtering and only count values that match given criteria.
• All foreach functions no longer include unsupported items in the count.
• The function last_foreach, previously configured to ignore the time period argument, accepts it as an optional parameter.
Since Zabbix 5.0.0, numeric (float) data type supports precision of approximately 15 digits and range from approximately -
1.79E+308 to 1.79E+308. This is implemented by default in new installations. When upgrading existing installations that had
been created before Zabbix 5.0, a database upgrade patch is applied automatically, except for TimescaleDB with compression.
For Oracle databases, older versions of MySQL databases, and large installations, the patch execution can take a lot of time. For
this reason it is recommended to update the data type manually before starting the upgrade.
Attention:
The patch alters data columns of history and trends tables, which usually contain lots of data, therefore it is expected to
take some time to complete. Since the exact estimate cannot be predicted and depends on server performance, database
management system configuration and version, it is recommended to first test the patch outside the production environ-
ment. With MySQL 8.0 and MariaDB 10.5 configured by default, the patch is known to be executed instantly for large tables
due to efficient algorithm and the fact that previously the same double type was used but with limited precision, meaning
that data itself does not need to be modified.
Please execute the patch (SQL files) for your database, as described on the Database upgrade to primary keys and double precision
data types page.
Note that with TimescaleDB, compression support must only be turned on after applying this patch.
Warning:
Important! Run these scripts for the server database only.
To apply a patch:
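For example, on MySQL the double precision patch might be applied as follows (database name, user and file location are placeholders; use the exact files and procedure from the page linked above for your database):
# placeholder example; adjust credentials, database name and patch path
mysql -uzabbix -p zabbix < double.sql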
The option to set the startup type of the Zabbix agent/agent 2 Windows service (-S --startup-type) has been added. This
option allows configuring the agent/agent 2 service to start automatically at Windows startup (automatic), after the automatically
started services have completed startup (delayed), when started manually by a user or application (manual) or to disable the
service entirely (disabled).
When performing Windows agent installation from MSI, the default startup type on Windows Server 2008/Vista and later versions
is now delayed if not specified otherwise in the STARTUPTYPE command-line parameter. This improves the reliability and perfor-
mance of the Zabbix agent/agent 2 Windows service, particularly during system restarts.
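For illustration, a hedged sketch of setting the startup type when installing the agent service from the command line (the configuration file path is a placeholder; see the agent command-line help for the exact syntax):
zabbix_agentd.exe --config C:\zabbix\zabbix_agentd.conf --install --startup-type delayed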
Templates
For new templates and changes to existing templates, see Template changes.
When installing Zabbix from packages and preparing the database schema, the location of database-related files has changed to
better correspond to the file structure in the sources:
• Base initialization files (schema.sql, data.sql, images.sql) are located at the root of the database directory.
• Optional files/patches for upgrading database tables are located in the option-patches directory.
• Database extensions and addons are now subdirectories, named after the extension and located in the respective
database directory.
• TimescaleDB-specific changes:
– The abbreviation tsdb has been replaced by timescaledb.
– The option-patches directory includes with-compression and without-compression subdirectories; these
contain optional files/patches for upgrading database tables depending on TimescaleDB compression settings.
– The hypertable schema creation file for TimescaleDB has been moved to database/postgresql/timescaledb/schema.sql.
Below is a comparison of the previous and current directory structures for MySQL and PostgreSQL databases.
Previous:

database
├── mysql
│   ├── data.sql
│   ├── double.sql
│   ├── history_pk_prepare.sql
│   ├── images.sql
│   └── schema.sql
├── postgresql
│   ├── tsdb_history_pk_upgrade_no_compression
│   │   ├── history_pk.sql
│   │   ├── history_pk_log.sql
│   │   ├── history_pk_str.sql
│   │   ├── history_pk_text.sql
│   │   └── history_pk_uint.sql
│   ├── tsdb_history_pk_upgrade_with_compression
│   │   ├── history_pk.sql
│   │   ├── history_pk_log.sql
│   │   ├── history_pk_str.sql
│   │   ├── history_pk_text.sql
│   │   └── history_pk_uint.sql
│   ├── data.sql
│   ├── double.sql
│   ├── history_pk_prepare.sql
│   ├── images.sql
│   ├── schema.sql
│   └── timescaledb.sql
└── ...

Current:

database
├── mysql
│   ├── option-patches
│   │   ├── double.sql
│   │   └── history_pk_prepare.sql
│   ├── data.sql
│   ├── images.sql
│   └── schema.sql
├── postgresql
│   ├── option-patches
│   │   ├── double.sql
│   │   └── history_pk_prepare.sql
│   ├── timescaledb
│   │   ├── option-patches
│   │   │   ├── with-compression
│   │   │   │   ├── history_pk.sql
│   │   │   │   ├── history_pk_log.sql
│   │   │   │   ├── history_pk_str.sql
│   │   │   │   ├── history_pk_text.sql
│   │   │   │   ├── history_pk_uint.sql
│   │   │   │   └── trends_upgrade.sql
│   │   │   └── without-compression
│   │   │       ├── history_pk.sql
│   │   │       ├── history_pk_log.sql
│   │   │       ├── history_pk_str.sql
│   │   │       ├── history_pk_text.sql
│   │   │       ├── history_pk_uint.sql
│   │   │       └── trends_upgrade.sql
│   │   └── schema.sql
│   ├── data.sql
│   ├── images.sql
│   └── schema.sql
└── ...
Please update your scripts, if any, containing the previous directory structure.
A new index has been added to the auditlog table to improve database and frontend response times when filtering records by
Recordset ID in the Audit log.
Note that users with large audit logs may experience extended upgrade times due to the database size.
The net.dns.perf item now returns a response time instead of 0 when the DNS server responds with an error code (for example,
NXDOMAIN or SERVFAIL).
5 Quickstart
Overview
In this section, you will learn how to log in and set up a system user in Zabbix.
Login
This is the Zabbix welcome screen. Enter the user name Admin with password zabbix to log in as a Zabbix superuser. Access to
all menu sections will be granted.
In case of five consecutive failed login attempts, Zabbix interface will pause for 30 seconds in order to prevent brute force and
dictionary attacks.
The IP address of a failed login attempt will be displayed after a successful login.
Adding user
In the new user form, make sure to add your user to one of the existing user groups, for example ’Zabbix administrators’.
By default, new users have no media (notification delivery methods) defined for them. To create one, go to the ’Media’ tab and
click on Add.
In this pop-up, enter an email address for the user.
You can specify a time period when the medium will be active (see Time period specification page for a description of the format),
by default a medium is always active. You can also customize trigger severity levels for which the medium will be active, but leave
all of them enabled for now.
The Permissions tab has a mandatory field Role. The role determines which frontend elements the user can view and which actions the user is allowed to perform. Press Select and select one of the roles from the list. For example, select Admin role to allow access to all
Zabbix frontend sections, except Administration. Later on, you can modify permissions or create more user roles. Upon selecting
a role, permissions will appear in the same tab:
Click Add in the user properties form to save the user. The new user appears in the userlist.
Adding permissions
By default, a new user has no permissions to access hosts and templates. To grant the user rights, click on the group of the user in
the Groups column (in this case - ’Zabbix administrators’). In the group properties form, go to the Host permissions tab to assign
permissions to host groups. Click on for the host group selection field to be displayed:
Then click on Select next to the field to see the list of the host groups. This user is to have read-only access to Linux servers group,
so mark the appropriate checkbox in the list and click on Select to confirm your choice.
Click the Read button to set the permission level and then Update to save the changes made to the user group configuration.
To grant permissions to templates, you will need to switch to the Template permissions tab and specify template groups.
Attention:
In Zabbix, access rights to hosts and templates are assigned to user groups, not individual users.
Done! You may try to log in using the credentials of the new user.
2 New host
Overview
A host in Zabbix is a networked entity (physical, virtual) that you wish to monitor. The definition of what can be a ”host” in Zabbix
is quite flexible. It can be a physical server, a network switch, a virtual machine or some application.
Adding host
Information about configured hosts in Zabbix is available in Data collection → Hosts and Monitoring → Hosts. There is already one
pre-defined host, called ”Zabbix server”, but we want to learn adding another.
To add a new host, click on Create host. This will present us with a host configuration form.
All mandatory input fields are marked with a red asterisk.
Host name
• Enter a host name. Alphanumerics, spaces, dots, dashes and underscores are allowed.
Host groups
• Select one or several existing groups by clicking Select button or enter a non-existing group name to create a new group.
Note:
All access permissions are assigned to host groups, not individual hosts. That is why a host must belong to at least one
group.
Interfaces: IP address
• Although not a required field technically, a host interface is necessary for collecting certain metrics. To use Zabbix agent
passive checks, specify the agent’s IP or DNS in this field. Note that you should also specify Zabbix server’s IP or DNS in the
Zabbix agent configuration file ’Server’ directive. If Zabbix agent and Zabbix server are installed on the same machine, you
need to specify the same IP/DNS in both places.
When done, click Add. Your new host should be visible in the host list.
The Availability column contains indicators of host availability per each interface. We have defined a Zabbix agent interface, so
we can use the agent availability icon (with ’ZBX’ on it) to understand host availability:
• - host status has not been established; no metric check has happened yet
• - host is unavailable, a metric check has failed (move your mouse cursor over the icon to see the error message).
There might be some error with communication, possibly caused by incorrect interface credentials. Check that Zabbix server
is running, and try refreshing the page later as well.
3 New item
Overview
Items are the basis of gathering data in Zabbix. Without items, there is no data - because only an item defines a single metric or
what kind of data to collect from a host.
Adding item
All items are grouped around hosts. That is why to configure a sample item we go to Data collection → Hosts and find the ”New
host” we have created.
Click on the Items link in the row of ”New host”, and then click on Create item. This will present us with an item definition form.
Name
• Enter CPU load as the value. This will be the item name displayed in lists and elsewhere.
Key
• Manually enter system.cpu.load as the value. This is the technical name of an item that identifies the type of information
that will be gathered. The particular key is just one of pre-defined keys that come with Zabbix agent.
Type of information
• This attribute defines the format of the expected data. For the system.cpu.load key, this field will be automatically set to
Numeric (float).
Note:
You may also want to reduce the number of days item history will be kept, to 7 or 14. This is good practice to relieve the
database from keeping lots of historical values.
When done, click Add. The new item should appear in the item list. Click on Details above the list to view what exactly was done.
Viewing data
With an item defined, you might be curious if it is actually gathering data. For that, go to Monitoring → Latest data, select ’New
host’ in the filter and click on Apply.
With that said, it may take up to 60 seconds for the first data to arrive. That, by default, is how often the server reads configuration
changes and picks up new items to execute.
If you see no value in the ’Change’ column, maybe only one value has been received so far. Wait 30 seconds for another value to
arrive.
If you do not see information about the item as in the screenshot, make sure that:
• you have filled out the item ’Key’ and ’Type of information’ fields exactly as in the screenshot;
• both the agent and the server are running;
• host status is ’Monitored’ and its availability icon is green;
• the host selected in the host filter is correct;
• the item is enabled.
Graphs
With the item working for a while, it might be time to see something visual. Simple graphs are available for any monitored numeric
item without any additional configuration. These graphs are generated on runtime.
To view the graph, go to Monitoring → Latest data and click on the ’Graph’ link next to the item.
4 New trigger
Overview
Items only collect data. To automatically evaluate incoming data we need to define triggers. A trigger contains an expression that
defines a threshold of what is an acceptable level for the data.
If that level is surpassed by the incoming data, a trigger will ”fire” or go into a ’Problem’ state - letting us know that something has
happened that may require attention. If the level is acceptable again, trigger returns to an ’Ok’ state.
Adding trigger
To configure a trigger for our item, go to Data collection → Hosts, find ’New host’ and click on Triggers next to it and then on Create
trigger. This presents us with a trigger definition form.
Name
• Enter CPU load too high on ’New host’ for 3 minutes as the value. This will be the trigger name displayed in lists and
elsewhere.
Expression
This is the trigger expression. Make sure that the expression is entered right, down to the last symbol. The item key here (system.cpu.load) is used to refer to the item. This particular expression basically says that the problem threshold is exceeded when the CPU load average value for 3 minutes is over 2, for example, an expression such as avg(/New host/system.cpu.load,3m)>2. You can learn more about the syntax of trigger expressions.
When done, click Add. The new trigger should appear in the trigger list.
If the CPU load has exceeded the threshold level you defined in the trigger, the problem will be displayed in Monitoring → Problems.
The flashing in the status column indicates a recent change of trigger status, one that has taken place in the last 30 minutes.
5 Receiving problem notification
Overview
In this section you will learn how to set up alerting in the form of notifications in Zabbix.
With items collecting data and triggers designed to ”fire” upon problem situations, it would also be useful to have some alerting
mechanism in place that would notify us about important events even when we are not directly looking at Zabbix frontend.
This is what notifications do. Email being the most popular delivery method for problem notifications, we will learn how to set up
an email notification.
Email settings
Initially there are several predefined notification delivery methods in Zabbix. Email is one of those.
To configure email settings, go to Alerts → Media types and click on Email in the list of pre-defined media types.
All mandatory input fields are marked with a red asterisk.
In the Media type tab, set the values of SMTP server, SMTP helo and SMTP email to those appropriate for your environment.
Note:
’SMTP email’ will be used as the ’From’ address for the notifications sent from Zabbix.
Next, it is required to define the content of the problem message. The content is defined by means of a message template,
configured in the Message templates tab.
Click on Add to create a message template, and select Problem as the message type.
Click on Add when ready and save the form.
Now you have configured ’Email’ as a working media type. The media type must also be linked to users by defining specific delivery
addresses (like we did when configuring a new user), otherwise it will not be used.
New action
Delivering notifications is one of the things actions do in Zabbix. Therefore, to set up a notification, go to Alerts → Actions →
Trigger actions and click on Create action.
In the simplest case, if we do not add any more specific conditions, the action will be taken upon any trigger status change from ’Ok’ to ’Problem’.
We still should define what the action should do - and that is done in the Operations tab. Click on Add in the Operations block,
which opens a new operation form.
All mandatory input fields are marked with a red asterisk.
Here, click on Add in the Send to Users block and select the user (’user’) we have defined. Select ’Email’ as the value of Send only
to. When done with this, click on Add, and the operation should be added:
That is all for a simple action configuration, so click Add in the action form.
Receiving notification
Now, with delivering notifications configured, it would be fun to actually receive one. To help with that, we might on purpose
increase the load on our host - so that our trigger ”fires” and we receive a problem notification.
For example, run a CPU-intensive process on the host:
cat /dev/urandom | md5sum
You may run one or several of these processes.
Now go to Monitoring → Latest data and see how the values of ’CPU Load’ have increased. Remember, for our trigger to fire, the
’CPU Load’ value has to go over ’2’ for 3 minutes running. Once it does:
• in Monitoring → Problems you should see the trigger with a flashing ’Problem’ status
• you should receive a problem notification in your email
Attention:
If notifications do not work:
• verify once again that both the email settings and the action have been configured properly
• make sure the user you created has at least read permissions on the host which generated the event, as noted in
the Adding user step. The user, being part of the ’Zabbix administrators’ user group, must have at least read access to the ’Linux servers’ host group that our host belongs to.
• Additionally, you can check out the action log by going to Reports → Action log.
6 New template
Overview
Previously we learned how to set up an item, a trigger and how to get a problem notification for the host.
While all of these steps offer a great deal of flexibility in themselves, it may appear like a lot of steps to take if needed for, say, a
thousand hosts. Some automation would be handy.
This is where templates come to help. Templates allow grouping useful items, triggers and other entities so that they can be reused again and again by applying them to hosts in a single step.
When a template is linked to a host, the host inherits all entities of the template. So, basically a pre-prepared bunch of checks can
be applied very quickly.
Adding template
To start working with templates, we must first create one. To do that, in Data collection → Templates click on Create template. This
will present us with a template configuration form.
Template name
Template groups
• Select one or several groups by clicking Select button. The template must belong to a group.
Note:
Access permissions to template groups are assigned in the user group configuration on the Template permissions tab
in the same way as host permissions. All access permissions are assigned to groups, not individual templates, that’s why
including the template into at least one group is mandatory.
When done, click Add. Your new template should be visible in the list of templates. You can also use the filter to find your template.
As you may see, the template is there, but it holds nothing in it - no items, triggers or other entities.
To add an item to the template, go to the item list for ’New host’. In Data collection → Hosts click on Items next to ’New host’.
Then:
• Click on Copy.
If you now go to Data collection → Templates, ’New template’ should have one new item in it.
We will stop at one item only for now, but similarly you can add any other items, triggers or other entities to the template until it’s a fairly complete set of entities for a given purpose (monitoring an OS, monitoring a single application).
With a template ready, it only remains to add it to a host. For that, go to Data collection → Hosts, click on ’New host’ to open its
property form and find the Templates field.
Start typing New template in the Templates field. The name of the template we have created should appear in the dropdown list. Scroll down to select it. See that it appears in the Templates field.
Click Update in the form to save the changes. The template is now added to the host, with all entities that it holds.
This way it can be applied to any other host as well. Any changes to the items, triggers and other entities at the template level
will propagate to the hosts the template is linked to.
As you may have noticed, Zabbix comes with a set of predefined templates for various OS, devices and applications. To get started
with monitoring very quickly, you may link the appropriate one of them to a host, but beware that these templates need to be
fine-tuned for your environment. Some checks may not be needed, and polling intervals may be way too frequent.
6 Zabbix appliance
Overview
As an alternative to setting up manually or reusing an existing server for Zabbix, users may download a Zabbix appliance or a Zabbix appliance installation CD image.
Zabbix appliance installation CD can be used for instant deployment of Zabbix server (MySQL).
Attention:
You can use this Appliance to evaluate Zabbix. The Appliance is not intended for serious production use.
System requirements:
• RAM: 1.5 GB
• Disk space: at least 8 GB should be allocated for the virtual machine
• CPU: 2 cores minimum
Zabbix appliance contains a Zabbix server (configured and running on MySQL) and a frontend.
Zabbix appliance is available in the following formats:
• VMware (.vmx)
• Open virtualization format (.ovf)
• Microsoft Hyper-V 2012 (.vhdx)
• Microsoft Hyper-V 2008 (.vhd)
• KVM, Parallels, QEMU, USB stick, VirtualBox, Xen (.raw)
• KVM, QEMU (.qcow2)
To get started, boot the appliance and point a browser at the IP the appliance has received over DHCP.
Attention:
DHCP must be enabled on the host.
To find out the IP address of the appliance, run the following command in the appliance console:
ip addr show
To access Zabbix frontend, go to http://<host_ip> (for access from the host’s browser bridged mode should be enabled in the VM
network settings).
Note:
If the appliance fails to start up in Hyper-V, you may want to press Ctrl+Alt+F2 to switch tty sessions.
1 Changes to AlmaLinux 8 configuration
The appliance is based on AlmaLinux 8. There are some changes applied to the base AlmaLinux configuration.
1.1 Repositories
[zabbix]
name=Zabbix Official Repository - $basearch
baseurl=http://repo.zabbix.com/zabbix/7.0/rhel/8/$basearch/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-ZABBIX-A14FE591
1.2 Firewall configuration
By default the appliance uses DHCP to obtain the IP address. To specify a static IP address:
By default the appliance uses UTC for the system clock. To change the time zone, copy the appropriate file from /usr/share/zoneinfo
to /etc/localtime, for example:
cp /usr/share/zoneinfo/Europe/Riga /etc/localtime
2 Zabbix configuration
Zabbix appliance setup has the following passwords and configuration changes:
System:
• root:zabbix
Zabbix frontend:
• Admin:zabbix
Database:
• root:<random>
• zabbix:<random>
Note:
Database passwords are randomly generated during the installation process.
The database root password is stored in the /root/.my.cnf file. It is not required to enter the password when working under the system ”root” account.
To change the database user password, changes have to be made in the following locations:
• MySQL;
• /etc/zabbix/zabbix_server.conf;
• /etc/zabbix/web/zabbix.conf.php.
Note:
Separate users zabbix_srv and zabbix_web are defined for the server and the frontend respectively.
This can be customized in /etc/nginx/conf.d/zabbix.conf. Nginx has to be restarted after modifying this file. To do so, log in
using SSH as root user and execute:
4 Firewall
By default, only the ports listed in the configuration changes above are open. To open additional ports, modify the ”/etc/sysconfig/iptables” file and reload the firewall rules:
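For example (assuming the iptables-services package provides the iptables unit on the appliance):
# reload firewall rules after editing /etc/sysconfig/iptables
systemctl reload iptables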
The images in vmdk format are usable directly in VMware Player, Server and Workstation products. For use in ESX, ESXi and vSphere they must be converted using VMware converter. If you use VMware Converter, you may encounter issues with the hybrid network adapter. In that case, you can try specifying the E1000 adapter during the conversion process. Alternatively, after the conversion is complete, you can delete the existing adapter and add an E1000 adapter.
To write the raw image to a flash drive or HDD, run:
dd if=./zabbix_appliance_7.0.0.raw of=/dev/sdc bs=4k conv=fdatasync
Replace /dev/sdc with your Flash/HDD disk device.
7 Configuration
1 Configuring a template
Overview
Configuring a template requires that you first create a template by defining its general parameters and then you add entities
(items, triggers, graphs, etc.) to it.
Creating a template
Template attributes:
Parameter Description
Vendor and version - Template vendor and version; displayed only when updating existing templates (out-of-the-box templates provided by Zabbix, imported templates, or templates modified through the Template API) if the template configuration contains such information. Cannot be modified in Zabbix frontend.
For out-of-the-box templates, version is displayed as follows: major version of Zabbix, delimiter (”-”), revision number (increased with each new version of the template, and reset with each major version of Zabbix). For example, 6.4-0, 6.4-5, 7.0-0, 7.0-3.
The Tags tab allows you to define template-level tags. All problems of hosts linked to this template will be tagged with the values
entered here.
User macros, {INVENTORY.*} macros, {HOST.HOST}, {HOST.NAME}, {HOST.CONN}, {HOST.DNS}, {HOST.IP}, {HOST.PORT} and
{HOST.ID} macros are supported in tags.
The Macros tab allows you to define template-level user macros as name-value pairs. Note that macro values can be kept as plain text, secret text, or Vault secret. Adding a description is also supported.
If you select the Inherited and template macros option, you may also view macros from linked templates and global macros that
the template will inherit, as well as the values that the macros will resolve to.
For convenience, links to the respective templates, as well as a link to global macro configuration is provided. It is also possible to
edit a linked template macro or global macro on the template level, effectively creating a copy of the macro on the template.
The Value mapping tab allows to configure human-friendly representation of item data in value mappings.
Buttons:
• Add - add the template. The added template should appear in the list.
• Clone - create another template based on the properties of the current template, including the entities (items, triggers, etc.) both inherited from linked templates and directly attached to the current template, but excluding template vendor and version.
• Delete - delete the template; entities of the template (items, triggers, etc.) remain with the linked hosts.
• Delete and clear - delete the template and all its entities from linked hosts.
Attention:
Items have to be added to a template first. Triggers and graphs cannot be added without the corresponding item.
Adding triggers and graphs is done in a similar fashion (from the list of triggers and graphs respectively), again, keeping in mind
that they can only be added if the required items are added first.
Adding dashboards
Attention:
When configuring widgets on a template dashboard (instead of a global dashboard), the host-related parameters are not
available, and some parameters have a different label. This is because template dashboards display data only from the host
that the template is linked to. For example, the parameters Host groups, Exclude host groups and Hosts in the Problems
widget are not available, the parameter Host groups in the Host availability widget is not available, and the parameter
Show hosts in maintenance is renamed to Show data in maintenance, etc. For more information on the availability of
parameters in template dashboard widgets, see specific parameters for each dashboard widget.
Note:
For details on accessing host dashboards that are created from template dashboards, see the host dashboards section.
Attention:
Only Super Admin users can create template groups.
To create a nested template group, use the ’/’ forward slash separator, for example Linux servers/Databases/MySQL. You can create this group even if none of the two parent template groups (Linux servers/Databases/) exist. In this case creating these parent template groups is up to the user; they will not be created automatically.
Leading and trailing slashes, several slashes in a row are not allowed. Escaping of ’/’ is not supported.
Once the group is created, you can click on the group name in the list to edit group name, clone the group or set additional option:
Apply permissions to all subgroups - mark this checkbox and click on Update to apply the same level of permissions to all nested
template groups. For user groups that may have had differing permissions assigned to nested template groups, the permission
level of the parent template group will be enforced on the nested groups. This is a one-time option that is not saved in the database.
• When creating a child template group to an existing parent template group, user group permissions to the child are inherited
from the parent (for example, when creating Databases/MySQL if Databases already exists).
• When creating a parent template group to an existing child template group, no permissions to the parent are set (for example,
when creating Databases if Databases/MySQL already exists).
2 Linking/unlinking
Overview
Linking is a process whereby templates are applied to hosts, whereas unlinking removes the association with the template from a
host.
Linking a template
The host will now have all the entities of the template. This includes items, triggers, graphs, low-level discovery rules, web
scenarios, as well as dashboards.
Attention:
Linking multiple templates to the same host will fail if those templates contain items with the same item key. And, as
triggers and graphs use items, they cannot be linked to a single host from multiple templates either, if using identical item
keys.
When entities (items, triggers, etc.) are added from the template:
• previously existing identical entities on the host are updated as entities of the template, and any existing host-level
customizations to the entity are lost;
• entities from the template are added;
• any directly linked entities that, prior to template linkage, existed only on the host remain untouched.
In the lists, all entities from the template now are prefixed by the template name, indicating that these belong to the particular
template. The template name itself (in gray text) is a link allowing to access the list of those entities on the template level.
Note:
For some items, such as external checks, HTTP agent checks, simple checks, SSH checks and Telnet checks, a host interface
is optional. If, at the time of linking a template, the host does not have an interface defined, these items will be added without an interface. If you add a host interface later, it will not be assigned automatically to already existing items. To
assign the newly added host interface to all template items at once, unlink the template from the host and then link it back
again. To preserve item history use the option Unlink, do not use Unlink and clear.
If some entity is not prefixed by the template name, it means that it existed on the host before and was not added by the template.
When adding entities (items, triggers, etc.) from a template it is important to know which of those entities already exist on the host and need to be updated and which entities differ. The uniqueness criteria for deciding upon the sameness/difference are:
• for items - the item key;
• for triggers - the trigger name and expression;
• for graphs - the graph name and its items.
To update template linkage of many hosts, in Data collection → Hosts select some hosts by marking their checkboxes, then click
on Mass update below the list and then select Link templates:
To link additional templates, start typing the template name in the auto-complete field until a dropdown appears offering the
matching templates. Just scroll down to select the template to link.
The Replace option will allow to link a new template while unlinking any template that was linked to the hosts before. The Unlink
option will allow to specify which templates to unlink. The Clear when unlinking option will allow to not only unlink any previously
linked templates, but also remove all entities inherited from them (items, triggers, etc.).
Note:
Zabbix offers a sizable set of predefined templates. You can use these for reference, but beware of using them unchanged in production as they may contain too many items and poll for data too often. If you feel like using them, fine-tune them to fit your real needs.
If you try to edit an item or a trigger that was linked from the template, you may realize that many key options are disabled for
editing. This makes sense as the idea of templates is that things are edited in a one-touch manner on the template level. However,
you still can, for example, enable/disable an item on individual hosts and set the update interval, history length and some other
parameters.
If you want to edit the entity fully, you have to edit it on the template level (template level shortcut is displayed in the form name),
keeping in mind that these changes will affect all hosts that have this template linked to them.
Attention:
Any customizations to the entities implemented on a template-level will override the previous customizations of the entities
on a host-level.
Unlinking a template
Choosing the Unlink option will simply remove association with the template, while leaving all its entities with the host. This
includes items, triggers, graphs, low-level discovery rules, and web scenarios, but excludes dashboards. Note that value maps and
tags inherited from linked templates will also be removed.
Choosing the Unlink and clear option will remove both the association with the template and all its entities (items, triggers, etc.).
3 Nesting
Overview
As it makes sense to separate out entities on individual templates for various services, applications, etc., you may end up with quite a few templates, all of which may need to be linked to quite a few hosts. To simplify the picture, it is possible to link several templates together in a single template.
The benefit of nesting is that you have to link only one template to the host, and the host will automatically inherit all entities from
the templates that are linked to the one template. For example, if we link T1 and T2 to T3, we supplement T3 with all entities from
T1 and T2, but not vice versa. If we link T1 to T2 and T3, we supplement T2 and T3 with entities from T1.
To link templates, you need to take an existing template (or create a new one), and then:
Thus, all entities of the configured template, as well as all entities of linked templates will now appear in the template configuration.
This includes items, triggers, graphs, low-level discovery rules, and web scenarios, but excludes dashboards. However, linked
template dashboards will, nevertheless, be inherited by hosts.
To unlink any of the linked templates, click on Unlink or Unlink and clear in the template configuration form, and then click on
Update.
The Unlink option will simply remove the association with the linked template, while not removing any of its entities (items, triggers,
etc.).
The Unlink and clear option will remove both the association with the linked template, as well as all its entities (items, triggers,
etc.).
4 Mass update
Overview
Sometimes you may want to change some attribute for a number of templates at once. Instead of opening each individual template
for editing, you may use the mass update function for that.
1. Mark the checkboxes before the templates you want to update in the template list.
2. Click on Mass update below the list.
3. Navigate to the tab with required attributes (Template, Tags, Macros or Value mapping).
4. Mark the checkboxes of any attribute to update and enter a new value for them.
The following options are available when selecting the respective button for the Link templates update:
To specify the templates to link/unlink, start typing the template name in the auto-complete field until a dropdown appears offering the matching templates. Scroll down to select the required templates.
The Clear when unlinking option will not only unlink any previously linked templates, but also remove all elements inherited from them (items, triggers, graphs, etc.).
The following options are available when selecting the respective button for the Template groups update:
• Add - allows specifying additional template groups from the existing ones or entering completely new template groups for the templates;
• Replace - will remove the template from any existing template groups and replace them with the one(s) specified in this field (existing or new template groups);
• Remove - will remove specific template groups from templates.
These fields are auto-complete - starting to type in them offers a dropdown of matching template groups. If a template group is new, it also appears in the dropdown, indicated by (new) after the string. Just scroll down to select.
User macros, {INVENTORY.*} macros, {HOST.HOST}, {HOST.NAME}, {HOST.CONN}, {HOST.DNS}, {HOST.IP}, {HOST.PORT} and
{HOST.ID} macros are supported in tags. Note that tags with the same name, but different values are not considered ’duplicates’
and can be added to the same template.
The following options are available when selecting the respective button for macros update:
• Add - allows specifying additional user macros for the templates. If the Update existing checkbox is checked, the value, type and description for the specified macro name will be updated. If unchecked, a macro with that name that already exists on the template(s) will not be updated.
• Update - will replace the values, types and descriptions of the macros specified in this list. If the Add missing checkbox is checked, macros that did not previously exist on a template will be added as new macros. If unchecked, only macros that already exist on a template will be updated.
• Remove - will remove the specified macros from templates. If the Except selected box is checked, all macros except those specified in the list will be removed. If unchecked, only the macros specified in the list will be removed.
• Remove all - will remove all user macros from templates. If the I confirm to remove all macros checkbox is not checked, a new popup window will open asking to confirm the removal of all macros.
The Value mapping tab allows you to mass update value mappings.
Buttons with the following options are available for value map update:
• Add - add value maps to the templates. If you mark Update existing, all properties of the value map with this name will be
updated. Otherwise, if a value map with that name already exists, it will not be updated.
• Update - update existing value maps. If you mark Add missing, a value map that didn’t previously exist on a template will
be added as a new value map. Otherwise only the value maps that already exist on a template will be updated.
• Rename - give a new name to an existing value map.
• Remove - remove the specified value maps from the templates. If you mark Except selected, all value maps will be removed
except the ones that are specified.
• Remove all - remove all value maps from the templates. If the I confirm to remove all value maps checkbox is not marked,
a new popup window will open asking to confirm the removal.
Add from template and Add from host options are available for value mapping add/update operations. They allow selecting value mappings from a template or a host, respectively.
When done with all required changes, click on Update. The attributes will be updated accordingly for all the selected templates.
What is a ”host”?
In Zabbix, a ”host” refers to any physical or virtual device, application, service, or any other logically-related collection of monitored
parameters.
Creating hosts is one of the first monitoring tasks in Zabbix. For example, if you want to monitor some parameters on a server "x", you must first create a host called, say, "Server X", and then you can start adding monitoring items to it.
1 Configuring a host
Overview
• Enter parameters of the host in the form
You can also use the Clone button in the configuration form of an existing host to create a new host. This host will have all of
the properties of the existing host, including linked templates, entities (items, triggers, etc.) from those templates, as well as the
entities directly attached to the existing host.
Note that when a host is cloned, it will retain all template entities as they are originally on the template. Any changes to those
entities made on the existing host level (such as changed item interval, modified regular expression or added prototypes to the
low-level discovery rule) will not be cloned to the new host; instead they will be as on the template.
Configuration
Parameter Description
Host name Enter a unique host name. Alphanumerics, spaces, dots, dashes and underscores are allowed. However, leading and trailing spaces are disallowed.
Note: With Zabbix agent running on the host you are configuring, the agent configuration file parameter Hostname must have the same value as the host name entered here. The name in the parameter is needed in the processing of active checks.
Visible name Enter a unique visible name for the host. If you set this name, it will be the one visible in lists, maps, etc. instead of the technical host name. This attribute has UTF-8 support.
Templates Link templates to the host. All entities (items, triggers, etc.) will be inherited from the
template.
To link a new template, start typing the template name in the text input field. A list of
matching templates will appear; scroll down to select. Alternatively, you may click on Select
next to the field and select templates from the list in a popup window. All selected templates
will be linked to the host when the host configuration form is saved or updated.
To unlink a template, use one of the two options in the Linked templates block:
Unlink - unlink the template, but preserve its entities (items, triggers, etc.);
Unlink and clear - unlink the template and remove all its entities (items, triggers, etc.).
Listed template names are clickable links leading to the template configuration form.
Host groups Select host groups the host belongs to. A host must belong to at least one host group. A new group can be created and linked to the host by adding a non-existing group name.
Interfaces Several host interface types are supported for a host: Agent, SNMP, JMX and IPMI.
No interfaces are defined by default. To add a new interface, click on Add in the Interfaces
block, select the interface type and enter IP/DNS, Connect to and Port info.
Note: Interfaces that are used in any items cannot be removed, and the Remove link is grayed out for them.
See Configuring SNMP monitoring for additional details on configuring an SNMP interface (v1,
v2 and v3).
IP address Host IP address (optional).
DNS name Host DNS name (optional).
Connect to Clicking the respective button will tell Zabbix server what to use to retrieve data from agents:
IP - Connect to the host IP address (recommended)
DNS - Connect to the host DNS name
Port TCP/UDP port number. Default values are: 10050 for Zabbix agent, 161 for SNMP agent,
12345 for JMX and 623 for IPMI.
Default Check the radio button to set the default interface.
Description Enter the host description.
Monitored by Select whether the host is monitored by:
Server - host is monitored by Zabbix server;
Proxy - host is monitored by a standalone proxy;
Proxy group - host is monitored by a proxy group.
Proxy The assigned proxy name is displayed (only if Zabbix server has assigned one from the
selected proxy group).
This field is displayed only if the host is monitored by a proxy group.
Enabled When the checkbox is checked, the host is enabled - ready for monitoring.
The Tags tab allows you to define host-level tags. All problems of this host will be tagged with the values entered here.
User macros, {INVENTORY.*} macros, {HOST.HOST}, {HOST.NAME}, {HOST.CONN}, {HOST.DNS}, {HOST.IP}, {HOST.PORT} and
{HOST.ID} macros are supported in tags.
The Macros tab allows you to define host-level user macros as name-value pairs. Note that macro values can be kept as plain text, secret text or Vault secret. Adding a description is also supported.
If you select the Inherited and host macros option, you can also view template-level and global user macros here; all user macros defined for the host are displayed with the value they resolve to, as well as their origin.
For convenience, links to the respective template and global macro configuration are provided. It is also possible to edit a template/global macro on the host level, effectively creating a copy of the macro on the host.
The Inventory tab allows you to manually enter inventory information for the host. You can also select to enable Automatic
inventory population, or disable inventory population for this host.
If inventory is enabled (manual or automatic), a green dot is displayed with the tab name.
Encryption
The Encryption tab allows you to require encrypted connections with the host.
Parameter Description
Connections to host How Zabbix server or proxy connects to Zabbix agent on a host: no encryption (default), using
PSK (pre-shared key) or certificate.
Connections from host Select what type of connections are allowed from the host (i.e. from Zabbix agent and Zabbix
sender). Several connection types can be selected at the same time (useful for testing and
switching to other connection type). Default is ”No encryption”.
Issuer Allowed issuer of certificate. Certificate is first validated with CA (certificate authority). If it is
valid, signed by the CA, then the Issuer field can be used to further restrict allowed CA. This field
is intended to be used if your Zabbix installation uses certificates from multiple CAs. If this field
is empty then any CA is accepted.
Subject Allowed subject of certificate. Certificate is first validated with CA. If it is valid, signed by the CA,
then the Subject field can be used to allow only one value of Subject string. If this field is empty
then any valid certificate signed by the configured CA is accepted.
PSK identity Pre-shared key identity string.
Do not put sensitive information in the PSK identity; it is transmitted unencrypted over the network to inform a receiver which PSK to use.
PSK Pre-shared key (hex-string). Maximum length: 512 hex-digits (256-byte PSK) if Zabbix uses
GnuTLS or OpenSSL library, 64 hex-digits (32-byte PSK) if Zabbix uses mbed TLS (PolarSSL)
library. Example: 1f87b595725ac58dd977beef14b97461a7c1045b9a1c963065002c5473194952
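For example, a matching 256-bit pre-shared key can be generated with the OpenSSL command-line tool (assuming it is available on the system):
openssl rand -hex 32
The command prints 64 hexadecimal digits (32 random bytes), which can be pasted into the PSK field; the same key must also be configured on the agent or sender side (e.g., via the TLSPSKFile parameter).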
Value mapping
The Value mapping tab allows configuring a human-friendly representation of item data using value mappings.
Attention:
Only Super Admin users can create host groups.
To create a nested host group, use the '/' forward slash separator, for example Europe/Latvia/Riga/Zabbix servers. You can create this group even if none of the three parent host groups (Europe/Latvia/Riga/) exist. In this case, creating these parent host groups is up to the user; they will not be created automatically. Leading and trailing slashes, as well as several slashes in a row, are not allowed. Escaping of '/' is not supported.
Once the group is created, you can click on the group name in the list to edit the group name, clone the group or set an additional option:
Apply permissions and tag filters to all subgroups - mark this checkbox and click on Update to apply the same level of permis-
sions/tag filters to all nested host groups. For user groups that may have had differing permissions assigned to nested host groups,
the permission level of the parent host group will be enforced on the nested groups. This is a one-time option that is not saved in
the database.
• When creating a child host group to an existing parent host group, user group permissions to the child are inherited from
the parent (for example, when creating Riga/Zabbix servers if Riga already exists)
• When creating a parent host group to an existing child host group, no permissions to the parent are set (for example, when
creating Riga if Riga/Zabbix servers already exists)
2 Inventory
Overview
There is a special Inventory menu in the Zabbix frontend. However, you will not see any data there initially and it is not where you
enter data. Building inventory data is done manually when configuring a host or automatically by using some automatic population
options.
Building inventory
Manual mode
When configuring a host, in the Inventory tab you can enter such details as the type of device, serial number, location, responsible
person, URLs, etc. - data that will populate inventory information.
If a URL is included in host inventory information and it starts with ’http’ or ’https’, it will result in a clickable link in the Inventory
section.
Automatic mode
Host inventory can also be populated automatically. For that to work, when configuring a host the inventory mode in the Inventory
tab must be set to Automatic.
Then you can configure host items to populate any host inventory field with their value, indicating the destination field with the
respective attribute (called Item will populate host inventory field) in item configuration.
Items that are especially useful for automated inventory data collection:
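A few Zabbix agent keys that return inventory-friendly values (an illustrative selection, not an exhaustive list):
system.hw.chassis[serial] - chassis serial number
system.hw.cpu[all,full] - processor details
system.sw.os - operating system name
system.uname - detailed host information (kernel, architecture, etc.)
Each of these can be pointed at a matching inventory field via the Item will populate host inventory field attribute.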
Inventory mode by default for new hosts is selected based on the Default host inventory mode setting in Administration → General
→ Other.
For hosts added by network discovery or autoregistration actions, it is possible to define a Set host inventory mode operation
selecting manual or automatic mode. This operation overrides the Default host inventory mode setting.
Inventory overview
The details of all existing inventory data are available in the Inventory menu.
In Inventory → Overview you can get a host count by various fields of the inventory.
In Inventory → Hosts you can see all hosts that have inventory information. Clicking on the host name will reveal the inventory
details in a form.
The Details tab shows all inventory fields that are populated (are not empty).
Inventory macros
There are host inventory macros {INVENTORY.*} available for use in notifications, for example:
”Server in {INVENTORY.LOCATION1} has a problem, responsible person is {INVENTORY.CONTACT1}, phone number {INVEN-
TORY.POC.PRIMARY.PHONE.A1}.”
3 Mass update
Overview
Sometimes you may want to change some attribute for a number of hosts at once. Instead of opening each individual host for
editing, you may use the mass update function for that.
• Mark the checkboxes before the hosts you want to update in the host list
• Click on Mass update below the list
• Navigate to the tab with required attributes (Host, IPMI, Tags, Macros, Inventory, Encryption or Value mapping)
• Mark the checkboxes of any attribute to update and enter a new value for them
The following options are available when selecting the respective button for template linkage update:
To specify the templates to link/unlink, start typing the template name in the auto-complete field until a dropdown appears offering the matching templates. Scroll down to select the required template.
The Clear when unlinking option will not only unlink any previously linked templates, but also remove all elements inherited from them (items, triggers, etc.).
The following options are available when selecting the respective button for host group update:
• Add - allows specifying additional host groups from the existing ones or entering completely new host groups for the hosts;
• Replace - will remove the host from any existing host groups and replace them with the one(s) specified in this field (existing or new host groups);
• Remove - will remove specific host groups from hosts.
These fields are auto-complete - starting to type in them offers a dropdown of matching host groups. If a host group is new, it also appears in the dropdown, indicated by (new) after the string. Just scroll down to select.
User macros, {INVENTORY.*} macros, {HOST.HOST}, {HOST.NAME}, {HOST.CONN}, {HOST.DNS}, {HOST.IP}, {HOST.PORT} and
{HOST.ID} macros are supported in tags. Note that tags with the same name but different values are not considered ’duplicates’
and can be added to the same host.
The following options are available when selecting the respective button for macros update:
• Add - allows specifying additional user macros for the hosts. If the Update existing checkbox is checked, the value, type and description for the specified macro name will be updated. If unchecked, a macro with that name that already exists on the host(s) will not be updated.
• Update - will replace the values, types and descriptions of the macros specified in this list. If the Add missing checkbox is checked, macros that did not previously exist on a host will be added as new macros. If unchecked, only macros that already exist on a host will be updated.
• Remove - will remove the specified macros from hosts. If the Except selected box is checked, all macros except those specified in the list will be removed. If unchecked, only the macros specified in the list will be removed.
• Remove all - will remove all user macros from hosts. If the I confirm to remove all macros checkbox is not checked, a new popup window will open asking to confirm the removal of all macros.
To be able to mass update inventory fields, the Inventory mode should be set to ’Manual’ or ’Automatic’.
Buttons with the following options are available for value map update:
• Add - add value maps to the hosts. If you mark Update existing, all properties of the value map with this name will be updated. Otherwise, if a value map with that name already exists, it will not be updated.
• Update - update existing value maps. If you mark Add missing, a value map that didn’t previously exist on a host will be
added as a new value map. Otherwise only the value maps that already exist on a host will be updated.
• Rename - give a new name to an existing value map.
• Remove - remove the specified value maps from the hosts. If you mark Except selected, all value maps will be removed
except the ones that are specified.
• Remove all - remove all value maps from the hosts. If the I confirm to remove all value maps checkbox is not marked, a new
popup window will open asking to confirm the removal.
When done with all required changes, click on Update. The attributes will be updated accordingly for all the selected hosts.
2 Items
Overview
Items are used for collecting data. Once you have configured a host, you must add items to get actual data. One way of quickly
adding many items is to attach one of the predefined templates to a host. However, for optimized system performance, you may need to fine-tune the templates to have only as many items and as frequent monitoring as necessary.
To specify what sort of data to collect from a host, use the item key. For example, an item with the key name system.cpu.load
will collect processor load data, while an item with the key name net.if.in will collect incoming traffic information.
Additional parameters can be specified in square brackets after the key name. For example, system.cpu.load[avg5] will return
the processor load average for the last 5 minutes, while net.if.in[eth0] will show incoming traffic in the interface ”eth0”.
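A few more agent item keys with parameters, for illustration (the filesystem path and interface name are example values to be adjusted to the monitored host):
system.cpu.load[percpu,avg5] - processor load, averaged per CPU, over the last 5 minutes
vfs.fs.size[/,pused] - percentage of used space on the root filesystem
net.if.in[eth0,errors] - count of incoming errors on interface eth0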
Note:
See individual sections of item types for all supported item types and item keys.
1 Creating an item
Overview
You can also create an item by opening an existing one, pressing the Clone button and then saving under a different name.
Configuration
All mandatory input fields are marked with a red asterisk.
Parameter Description
Units If a unit symbol is set, Zabbix will add postprocessing to the received value and display it with
the set unit postfix.
By default, if the raw value exceeds 1000, it is divided by 1000 and displayed accordingly. For
example, if you set bps and receive a value of 881764, it will be displayed as 881.76 Kbps.
The JEDEC memory standard is used for processing B (byte), Bps (bytes per second) units, which
are divided by 1024. Thus, if units are set to B or Bps Zabbix will display:
1 as 1B/1Bps
1024 as 1KB/1KBps
1536 as 1.5KB/1.5KBps
Special processing is used if the following time-related units are used:
unixtime - translated to ”yyyy.mm.dd hh:mm:ss”. To translate correctly, the received value
must be a Numeric (unsigned) type of information.
uptime - translated to ”hh:mm:ss” or ”N days, hh:mm:ss”
For example, if you receive the value as 881764 (seconds), it will be displayed as ”10 days,
04:56:04”
s - translated to ”yyy mmm ddd hhh mmm sss ms”; parameter is treated as number of seconds.
For example, if you receive the value as 881764 (seconds), it will be displayed as ”10d 4h 56m”
Only 3 upper major units are shown, like ”1m 15d 5h” or ”2h 4m 46s”. If there are no days to
display, only two levels are displayed - ”1m 5h” (no minutes, seconds or milliseconds are shown).
Will be translated to ”< 1 ms” if the value is less than 0.001.
Note that if a unit is prefixed with !, then no unit prefixes/processing is applied to item values.
See unit conversion.
Update interval Retrieve a new value for this item every N seconds. Maximum allowed update interval is 86400
seconds (1 day).
Time suffixes are supported, e.g., 30s, 1m, 2h, 1d.
User macros are supported.
A single macro has to fill the whole field. Multiple macros in a field or macros mixed with text are
not supported.
Note: The update interval can only be set to ’0’ if custom intervals exist with a non-zero value. If
set to ’0’, and a custom interval (flexible or scheduled) exists with a non-zero value, the item will
be polled during the custom interval duration.
Note that the first item poll after the item became active or after update interval change might
occur earlier than the configured value.
New items will be checked within 60 seconds of their creation, unless they have Scheduling or
Flexible update interval and the Update interval is set to 0.
An existing passive item can be polled for value immediately by pushing the Execute now button.
Custom intervals You can create custom rules for checking the item:
Flexible - create an exception to the Update interval (interval with different frequency)
Scheduling - create a custom polling schedule.
For detailed information see Custom intervals.
Time suffixes are supported in the Interval field, e.g., 30s, 1m, 2h, 1d.
User macros are supported.
A single macro has to fill the whole field. Multiple macros in a field or macros mixed with text are
not supported.
Timeout Set the item check timeout (available for supported item types). Select the timeout option:
Global - proxy/global timeout is used (displayed in the grayed out Timeout field);
Override - custom timeout is used (set in the Timeout field; allowed range: 1 - 600s). Time
suffixes, e.g. 30s, 1m, and user macros are supported.
Clicking the Timeouts link allows you to configure proxy timeouts or global timeouts (if a proxy is
not used). Note that the Timeouts link is visible only to users of Super admin type with
permissions to Administration → General or Administration → Proxies frontend sections.
History Specify how long detailed item history is kept in the database (select Do not store, or Store up to a given period; older data will be removed by the housekeeper).
If a global overriding setting exists, a green info icon is displayed. If you position your mouse
on it, a warning message is displayed, e.g., Overridden by global housekeeper settings (1d).
It is recommended to keep the recorded values for the smallest possible time to reduce the size
of value history in the database. Instead of storing a long history of values, you can store longer
data of trends.
See also History and trends.
Trends Select either:
Do not store - trends are not stored.
This setting cannot be overridden by global housekeeper settings.
Store up to - specify the duration of keeping aggregated (hourly min, max, avg, count) history
in the database (1 day to 25 years). Older data will be removed by the housekeeper. Stored in
seconds.
Time suffixes are supported, e.g., 24h, 1d. User macros are supported.
The Store up to value can be overridden globally in Administration → Housekeeping.
If a global overriding setting exists, a green info icon is displayed. If you position your mouse
on it, a warning message is displayed, e.g., Overridden by global housekeeper settings (7d).
Note: Keeping trends is not available for non-numeric data - character, log and text.
See also History and trends.
Value mapping Apply value mapping to this item. Value mapping does not change received values, it is for
displaying data only.
It works with Numeric(unsigned), Numeric(float) and Character items.
For example, ”Windows service states”.
Log time format Available for items of type Log only. Supported placeholders:
* y: Year (1970-2038)
* M: Month (01-12)
* d: Day (01-31)
* h: Hour (00-23)
* m: Minute (00-59)
* s: Second (00-59)
If left blank, the timestamp will not be parsed.
For example, consider the following line from the Zabbix agent log file:
” 23480:20100328:154718.045 Zabbix agent started. Zabbix 1.8.2 (revision 11211).”
It begins with six character positions for PID, followed by date, time, and the rest of the line.
Log time format for this line would be ”pppppp:yyyyMMdd:hhmmss”.
Note that ”p” and ”:” chars are just placeholders and can be anything but ”yMdhms”.
Populates host inventory field You can select a host inventory field that the value of the item will populate. This works if automatic inventory population is enabled for the host.
This field is not available if Type of information is set to 'Log'.
Description Enter an item description. User macros are supported.
Enabled Mark the checkbox to enable the item so it will be processed.
Latest data Click on the link to view the latest data for the item.
This link is only available when editing an already existing item.
Note:
Item type specific fields are described on corresponding pages.
Note:
When editing an existing template level item on a host level, a number of fields are read-only. You can use the link in the
form header and go to the template level and edit them there, keeping in mind that the changes on a template level will
change the item for all hosts that the template is linked to.
The Tags tab allows defining item-level tags.
The Preprocessing tab allows defining transformation rules for the received values.
Testing
Attention:
To perform item testing, ensure that the system time on the server and the proxy is synchronized. If the server time is behind, item testing may return the error message "The task has been expired." Different time zones on the server and the proxy, however, will not affect the testing result.
It is possible to test an item and, if configured correctly, get a real value in return. Testing can occur even before an item is saved.
Testing is available for host and template items, item prototypes and low-level discovery rules. Testing is not available for active
items.
• Zabbix agent
• SNMP agent (v1, v2, v3)
• IPMI agent
• SSH checks
• Telnet checks
• JMX agent
• Simple checks (except icmpping*, vmware.* items)
• Zabbix internal
• Calculated items
• External checks
• Database monitor
• HTTP agent
• Script
• Browser
To test an item, click on the Test button at the bottom of the item configuration form. Note that the Test button will be disabled for
items that cannot be tested (like active checks, excluded simple checks).
The item testing form has fields for the required host parameters (host address, port, test with server/proxy (proxy name)) and
item-specific details (such as SNMPv2 community or SNMPv3 security credentials). These fields are context aware:
• The values are pre-filled when possible, i.e., for items requiring an agent, by taking the information from the selected agent
interface of the host
• The values have to be filled manually for template items
• Plain-text macro values are resolved
• Fields where the value (or part of the value) is a secret or Vault macro are empty and have to be entered manually. If any
item parameter contains a secret macro value, the following warning message is displayed: ”Item contains user-defined
macros with secret values. Values of these macros should be entered manually.”
• The fields are disabled when not needed in the context of the item type (e.g., the host address field and the proxy field are
disabled for calculated items)
To test the item, click on Get value. If the value is retrieved successfully, it will fill the Value field, moving the current value (if any) to the Previous value field. The Prev. time field, i.e., the time difference between the two values (clicks), is calculated as well, and Zabbix tries to detect an EOL sequence, switching to CRLF if "\n\r" is detected in the retrieved value.
Values retrieved from a host and test results are truncated to a maximum size of 512KB when sent to the frontend. If a result is
truncated, a warning icon is displayed. The warning description is displayed on mouseover. Note that data larger than 512KB is
still processed fully by Zabbix server.
If the configuration is incorrect, an error message is displayed describing the possible cause.
A successfully retrieved value from host can also be used to test preprocessing steps.
Form buttons
Buttons at the bottom of the form allow performing several operations.
Execute now - Execute a check for a new item value immediately. Supported for passive checks only (see more details).
Note that when checking for a value immediately, the configuration cache is not updated, thus the value will not reflect very recent changes to the item configuration.
Unit conversion
By default, specifying a unit for an item results in a multiplier prefix being added - for example, an incoming value ’2048’ with unit
’B’ would be displayed as ’2KB’.
To prevent a unit from being converted, use the ! prefix, for example, !B. To better understand how the conversion works with and without the exclamation mark, see the following examples of values and units:
1024 !B → 1024 B
1024 B → 1 KB
61 !s → 61 s
61 s → 1m 1s
0 !uptime → 0 uptime
0 uptime → 00:00:00
0 !! → 0 !
0 ! → 0
Note:
Before Zabbix 4.0, there was a hardcoded unit stoplist consisting of ms, rpm, RPM, %. This stoplist has been deprecated,
thus the correct way to prevent converting such units is !ms, !rpm, !RPM, !%.
Text data limits depend on the database backend. Before storing text values in the database they get truncated to match the
database value type limit:
Database Limit in characters Limit in bytes
Unsupported items
An item can become unsupported if its value cannot be retrieved for some reason. Such items are still rechecked at their standard
Update interval.
Item key format, including key parameters, must follow syntax rules. The following illustrations depict the supported syntax.
Allowed elements and characters at each point can be determined by following the arrows - if some block can be reached through
the line, it is allowed, if not - it is not allowed.
To construct a valid item key, one starts with specifying the key name, then there’s a choice to either have parameters or not - as
depicted by the two lines that could be followed.
Key name
The key name itself has a limited range of allowed characters, which just follow each other. Allowed characters are:
0-9a-zA-Z_-.
Which means:
• all numbers;
• all lowercase letters;
• all uppercase letters;
• underscore;
• dash;
• dot.
Key parameters
An item key can have multiple parameters that are comma separated.
Each key parameter can be either a quoted string, an unquoted string or an array.
The parameter can also be left empty, thus using the default value. In that case, the appropriate number of commas must be added if any further parameters are specified. For example, the item key icmpping[,,200,,500] would specify that the interval between individual pings is 200 milliseconds, timeout - 500 milliseconds, and all other parameters are left at their defaults.
It is possible to include macros in the parameters. Those can be user macros or some of the built-in macros. To see what particular
built-in macros are supported in item key parameters, search the page Supported macros for ”item key parameters”.
If the key parameter is a quoted string, any Unicode character is allowed. If the key parameter string contains a quotation mark, this
parameter has to be quoted, and each quotation mark which is a part of the parameter string has to be escaped with a backslash
(\) character. If the key parameter string contains comma, this parameter has to be quoted.
Warning:
To quote item key parameters, use double quotes only. Single quotes are not supported.
If the key parameter is an unquoted string, any Unicode character is allowed except comma and right square bracket (]). Unquoted
parameter cannot start with left square bracket ([).
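As an illustration of these rules, consider the following hypothetical keys (my.key is an arbitrary example name, not a real item):
my.key[simple,parameters] - two unquoted parameters
my.key["value, with comma"] - a parameter containing a comma must be quoted
my.key["a \"quoted\" word"] - quotation marks inside a quoted parameter are escaped with a backslash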
Parameter - array
If the key parameter is an array, it is again enclosed in square brackets, where individual parameters come in line with the rules
and syntax of specifying multiple parameters.
Attention:
Multi-level parameter arrays, e.g. [a,[b,[c,d]],e], are not allowed.
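For example, a hypothetical key my.key[[a,b],c] passes the array [a,b] as its first parameter and c as the second, whereas my.key[a,[b,[c,d]],e] is invalid because of the nested array.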
2 Custom intervals
Overview
It is possible to create custom rules regarding the times when an item is checked. The two methods for that are Flexible intervals, which allow redefining the default update interval, and Scheduling, whereby an item check can be executed at a specific time or sequence of times.
Flexible intervals
Flexible intervals allow redefining the default update interval for specific time periods. A flexible interval is defined with Interval and Period, where:
If multiple flexible intervals overlap, the smallest Interval value is used for the overlapping period. Note that if the smallest value
of overlapping flexible intervals is ’0’, no polling will take place. Outside the flexible intervals the default update interval is used.
Note that if the flexible interval equals the length of the period, the item will be checked exactly once. If the flexible interval is
greater than the period, the item might be checked once or it might not be checked at all (thus such configuration is not advisable).
If the flexible interval is less than the period, the item will be checked at least once.
If the flexible interval is set to ’0’, the item is not polled during the flexible interval period and resumes polling according to the
default Update interval once the period is over. Examples:
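Two illustrative flexible interval definitions (assuming the item's default update interval is 1m; the period uses the standard Zabbix time period format d-d,hh:mm-hh:mm):
Interval 10s, period 1-5,09:00-18:00 - during working hours on working days the item is checked every 10 seconds, outside that period every minute;
Interval 0, period 6-7,00:00-24:00 - the item is not polled at all during the weekend.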
Scheduling intervals
Scheduling intervals are used to check items at specific times. While flexible intervals are designed to redefine the default item
update interval, the scheduling intervals are used to specify an independent checking schedule, which is executed in parallel.
• md - month days
• wd - week days
• h - hours
• m - minutes
• s - seconds
<filter> is used to specify values for its prefix (days, hours, minutes, seconds) and is defined as: [<from>[-<to>]][/<step>][,<filter>]
where:
• <from> and <to> define the range of matching values (included). If <to> is omitted then the filter matches a <from> -
<from> range. If <from> is also omitted then the filter matches all possible values.
• <step> defines the skips of the number value through the range. By default <step> has the value of 1, which means that
all values of the defined range are matched.
While the filter definitions are optional, at least one filter must be used. A filter must either have a range or the <step> value
defined.
An empty filter matches either ’0’ if no lower-level filter is defined or all possible values otherwise. For example, if the hour filter
is omitted then only ’0’ hour will match, provided minute and seconds filters are omitted too, otherwise an empty hour filter will
match all hour values.
Valid <from> and <to> values for their respective filter prefix are:
The <from> value must be less or equal to <to> value. The <step> value must be greater or equal to 1 and less or equal to <to>
- <from>.
Single digit month days, hours, minutes and seconds values can be prefixed with 0. For example md01-31 and h/02 are valid
intervals, but md01-031 and wd01-07 are not.
In Zabbix frontend, multiple scheduling intervals are entered in separate rows. In Zabbix API, they are concatenated into a single
string with a semicolon ; as a separator.
If a time is matched by several intervals it is executed only once. For example, wd1h9;h9 will be executed only once on Monday
at 9am.
Examples:
Interval Will be executed
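A few illustrative scheduling intervals, interpreted according to the rules above:
m0-59 - every minute;
h9-17/2 - every 2 hours starting with 9:00 (9:00, 11:00 ... 17:00);
wd1-5h9 - every Monday-Friday at 9:00;
md1h10m30 - on the 1st day of every month at 10:30;
h/2 - every 2 hours starting with 0:00.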
Note that Zabbix proxies and agents use their local time zones when processing scheduling intervals.
For this reason, when scheduling intervals are applied to items monitored by Zabbix proxy or to active agent items, it is recommended to set the time zone of the respective proxies or agents to the same time zone as Zabbix server, otherwise the queue may report item delays incorrectly.
The time zone for Zabbix proxy or agent can be set using the environment variable TZ in the systemd unit file:
[Service]
...
Environment="TZ=Europe/Amsterdam"
3 Item value preprocessing
Overview
In preprocessing it is possible to apply transformations to the received item values before saving them to the database.
There are many uses for this functionality. You may want to, for example:
• multiply the network traffic value by ”8” to convert bytes into bits;
• get the per-second statistics of an incrementally rising value;
• apply a regular expression to the value;
• use a custom script on the value;
• choose to discard unchanged values.
One or several transformations are possible. Transformations (preprocessing steps) are executed in the order in which they are
defined.
Attention:
An item will become unsupported if any of the preprocessing steps fail. This can be avoided by Custom on fail error-handling
(available for supported transformations), which can be configured to discard the value or set a specified value.
To make sure that the configured preprocessing pipeline works, it is possible to test it.
For log items, log metadata (without value) will always reset the item unsupported state and make the item supported again, even
if the initial error occurred after receiving a log value from agent.
Preprocessing is done by Zabbix server or proxy (if items are monitored by proxy).
Note that all values passed to preprocessing are of the string type; conversion to the desired value type (as defined in item
configuration) is performed at the end of the preprocessing pipeline. Conversions, however, may also take place if required by the
corresponding preprocessing step. See preprocessing details for more technical information.
Configuration
Preprocessing steps are defined in the Preprocessing tab of the item configuration form.
Click on Add to select a supported transformation.
The Type of information field is displayed at the bottom of the tab when at least one preprocessing step is defined. If required, it is
possible to change the type of information without leaving the Preprocessing tab. See Creating an item for the detailed parameter
description.
Supported transformations
All supported transformations are listed below. Click on the transformation name to see full details about it.
Text:
• Regular expression - Match the value to the regular expression and replace with the required output.
• Replace - Find the search string and replace it with another (or nothing).
• Trim - Remove specified characters from the beginning and end of the value.
• Right trim - Remove specified characters from the end of the value.
• Left trim - Remove specified characters from the beginning of the value.
Structured data:
• XML XPath - Extract value or fragment from XML data using XPath functionality.
• JSON Path - Extract value or fragment from JSON data using JSONPath functionality.
• CSV to JSON - Convert CSV file data into JSON format.
• XML to JSON - Convert data in XML format to JSON.
SNMP:
• SNMP walk value - Extract value by the specified OID/MIB name and apply formatting options.
• SNMP walk to JSON - Convert SNMP values to JSON.
• SNMP get value - Apply formatting options to the SNMP get value.
Arithmetic:
• Custom multiplier - Multiply the value by the specified integer or floating-point value.
Change:
• Simple change - Calculate the difference between the current and previous value.
• Change per second - Calculate the value change (difference between the current and previous value) speed per second.
Numeral systems:
• Boolean to decimal - Convert the value from boolean format to decimal.
• Octal to decimal - Convert the value from octal format to decimal.
• Hexadecimal to decimal - Convert the value from hexadecimal format to decimal.
Custom scripts:
• JavaScript - Enter JavaScript code.
Validation:
• In range - Define a range that a value should be in.
• Matches regular expression - Specify a regular expression that a value must match.
• Does not match regular expression - Specify a regular expression that a value must not match.
• Check for error in JSON - Check for an application-level error message located at JSONPath.
• Check for error in XML - Check for an application-level error message located at XPath.
• Check for error using a regular expression - Check for an application-level error message using a regular expression.
• Check for not supported value - Check if there was an error in retrieving item value.
Throttling:
• Discard unchanged - Discard a value if it has not changed.
• Discard unchanged with heartbeat - Discard a value if it has not changed within the defined time period.
Prometheus:
• Prometheus pattern - Use the specified query to extract the required data from Prometheus metrics.
• Prometheus to JSON - Convert the required Prometheus metrics to JSON.
Note that for Change and Throttling preprocessing steps, Zabbix has to remember the last value to calculate/compare the new
value as required. These previous values are handled by the preprocessing manager. If Zabbix server or proxy is restarted or there
is any change made to preprocessing steps, the last value of the corresponding item is reset, resulting in:
• for Simple change, Change per second steps - the next value will be ignored because there is no previous value to calculate
the change from;
• for Discard unchanged, Discard unchanged with heartbeat steps - the next value will never be discarded, even if it should
have been because of discarding rules.
Regular expression
Match the value to the regular expression and replace with the required output.
Parameters:
Comments:
• A failure to match the input value will make the item unsupported;
• The regular expression supports extraction of maximum 10 captured groups with the \N sequence;
• If you mark the Custom on fail checkbox, it is possible to specify custom error-handling options: either to discard the value, set a specified value, or set a specified error message. In case of a failed preprocessing step, the item will not become unsupported if the option to discard the value or set a specified value is selected.
• Please refer to the regular expressions section for some existing examples.
Replace
Find the search string and replace it with another (or nothing).
Parameters:
Comments:
Trim
Remove specified characters from the beginning and end of the value.
Right trim
Left trim
XML XPath
Comments:
• For this option to work, Zabbix server (or Zabbix proxy) must be compiled with libxml support;
• Namespaces are not supported;
• If you mark the Custom on fail checkbox, it is possible to specify custom error-handling options: either to discard the value, set a specified value, or set a specified error message. In case of a failed preprocessing step, the item will not become unsupported if the option to discard the value or set a specified value is selected.
Examples:
179
JSON Path
If you mark the Custom on fail checkbox, it is possible to specify custom error-handling options: either to discard the value, set a
specified value, or set a specified error message. In case of a failed preprocessing step, the item will not become unsupported if
the option to discard the value or set a specified value is selected.
CSV to JSON
XML to JSON
If you mark the Custom on fail checkbox, it is possible to specify custom error-handling options: either to discard the value, set a
specified value, or set a specified error message. In case of a failed preprocessing step, the item will not become unsupported if
the option to discard the value or set a specified value is selected.
SNMP walk value
Extract value by the specified OID/MIB name and apply formatting options:
If you mark the Custom on fail checkbox, it is possible to specify custom error-handling options: either to discard the value, set a
specified value, or set a specified error message. In case of a failed preprocessing step, the item will not become unsupported if
the option to discard the value or set a specified value is selected.
SNMP walk to JSON
Specify a field name in the JSON and the corresponding SNMP OID path. Field values will be populated by values in the specified SNMP OID path.
Comments:
• Similar value formatting options as in the SNMP walk value step are available;
• You may use this preprocessing step for SNMP OID discovery;
• If you mark the Custom on fail checkbox, it is possible to specify custom error-handling options: either to discard the value, set a specified value, or set a specified error message. In case of a failed preprocessing step, the item will not become unsupported if the option to discard the value or set a specified value is selected.
SNMP get value
If you mark the Custom on fail checkbox, it is possible to specify custom error-handling options: either to discard the value, set a specified value, or set a specified error message. In case of a failed preprocessing step, the item will not become unsupported if the option to discard the value or set a specified value is selected.
Custom multiplier
Comments:
• Use this option to convert values received in KB, MBps, etc., into B, Bps. Otherwise, Zabbix cannot correctly set prefixes (K, M, G, etc.);
• Note that if the item type of information is Numeric (unsigned), incoming values with a fractional part will be trimmed (i.e., '0.9' will become '0') before the custom multiplier is applied;
• If you use a custom multiplier or store value as Change per second for items with the type of information set to Numeric (unsigned) and the resulting calculated value is actually a float number, the calculated value is still accepted as a correct one by trimming the decimal part and storing the value as an integer;
• Supported: scientific notation, for example, 1e+70; user macros and LLD macros; strings that include macros, for example, {#MACRO}e+10, {$MACRO1}e+{$MACRO2}. The macros must resolve to an integer or a floating-point number;
• If you mark the Custom on fail checkbox, it is possible to specify custom error-handling options: either to discard the value, set a specified value, or set a specified error message. In case of a failed preprocessing step, the item will not become unsupported if the option to discard the value or set a specified value is selected.
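A small worked example: if a device reports free memory in kilobytes and sends the value 2048, a custom multiplier of 1024 turns it into 2097152 before it is stored; with the unit set to B it is then displayed as 2MB.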
Simple change
Comments:
Change per second
Calculate the value change (difference between the current and previous value) speed per second.
Comments:
• This step is useful for calculating the speed per second of a constantly growing value;
• As this calculation may produce floating-point numbers, it is recommended to set the 'Type of information' to Numeric (float), even if the incoming raw values are integers. This is especially relevant for small numbers where the decimal part matters. If the floating-point values are large and may exceed the 'float' field length, in which case the entire value may be lost, it is actually suggested to use Numeric (unsigned) and thus trim only the decimal part;
• Evaluated as (value-prev_value)/(time-prev_time), where value - the current value; prev_value - the previously received value; time - the current timestamp; prev_time - the timestamp of the previous value;
• Only one change operation per item ("Simple change" or "Change per second") is allowed;
• If the current value is smaller than the previous value, Zabbix discards that difference (stores nothing) and waits for another value. This helps to work correctly with, for instance, a wrapping (overflow) of 32-bit SNMP counters;
• If you mark the Custom on fail checkbox, it is possible to specify custom error-handling options: either to discard the value, set a specified value, or set a specified error message. In case of a failed preprocessing step, the item will not become unsupported if the option to discard the value or set a specified value is selected.
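For example, if the previous value 1000 was received at 10:00:00 and the current value 1600 arrives at 10:01:00, the stored result is (1600-1000)/60 = 10 per second.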
Boolean to decimal
Comments:
• The textual representation is translated into either 0 or 1. Thus, 'TRUE' is stored as 1 and 'FALSE' is stored as 0. All values are matched in a case-insensitive way. Currently recognized values are, for TRUE - true, t, yes, y, on, up, running, enabled, available, ok, master; for FALSE - false, f, no, n, off, down, unused, disabled, unavailable, err, slave. Additionally, any non-zero numeric value is considered to be TRUE and zero is considered to be FALSE;
• If you mark the Custom on fail checkbox, it is possible to specify custom error-handling options: either to discard the value, set a specified value, or set a specified error message. In case of a failed preprocessing step, the item will not become unsupported if the option to discard the value or set a specified value is selected.
Octal to decimal
If you mark the Custom on fail checkbox, it is possible to specify custom error-handling options: either to discard the value, set a
specified value, or set a specified error message. In case of a failed preprocessing step, the item will not become unsupported if
the option to discard the value or set a specified value is selected.
Hexadecimal to decimal
If you mark the Custom on fail checkbox, it is possible to specify custom error-handling options: either to discard the value, set a
specified value, or set a specified error message. In case of a failed preprocessing step, the item will not become unsupported if
the option to discard the value or set a specified value is selected.
JavaScript
Enter JavaScript code in the block that appears when clicking in the parameter field or on the pencil icon.
Comments:
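As a minimal illustration of what such a script can do (the field names used here are hypothetical), the following code parses a JSON value and returns one of its fields, converted to a different unit:
var data = JSON.parse(value);      // 'value' holds the raw item value as a string
return data.memory.free_kb * 1024; // return free memory converted from KB to bytes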
In range
Comments:
• Numeric values are accepted (including any number of digits, optional decimal part and optional exponential part, negative values);
• The minimum value should be less than the maximum;
• At least one value must exist;
• User macros and low-level discovery macros can be used;
• If you mark the Custom on fail checkbox, it is possible to specify custom error-handling options: either to discard the value, set a specified value, or set a specified error message. In case of a failed preprocessing step, the item will not become unsupported if the option to discard the value or set a specified value is selected.
Matches regular expression
If you mark the Custom on fail checkbox, it is possible to specify custom error-handling options: either to discard the value, set a specified value, or set a specified error message. In case of a failed preprocessing step, the item will not become unsupported if the option to discard the value or set a specified value is selected.
Does not match regular expression
If you mark the Custom on fail checkbox, it is possible to specify custom error-handling options: either to discard the value, set a specified value, or set a specified error message. In case of a failed preprocessing step, the item will not become unsupported if the option to discard the value or set a specified value is selected.
Check for error in JSON
Check for an application-level error message located at JSONPath. Stop processing if succeeded and the message is not empty; otherwise, continue processing with the value that was before this preprocessing step.
Comments:
• These external service errors are reported to the user as is, without adding preprocessing step information;
• No error will be reported in case of failing to parse invalid JSON;
• If you mark the Custom on fail checkbox, it is possible to specify custom error-handling options: either to discard the value, set a specified value, or set a specified error message. In case of a failed preprocessing step, the item will not become unsupported if the option to discard the value or set a specified value is selected.
Check for error in XML
Check for an application-level error message located at XPath. Stop processing if succeeded and the message is not empty; otherwise, continue processing with the value that was before this preprocessing step.
Comments:
• These external service errors are reported to the user as is, without adding preprocessing step information;
• No error will be reported in case of failing to parse invalid XML;
• If you mark the Custom on fail checkbox, it is possible to specify custom error-handling options: either to discard the value, set a specified value, or set a specified error message. In case of a failed preprocessing step, the item will not become unsupported if the option to discard the value or set a specified value is selected.
Check for error using a regular expression
Check for an application-level error message using a regular expression. Stop processing if succeeded and the message is not empty; otherwise, continue processing with the value that was before this preprocessing step.
Parameters:
Comments:
• These external service errors are reported to the user as is, without adding preprocessing step information;
• If you mark the Custom on fail checkbox, it is possible to specify custom error-handling options: either to discard the value, set a specified value, or set a specified error message. In case of a failed preprocessing step, the item will not become unsupported if the option to discard the value or set a specified value is selected.
Check for not supported value
Check if no item value could be retrieved. Specify how the failure should be processed, based on inspecting the returned error message.
Parameters:
• scope - select the error processing scope:
any error - any error;
error matches - only the error that matches the regular expression specified in pattern;
error does not match - only the error that does not match the regular expression specified in pattern;
• pattern - the regular expression to match the error to. If any error is selected in the scope parameter, this field is not displayed. If displayed, this field is mandatory.
Comments:
• Normally the absence/failure to retrieve a value would lead to the item becoming unsupported. In this preprocessing step
it is possible to modify this behavior by marking the Custom on fail option. The following custom error-handling options are
available: Discard value, Set value to (the value can be used in triggers), or Set error to.
• The item will remain supported if Discard value or Set value to is selected;
• Capturing regular expression groups is supported in the Set value to or Set error to fields. An \N (where N=1…9) escape
sequence is replaced with the Nth matched group. A \0 escape sequence is replaced with the matched text.
• For this preprocessing step, the Custom on fail checkbox is grayed out and always marked;
• These steps are always executed as the first preprocessing steps and are placed above all others after saving changes to
the item.
• Multiple Check for not supported value steps are supported, in the specified order. A step for any error will be automatically
placed as the last step in this group.
Discard unchanged
Comments:
• If a value is discarded, it is not saved in the database and Zabbix server has no knowledge that this value was received. No
trigger expressions will be evaluated, as a result, no problems for related triggers will be created/resolved. Functions will
work only based on data that is actually saved in the database. As trends are built based on data in the database, if there
is no value saved for an hour then there will also be no trends data for that hour.<br>
• Only one throttling option can be specified per item.
Discard unchanged with heartbeat
Discard a value if it has not changed within the defined time period (in seconds).
Comments:
• Positive integer values are supported to specify the seconds (minimum - 1 second);<br>
• Time suffixes can be used (e.g., 30s, 1m, 2h, 1d);<br>
• User macros and low-level discovery macros can be used;<br>
• If a value is discarded, it is not saved in the database and Zabbix server has no knowledge that this value was received. No
trigger expressions will be evaluated, as a result, no problems for related triggers will be created/resolved. Functions will
work only based on data that is actually saved in the database. As trends are built based on data in the database, if there
is no value saved for an hour then there will also be no trends data for that hour.<br>
• Only one throttling option can be specified per item.
Prometheus pattern
Use the following query to extract the required data from Prometheus metrics.
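For example (an illustrative query, not taken from the original parameter table), a pattern such as
cpu_usage_system{cpu="cpu-total"}
selects the cpu_usage_system metric line whose cpu label equals "cpu-total"; the step can then output the metric value, the value of a label, or the result of an aggregation function.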
Prometheus to JSON
Macro support
User macros and user macros with context are supported in:
Note:
The macro context is ignored when a macro is replaced with its value. The macro value is inserted in the code as is; it is
not possible to add additional escaping before placing the value in the JavaScript code. Please be advised that this can
cause JavaScript errors in some cases.
Testing
1 Preprocessing testing
Testing
Testing preprocessing steps is useful to make sure that complex preprocessing pipelines yield the results that are expected from
them, without waiting for the item value to be received and preprocessed.
It is possible to test:
Each preprocessing step can be tested individually as well as all steps can be tested together. When you click on the Test or Test
all steps button respectively in the Actions block, a testing window is opened.
Parameter Description
Get value from host If you want to test a hypothetical value, leave this checkbox unmarked.
See also: Testing real value.
Value Enter the input value to test.
Clicking in the parameter field or on the view/edit button will open a text area window for
entering the value or code block.
Not supported Mark this checkbox to test an unsupported value.
This option is useful to test the Check for not supported value preprocessing step.
Error Enter the error text.
This field is enabled when Get value from host is unchecked, but Not supported is checked.
If Get value from host is checked, this field gets filled with the actual error message (read-only)
from the host.
Time Time of the input value is displayed: now (read-only).
Previous value Enter a previous input value to compare to.
Only for Change and Throttling preprocessing steps.
Previous time Enter the previous input value time to compare to.
Only for Change and Throttling preprocessing steps.
The default value is based on the ’Update interval’ field value of the item (if ’1m’, then this field
is filled with now-1m). If nothing is specified or the user has no access to the host, the default is
now-30s.
Macros If any macros are used, they are listed along with their values. The values are editable for testing
purposes, but the changes will only be saved within the testing context.
End of line sequence Select the end of line sequence for multiline input values:
LF - LF (line feed) sequence
CRLF - CRLF (carriage-return line-feed) sequence.
Preprocessing steps Preprocessing steps are listed; the testing result is displayed for each step after the Test button is
clicked.
Test results are truncated to a maximum size of 512KB when sent to the frontend. If a result is
truncated, a warning icon is displayed. The warning description is displayed on mouseover. Note
that data larger than 512KB is still processed fully by Zabbix server.
If the step failed in testing, an error icon is displayed. The error description is displayed on
mouseover.
In case ”Custom on fail” is specified for the step and that action is performed, a new line appears
right after the preprocessing test step row, showing what action was done and what outcome it
produced (error or value).
Result The final result of testing preprocessing steps is displayed in all cases when all steps are tested
together (when you click on the Test all steps button).
The type of conversion to the value type of the item is also displayed, for example Result
converted to Numeric (unsigned).
Test results are truncated to a maximum size of 512KB when sent to the frontend. If a result is
truncated, a warning icon is displayed. The warning description is displayed on mouseover. Note
that data larger than 512KB is still processed fully by Zabbix server.
Test values are stored between test sessions for either individual steps or all steps, allowing the user to change preprocessing
steps or item configuration and then return to the testing window without having to re-enter information. Values are lost on a page
refresh though.
The testing is done by Zabbix server. The frontend sends a corresponding request to the server and waits for the result. The
request contains the input value and preprocessing steps (with expanded user macros). For Change and Throttling steps, an
optional previous value and time can be specified. The server responds with results for each preprocessing step.
All technical errors or input validation errors are displayed in the error box at the top of the testing window.
– The values have to be filled manually for template items
– Plain-text macro values are resolved
– Where the field value (or part of the value) is a secret or Vault macro, the field will be empty and has to be filled out
manually. If any item parameter contains a secret macro value, the following warning message is displayed: ”Item
contains user-defined macros with secret values. Values of these macros should be entered manually.”
– The fields are disabled when not needed in the context of the item type (e.g., the host address and the proxy fields are
disabled for calculated items)
• Click on Get value and test to test the preprocessing
If you have specified a value mapping in the item configuration form (’Show value’ field), the item test dialog will show another
line after the final result, named ’Result with value map applied’.
Parameter Description
Get value from host Mark this checkbox to get a real value from the host.
Host address Enter the host address.
This field is automatically filled by the address of the item host interface.
Port Enter the host port.
This field is automatically filled by the port of item host interface.
Additional fields for SNMP interfaces (SNMP version, SNMP community, Context name, etc.) See Configuring SNMP monitoring for additional details on configuring an SNMP interface (v1, v2 and v3).
These fields are automatically filled from the item host interface.
Proxy Specify the proxy if the host is monitored by a proxy.
This field is automatically filled by the proxy of the host (if any).
Value Value retrieved from the host.
Clicking in the parameter field or on the view/edit button will open a text area window of the
value or code block.
Values are truncated to a maximum size of 512KB and only in the frontend. If a result is
truncated, a warning icon is displayed. The warning description is displayed on mouseover. Note
that data larger than 512KB is still processed fully by Zabbix server.
For the rest of the parameters, see Testing hypothetical value above.
2 Preprocessing details
Overview
This section provides item value preprocessing details. Item value preprocessing allows defining and executing transformation
rules for the received item values.
Preprocessing is managed by the preprocessing manager process along with preprocessing workers that perform the preprocessing
steps. All values (with or without preprocessing) from different data gatherers pass through the preprocessing manager before
being added to the history cache. Socket-based IPC communication is used between data gatherers (pollers, trappers, etc.) and
the preprocessing process. Either Zabbix server or Zabbix proxy (for the items monitored by the proxy) performs the preprocessing
steps.
To visualize the data flow from data source to the Zabbix database, we can use the following simplified diagram:
(Diagram: simplified item value data flow from the data source to the Zabbix database.)
The diagram above shows only processes, objects and actions related to item value processing in a simplified form. The diagram
does not show conditional direction changes, error handling or loops. The local data cache of the preprocessing manager is not
shown either because it doesn’t affect the data flow directly. The aim of this diagram is to show processes involved in the item
value processing and the way they interact.
• Data gathering starts with raw data from a data source. At this point, the data contains only ID, timestamp and value (can
be multiple values as well).
• No matter what type of data gatherer is used, the idea is the same for active or passive checks, for trapper items, etc., as
it only changes the data format and the communication starter (either data gatherer is waiting for a connection and data,
or data gatherer initiates the communication and requests the data). The raw data is validated, the item configuration is
retrieved from the configuration cache (data is enriched with the configuration data).
• A socket-based IPC mechanism is used to pass data from data gatherers to the preprocessing manager. At this point the
data gatherer continues to gather data without waiting for the response from preprocessing manager.
• Data preprocessing is performed. This includes the execution of preprocessing steps and dependent item processing.
Note:
An item can change its state to NOT SUPPORTED while preprocessing is performed if any of the preprocessing steps fails.
• The history data from the local data cache of the preprocessing manager is being flushed into the history cache.
• At this point the data flow stops until the next synchronization of history cache (when the history syncer process performs
data synchronization).
• The synchronization process starts with data normalization before storing data in Zabbix database. The data normalization
performs conversions to the desired item type (type defined in item configuration), including truncation of textual data
based on predefined sizes allowed for those types (HISTORY_STR_VALUE_LEN for string, HISTORY_TEXT_VALUE_LEN for text
and HISTORY_LOG_VALUE_LEN for log values). The data is being sent to the Zabbix database after the normalization is done.
Note:
An item can change its state to NOT SUPPORTED if data normalization fails (for example, when a textual value cannot be
converted to number).
• The gathered data is being processed - triggers are checked, the item configuration is updated if item becomes NOT SUP-
PORTED, etc.
• This is considered the end of data flow from the point of view of item value processing.
• The item value is passed to the preprocessing manager using a UNIX socket-based IPC mechanism.
• If the item has neither preprocessing nor dependent items, its value is either added to the history cache or sent to the LLD
manager. Otherwise:
– A preprocessing task is created and added to the queue and preprocessing workers are notified about the new task.
– At this point the data flow stops until there is at least one unoccupied (i.e., not executing any tasks) preprocessing
worker.
– When a preprocessing worker is available, it takes the next task from the queue.
– After the preprocessing is done (both failed and successful execution of preprocessing steps), the preprocessed value
is added to the finished task queue and the manager is notified about a new finished task.
– The preprocessing manager converts the result to desired format (defined by item value type) and either adds it to the
history cache or sends to the LLD manager.
– If there are dependent items for the processed item, then dependent items are added to the preprocessing queue
with the preprocessed master item value. Dependent items are enqueued bypassing the normal value preprocessing
requests, but only for master items with the value set and not in a NOT SUPPORTED state.
Note that in the diagram the master item preprocessing is slightly simplified by skipping the preprocessing caching.
Preprocessing queue
Preprocessing caching
Preprocessing caching was introduced to improve the preprocessing performance for multiple dependent items having similar
preprocessing steps (which is a common LLD outcome).
Caching is done by preprocessing one dependent item and reusing some of the internal preprocessing data for the rest of the
dependent items. The preprocessing cache is supported only for the first preprocessing step of the following types:
The Zabbix server configuration file allows users to set the count of preprocessing worker threads. The StartPreprocessors config-
uration parameter should be used to set the number of pre-started instances of preprocessing workers. The optimal number of
preprocessing workers depends on many factors, including the count of ”preprocessable” items (items that require executing any
preprocessing steps), the count of data gathering processes, the average step count for item preprocessing, etc.
But assuming that there are no heavy preprocessing operations like parsing large XML/JSON chunks, the number of preprocessing
workers can match the total number of data gatherers. This way, there will mostly (except for the cases when data from the
gatherer comes in bulk) be at least one unoccupied preprocessing worker for collected data.
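As a rough illustration of this guideline (the value below is an assumption for a hypothetical setup, not a recommendation), the parameter is set in the server configuration file:
# zabbix_server.conf
# roughly match the total number of data gathering processes, e.g., 16
StartPreprocessors=16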
Warning:
Too many data gathering processes (pollers, unreachable pollers, ODBC pollers, HTTP pollers, Java pollers, pingers, trap-
pers, proxypollers) together with IPMI manager, SNMP trapper and preprocessing workers can exhaust the per-process
file descriptor limit for the preprocessing manager. This will cause Zabbix server to stop (usually shortly after the start,
but sometimes it can take more time). The configuration file should be revised or the limit should be raised to avoid this
situation.
Item value processing is executed in multiple steps (or phases) by multiple processes. This can cause:
• A dependent item can receive values while the master item cannot. This can happen in the following use case:
– Master item has value type UINT (trapper item can be used), dependent item has value type TEXT.
– Neither the master item nor the dependent item has any preprocessing steps.
– Textual value (for example, ”abc”) should be passed to master item.
– As there are no preprocessing steps to execute, preprocessing manager checks if master item is not in NOT SUPPORTED
state and if value is set (both are true) and enqueues dependent item with the same value as master item (as there
are no preprocessing steps).
– When both master and dependent items reach history synchronization phase, master item becomes NOT SUPPORTED
because of the value conversion error (textual data cannot be converted to unsigned integer).
As a result, the dependent item receives a value, while the master item changes its state to NOT SUPPORTED.
• A dependent item receives a value that is not present in the master item history. The use case is very similar to the previous
one, except for the master item type. For example, if CHAR type is used for master item, then master item value will be
truncated at the history synchronization phase, while dependent items will receive their values from the initial (not truncated)
value of the master item.
3 Preprocessing examples
Overview
This section presents examples of using preprocessing steps to accomplish some practical tasks.
Regular expression
Using regular expressions to filter unnecessary events from the VMware event log.
1. On a working VMware Hypervisor host, check that the event log item vmware.eventlog[<url>,<mode>,<severity>] is
present and working properly. Note that the event log item could already be present on the hypervisor if the Template VM VMware
template has been linked during the host creation.
2. On the VMware Hypervisor host, create a dependent item of ’Log’ type and set the event log item as its master.
In the ”Preprocessing” tab of the dependent item, select the ”Matches regular expression” validation option and fill pattern, for
example:
".* logged in .*" - filters all logging events in the event log
"\bUser\s+\K\S+" - filter only lines with usernames from the event log
Attention:
If the regular expression is not matched, then the dependent item becomes unsupported with a corresponding error mes-
sage. To avoid this, mark the ”Custom on fail” checkbox and select to discard unmatched value, for example.
Another approach that allows using matching groups and output control is to select ”Regular expression” option in the ”Preprocess-
ing” tab and fill parameters, for example:
pattern: ".*logged in.*", output: "\0" - filters all logging events in the event log
pattern "User (.*?)(?=\ )", output: "\1" - filter only usernames from the event log
4 JSONPath functionality
Overview
This section outlines the supported JSONPath functionality within item value preprocessing steps.
JSONPath is composed of segments separated by dots. A segment can take the form of a simple word, representing a JSON value
name, the wildcard character (*), or a more intricate construct enclosed within square brackets. The dot before a bracketed
segment is optional and can be omitted.
See also: Escaping special characters from LLD macro values in JSONPath.
Supported segments
Segment Description
To find a matching segment ignoring its ancestry (detached segment), it must be prefixed with two dots (..). For example, $..name
or $..['name'] return values of all name properties.
Matched element names can be extracted by adding a tilde (~) suffix to the JSONPath. It returns the name of the matched object or
an index in string format of the matched array item. The output format follows the same rules as other JSONPath queries - definite
path results are returned ’as is’, and indefinite path results are returned in an array. However, there is minimal value in extracting
the name of an element that matches a definite path, as it is already known.
Filter expression
Supported operands:
Operand Description
Example: 123
<jsonpath starting with $> Value referred to by the JSONPath from the input document root node; only definite
paths are supported.
Example: $.object.name
<jsonpath starting with @> Value referred to by the JSONPath from the current object/element; only definite
paths are supported.
Example: @.name
Supported operators:
Functions
Functions can be used at the end of JSONPath. Multiple functions can be chained if the preceding function returns a value that is
accepted by the following function.
Supported functions:
JSONPath aggregate functions accept quoted numeric values. These values are automatically converted from strings to numeric
types when aggregation is needed. Incompatible input will cause the function to generate an error.
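For example (an illustrative input, not part of the example data below), for the input {"data": ["1", "2", "3"]} the query $.data.avg() returns 2, because the quoted values are converted to numbers before aggregation.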
Output value
JSONPaths can be divided into definite and indefinite paths. A definite path can return only null or a single match. An indefinite path
can return multiple matches: JSONPaths with detached segments, multiple name/index lists, array slices, or expression segments. However,
when a function is used, the JSONPath becomes definite, as functions always output a single value.
A definite path returns the object/array/value it is referencing. In contrast, an indefinite path returns an array of the matched
objects/arrays/values.
Attention:
The property order in JSONPath query results may not align with the original JSON property order due to internal optimiza-
tion methods. For example, the JSONPath $.books[1]["author", "title"] may return ["title", "author"]. If
preserving the original property order is essential, alternative post-query processing methods should be considered.
Whitespaces (space, tab character) can be used in bracket notation segments and expressions, for example: $[ 'a' ][ 0 ][
?( $.b == 'c' ) ][ : -1 ].first( ).
Strings should be enclosed with single (') or double (") quotes. Inside the strings, single or double quotes (depending on which
are used to enclose it) and backslashes (\) are escaped with the backslash (\) character.
Example
{
"books": [
{
"category": "reference",
"author": "Nigel Rees",
"title": "Sayings of the Century",
"price": 8.95,
"id": 1
},
{
"category": "fiction",
"author": "Evelyn Waugh",
"title": "Sword of Honour",
"price": 12.99,
"id": 2
},
{
"category": "fiction",
"author": "Herman Melville",
"title": "Moby Dick",
"isbn": "0-553-21311-3",
"price": 8.99,
"id": 3
},
{
"category": "fiction",
"author": "J. R. R. Tolkien",
"title": "The Lord of the Rings",
"isbn": "0-395-19395-8",
"price": 22.99,
"id": 4
}
],
"services": {
"delivery": {
"servicegroup": 1000,
"description": "Next day delivery in local town",
"active": true,
"price": 5
},
"bookbinding": {
"servicegroup": 1001,
"description": "Printing and assembling book in A5 format",
"active": true,
"price": 154.99
},
"restoration": {
"servicegroup": 1002,
"description": "Various restoration methods",
"active": false,
"methods": [
{
"description": "Chemical cleaning",
"price": 46
},
{
"description": "Pressing pages damaged by moisture",
"price": 24.5
},
{
"description": "Rebinding torn book",
"price": 99.49
}
]
}
},
"filters": {
"price": 10,
"category": "fiction",
"no filters": "no \"filters\""
},
"closed message": "Store is closed",
"tags": [
"a",
"b",
"c",
"d",
"e"
]
}
JSONPath Type Result
$.filters.price definite 10
$.filters.category definite fiction
$.filters['no filters'] definite no ”filters”
$.filters definite {
”price”: 10,
”category”: ”fiction”,
”no filters”: ”no \”filters\””
}
$.books[1].title definite Sword of Honour
$.books[-1].author definite J. R. R. Tolkien
$.books.length() definite 4
$.tags[:] indefinite [”a”, ”b”, ”c”, ”d”, ”e” ]
$.tags[2:] indefinite [”c”, ”d”, ”e” ]
$.tags[:3] indefinite [”a”, ”b”, ”c”]
$.tags[1:4] indefinite [”b”, ”c”, ”d”]
$.tags[-2:] indefinite [”d”, ”e”]
$.tags[:-3] indefinite [”a”, ”b”]
$.tags[:-3].length() definite 2
$.books[0, 2].title indefinite [”Moby Dick”, ”Sayings of the Century”]
$..id.length() definite 4
$.books[?(@.id == definite Sword of Honour
2)].title.first()
$..tags.first().length() definite 5
When low-level discovery macros are used in JSONPath preprocessing and their values are resolved, the following rules of escaping
special characters are applied:
• only backslash (\) and double quote (”) characters are considered for escaping;
• if the resolved macro value contains these characters, each of them is escaped with a backslash;
• if they are already escaped with a backslash, it is not considered as escaping and both the backslash and the following
special characters are escaped once again.
For example:
When used in the expression, the macro that may have special characters should be enclosed in double quotes:
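For example (illustrative only; {#NAME} is a hypothetical LLD macro), using the book store data shown earlier:
$.books[?(@.author == "{#NAME}")].title.first()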
When used in the path, the macro that may have special characters should be enclosed in square brackets and double quotes:
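For example (illustrative only; {#NAME} is a hypothetical LLD macro):
$.services["{#NAME}"].price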
5 JavaScript preprocessing
Overview
JavaScript preprocessing
JavaScript preprocessing is done by invoking a JavaScript function with a single parameter ’value’ and a user-provided function body.
The preprocessing step result is the value returned by this function, for example, to perform Fahrenheit to Celsius conversion,
enter:
function (value)
{
return (value - 32) * 5 / 9
}
The input parameter ’value’ is always passed as a string. The return value is automatically coerced to a string via the ToString() method
(if it fails, the error is returned as a string value), with a few exceptions:
Errors can be returned by throwing values/objects (normally either strings or Error objects).
For example:
if (value == 0)
throw "Zero input value"
return 1/value
Each script has a 10-second execution timeout (depending on the script, it might take longer for the timeout to trigger); exceeding
it will return an error. A 512-megabyte heap limit is enforced.
The JavaScript preprocessing step bytecode is cached and reused when the step is applied next time. Any changes to the item’s
preprocessing steps will cause the cached script to be reset and recompiled later.
Consecutive runtime failures (3 in a row) will cause the engine to be reinitialized to mitigate the possibility of one script breaking
the execution environment for the next scripts (this action is logged with DebugLevel 4 and higher).
It is possible to use user macros in JavaScript code. If a script contains user macros, these macros are resolved by server/proxy
before executing specific preprocessing steps. Note that when testing preprocessing steps in the frontend, macro values will not
be pulled and need to be entered manually.
Note:
Context is ignored when a macro is replaced with its value. The macro value is inserted in the code as is; it is not possible to
add additional escaping before placing the value in the JavaScript code. Please be advised that this can cause JavaScript
errors in some cases.
In an example below, if received value exceeds a {$THRESHOLD} macro value, the threshold value (if present) will be returned
instead:
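A minimal sketch of such a step (assuming {$THRESHOLD} resolves to a numeric value; if it does not resolve, the literal macro text fails the isNaN() check and the original value is returned):
var threshold = '{$THRESHOLD}';
return (!isNaN(threshold) && Number(value) > Number(threshold)) ? threshold : value;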
Examples
The following examples illustrate how you can use JavaScript preprocessing.
Each example contains a brief description, a function body for JavaScript preprocessing parameters, and the preprocessing step
result - value returned by the function.
Example 1: Convert a number (scientific notation to integer)
return (Number(value))
return(parseInt(value,2))
return (value.length)
Get the remaining time (in seconds) until the expiration date of a certificate (Feb 12 12:33:56 2022 GMT).
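A minimal sketch (not necessarily the original example) that parses a date of the form "Feb 12 12:33:56 2022 GMT" manually and returns the number of seconds remaining:
var months = {Jan: 0, Feb: 1, Mar: 2, Apr: 3, May: 4, Jun: 5, Jul: 6, Aug: 7, Sep: 8, Oct: 9, Nov: 10, Dec: 11};
// e.g., ["Feb", "12", "12:33:56", "2022", "GMT"]
var p = value.split(' ').filter(function (s) { return s !== ''; });
var t = p[2].split(':');
var expiry = Date.UTC(Number(p[3]), months[p[0]], Number(p[1]), Number(t[0]), Number(t[1]), Number(t[2]));
return Math.floor((expiry - Date.now()) / 1000);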
Modify the JSON data structure by removing any properties with the key "data_size" or "index_size".
var obj = JSON.parse(value);
// Remove the "data_size" and "index_size" properties from every array element
obj.forEach(function (item) { delete item.data_size; delete item.index_size; });
return JSON.stringify(obj)
[
{
"table_name":"history",
"data_size":"326.05",
"index_size":"174.34"
},
{
"table_name":"history_log",
"data_size":"6.02",
"index_size":"3.45"
}
]
Value returned by the function:
[
{
"table_name":"history"
},
{
"table_name":"history_log"
}
]
Convert the value received from a web.page.get Zabbix agent item (e.g., web.page.get[http://127.0.0.1:80/server-status?auto]) to
a JSON object.
// Split the value into substrings and put these substrings into an array
var lines = value.split('\n');
var output = {};
var workers = {waiting: 0, starting: 0, reading: 0, sending: 0, keepalive: 0, dnslookup: 0, closing: 0, logging: 0, finishing: 0, cleanup: 0, slot: 0};
var states = {'_': 'waiting', 'S': 'starting', 'R': 'reading', 'W': 'sending', 'K': 'keepalive', 'D': 'dnslookup', 'C': 'closing', 'L': 'logging', 'G': 'finishing', 'I': 'cleanup', '.': 'slot'};
// Add the substrings from the "lines" array to the "output" object as properties (key-value pairs)
for (var i = 0; i < lines.length; i++) {
    var line = lines[i].match(/([A-z0-9 ]+): (.*)/);
    if (line !== null) {
        output[line[1]] = isNaN(line[2]) ? line[2] : Number(line[2]);
    }
}
// Multiversion metrics
output.ServerUptimeSeconds = output.ServerUptimeSeconds || output.Uptime;
output.ServerVersion = output.ServerVersion || output.Server;
// Count workers per state from the "Scoreboard" string
var scoreboard = String(output.Scoreboard || '');
for (var j = 0; j < scoreboard.length; j++) {
    var char = scoreboard.charAt(j);
    if (char in states) workers[states[char]]++;
}
output.Workers = workers;
return JSON.stringify(output);
HTTP/1.1 200 OK
Date: Mon, 27 Mar 2023 11:08:39 GMT
Server: Apache/2.4.52 (Ubuntu)
Vary: Accept-Encoding
Content-Encoding: gzip
Content-Length: 405
Content-Type: text/plain; charset=ISO-8859-1
127.0.0.1
ServerVersion: Apache/2.4.52 (Ubuntu)
ServerMPM: prefork
Server Built: 2023-03-08T17:32:01
CurrentTime: Monday, 27-Mar-2023 14:08:39 EEST
RestartTime: Monday, 27-Mar-2023 12:19:59 EEST
ParentServerConfigGeneration: 1
ParentServerMPMGeneration: 0
ServerUptimeSeconds: 6520
ServerUptime: 1 hour 48 minutes 40 seconds
Load1: 0.56
Load5: 0.33
Load15: 0.28
Total Accesses: 2476
Total kBytes: 8370
Total Duration: 52718
CPUUser: 8.16
CPUSystem: 3.44
CPUChildrenUser: 0
CPUChildrenSystem: 0
CPULoad: .177914
Uptime: 6520
ReqPerSec: .379755
BytesPerSec: 1314.55
BytesPerReq: 3461.58
DurationPerReq: 21.2916
BusyWorkers: 2
IdleWorkers: 6
Scoreboard: ____KW__......................................................................................
Value returned by the function:
{
"Date": "Mon, 27 Mar 2023 11:08:39 GMT",
"Server": "Apache/2.4.52 (Ubuntu)",
"Vary": "Accept-Encoding",
"Encoding": "gzip",
"Length": 405,
"Type": "text/plain; charset=ISO-8859-1",
"ServerVersion": "Apache/2.4.52 (Ubuntu)",
"ServerMPM": "prefork",
"Server Built": "2023-03-08T17:32:01",
"CurrentTime": "Monday, 27-Mar-2023 14:08:39 EEST",
"RestartTime": "Monday, 27-Mar-2023 12:19:59 EEST",
"ParentServerConfigGeneration": 1,
"ParentServerMPMGeneration": 0,
"ServerUptimeSeconds": 6520,
"ServerUptime": "1 hour 48 minutes 40 seconds",
"Load1": 0.56,
"Load5": 0.33,
"Load15": 0.28,
"Total Accesses": 2476,
"Total kBytes": 8370,
"Total Duration": 52718,
"CPUUser": 8.16,
"CPUSystem": 3.44,
"CPUChildrenUser": 0,
"CPUChildrenSystem": 0,
"CPULoad": 0.177914,
"Uptime": 6520,
"ReqPerSec": 0.379755,
"BytesPerSec": 1314.55,
"BytesPerReq": 3461.58,
"DurationPerReq": 21.2916,
"BusyWorkers": 2,
"IdleWorkers": 6,
"Scoreboard": "____KW__...............................................................................
"Workers": {
"waiting": 6,
"starting": 0,
"reading": 0,
"sending": 1,
"keepalive": 1,
"dnslookup": 0,
"closing": 0,
"logging": 0,
"finishing": 0,
"cleanup": 0,
"slot": 142
}
}
Overview
This section describes Zabbix additions to the JavaScript language implemented with Duktape, and supported global JavaScript
functions.
Built-in objects
Zabbix
The Zabbix object provides interaction with the internal Zabbix functionality.
Method Description
log(loglevel, message) Writes <message> into the Zabbix log using the <loglevel> log level (see configuration file DebugLevel parameter).
Example:
Zabbix.log(3, "this is a log entry written with 'Warning' log level")
Alias Alias to
Attention:
The total size of all logged messages is limited to 8 MB per script execution.
sleep(delay) Delays JavaScript execution for delay milliseconds.
Example (delay execution for 15 seconds):
Zabbix.sleep(15000)
HttpRequest
This object encapsulates a cURL handle, allowing simple HTTP requests to be made. Errors are thrown as exceptions.
Attention:
The initialization of multiple HttpRequest objects is limited to 10 per script execution.
Method Description
addHeader(value) Adds HTTP header field. This field is used for all following requests until cleared with the
clearHeader() method.
The total length of header fields that can be added to a single HttpRequest object is limited to
128 Kbytes (special characters and header names included).
clearHeader() Clears HTTP header. If no header fields are set, HttpRequest will set Content-Type to
application/json if the data being posted is JSON-formatted; text/plain otherwise.
connect(url) Sends HTTP CONNECT request to the URL and returns the response.
customRequest(method, url, data) Allows to specify any HTTP method in the first parameter. Sends the method request to the URL with optional data payload and returns the response.
delete(url, data) Sends HTTP DELETE request to the URL with optional data payload and returns the response.
getHeaders(<asArray>) Returns the object of received HTTP header fields.
The asArray parameter may be set to ”true” (e.g., getHeaders(true)), ”false” or be
undefined. If set to ”true”, the received HTTP header field values will be returned as arrays; this
should be used to retrieve the field values of multiple same-name headers.
If not set or set to ”false”, the received HTTP header field values will be returned as strings.
get(url, data) Sends HTTP GET request to the URL with optional data payload and returns the response.
head(url) Sends HTTP HEAD request to the URL and returns the response.
options(url) Sends HTTP OPTIONS request to the URL and returns the response.
patch(url, data) Sends HTTP PATCH request to the URL with optional data payload and returns the response.
put(url, data) Sends HTTP PUT request to the URL with optional data payload and returns the response.
post(url, data) Sends HTTP POST request to the URL with optional data payload and returns the response.
getStatus() Returns the status code of the last HTTP request.
setProxy(proxy) Sets HTTP proxy to ”proxy” value. If this parameter is empty, then no proxy is used.
setHttpAuth(bitmask, username, password) Sets enabled HTTP authentication methods (HTTPAUTH_BASIC, HTTPAUTH_DIGEST, HTTPAUTH_NEGOTIATE, HTTPAUTH_NTLM, HTTPAUTH_NONE) in the ’bitmask’ parameter.
The HTTPAUTH_NONE flag allows to disable HTTP authentication.
Examples:
request.setHttpAuth(HTTPAUTH_NTLM \| HTTPAUTH_BASIC, username, password)
request.setHttpAuth(HTTPAUTH_NONE)
trace(url, data) Sends HTTP TRACE request to the URL with optional data payload and returns the response.
Example:
try {
Zabbix.log(4, 'jira webhook script value='+value);
var result = {
'tags': {
'endpoint': 'jira'
}
},
params = JSON.parse(value),
req = new HttpRequest(),
fields = {},
resp;
req.addHeader('Content-Type: application/json');
req.addHeader('Authorization: Basic '+params.authentication);
fields.summary = params.summary;
fields.description = params.description;
fields.project = {"key": params.project_key};
fields.issuetype = {"id": params.issue_id};
resp = req.post('https://jira.example.com/rest/api/2/issue/',
JSON.stringify({"fields": fields})
);
if (req.getStatus() != 201) {
throw 'Response code: '+req.getStatus();
}
resp = JSON.parse(resp);
result.tags.issue_id = resp.id;
result.tags.issue_key = resp.key;
} catch (error) {
Zabbix.log(4, 'jira issue creation failed json : '+JSON.stringify({"fields": fields}));
Zabbix.log(4, 'jira issue creation failed : '+error);
result = {};
}
return JSON.stringify(result);
XML
The XML object allows the processing of XML data in the item and low-level discovery preprocessing and webhooks.
Attention:
In order to use the XML object, server/proxy must be compiled with libxml2 support.
Method Description
XML.query(data, expression) Retrieves node content using XPath. Returns null if node is not found.
expression - an XPath expression;
data - XML data as a string.
XML.toJson(data) Converts data in XML format to JSON.
XML.fromJson(object) Converts data in JSON format to XML.
Example:
Input:
<menu>
<food type = "breakfast">
<name>Chocolate</name>
<price>$5.95</price>
<description></description>
<calories>650</calories>
</food>
</menu>
Output:
{
"menu": {
"food": {
"@type": "breakfast",
"name": "Chocolate",
"price": "$5.95",
"description": null,
"calories": "650"
}
}
}
Serialization rules
XML to JSON conversion will be processed according to the following rules (for JSON to XML conversions, the reverse rules are applied):
1. XML attributes will be converted to keys that have their names prepended with ’@’.
Example:
Input:
<xml foo="FOO">
<bar>
<baz>BAZ</baz>
</bar>
</xml>
Output:
{
"xml": {
"@foo": "FOO",
"bar": {
"baz": "BAZ"
}
}
}
2. Self-closing XML elements (having no content) will be converted as having null values.
Example:
Input:
<xml>
<foo/>
</xml>
Output:
{
"xml": {
"foo": null
}
}
3. Empty attributes (with ”” value) will be converted as having an empty string (””) value.
Example:
Input:
<xml>
<foo bar="" />
</xml>
Output:
{
"xml": {
"foo": {
"@bar": ""
}
}
}
4. Multiple child nodes with the same element name will be converted to a single key that has an array of values as its value.
Example:
Input:
<xml>
<foo>BAR</foo>
<foo>BAZ</foo>
<foo>QUX</foo>
</xml>
Output:
{
"xml": {
"foo": ["BAR", "BAZ", "QUX"]
}
}
5. A child element that contains only text (no attributes and no child elements) will be converted to a key with its text content as the value.
Example:
Input:
<xml>
<foo>BAZ</foo>
</xml>
Output:
{
"xml": {
"foo": "BAZ"
}
}
6. If a text element has no children but has attributes, text content will be converted to an element with the key ’#text’ and content
as a value; attributes will be converted as described in the serialization rule 1.
Example:
Input:
<xml>
<foo bar="BAR">
BAZ
</foo>
</xml>
Output:
{
"xml": {
"foo": {
"@bar": "BAR",
"#text": "BAZ"
}
}
}
The global functions btoa() (Base64 encoding) and atob() (Base64 decoding) may throw an error that can be caught, for example:
try {
b64 = btoa("utf8 string");
utf8 = atob(b64);
}
catch (error) {
return {'error.name' : error.name, 'error.message' : error.message}
}
• md5(data) - calculates the MD5 hash of the data
• hmac(’<hash type>’,key,data) - returns HMAC hash as hex formatted string; MD5 and SHA256 hash types are supported;
key and data parameters support binary data. Examples:
– hmac('md5',key,data)
– hmac('sha256',key,data)
• sign(hash,key,data) - returns calculated signature (RSA signature with SHA-256) as a string, where:<br> hash - only ’sha256’
is allowed, otherwise an error is thrown;<br> key - the private key. It should correspond to PKCS#1 or PKCS#8 standard.
The key can be provided in different forms:<br>
data - the data that will be signed. It can be a string (binary data also supported) or buffer (Uint8Array/ArrayBuffer).<br>
OpenSSL or GnuTLS is used to calculate the signatures. If Zabbix was built without any of these encryption libraries, an error
will be thrown (’missing OpenSSL or GnuTLS library’).
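A minimal usage sketch (illustrative only; {$PRIVATE_KEY} is a hypothetical user macro holding a PEM-encoded private key):
var key = '{$PRIVATE_KEY}';
return sign('sha256', key, value);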
Overview
This section describes Zabbix additions to the JavaScript language implemented with Duktape for use in the Browser item script.
These additions supplement the JavaScript objects described on the Additional JavaScript objects page.
Browser
The Browser object manages WebDriver sessions, initializing a session upon creation and terminating it upon destruction. A single
script can support up to four Browser objects.
To construct a Browser object, use the new Browser(options) syntax. The options (JSON object) parameter specifies browser
options, usually the WebDriver options method result (for example, Browser.chromeOptions()).
Method Description
navigate(url) Navigate to the specified URL.
Parameters:
url - (string) URL to navigate to.
getUrl() Return a string of the opened page URL.
getPageSource() Return a string of the opened page source.
findElement(strategy, selector) Return an Element object with one element in the opened page (or return null if no elements match strategy and selector).
Parameters:
strategy - (string, CSS selector/link text/partial link text/tag name/Xpath) Location
strategy;
selector - (string) Element selector using the specified location strategy.
findElements(strategy, target) Return an array of Element objects with multiple elements in the opened page (or return an empty array if no elements match location strategy and target).
Parameters:
strategy - (string, CSS selector/link text/partial link text/tag name/Xpath) Location
strategy;
target - (string) Element selector using the specified location strategy.
getCookies() Return an array of Cookie objects.
addCookie(cookie) Set a cookie.
Parameters:
cookie - (Cookie object) Cookie to set.
getScreenshot() Return a string (base64 encoded image) of browser’s viewport.
setScriptTimeout(timeout) Set script loading timeout.
Parameters:
timeout - (integer) Timeout value (in milliseconds).
setSessionTimeout(timeout) Set session (page load) timeout.
Parameters:
timeout - (integer) Timeout value (in milliseconds).
setElementWaitTimeout(timeout) Set element location strategy (implicit) timeout.
Parameters:
timeout - (integer) Timeout value (in milliseconds).
collectPerfEntries(mark) Collect performance entries to retrieve with the getResult() method.
Parameters:
mark - (string, optional) performance snapshot mark.
getRawPerfEntries() Return an array of performance entry objects.
getResult() Return a Result object with browser session statistics (error information,
performance snapshots, etc.).
getError() Return a BrowserError object with browser errors (or return null if there are no
browser errors).
setError(message) Set a custom error message to be included in the Result object.
Parameters:
message - (string) Error message.
discardError() Discard the error to be returned in the Result object.
getAlert() Return an Alert object with browser alerts (or return null if there are no browser
alerts).
chromeOptions() Return a chromeOptions object with predefined Chrome browser options.
firefoxOptions() Return a firefoxOptions object with predefined Firefox browser options.
safariOptions() Return a safariOptions object with predefined Safari browser options.
edgeOptions() Return an edgeOptions object with predefined Edge browser options.
• BrowserError - derived from the Error object that is thrown if the Browser constructor fails; contains an additional
browser property with a Browser object that threw this BrowserError.
• WebdriverError - derived from BrowserError; contains the same properties as the BrowserError object, which indi-
cate if the error was generated in response to an error in the WebDriver response.
Element
The Element object is returned by the Browser object findElement()/findElements() methods and cannot be constructed
directly.
The Element object represents an element in the web page and provides methods to interact with it.
The following methods are supported with the Element object.
Method Description
getAttribute(name) Return an attribute value string of the element attribute (or return null if the specified attribute
was not found).
Parameters:
name - (string) Attribute name.
getProperty(name) Return a property value string of the element property (or return null if the specified property
was not found).
Parameters:
name - (string) Property name.
getText() Return a text value string of the element text.
click() Click on an element.
clear() Clear the content of an editable element.
sendKeys(keys) Send keys.
Parameters:
keys - (string) Keys to send.
Cookie
The Cookie object is returned by the Browser object getCookies() method and passed to the addCookie() method.
While the Cookie object does not have any methods, it can contain the following properties:
Alert
The Alert object represents a web page alert, is returned by Browser object getAlert() method, and cannot be constructed
directly.
The Alert object contains the text property with the alert text (or null if there are no alerts).
The following methods are supported with the Alert object.
Method Description
Result
The Result object contains session statistics and is returned by the Browser object getResult() method.
Typically, the Result object is stringified and returned from the script, and then parsed into dependent item values through
preprocessing.
While the Result object does not have any methods, it can contain the following properties.
Property Type Description
Overview
In this preprocessing step it is possible to convert CSV file data into JSON format. It’s supported in:
Configuration
The first parameter allows setting a custom delimiter. Note that if the first line of CSV input starts with ”Sep=” and is followed by a
single UTF-8 character then that character will be used as the delimiter in case the first parameter is not set. If the first parameter
is not set and a delimiter is not retrieved from the ”Sep=” line, then a comma is used as a separator.
If the With header row checkbox is marked, the header line values will be interpreted as column names (see Header processing for
more information).
If the Custom on fail checkbox is marked, the item will not become unsupported in case of a failed preprocessing step. Additionally,
custom error handling options may be set: discard the value, set a specified value or set a specified error message.
Header processing
The CSV file header line can be processed in two different ways:
• If the With header row checkbox is marked - header line values are interpreted as column names. In this case the column
names must be unique and the data row should not contain more columns than the header row.
• If the With header row checkbox is not marked - the header line is interpreted as data. Column names are generated
automatically (1,2,3,4...).
Nr,Item name,Key,Qty
1,active agent item,agent.hostname,33
"2","passive agent item","agent.version","44"
3,"active,passive agent items",agent.ping,55
Note:
A quotation character within a quoted field in the input must be escaped by preceding it with another quotation character.
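For instance (an illustrative row, not part of the sample data above), a field containing a quotation character would be written as:
4,"item with ""quoted"" name",agent.ping,66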
[
{
"Nr":"1",
"Item name":"active agent item",
"Key":"agent.hostname",
"Qty":"33"
},
{
"Nr":"2",
"Item name":"passive agent item",
"Key":"agent.version",
"Qty":"44"
},
{
"Nr":"3",
"Item name":"active,passive agent items",
"Key":"agent.ping",
"Qty":"55"
}
]
[
{
"1":"Nr",
"2":"Item name",
"3":"Key"
"4":"Qty"
},
{
"1":"1",
"2":"active agent item",
"3":"agent.hostname"
"4":"33"
},
{
"1":"2",
"2":"passive agent item",
"3":"agent.version"
"4":"44"
},
{
"1":"3",
"2":"active,passive agent items",
"3":"agent.ping",
"4":"55"
}
]
3 Item types
Overview
Item types cover various methods of acquiring data from your system. Each item type comes with its own set of supported item
keys and required parameters.
Details for all item types are included in the subpages of this section. Even though item types offer a lot of options for data
gathering, there are further options through user parameters or loadable modules.
Some checks are performed by Zabbix server alone (as agent-less monitoring) while others require Zabbix agent or even Zabbix
Java gateway (with JMX monitoring).
Attention:
If a particular item type requires a particular interface (like an IPMI check needs an IPMI interface on the host) that interface
must exist in the host definition.
Multiple interfaces can be set in the host definition: Zabbix agent, SNMP agent, JMX and IPMI. If an item can use more than one
interface, it will search the available host interfaces (in the order: Agent→SNMP→JMX→IPMI) for the first appropriate one to be
linked with.
All items that return text (character, log, text types of information) can also return whitespace only (where applicable), setting
the return value to an empty string (supported since 2.0).
1 Zabbix agent
Overview
This section provides details on the item keys that use communication with Zabbix agent for data gathering.
There are passive and active agent checks. When configuring an item, you can select the required type:
Note that all item keys supported by Zabbix agent on Windows are also supported by the new generation Zabbix agent 2. See the
additional item keys that you can use with the agent 2 only.
Supported item keys
The item keys that you can use with Zabbix agent are listed below.
The item keys are listed without parameters and additional information. Click on the item key to see the full details.
Item key Description Item group
Supported platforms
Except where specified differently in the item details, the agent items (and all parameters) are supported on:
• Linux
• FreeBSD
• Solaris
• HP-UX
• AIX
• Tru64
• MacOS X
• OpenBSD
• NetBSD
Many agent items are also supported on Windows. See the Windows agent item page for details.
Parameters without angle brackets are mandatory. Parameters marked with angle brackets < > are optional.
kernel.maxfiles
<br> The maximum number of opened files supported by OS.<br> Return value: Integer.<br> Supported platforms: Linux,
FreeBSD, MacOS X, OpenBSD, NetBSD.
kernel.maxproc
<br> The maximum number of processes supported by OS.<br> Return value: Integer.<br> Supported platforms: Linux 2.6 and
later, FreeBSD, Solaris, MacOS X, OpenBSD, NetBSD.
kernel.openfiles
<br> The number of currently open file descriptors.<br> Return value: Integer.<br> Supported platforms: Linux (the item may
work on other UNIX-like platforms).
log[file,<regexp>,<encoding>,<maxlines>,<mode>,<output>,<maxdelay>,<options>,<persistent dir>]
<br> The monitoring of a log file.<br> Return value: Log.<br> See supported platforms.
Parameters:
Comments:
log[/var/log/syslog]
log[/var/log/syslog,error]
log[/home/zabbix/logs/logfile,,,100]
Example of using the output parameter for extracting a number from log record:
log[/app1/app.log,"task run [0-9.]+ sec, processed ([0-9]+) records, [0-9]+ errors",,,,\1] #this item will
Example of using the output parameter for rewriting a log record before sending to server:
log[/app1/app.log,"([0-9 :-]+) task run ([0-9.]+) sec, processed ([0-9]+) records, ([0-9]+) errors",,,,"\1
log.count[file,<regexp>,<encoding>,<maxproclines>,<mode>,<maxdelay>,<options>,<persistent dir>]
<br> The count of matched lines in a monitored log file.<br> Return value: Integer.<br> See supported platforms.
Parameters:
• persistent dir (only in zabbix_agentd on Unix systems; not supported in Zabbix agent 2) - the absolute pathname of
directory where to store persistent files. See also additional notes on persistent files.
Comments:
logrt[file regexp,<regexp>,<encoding>,<maxlines>,<mode>,<output>,<maxdelay>,<options>,<persistent dir>]
<br> The monitoring of a log file that is rotated.<br> Return value: Log.<br> See supported platforms.
Parameters:
• file regexp - the absolute path to file and the file name described by a regular expression. Note that only the file name is
a regular expression.<br>
• regexp - a regular expression describing the required content pattern;<br>
• encoding - the code page identifier;<br>
• maxlines - the maximum number of new lines per second the agent will send to Zabbix server or proxy. This parameter
overrides the value of ’MaxLinesPerSecond’ in zabbix_agentd.conf.<br>
• mode - possible values: all (default) or skip - skip processing of older data (affects only newly created items).<br>
• output - an optional output formatting template. The \0 escape sequence is replaced with the matched part of text (from
the first character where match begins until the character where match ends) while an \N (where N=1...9) escape sequence
is replaced with Nth matched group (or an empty string if the N exceeds the number of captured groups).<br>
• maxdelay - the maximum delay in seconds. Type: float. Values: 0 - (default) never ignore log file lines; > 0.0 - ignore
older lines in order to get the most recent lines analyzed within ”maxdelay” seconds. Read the maxdelay notes before using
it!<br>
• options - the type of log file rotation and other options. Possible values:<br>rotate (default),<br>copytruncate - note that
copytruncate cannot be used together with maxdelay. In this case maxdelay must be 0 or not specified; see copytruncate
notes,<br>mtime-reread - non-unique records, reread if modification time or size changes (default),<br>mtime-noreread -
non-unique records, reread only if the size changes (ignore modification time change).<br>
• persistent dir (only in zabbix_agentd on Unix systems; not supported in Zabbix agent 2) - the absolute pathname of
directory where to store persistent files. See also additional notes on persistent files.
Comments:
logrt.count[file regexp,<regexp>,<encoding>,<maxproclines>,<mode>,<maxdelay>,<options>,<persistent dir>]
<br> The count of matched lines in a monitored log file that is rotated.<br> Return value: Integer.<br> See supported platforms.
Parameters:
• file regexp - the absolute path to file and regular expression describing the file name pattern;<br>
• regexp - a regular expression describing the required pattern;<br>
• encoding - the code page identifier;<br>
• maxproclines - the maximum number of new lines per second the agent will analyze (cannot exceed 10000). The default
value is 10*’MaxLinesPerSecond’ in zabbix_agentd.conf.<br>
• mode - possible values: all (default) or skip - skip processing of older data (affects only newly created items).<br>
• maxdelay - the maximum delay in seconds. Type: float. Values: 0 - (default) never ignore log file lines; > 0.0 - ignore
older lines in order to get the most recent lines analyzed within ”maxdelay” seconds. Read the maxdelay notes before using
it!<br>
• options - the type of log file rotation and other options. Possible values:<br>rotate (default),<br>copytruncate - note that
copytruncate cannot be used together with maxdelay. In this case maxdelay must be 0 or not specified; see copytruncate
notes,<br>mtime-reread - non-unique records, reread if modification time or size changes (default),<br>mtime-noreread -
non-unique records, reread only if the size changes (ignore modification time change).<br>
• persistent dir (only in zabbix_agentd on Unix systems; not supported in Zabbix agent 2) - the absolute pathname of
directory where to store persistent files. See also additional notes on persistent files.
Comments:
modbus.get[endpoint,<slave id>,<function>,<address>,<count>,<type>,<endianness>,<offset>]
<br> Reads Modbus data.<br> Return value: JSON object.<br> Supported platforms: Linux.
Parameters:
net.dns[<ip>,name,<type>,<timeout>,<count>,<protocol>]
<br> Checks the status of a DNS service.<br> Return values: 0 - DNS resolution failed (DNS server did not respond or returned an
error); 1 - DNS resolution succeeded.<br> See supported platforms.
Parameters:
• ip (ignored on Windows unless using Zabbix agent 2) - the IP address of DNS server (leave empty for the default DNS server);
• name - the DNS name to query;
• type - the record type to be queried (default is SOA);
• timeout (ignored on Windows unless using Zabbix agent 2) - the timeout for the request in seconds (default is 1 second);
• count (ignored on Windows unless using Zabbix agent 2) - the number of tries for the request (default is 2);
• protocol - the protocol used to perform DNS queries: udp (default) or tcp.
Comments:
• The possible values for type are: ANY, A, NS, CNAME, MB, MG, MR, PTR, MD, MF, MX, SOA, NULL, WKS (not supported for
Zabbix agent on Windows, Zabbix agent 2 on all OS), HINFO, MINFO, TXT, SRV
• For reverse DNS lookups (when type is set to PTR), you can provide the DNS name in both reversed and non-reversed format
(see examples below). Note that when PTR record is requested, the DNS name is actually an IP address.
• Internationalized domain names are not supported, please use IDNA encoded names instead.
Examples:
net.dns[198.51.100.1,example.com,MX,2,1]
net.dns[,198.51.100.1,PTR]
net.dns[,1.100.51.198.in-addr.arpa,PTR]
net.dns[,2a00:1450:400f:800::200e,PTR]
net.dns[,e.0.0.2.0.0.0.0.0.0.0.0.0.0.0.0.0.0.8.0.f.0.0.4.0.5.4.1.0.0.a.2.ip6.arpa,PTR]
net.dns.perf[<ip>,name,<type>,<timeout>,<count>,<protocol>]
<br> Checks the performance of a DNS service.<br> Return value: Float (0 - service is down; seconds - the number of seconds
spent waiting for a response from the service).<br> See supported platforms.
Parameters:
• ip (ignored on Windows unless using Zabbix agent 2) - the IP address of DNS server (leave empty for the default DNS server);
• name - the DNS name to query;
• type - the record type to be queried (default is SOA);
• timeout (ignored on Windows unless using Zabbix agent 2) - the timeout for the request in seconds (default is 1 second);
• count (ignored on Windows unless using Zabbix agent 2) - the number of tries for the request (default is 2);
• protocol - the protocol used to perform DNS queries: udp (default) or tcp.
Comments:
• The possible values for type are:<br>ANY, A, NS, CNAME, MB, MG, MR, PTR, MD, MF, MX, SOA, NULL, WKS (not supported
for Zabbix agent on Windows, Zabbix agent 2 on all OS), HINFO, MINFO, TXT, SRV
• For reverse DNS lookups (when type is set to PTR), you can provide the DNS name in both reversed and non-reversed format
(see examples below). Note that when PTR record is requested, the DNS name is actually an IP address.
• Internationalized domain names are not supported, please use IDNA encoded names instead.
• Since Zabbix 7.0.1, the item returns a response time instead of 0 when the DNS server responds with an error code (for
example, NXDOMAIN or SERVFAIL).
Examples:
net.dns.perf[198.51.100.1,example.com,MX,2,1]
net.dns.perf[,198.51.100.1,PTR]
net.dns.perf[,1.100.51.198.in-addr.arpa,PTR]
net.dns.perf[,2a00:1450:400f:800::200e,PTR]
net.dns.perf[,e.0.0.2.0.0.0.0.0.0.0.0.0.0.0.0.0.0.8.0.f.0.0.4.0.5.4.1.0.0.a.2.ip6.arpa,PTR]
net.dns.record[<ip>,name,<type>,<timeout>,<count>,<protocol>]
<br> Performs a DNS query.<br> Return value: a character string with the required type of information.<br> See supported
platforms.
Parameters:
• ip (ignored on Windows unless using Zabbix agent 2) - the IP address of DNS server (leave empty for the default DNS server);
• name - the DNS name to query;
• type - the record type to be queried (default is SOA);
• timeout (ignored on Windows unless using Zabbix agent 2) - the timeout for the request in seconds (default is 1 second);
• count (ignored on Windows unless using Zabbix agent 2) - the number of tries for the request (default is 2);
• protocol - the protocol used to perform DNS queries: udp (default) or tcp.
Comments:
• The possible values for type are:<br>ANY, A, NS, CNAME, MB, MG, MR, PTR, MD, MF, MX, SOA, NULL, WKS (not supported
for Zabbix agent on Windows, Zabbix agent 2 on all OS), HINFO, MINFO, TXT, SRV
• For reverse DNS lookups (when type is set to PTR), you can provide the DNS name in reversed or non-reversed format (see
examples below). Note that when PTR record is requested, the DNS name is actually an IP address.
• Internationalized domain names are not supported; please use IDNA-encoded names instead.
Examples:
net.dns.record[198.51.100.1,example.com,MX,2,1]
net.dns.record[,198.51.100.1,PTR]
net.dns.record[,1.100.51.198.in-addr.arpa,PTR]
net.dns.record[,2a00:1450:400f:800::200e,PTR]
net.dns.record[,e.0.0.2.0.0.0.0.0.0.0.0.0.0.0.0.0.0.8.0.f.0.0.4.0.5.4.1.0.0.a.2.ip6.arpa,PTR]
net.if.collisions[if]
<br> The number of out-of-window collisions.<br> Return value: Integer.<br> Supported platforms: Linux, FreeBSD, Solaris, AIX,
MacOS X, OpenBSD, NetBSD. Root privileges are required on NetBSD.
Parameter:
• if - the network interface name.
net.if.discovery
<br> The list of network interfaces. Used for low-level discovery.<br> Return value: JSON object.<br> Supported platforms: Linux,
FreeBSD, Solaris, HP-UX, AIX, OpenBSD, NetBSD.
net.if.in[if,<mode>]
<br> The incoming traffic statistics on a network interface.<br> Return value: Integer.<br> Supported platforms: Linux, FreeBSD, Solaris⁵, HP-UX, AIX, MacOS X, OpenBSD, NetBSD. Root privileges are required on NetBSD.
Parameters:
• if - network interface name (Unix); network interface full description or IPv4 address; or, if in braces, network interface GUID
(Windows);
• mode - possible values:<br>bytes - number of bytes (default)<br>packets - number of packets<br>errors - number of errors<br>dropped - number of dropped packets<br>overruns (fifo) - the number of FIFO buffer errors<br>frame - the number of packet framing errors<br>compressed - the number of compressed packets received by the device driver<br>multicast - the number of multicast frames received by the device driver
Comments:
• You may use this key with the Change per second preprocessing step in order to get the bytes-per-second statistics;
• The dropped mode is supported only on Linux, FreeBSD, HP-UX, MacOS X, OpenBSD, NetBSD;
• The overruns, frame, compressed, multicast modes are supported only on Linux;
• On HP-UX this item does not provide details on loopback interfaces (e.g. lo0).
Examples:
net.if.in[eth0]
net.if.in[eth0,errors]
net.if.out[if,<mode>]
<br> The outgoing traffic statistics on a network interface.<br> Return value: Integer.<br> Supported platforms: Linux, FreeBSD, Solaris⁵, HP-UX, AIX, MacOS X, OpenBSD, NetBSD. Root privileges are required on NetBSD.
Parameters:
• if - network interface name (Unix); network interface full description or IPv4 address; or, if in braces, network interface GUID
(Windows);
• mode - possible values:<br>bytes - number of bytes (default)<br>packets - number of packets<br>errors - number of
errors<br>dropped - number of dropped packets<br>overruns (fifo) - the number of FIFO buffer errors<br>collisions (colls)
- the number of collisions detected on the interface<br>carrier - the number of carrier losses detected by the device
driver<br>compressed - the number of compressed packets transmitted by the device driver
Comments:
• You may use this key with the Change per second preprocessing step in order to get the bytes-per-second statistics;
• The dropped mode is supported only on Linux, HP-UX;
• The overruns, collision, carrier, compressed modes are supported only on Linux;
• On HP-UX this item does not provide details on loopback interfaces (e.g. lo0).
Examples:
net.if.out[eth0]
net.if.out[eth0,errors]
net.if.total[if,<mode>]
<br> The sum of incoming and outgoing traffic statistics on a network interface.<br> Return value: Integer.<br> Supported platforms: Linux, FreeBSD, Solaris⁵, HP-UX, AIX, MacOS X, OpenBSD, NetBSD. Root privileges are required on NetBSD.
Parameters:
• if - network interface name (Unix); network interface full description or IPv4 address; or, if in braces, network interface GUID
(Windows);
• mode - possible values:<br>bytes - number of bytes (default)<br>packets - number of packets<br>errors - number of
errors<br>dropped - number of dropped packets<br>overruns (fifo) - the number of FIFO buffer errors<br>collisions (colls)
- the number of collisions detected on the interface<br>compressed - the number of compressed packets transmitted or
received by the device driver
Comments:
• You may use this key with the Change per second preprocessing step in order to get the bytes-per-second statistics;
• The dropped mode is supported only on Linux, HP-UX. Dropped packets are supported only if both net.if.in and
net.if.out work for dropped packets on your platform.
• The overruns, collision, compressed modes are supported only on Linux;
• On HP-UX this item does not provide details on loopback interfaces (e.g. lo0).
Examples:
net.if.total[eth0]
net.if.total[eth0,errors]
net.tcp.listen[port]
<br> Checks if this TCP port is in LISTEN state.<br> Return values: 0 - it is not in LISTEN state; 1 - it is in LISTEN state.<br>
Supported platforms: Linux, FreeBSD, Solaris, MacOS X.
Parameter:
On Linux kernels 2.6.14 and above, the information about listening TCP sockets is obtained from the kernel’s NETLINK interface, if possible. Otherwise, the information is retrieved from the /proc/net/tcp and /proc/net/tcp6 files.
Example:
net.tcp.listen[80]
net.tcp.port[<ip>,port]
<br> Checks if it is possible to make a TCP connection to the specified port.<br> Return values: 0 - cannot connect; 1 - can
connect.<br> See supported platforms.
Parameters:
Comments:
Example:
net.tcp.port[,80] #this item can be used to test the availability of the web server running on port 80
net.tcp.service[service,<ip>,<port>]
<br> Checks if a service is running and accepting TCP connections.<br> Return values: 0 - service is down; 1 - service is running.<br> See supported platforms.
Parameters:
• service - ssh, ldap, smtp, ftp, http, pop, nntp, imap, tcp, https, or telnet (see details);
• ip - the IP address or DNS name (default is 127.0.0.1);
• port - the port number (by default the standard service port number is used).
Comments:
• These checks may result in additional messages in system daemon logfiles (SMTP and SSH sessions being logged usually);
• Checking of encrypted protocols (like IMAP on port 993 or POP on port 995) is currently not supported. As a workaround,
please use net.tcp.port[] for checks like these.
• Checking of LDAP and HTTPS on Windows is only supported by Zabbix agent 2;
• The telnet check looks for a login prompt (’:’ at the end).
Example:
net.tcp.service[ftp,,45] #this item can be used to test the availability of FTP server on TCP port 45
net.tcp.service.perf[service,<ip>,<port>]
<br> Checks the performance of a TCP service.<br> Return values: Float (0 - service is down; seconds - the number of seconds
spent waiting for a response from the service).<br> See supported platforms.
Parameters:
• service - ssh, ldap, smtp, ftp, http, pop, nntp, imap, tcp, https, or telnet (see details);
• ip - the IP address or DNS name (default is 127.0.0.1);
• port - the port number (by default the standard service port number is used).
Comments:
• Checking of encrypted protocols (like IMAP on port 993 or POP on port 995) is currently not supported. As a workaround,
please use net.tcp.service.perf[tcp,<ip>,<port>] for checks like these.
• The telnet check looks for a login prompt (’:’ at the end).
Example:
net.tcp.service.perf[ssh] #this item can be used to test the speed of initial response from the SSH server
net.tcp.socket.count[<laddr>,<lport>,<raddr>,<rport>,<state>]
<br> Returns the number of TCP sockets that match parameters.<br> Return value: Integer.<br> Supported platforms: Linux.
Parameters:
Example:
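A possible usage (the port and state values here are illustrative), counting established connections to a local web server on port 80:
net.tcp.socket.count[,80,,,established]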
net.udp.listen[port]
<br> Checks if this UDP port is in LISTEN state.<br> Return values: 0 - it is not in LISTEN state; 1 - it is in LISTEN state.<br> Supported platforms: Linux, FreeBSD, Solaris, MacOS X.
Parameter:
Example:
net.udp.listen[68]
net.udp.service[service,<ip>,<port>]
<br> Checks if a service is running and responding to UDP requests.<br> Return values: 0 - service is down; 1 - service is
running.<br> See supported platforms.
Parameters:
Example:
net.udp.service[ntp,,45] #this item can be used to test the availability of NTP service on UDP port 45
net.udp.service.perf[service,<ip>,<port>]
<br> Checks the performance of a UDP service.<br> Return values: Float (0 - service is down; seconds - the number of seconds
spent waiting for a response from the service).<br> See supported platforms.
Parameters:
Example:
net.udp.service.perf[ntp] #this item can be used to test response time from NTP service
net.udp.socket.count[<laddr>,<lport>,<raddr>,<rport>,<state>]
<br> Returns the number of UDP sockets that match parameters.<br> Return value: Integer.<br> Supported platforms: Linux.
Parameters:
Example:
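A possible usage (illustrative values), counting UDP sockets bound to local port 53, e.g. a local DNS resolver:
net.udp.socket.count[,53]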
proc.cpu.util[<name>,<user>,<type>,<cmdline>,<mode>,<zone>]
<br> The process CPU utilization percentage.<br> Return value: Float.
Parameters:
Comments:
• The returned value is based on a single CPU core utilization percentage. For example, the CPU utilization of a process fully
using two cores is 200%.
• The process CPU utilization data is gathered by a collector which supports the maximum of 1024 unique (by name, user and
command line) queries. Queries not accessed during the last 24 hours are removed from the collector.
• If the zone parameter is set to current (or default) and the agent has been compiled on a Solaris without zone support but is running on a newer Solaris where zones are supported, the agent will return NOTSUPPORTED (the agent cannot limit the results to only the current zone). However, the all value is supported in this case.
Examples:
proc.cpu.util[,root] #CPU utilization of all processes running under the "root" user
proc.cpu.util[zabbix_server,zabbix] #CPU utilization of all zabbix_server processes running under the zabbix user
proc.get[<name>,<user>,<cmdline>,<mode>]
<br> The list of OS processes and their parameters. Can be used for low-level discovery.<br> Return value: JSON object.<br>
Supported platforms: Linux, FreeBSD, Windows, OpenBSD, NetBSD.
Parameters:
Comments:
• If a value cannot be retrieved, for example, because of an error (process already died, lack of permissions, system call
failure), -1 will be returned;
• See notes on selecting processes with name and cmdline parameters (Linux-specific).
Examples:
proc.get[zabbix_server,zabbix,,process] #list of all zabbix_server processes running under the zabbix user
proc.get[java,,,thread] #list of all Java processes, returns one entry per thread
proc.get[,zabbix,,summary] #combined data for processes of each type running under the zabbix user, returns one entry per process type
proc.mem[<name>,<user>,<mode>,<cmdline>,<memtype>]
<br> The memory used by the process in bytes.<br> Return value: Integer - with mode as max, min, sum; Float - with mode as
avg<br> Supported platforms: Linux, FreeBSD, Solaris, AIX, Tru64, OpenBSD, NetBSD.
Parameters:
Comments:
• When several processes use shared memory, the sum of memory used by processes may result in large, unrealistic values.
• See notes on selecting processes with name and cmdline parameters (Linux-specific).
• When this item is invoked from the command line and contains a command line parameter (e.g. using the agent test mode: zabbix_agentd -t proc.mem[,,,apache2]), one extra process will be counted, as the agent will count itself.
Examples:
proc.mem[,root] #the memory used by all processes running under the "root" user
proc.mem[zabbix_server,zabbix] #the memory used by all zabbix_server processes running under the zabbix user
proc.mem[,oracle,max,oracleZABBIX] #the memory used by the most memory-hungry process running under Oracle, having oracleZABBIX in its command line
proc.num[<name>,<user>,<state>,<cmdline>,<zone>]
<br> The number of processes.<br> Return value: Integer.<br> Supported platforms: Linux, FreeBSD, Solaris⁶, HP-UX, AIX, Tru64, OpenBSD, NetBSD.
Parameters:
Comments:
• The disk and trace state parameters are supported only on Linux, FreeBSD, OpenBSD, NetBSD;
• When this item is invoked from the command line and contains a command line parameter (e.g. using the agent test mode:
zabbix_agentd -t proc.num[,,,apache2]), one extra process will be counted, as the agent will count itself;
• If the zone parameter is set to current (or default) and the agent has been compiled on a Solaris without zone support but is running on a newer Solaris where zones are supported, the agent will return NOTSUPPORTED (the agent cannot limit the results to only the current zone). However, the all value is supported in this case.
• See notes on selecting processes with name and cmdline parameters (Linux-specific).
Examples:
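Possible usages (the process and user names are illustrative):
proc.num[apache2] #the number of apache2 processes
proc.num[,mysql] #the number of processes running under the mysql user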
<br> Hardware sensor reading.<br> Return value: Float.<br> Supported platforms: Linux, OpenBSD.
Parameters:
Comments:
Example:
sensor[w83781d-i2c-0-2d,temp1]
sensor[cpu0,temp0] #the temperature of one CPU
sensor["cpu[0-2]$",temp,avg] #the average temperature of the first three CPUs
system.boottime
<br> The system boot time.<br> Return value: Integer (Unix timestamp).<br> Supported platforms: Linux, FreeBSD, Solaris,
MacOS X, OpenBSD, NetBSD.
system.cpu.discovery
<br> The list of detected CPUs/CPU cores. Used for low-level discovery.<br> Return value: JSON object.<br> See supported
platforms.
system.cpu.intr
<br> The device interrupts.<br> Return value: Integer.<br> Supported platforms: Linux, FreeBSD, Solaris, AIX, OpenBSD, NetBSD.
system.cpu.load[<cpu>,<mode>]
<br> The CPU load.<br> Return value: Float.<br> See supported platforms.
Parameters:
• cpu - possible values: all (default) or percpu (the total load divided by online CPU count);
• mode - possible values: avg1 (one-minute average, default), avg5, or avg15.
Example:
system.cpu.load[,avg5]
system.cpu.num[<type>]
<br> The number of CPUs.<br> Return value: Integer.<br> Supported platforms: Linux, FreeBSD, Solaris, HP-UX, AIX, MacOS X,
OpenBSD, NetBSD.
Parameter:
The max type parameter is supported only on Linux, FreeBSD, Solaris, MacOS X.
Example:
system.cpu.num
system.cpu.switches
<br> The count of context switches.<br> Return value: Integer.<br> Supported platforms: Linux, FreeBSD, Solaris, AIX, OpenBSD,
NetBSD.
system.cpu.util[<cpu>,<type>,<mode>,<logical or physical>]
<br> The CPU utilization percentage.<br> Return value: Float.<br> Supported platforms: Linux, FreeBSD, Solaris, HP-UX, AIX,
Tru64, OpenBSD, NetBSD.
Parameters:
Comments:
• The nice type parameter is supported only on Linux, FreeBSD, HP-UX, Tru64, OpenBSD, NetBSD.
• The iowait type parameter is supported only on Linux 2.6 and later, Solaris, AIX.
• The interrupt type parameter is supported only on Linux 2.6 and later, FreeBSD, OpenBSD.
• The softirq, steal, guest, guest_nice type parameters are supported only on Linux 2.6 and later.
• The avg5 and avg15 mode parameters are supported on Linux, FreeBSD, Solaris, HP-UX, AIX, OpenBSD, NetBSD.
Example:
system.cpu.util[0,user,avg5]
system.hostname[<type>,<transform>]
<br> The system host name.<br> Return value: String.<br> See supported platforms.
Parameters:
• type - possible values: netbios (default on Windows), host (default on Linux), shorthost (returns part of the hostname before
the first dot, a full string for names without dots), fqdn (returns Fully Qualified Domain Name);
• transform - possible values: none (default) or lower (convert to lowercase).
The value is acquired by taking nodename from the uname() system API output.
Examples of returned values:
system.hostname → linux-w7x1
system.hostname → example.com
system.hostname[shorthost] → example
system.hostname → WIN-SERV2008-I6
system.hostname[host] → Win-Serv2008-I6LonG
system.hostname[host,lower] → win-serv2008-i6long
system.hostname[fqdn,lower] → blog.zabbix.com
system.hw.chassis[<info>]
<br> The chassis information.<br> Return value: String.<br> Supported platforms: Linux.
Parameter:
Comments:
Example:
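A possible usage (assuming vendor is among the supported info values):
system.hw.chassis[vendor]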
system.hw.cpu[<cpu>,<info>]
<br> The CPU information.<br> Return value: String or Integer.<br> Supported platforms: Linux.
Parameters:
Comments:
Example:
system.hw.cpu[0,vendor] → AuthenticAMD
system.hw.devices[<type>]
<br> The listing of PCI or USB devices.<br> Return value: Text.<br> Supported platforms: Linux.
Parameter:
Returns the output of either the lspci or lsusb utility (executed without any parameters).
Example:
system.hw.devices → 00:00.0 Host bridge: Advanced Micro Devices [AMD] RS780 Host Bridge
system.hw.macaddr[<interface>,<format>]
<br> The listing of MAC addresses.<br> Return value: String.<br> Supported platforms: Linux.
Parameters:
Comments:
• Lists the MAC addresses of the interfaces whose name matches the given interface regular expression (all lists them for all interfaces);
• If format is specified as short, interface names and identical MAC addresses are not listed.
Example:
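A possible usage (the interface name is illustrative):
system.hw.macaddr[eth0,full]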
system.localtime[<type>]
<br> The system time.<br> Return value: Integer - with type as utc; String - with type as local.<br> See supported platforms.
Parameters:
• type - possible values: utc - (default) the time since the Epoch (00:00:00 UTC, January 1, 1970), measured in seconds or
local - the time in the ’yyyy-mm-dd,hh:mm:ss.nnn,+hh:mm’ format
Example:
system.localtime[local] #create an item using this key and then use it to display the host time in the Clock dashboard widget
system.run[command,<mode>]
<br> Run the specified command on the host.<br> Return value: Text result of the command or 1 - with mode as nowait (regardless
of the command result).<br> See supported platforms.
Parameters:
Comments:
Example:
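A possible usage (illustrative; remote commands must be allowed in the agent configuration, e.g. via AllowKey):
system.run["ls -l /tmp"] #a detailed file listing of the /tmp directory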
system.stat[resource,<type>]
<br> The system statistics.<br> Return value: Integer or float.<br> Supported platforms: AIX.
Parameters:
• ent - the number of processor units this partition is entitled to receive (float);
• kthr,<type> - information about kernel thread states:<br>r - average number of runnable kernel threads (float)<br>b -
average number of kernel threads placed in the Virtual Memory Manager wait queue (float)
• memory,<type> - information about the usage of virtual and real memory:<br>avm - active virtual pages (integer)<br>fre
- size of the free list (integer)
• page,<type> - information about page faults and paging activity:<br>fi - file page-ins per second (float)<br>fo - file
page-outs per second (float)<br>pi - pages paged in from paging space (float)<br>po - pages paged out to paging space
(float)<br>fr - pages freed (page replacement) (float)<br>sr - pages scanned by page-replacement algorithm (float)
• faults,<type> - trap and interrupt rate:<br>in - device interrupts (float)<br>sy - system calls (float)<br>cs - kernel thread
context switches (float)
• cpu,<type> - breakdown of percentage usage of processor time:<br>us - user time (float)<br>sy - system time
(float)<br>id - idle time (float)<br>wa - idle time during which the system had outstanding disk/NFS I/O request(s)
(float)<br>pc - number of physical processors consumed (float)<br>ec - the percentage of entitled capacity consumed
(float)<br>lbusy - indicates the percentage of logical processor(s) utilization that occurred while executing at the user and
system level (float)<br>app - indicates the available physical processors in the shared pool (float)
• disk,<type> - disk statistics:<br>bps - indicates the amount of data transferred (read or written) to the drive in bytes per second (integer)<br>tps - indicates the number of transfers per second that were issued to the physical disk/tape (float)
Comments:
system.sw.arch
<br> The software architecture information.<br> Return value: String.<br> See supported platforms.
The info is acquired from the uname() function.
Example:
system.sw.arch → i686
system.sw.os[<info>]
<br> The operating system information.<br> Return value: String.<br> Supported platforms: Linux, Windows.
Parameter:
The info is acquired from (note that not all files and options are present in all distributions):
Examples:
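A possible usage (assuming name is among the supported info values):
system.sw.os[name]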
system.sw.os.get
<br> Detailed information about the operating system (version, type, distribution name, minor and major version, etc.).<br> Return value: JSON object.<br> Supported platforms: Linux, Windows.
system.sw.packages[<regexp>,<manager>,<format>]
<br> The listing of installed packages.<br> Return value: Text.<br> Supported platforms: Linux.
Parameters:
Comments:
• Lists (alphabetically) the installed packages whose name matches the given regular expression (all lists them all);
• Supported package managers (executed command):<br>dpkg (dpkg --get-selections)<br>pkgtool (ls /var/log/packages)<br>rpm
(rpm -qa)<br>pacman (pacman -Q)<br>portage
• If format is specified as full, packages are grouped by package managers (each manager on a separate line beginning with
its name in square brackets);
• If format is specified as short, packages are not grouped and are listed on a single line.
Example:
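A possible usage (assuming a dpkg-based system; the regular expression is illustrative):
system.sw.packages[ssh,dpkg,short]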
system.sw.packages.get[<regexp>,<manager>]
<br> A detailed listing of installed packages.<br> Return value: JSON object.<br> Supported platforms: Linux.
Parameters:
Comments:
• Returns unformatted JSON with the installed packages whose name matches the given regular expression;
• The output is an array of objects each containing the following keys: name, manager, version, size, architecture, buildtime
and installtime (see more details).
system.swap.in[<device>,<type>]
<br> The swap-in (from device into memory) statistics.<br> Return value: Integer.<br> Supported platforms: Linux, FreeBSD,
OpenBSD.
Parameters:
• device - specify the device used for swapping (Linux only) or all (default);
• type - possible values: count (number of swapins, default on non-Linux platforms), sectors (sectors swapped in), or pages
(pages swapped in, default on Linux).
Comments:
• The source of this information is:<br>/proc/swaps, /proc/partitions, /proc/stat (Linux 2.4)<br>/proc/swaps, /proc/diskstats,
/proc/vmstat (Linux 2.6)
• Note that pages will only work if device was not specified;
• The sectors type parameter is supported only on Linux.
Example:
system.swap.in[,pages]
system.swap.out[<device>,<type>]
<br> The swap-out (from memory onto device) statistics.<br> Return value: Integer.<br> Supported platforms: Linux, FreeBSD,
OpenBSD.
Parameters:
• device - specify the device used for swapping (Linux only) or all (default);
• type - possible values: count (number of swapouts, default on non-Linux platforms), sectors (sectors swapped out), or pages
(pages swapped out, default on Linux).
Comments:
Example:
system.swap.out[,pages]
system.swap.size[<device>,<type>]
<br> The swap space size in bytes or in percentage from total.<br> Return value: Integer - for bytes; Float - for percentage.<br>
Supported platforms: Linux, FreeBSD, Solaris, AIX, Tru64, OpenBSD.
Parameters:
• device - specify the device used for swapping (FreeBSD only) or all (default);
• type - possible values: free (free swap space, default), pfree (free swap space, in percent), pused (used swap space, in
percent), total (total swap space), or used (used swap space).
Comments:
• Note that pfree, pused are not supported on Windows if swap size is 0;
• If device is not specified Zabbix agent will only take into account swap devices (files), the physical memory will be ignored.
For example, on Solaris systems the swap -s command includes a portion of physical memory and swap devices (unlike
swap -l).
Example:
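A possible usage, returning the free swap space as a percentage of the total:
system.swap.size[,pfree]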
system.uname
<br> Identification of the system.<br> Return value: String.<br> See supported platforms.
Comments:
• On UNIX the value for this item is obtained with the uname() system call;
• On Windows the item returns the OS architecture, whereas on UNIX it returns the CPU architecture.
Example:
system.uname → FreeBSD localhost 4.2-RELEASE FreeBSD 4.2-RELEASE #0: Mon Nov i386
system.uname → Windows ZABBIX-WIN 6.0.6001 Microsoft® Windows Server® 2008 Standard Service Pack 1 x86
system.uptime
<br> The system uptime in seconds.<br> Return value: Integer.<br> Supported platforms: Linux, FreeBSD, Solaris, AIX, MacOS
X, OpenBSD, NetBSD. The support on Tru64 is unknown.
In item configuration, use s or uptime units to get readable values.
system.users.num
<br> The number of users logged in.<br> Return value: Integer.<br> See supported platforms.
The who command is used on the agent side to obtain the value.
vfs.dev.discovery
<br> The list of block devices and their type. Used for low-level discovery.<br> Return value: JSON object.<br> Supported
platforms: Linux.
vfs.dev.read[<device>,<type>,<mode>]
<br> The disk read statistics.<br> Return value: Integer - with type in sectors, operations, bytes; Float - with type in sps, ops,
bps.<br> Supported platforms: Linux, FreeBSD, Solaris, AIX, OpenBSD.
Parameters:
• device - disk device (default is all³);
• type - possible values: sectors, operations, bytes, sps, ops, or bps (sps, ops, bps stand for: sectors, operations, bytes per
second, respectively);
• mode - possible values: avg1 (one-minute average, default), avg5, or avg15. This parameter is supported only with type
in: sps, ops, bps.
Comments:
• If using an update interval of three hours or more², this item will always return ’0’;
• The sectors and sps type parameters are supported only on Linux;
• The ops type parameter is supported only on Linux and FreeBSD;
• The bps type parameter is supported only on FreeBSD;
• The bytes type parameter is supported only on FreeBSD, Solaris, AIX, OpenBSD;
• The mode parameter is supported only on Linux, FreeBSD;
• You may use relative device names (for example, sda) as well as an optional /dev/ prefix (for example, /dev/sda);
• LVM logical volumes are supported;
• The default values of ’type’ parameter for different OSes:<br>AIX - operations<br>FreeBSD - bps<br>Linux - sps<br>OpenBSD
- operations<br>Solaris - bytes
• The sps, ops and bps values on supported platforms are limited to 1024 devices (1023 individual and one for all).
Example:
vfs.dev.read[,operations]
vfs.dev.write[<device>,<type>,<mode>]
<br> The disk write statistics.<br> Return value: Integer - with type in sectors, operations, bytes; Float - with type in sps, ops,
bps.<br> Supported platforms: Linux, FreeBSD, Solaris, AIX, OpenBSD.
Parameters:
• device - disk device (default is all³);
• type - possible values: sectors, operations, bytes, sps, ops, or bps (sps, ops, bps stand for: sectors, operations, bytes per
second, respectively);
• mode - possible values: avg1 (one-minute average, default), avg5, or avg15. This parameter is supported only with type
in: sps, ops, bps.
Comments:
• If using an update interval of three hours or more², this item will always return ’0’;
• The sectors and sps type parameters are supported only on Linux;
• The ops type parameter is supported only on Linux and FreeBSD;
• The bps type parameter is supported only on FreeBSD;
• The bytes type parameter is supported only on FreeBSD, Solaris, AIX, OpenBSD;
• The mode parameter is supported only on Linux, FreeBSD;
• You may use relative device names (for example, sda) as well as an optional /dev/ prefix (for example, /dev/sda);
• LVM logical volumes are supported;
• The default values of ’type’ parameter for different OSes:<br>AIX - operations<br>FreeBSD - bps<br>Linux - sps<br>OpenBSD
- operations<br>Solaris - bytes
• The sps, ops and bps values on supported platforms are limited to 1024 devices (1023 individual and one for all).
Example:
vfs.dev.write[,operations]
vfs.dir.count[dir,<regex incl>,<regex excl>,<types incl>,<types excl>,<max depth>,<min size>,<max size>,<min age>,<max
age>,<regex excl dir>]
<br> The directory entry count.<br> Return value: Integer.<br> See supported platforms.
Parameters:
Comments:
• Environment variables, e.g. %APP_HOME%, $HOME and %TEMP% are not supported;
• Pseudo-directories ”.” and ”..” are never counted;
• Symbolic links are never followed for directory traversal;
• Both regex incl and regex excl are applied to files and directories when calculating the entry count, but are ignored when picking subdirectories to traverse (if regex incl is “(?i)^.+\.zip$” and max depth is not set, then all subdirectories will be traversed, but only the files of type zip will be counted).
• The execution time is limited by the Timeout value in the agent configuration (3 sec by default). Since a large directory traversal may take longer than that, no data will be returned and the item will turn unsupported. A partial count will not be returned.
• When filtering by size, only regular files have meaningful sizes. Under Linux and BSD, directories also have non-zero sizes
(a few Kb typically). Devices have zero sizes, e.g. the size of /dev/sda1 does not reflect the respective partition size.
Therefore, when using <min_size> and <max_size>, it is advisable to specify <types_incl> as ”file”, to avoid surprises.
Examples:
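A possible usage (illustrative parameter values), counting regular files directly under /etc without descending into subdirectories:
vfs.dir.count[/etc,,,file,,0]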
vfs.dir.get[dir,<regex incl>,<regex excl>,<types incl>,<types excl>,<max depth>,<min size>,<max size>,<min age>,<max age>,<regex excl dir>]
<br> The directory entry list.<br> Return value: JSON object.<br> See supported platforms.
Parameters:
• max size - the maximum size (in bytes) for file to be listed. Larger files will not be listed. Memory suffixes can be used.
• min age - the minimum age (in seconds) of directory entry to be listed. More recent entries will not be listed. Time suffixes
can be used.
• max age - the maximum age (in seconds) of directory entry to be listed. Entries so old and older will not be listed (modification time). Time suffixes can be used.
• regex excl dir - a regular expression describing the name pattern of the directory to exclude. All content of the directory
will be excluded (in contrast to regex excl)
Comments:
• Environment variables, e.g. %APP_HOME%, $HOME and %TEMP% are not supported;
• Pseudo-directories ”.” and ”..” are never listed;
• Symbolic links are never followed for directory traversal;
• Both regex incl and regex excl are applied to files and directories when generating the entry list, but are ignored when picking subdirectories to traverse (if regex incl is “(?i)^.+\.zip$” and max depth is not set, then all subdirectories will be traversed, but only the files of type zip will be listed).
• The execution time is limited by the Timeout value in the agent configuration. Since a large directory traversal may take longer than that, no data will be returned and the item will turn unsupported. A partial list will not be returned.
• When filtering by size, only regular files have meaningful sizes. Under Linux and BSD, directories also have non-zero sizes
(a few Kb typically). Devices have zero sizes, e.g. the size of /dev/sda1 does not reflect the respective partition size.
Therefore, when using min size and max size, it is advisable to specify types incl as ”file”, to avoid surprises.
Examples:
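A possible usage (illustrative parameter values), listing only regular files directly under /var/log:
vfs.dir.get[/var/log,,,file,,0]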
vfs.dir.size[dir,<regex incl>,<regex excl>,<mode>]
<br> The directory size (in bytes).<br> Return value: Integer.<br> Supported platforms: Linux. The item may work on other UNIX-like platforms.
Parameters:
Comments:
• Only directories with at least the read permission for zabbix user are calculated. For directories with read permission only,
the size of the directory itself is calculated. Directories with read & execute permissions are calculated including contents.
• With large directories or slow drives this item may time out due to the Timeout setting in agent and server/proxy configuration
files. Increase the timeout values as necessary.
• The file size limit depends on large file support.
Examples:
vfs.dir.size[/tmp,log] #calculates the size of all files in /tmp containing 'log' in their names
vfs.dir.size[/tmp,log,^.+\.old$] #calculates the size of all files in /tmp containing 'log' in their names, excluding files with names ending in '.old'
vfs.file.cksum[file,<mode>]
<br> The file checksum, calculated by the UNIX cksum algorithm.<br> Return value: Integer - with mode as crc32, String - with
mode as md5, sha256.<br> See supported platforms.
Parameters:
Example:
vfs.file.cksum[/etc/passwd]
Example of returned values (crc32/md5/sha256 respectively):
675436101
9845acf68b73991eb7fd7ee0ded23c44
ae67546e4aac995e5c921042d0cf0f1f7147703aa42bfbfb65404b30f238f2dc
vfs.file.contents[file,<encoding>]
<br> Retrieves the contents of a file⁷.<br> Return value: Text.<br> See supported platforms.
Parameters:
Comments:
• The return value is limited to 16MB (including trailing whitespace that is truncated); database limits also apply;
• An empty string is returned if the file is empty or contains LF/CR characters only;
• The byte order mark (BOM) is excluded from the output.
Example:
vfs.file.contents[/etc/passwd]
vfs.file.exists[file,<types incl>,<types excl>]
<br> Checks if the file exists.<br> Return value: 0 - not found; 1 - file of the specified type exists.<br> See supported platforms.
Parameters:
Comments:
• Multiple types must be separated with a comma and the entire set enclosed in double quotes "";
• If the same type is in both <types_incl> and <types_excl>, files of this type are excluded;
• The file size limit depends on large file support.
Examples:
vfs.file.exists[/tmp/application.pid]
vfs.file.exists[/tmp/application.pid,"file,dir,sym"]
vfs.file.exists[/tmp/application_dir,dir]
vfs.file.get[file]
<br> Returns information about a file.<br> Return value: JSON object.<br> See supported platforms.
Parameter:
Comments:
• Supported file types on UNIX-like systems: regular file, directory, symbolic link, socket, block device, character device, FIFO.
• The file size limit depends on large file support.
Example:
vfs.file.get[/etc/passwd] #returns a JSON with information about the /etc/passwd file (type, user, permissions, and other attributes)
vfs.file.md5sum[file]
<br> The MD5 checksum of file.<br> Return value: Character string (MD5 hash of the file).<br> See supported platforms.
Parameter:
Example:
vfs.file.md5sum[/usr/local/etc/zabbix_agentd.conf]
Example of returned value:
b5052decb577e0fffd622d6ddc017e82
vfs.file.owner[file,<ownertype>,<resulttype>]
<br> Retrieves the owner of a file.<br> Return value: String.<br> See supported platforms.
Parameters:
Example:
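A possible usage (the file path is illustrative):
vfs.file.owner[/tmp/zabbix_agentd.log]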
vfs.file.permissions[file]
<br> Returns a 4-digit string containing the octal number with UNIX permissions.<br> Return value: String.<br> Supported platforms: Linux. The item may work on other UNIX-like platforms.
Parameters:
Example:
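A possible usage:
vfs.file.permissions[/etc/passwd]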
Comments:
vfs.file.regexp[file,regexp,<encoding>,<start line>,<end line>,<output>]
<br> Find a string in the file⁷.<br> Return value: the line containing the matched string, or as specified by the optional output parameter.<br> See supported platforms.
Examples:
vfs.file.regexp[/etc/passwd,zabbix]
vfs.file.regexp[/path/to/some/file,"([0-9]+)$",,3,5,\1]
vfs.file.regexp[/etc/passwd,"^zabbix:.:([0-9]+)",,,,\1] → getting the ID of user *zabbix*
vfs.file.regmatch[file,regexp,<encoding>,<start line>,<end line>]
<br> Find a string in the file⁷.<br> Return values: 0 - match not found; 1 - found.<br> See supported platforms.
Parameters:
• start line - the number of the first line to search (first line of file by default);
• end line - the number of the last line to search (last line of file by default).
Comments:
Example:
vfs.file.regmatch[/var/log/app.log,error]
vfs.file.size[file,<mode>]
<br> The file size (in bytes).<br> Return value: Integer.<br> See supported platforms.
Parameters:
Comments:
Example:
vfs.file.size[/var/log/syslog]
vfs.file.time[file,<mode>]
<br> The file time information.<br> Return value: Integer (Unix timestamp).<br> See supported platforms.
Parameters:
Example:
vfs.file.time[/etc/passwd,modify]
vfs.fs.discovery
<br> The list of mounted filesystems with their type and mount options. Used for low-level discovery.<br> Return value: JSON
object.<br> Supported platforms: Linux, FreeBSD, Solaris, HP-UX, AIX, MacOS X, OpenBSD, NetBSD.
vfs.fs.get
<br> The list of mounted filesystems with their type, available disk space, inode statistics and mount options. Can be used for
low-level discovery.<br> Return value: JSON object.<br> Supported platforms: Linux, FreeBSD, Solaris, HP-UX, AIX, MacOS X,
OpenBSD, NetBSD.
Comments:
• File systems with the inode count equal to zero, which can be the case for file systems with dynamic inodes (e.g. btrfs), are
also reported;
• See also: Discovery of mounted filesystems.
vfs.fs.inode[fs,<mode>]
<br> The number or percentage of inodes.<br> Return value: Integer - for number; Float - for percentage.<br> See supported
platforms.
Parameters:
• fs - the filesystem;
• mode - possible values: total (default), free, used, pfree (free, percentage), or pused (used, percentage).
If the inode count equals zero, which can be the case for file systems with dynamic inodes (e.g. btrfs), the pfree/pused values will
be reported as ”100” and ”0” respectively.
Example:
vfs.fs.inode[/,pfree]
vfs.fs.size[fs,<mode>]
<br> The disk space in bytes or in percentage from total.<br> Return value: Integer - for bytes; Float - for percentage.<br> See
supported platforms.
Parameters:
• fs - the filesystem;
• mode - possible values: total (default), free, used, pfree (free, percentage), or pused (used, percentage).
Comments:
• If the filesystem is not mounted, returns the size of a local filesystem where the mount point is located;
• The reserved space of a file system is taken into account and not included when using the free mode.
Example:
vfs.fs.size[/tmp,free]
vm.memory.size[<mode>]
<br> The memory size in bytes or in percentage from total.<br> Return value: Integer - for bytes; Float - for percentage.<br>
See supported platforms.
Parameter:
• mode - possible values: total (default), active, anon, buffers, cached, exec, file, free, inactive, pinned, shared, slab, wired,
used, pused (used, percentage), available, or pavailable (available, percentage).
Comments:
• This item accepts three categories of parameters:<br>1) total - total amount of memory<br>2) platform-specific memory
types: active, anon, buffers, cached, exec, file, free, inactive, pinned, shared, slab, wired<br>3) user-level estimates on
how much memory is used and available: used, pused, available, pavailable
• The active mode parameter is supported only on FreeBSD, HP-UX, MacOS X, OpenBSD, NetBSD;
• The anon, exec, file mode parameters are supported only on NetBSD;
• The buffers mode parameter is supported only on Linux, FreeBSD, OpenBSD, NetBSD;
• The cached mode parameter is supported only on Linux, FreeBSD, AIX, OpenBSD, NetBSD;
• The inactive, wired mode parameters are supported only on FreeBSD, MacOS X, OpenBSD, NetBSD;
• The pinned mode parameter is supported only on AIX;
• The shared mode parameter is supported only on Linux 2.4, FreeBSD, OpenBSD, NetBSD;
• See also additional details for this item.
Example:
vm.memory.size[pavailable]
web.page.get[host,<path>,<port>]
<br> Get the content of a web page.<br> Return value: Web page source as text (including headers).<br> See supported platforms.
Parameters:
• host - the hostname or URL (as scheme://host:port/path, where only host is mandatory). Allowed URL schemes: http, https⁴. A missing scheme will be treated as http. If a URL is specified, path and port must be empty. Specifying a user name/password when connecting to servers that require authentication, for example http://user:[email protected], is only possible with cURL support⁴. Punycode is supported in hostnames.
• path - the path to an HTML document (default is /);
• port - the port number (default is 80 for HTTP)
Comments:
• This item turns unsupported if the resource specified in host does not exist or is unavailable;
• host can be a hostname, domain name, IPv4 or IPv6 address. But for IPv6 address Zabbix agent must be compiled with
IPv6 support enabled.
Example:
web.page.get[www.example.com,index.php,80]
web.page.get[https://www.example.com]
web.page.get[https://blog.example.com/?s=zabbix]
web.page.get[localhost:80]
web.page.get["[::1]/server-status"]
web.page.perf[host,<path>,<port>]
<br> The loading time of a full web page (in seconds).<br> Return value: Float.<br> See supported platforms.
Parameters:
• host - the hostname or URL (as scheme://host:port/path, where only host is mandatory). Allowed URL schemes: http, https⁴. A missing scheme will be treated as http. If a URL is specified, path and port must be empty. Specifying a user name/password when connecting to servers that require authentication, for example http://user:[email protected], is only possible with cURL support⁴. Punycode is supported in hostnames.
• path - the path to an HTML document (default is /);
• port - the port number (default is 80 for HTTP)
Comments:
• This item turns unsupported if the resource specified in host does not exist or is unavailable;
• host can be a hostname, domain name, IPv4 or IPv6 address. But for IPv6 address Zabbix agent must be compiled with
IPv6 support enabled.
Example:
web.page.perf[www.example.com,index.php,80]
web.page.perf[https://www.example.com]
web.page.regexp[host,<path>,<port>,regexp,<length>,<output>]
<br> Find a string on the web page.<br> Return value: The matched string, or as specified by the optional output parameter.<br>
See supported platforms.
Parameters:
• host - the hostname or URL (as scheme://host:port/path, where only host is mandatory). Allowed URL schemes: http, https⁴. A missing scheme will be treated as http. If a URL is specified, path and port must be empty. Specifying a user name/password when connecting to servers that require authentication, for example http://user:[email protected], is only possible with cURL support⁴. Punycode is supported in hostnames.
• path - the path to an HTML document (default is /);
• port - the port number (default is 80 for HTTP)
• regexp - a regular expression describing the required pattern;
• length - the maximum number of characters to return;
• output - an optional output formatting template. The \0 escape sequence is replaced with the matched part of text (from
the first character where match begins until the character where match ends) while an \N (where N=1...9) escape sequence
is replaced with Nth matched group (or an empty string if the N exceeds the number of captured groups).
Comments:
• This item turns unsupported if the resource specified in host does not exist or is unavailable;
• host can be a hostname, domain name, IPv4 or IPv6 address. But for IPv6 address Zabbix agent must be compiled with
IPv6 support enabled.
• Content extraction using the output parameter takes place on the agent.
Example:
web.page.regexp[www.example.com,index.php,80,OK,2]
web.page.regexp[https://www.example.com,,,OK,2]|
agent.hostmetadata
<br> The agent host metadata.<br> Return value: String.<br> See supported platforms.
Returns the value of HostMetadata or HostMetadataItem parameters, or empty string if none are defined.
agent.hostname
<br> The agent host name.<br> Return value: String.<br> See supported platforms.
Returns:
• As a passive check - the name of the first host listed in the Hostname parameter of the agent configuration file;
• As an active check - the current host name.
agent.ping
<br> The agent availability check.<br> Return value: Nothing - unavailable; 1 - available.<br> See supported platforms.
agent.variant
<br> The variant of Zabbix agent (Zabbix agent or Zabbix agent 2).<br> Return value: 1 - Zabbix agent; 2 - Zabbix agent 2.<br>
See supported platforms.
agent.version
<br> The version of Zabbix agent.<br> Return value: String.<br> See supported platforms.
Example of returned value:
6.0.3
zabbix.stats[<ip>,<port>]
<br> Returns a set of Zabbix server or proxy internal metrics remotely.<br> Return value: JSON object.<br> See supported
platforms.
Parameters:
Comments:
• A selected set of internal metrics is returned by this item. For details, see Remote monitoring of Zabbix stats;
• Note that the stats request will only be accepted from the addresses listed in the ’StatsAllowedIP’ server/proxy parameter
on the target instance.
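A possible usage (the address is illustrative; the target instance must list this host in StatsAllowedIP):
zabbix.stats[192.0.2.10,10051]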
zabbix.stats[<ip>,<port>,queue,<from>,<to>]
<br> Returns the number of monitored items in the queue which are delayed on Zabbix server or proxy remotely.<br> Return
value: JSON object.<br> See supported platforms.
Parameters:
Note that the stats request will only be accepted from the addresses listed in the ’StatsAllowedIP’ server/proxy parameter on the
target instance.
Footnotes
¹ A Linux-specific note: Zabbix agent must have read-only access to the /proc filesystem. Kernel patches from www.grsecurity.org limit access rights of non-privileged users.
² vfs.dev.read[], vfs.dev.write[]: Zabbix agent will terminate ”stale” device connections if the item values are not accessed for more than 3 hours. This may happen if a system has devices with dynamically changing paths or if a device gets manually removed. Note also that these items, if using an update interval of 3 hours or more, will always return ’0’.
³ vfs.dev.read[], vfs.dev.write[]: If the default all is used for the first parameter, the key returns summary statistics, including all block devices like sda, sdb and their partitions (sda1, sda2, sdb3...), multiple devices (MD raid) based on those block devices/partitions, and logical volumes (LVM) based on those block devices/partitions. In such cases the returned values should be considered only as relative values (dynamic in time), not as absolute values.
⁴ SSL (HTTPS) is supported only if the agent is compiled with cURL support. Otherwise the item will turn unsupported.
⁵ The bytes and errors values are not supported for loopback interfaces on Solaris systems up to and including Solaris 10 6/06, as byte, error and utilization statistics are not stored and/or reported by the kernel. However, if you are monitoring a Solaris system via net-snmp, values may be returned, as net-snmp carries legacy code from cmu-snmp dating back to 1997 that, upon failing to read byte values from the interface statistics, returns the packet counter (which does exist on loopback interfaces) multiplied by an arbitrary value of 308. This assumes that the average length of a packet is 308 octets, which is a very rough estimation, as the MTU limit on Solaris systems for loopback interfaces is 8892 bytes. These values should not be assumed to be correct or even closely accurate; they are guesstimates. The Zabbix agent does not do any guesswork, but net-snmp will return a value for these fields.
⁶ The command line on Solaris, obtained from /proc/pid/psinfo, is limited to 80 bytes and contains the command line as it was when the process was started.
⁷ The vfs.file.contents[], vfs.file.regexp[], vfs.file.regmatch[] items can be used for retrieving file contents. If you want to restrict access to specific files with sensitive information, run Zabbix agent under a user that has no permission to view these files.
Note that when testing or using item keys with zabbix_agentd or zabbix_get from the command line you should consider shell
syntax too.
For example, if a certain parameter of the key has to be enclosed in double quotes you have to explicitly escape double quotes,
otherwise they will be trimmed by the shell as special characters and will not be passed to the Zabbix utility.
Examples:
zabbix_agentd -t 'vfs.dir.count[/var/log,,,"file,dir",,0]'
zabbix_agentd -t vfs.dir.count[/var/log,,,\"file,dir\",,0]
Encoding settings
To make sure that the acquired data are not corrupted you may specify the correct encoding for processing the check (e.g.
’vfs.file.contents’) in the encoding parameter. The list of supported encodings (code page identifiers) may be found in docu-
mentation for libiconv (GNU Project) or in Microsoft Windows SDK documentation for ”Code Page Identifiers”.
If no encoding is specified in the encoding parameter the following resolution strategies are applied:
• If encoding is not specified (or is an empty string), it is assumed to be UTF-8 and the data is processed ”as-is”;
• BOM analysis - applicable for items ’vfs.file.contents’, ’vfs.file.regexp’, ’vfs.file.regmatch’. An attempt is made to determine
the correct encoding by using the byte order mark (BOM) at the beginning of the file. If BOM is not present - standard
resolution (see above) is applied instead.
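For instance (the file path and code page are illustrative), a file saved in Windows-1251 could be read with the encoding stated explicitly:
vfs.file.contents[/var/log/app.log,WINDOWS-1251]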
In case of passive checks, to prevent an item from getting no value because the server request to the agent times out first, note the following:
• Where agent version is older than server version, the Timeout value in the item configuration (or global timeout) may need
to be higher than the Timeout value in the agent configuration file.
• Where agent version is newer than server version, the Timeout value in the server configuration file may need to be higher
than the Timeout value in the agent configuration file.
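As an illustration (the values are arbitrary): with Timeout=3 in the agent configuration file, an older agent polled by a newer server could be given an item-level timeout of 5s, so the server waits longer than the agent needs to answer.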
1 Zabbix agent 2
Zabbix agent 2 supports all item keys supported for Zabbix agent on Unix and Windows. This page provides details on the additional
item keys, which you can use with Zabbix agent 2 only, grouped by the plugin they belong to.
The item keys are listed without parameters and additional information. Click on the item key to see the full details.
Ceph plugin:
• ceph.df.details - The cluster’s data usage and distribution among pools.
• ceph.osd.stats - Aggregated and per OSD statistics.
• ceph.osd.discovery - The list of discovered OSDs.
• ceph.osd.dump - The usage thresholds and statuses of OSDs.
• ceph.ping - Tests whether a connection to Ceph can be established.
• ceph.pool.discovery - The list of discovered pools.
• ceph.status - The overall cluster’s status.
Docker plugin:
• docker.container_info - Low-level information about a container.
• docker.container_stats - The container resource usage statistics.
• docker.containers - Returns the list of containers.
• docker.containers.discovery - Returns the list of containers. Used for low-level discovery.
• docker.data.usage - Information about the current data usage.
• docker.images - Returns the list of images.
• docker.images.discovery - Returns the list of images. Used for low-level discovery.
• docker.info - The system information.
• docker.ping - Test if the Docker daemon is alive or not.
Ember+ plugin:
• ember.get - Returns the result of the required device.
Memcached plugin:
• memcached.ping - Test if a connection is alive or not.
See also:
• Built-in plugins
• Loadable plugins
Parameters without angle brackets are mandatory. Parameters marked with angle brackets < > are optional.
ceph.df.details[connString,<user>,<apikey>]
<br> The cluster’s data usage and distribution among pools.<br> Return value: JSON object.
Parameters:
• connString - the URI or session name;<br>
• user, apikey - the Ceph login credentials.<br>
ceph.osd.stats[connString,<user>,<apikey>]
<br> Aggregated and per OSD statistics.<br> Return value: JSON object.
Parameters:
ceph.osd.discovery[connString,<user>,<apikey>]
<br> The list of discovered OSDs. Used for low-level discovery.<br> Return value: JSON object.
Parameters:
ceph.osd.dump[connString,<user>,<apikey>]
<br> The usage thresholds and statuses of OSDs.<br> Return value: JSON object.
Parameters:
ceph.ping[connString,<user>,<apikey>]
<br> Tests whether a connection to Ceph can be established.<br> Return value: 0 - connection is broken (if there is any error
presented including AUTH and configuration issues); 1 - connection is successful.
Parameters:
ceph.pool.discovery[connString,<user>,<apikey>]
<br> The list of discovered pools. Used for low-level discovery.<br> Return value: JSON object.
Parameters:
ceph.status[connString,<user>,<apikey>]
<br> The overall cluster’s status.<br> Return value: JSON object.
Parameters:
docker.container_info[<ID>,<info>]
<br> Low-level information about a container.<br> Return value: The output of the ContainerInspect API call serialized as JSON.
Parameters:
The Agent 2 user (’zabbix’) must be added to the ’docker’ group for sufficient privileges. Otherwise the check will fail.
docker.container_stats[<ID>]
<br> The container resource usage statistics.<br> Return value: The output of the ContainerStats API call and CPU usage percent-
age serialized as JSON.
Parameter:
The Agent 2 user (’zabbix’) must be added to the ’docker’ group for sufficient privileges. Otherwise the check will fail.
docker.containers
<br> The list of containers.<br> Return value: The output of the ContainerList API call serialized as JSON.
The Agent 2 user (’zabbix’) must be added to the ’docker’ group for sufficient privileges. Otherwise the check will fail.
docker.containers.discovery[<options>]
<br> Returns the list of containers. Used for low-level discovery.<br> Return value: JSON object.
Parameter:
• options - specify whether all or only running containers should be discovered. Supported values: true - return all containers;
false - return only running containers (default).
The Agent 2 user (’zabbix’) must be added to the ’docker’ group for sufficient privileges. Otherwise the check will fail.
docker.data.usage
<br> Information about the current data usage.<br> Return value: The output of the SystemDataUsage API call serialized as JSON.
The Agent 2 user (’zabbix’) must be added to the ’docker’ group for sufficient privileges. Otherwise the check will fail.
docker.images
<br> Returns the list of images.<br> Return value: The output of the ImageList API call serialized as JSON.
The Agent 2 user (’zabbix’) must be added to the ’docker’ group for sufficient privileges. Otherwise the check will fail.
docker.images.discovery
<br> Returns the list of images. Used for low-level discovery.<br> Return value: JSON object.
The Agent 2 user (’zabbix’) must be added to the ’docker’ group for sufficient privileges. Otherwise the check will fail.
docker.info
<br> The system information.<br> Return value: The output of the SystemInfo API call serialized as JSON.
The Agent 2 user (’zabbix’) must be added to the ’docker’ group for sufficient privileges. Otherwise the check will fail.
docker.ping
<br> Test if the Docker daemon is alive or not.<br> Return value: 1 - the connection is alive; 0 - the connection is broken.
The Agent 2 user (’zabbix’) must be added to the ’docker’ group for sufficient privileges. Otherwise the check will fail.
ember.get[<uri>,<path>]
<br> Returns the result of the required device.<br> Return value: JSON object.
Parameters:
memcached.ping[connString,<user>,<password>]
<br> Test if a connection is alive or not.<br> Return value: 1 - the connection is alive; 0 - the connection is broken (if there is any
error presented including AUTH and configuration issues).
Parameters:
memcached.stats[connString,<user>,<password>,<type>]
<br> Gets the output of the STATS command.<br> Return value: JSON - the output is serialized as JSON.
Parameters:
mongodb.collection.stats[connString,<user>,<password>,<database>,collection]
<br> Returns a variety of storage statistics for a given collection.<br> Return value: JSON object.
Parameters:
mongodb.collections.discovery[connString,<user>,<password>]
<br> Returns a list of discovered collections. Used for low-level discovery.<br> Return value: JSON object.
Parameters:
mongodb.collections.usage[connString,<user>,<password>]
<br> Returns the usage statistics for collections.<br> Return value: JSON object.
Parameters:
mongodb.connpool.stats[connString,<user>,<password>]
<br> Returns information regarding the open outgoing connections from the current database instance to other members of the
sharded cluster or replica set.<br> Return value: JSON object.
Parameters:
mongodb.db.stats[connString,<user>,<password>,<database>]
<br> Returns the statistics reflecting a given database system state.<br> Return value: JSON object.
Parameters:
mongodb.db.discovery[connString,<user>,<password>]
<br> Returns a list of discovered databases. Used for low-level discovery.<br> Return value: JSON object.
Parameters:
mongodb.jumbo_chunks.count[connString,<user>,<password>]
<br> Returns the count of jumbo chunks.<br> Return value: JSON object.
Parameters:
mongodb.oplog.stats[connString,<user>,<password>]
<br> Returns the status of the replica set, using data polled from the oplog.<br> Return value: JSON object.
Parameters:
mongodb.ping[connString,<user>,<password>]
<br> Test if a connection is alive or not.<br> Return value: 1 - the connection is alive; 0 - the connection is broken (if there is any
error presented including AUTH and configuration issues).
Parameters:
mongodb.rs.config[connString,<user>,<password>]
<br> Returns the current configuration of the replica set.<br> Return value: JSON object.
Parameters:
mongodb.rs.status[connString,<user>,<password>]
<br> Returns the replica set status from the point of view of the member where the method is run.<br> Return value: JSON object.
Parameters:
mongodb.server.status[connString,<user>,<password>]
Parameters:
mongodb.sh.discovery[connString,<user>,<password>]
<br> Returns the list of discovered shards present in the cluster.<br> Return value: JSON object.
Parameters:
mongodb.version[connString,<user>,<password>]
Parameters:
mqtt.get[<broker url>,topic,<user>,<password>]
<br> Subscribes to a specific topic or topics (with wildcards) of the provided broker and waits for publications.<br> Return value:
Depending on topic content. If wildcards are used, returns topic content as JSON.
Parameters:
• broker url - the MQTT broker URL in the format protocol://host:port without query parameters (supported protocols: tcp, ssl, ws). If no value is specified, the agent will use tcp://localhost:1883. If the protocol or port is omitted, the default protocol (tcp) or port (1883) will be used;<br>
• topic - the MQTT topic (mandatory). Wildcards (+,#) are supported;<br>
• user, password - the authentication credentials (if required).<br>
Comments:
• The item must be configured as an active check (’Zabbix agent (active)’ item type);
• TLS encryption certificates can be used by saving them into a default location (e.g. /etc/ssl/certs/ directory for Ubuntu).
For TLS, use the tls:// scheme.
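A possible usage (the broker address and topic layout are assumptions), subscribing to temperature readings published under a per-sensor topic:
mqtt.get[tcp://localhost:1883,"sensors/+/temperature"]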
mssql.availability.group.get[URI,<user>,<password>]
Parameters:
• URI - MSSQL server URI (the only supported schema is sqlserver://). Embedded credentials will be ignored;<br>
• user, password - username, password to send to protected MSSQL server.<br>
mssql.custom.query[URI,<user>,<password>,queryName,<args...>]
<br> Returns the result of a custom query.<br> Return value: JSON object.
Parameters:
• URI - MSSQL server URI (the only supported scheme is sqlserver://). Embedded credentials will be ignored;<br>
• user, password - username, password to send to protected MSSQL server;<br>
• queryName - name of a custom query configured in Plugins.MSSQL.CustomQueriesDir without the .sql exten-
sion;<br>
• args - one or several comma-separated arguments to pass to a query.
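A sketch of usage (the query name and argument are assumptions; a file with that name would have to exist in the directory set by Plugins.MSSQL.CustomQueriesDir):
mssql.custom.query[sqlserver://localhost,zbx_monitor,zbx_password,top_queries,10]
Here top_queries.sql is a hypothetical custom query stored in the configured directory, and 10 is passed to it as its first argument.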
mssql.db.get
<br> Returns all available MSSQL databases.<br> Return value: JSON object.
mssql.job.status.get
mssql.last.backup.get
<br> Returns the last backup time for all databases.<br> Return value: JSON object.
mssql.local.db.get
<br> Returns databases that are participating in an Always On availability group and replica (primary or secondary) and are located
on the server that the connection was established to.<br> Return value: JSON object.
mssql.mirroring.get
mssql.nonlocal.db.get
<br> Returns databases that are participating in an Always On availability group and replica (primary or secondary) located on
other servers (the database is not local to the SQL Server instance that the connection was established to).<br> Return value:
JSON object.
mssql.perfcounter.get
mssql.ping
<br> Ping the database. Test if connection is correctly configured.<br> Return value: 1 - alive, 0 - not alive.
mssql.quorum.get
mssql.quorum.member.get
mssql.replica.get
mssql.version
mysql.custom.query[connString,<user>,<password>,queryName,<args...>]
<br> Returns the result of a custom query.<br> Return value: JSON object.
Parameters:
mysql.db.discovery[connString,<user>,<password>]
<br> Returns the list of MySQL databases. Used for low-level discovery.<br> Return value: The result of the ”show databases”
SQL query in LLD JSON format.
Parameters:
mysql.db.size[connString,<user>,<password>,<database name>]
<br> The database size in bytes.<br> Return value: Result of the ”select coalesce(sum(data_length + index_length),0) as size
from information_schema.tables where table_schema=?” SQL query for specific database in bytes.
Parameters:
mysql.get_status_variables[connString,<user>,<password>]
<br> Values of the global status variables.<br> Return value: Result of the ”show global status” SQL query in JSON format.
Parameters:
mysql.ping[connString,<user>,<password>]
<br> Tests whether a connection is alive.<br> Return value: 1 - the connection is alive; 0 - the connection is broken (returned on any error, including authentication and configuration issues).
Parameters:
mysql.replication.discovery[connString,<user>,<password>]
<br> Returns the list of MySQL replications. Used for low-level discovery.<br> Return value: The result of the ”show slave status”
SQL query in LLD JSON format.
Parameters:
mysql.replication.get_slave_status[connString,<user>,<password>,<master host>]
<br> The replication status.<br> Return value: Result of the ”show slave status” SQL query in JSON format.
Parameters:
• connString - the URI or session name;<br>
• user, password - the MySQL login credentials;<br>
• master host - the replication master host name. If none found, an error is returned. If this parameter is not specified, all
hosts are returned.<br>
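For example (the connection details below are placeholders):
mysql.replication.get_slave_status[tcp://localhost:3306,zbx_monitor,zbx_password]
mysql.replication.get_slave_status[tcp://localhost:3306,zbx_monitor,zbx_password,master.example.com]
The first key returns the status for all replication hosts; the second only for the named master host.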
mysql.version[connString,<user>,<password>]
<br> The MySQL version.<br> Return value: String (with the MySQL instance version).
Parameters:
net.dns.get[<ip>,name,<type>,<timeout>,<count>,<protocol>,”<flags>”]
Performs a DNS query and returns detailed DNS record information.<br> This item is an extended version of the net.dns.record
Zabbix agent item with more record types and customizable flags supported.<br> Return values: JSON object
Parameters:
• ip - the IP address of DNS server (leave empty for the default DNS server);
• name - the DNS name to query;
• type - the record type to be queried (default is SOA);
• timeout - the timeout for the request in seconds (default is 1 second);
• count - the number of tries for the request (default is 2);
• protocol - the protocol used to perform DNS queries: udp (default) or tcp;
• flags - one or more comma-separated arguments to pass to a query.
Comments:
• The possible values for type are: A, NS, MD, MF, CNAME, SOA, MB, MG, MR, NULL, PTR, HINFO, MINFO, MX, TXT, RP, AFSDB,
X25, ISDN, RT, NSAPPTR, SIG, KEY, PX, GPOS, AAAA, LOC, NXT, EID, NIMLOC, SRV, ATMA, NAPTR, KX, CERT, DNAME, OPT, APL,
DS, SSHFP, IPSECKEY, RRSIG, NSEC, DNSKEY, DHCID, NSEC3, NSEC3PARAM, TLSA, SMIMEA, HIP, NINFO, RKEY, TALINK, CDS,
CDNSKEY, OPENPGPKEY, CSYNC, ZONEMD, SVCB, HTTPS, SPF, UINFO, UID, GID, UNSPEC, NID, L32, L64, LP, EUI48, EUI64,
URI, CAA, AVC, AMTRELAY. Note that values must be in uppercase only; lowercase or mixed case values are not supported.
• For reverse DNS lookups (when type is set to PTR), you can provide the DNS name in both reversed and non-reversed format
(see examples below). Note that when PTR record is requested, the DNS name is actually an IP address.
• The possible values for flags are: cdflag or nocdflag (default), rdflag (default) or nordflag, dnssec or nodnssec (default),
nsid or nonsid (default), edns0 (default) or noedns0, aaflag or noaaflag (default), adflag or noadflag (default). The flags
dnssec and nsid cannot be used together with noedns0, as both require edns0. Note that values must be in lowercase only;
uppercase or mixed case values are not supported.
• Internationalized domain names are not supported, please use IDNA encoded names instead.
• The output is an object containing DNS record information based on the parameters provided (see more details).
Examples:
net.dns.get[192.0.2.0,zabbix.com,DNSKEY,3,3,tcp,"cdflag,rdflag,nsid"]
net.dns.get[,198.51.100.1,PTR,,,,"cdflag,rdflag,nsid"]
net.dns.get[,1.100.51.198.in-addr.arpa,PTR,,,,"cdflag,rdflag,nsid"]
net.dns.get[,2a00:1450:400f:800::200e,PTR,,,,"cdflag,rdflag,nsid"]
net.dns.get[,e.0.0.2.0.0.0.0.0.0.0.0.0.0.0.0.0.0.8.0.f.0.0.4.0.5.4.1.0.0.a.2.ip6.arpa,PTR,,,,"cdflag,rdflag,nsid"]
oracle.diskgroups.stats[connString,<user>,<password>,<service>,<diskgroup>]
<br> Returns the Automatic Storage Management (ASM) disk groups statistics.<br> Return value: JSON object.
Parameters:
oracle.diskgroups.discovery[connString,<user>,<password>,<service>]
<br> Returns the list of ASM disk groups. Used for low-level discovery.<br> Return value: JSON object.
Parameters:
oracle.archive.info[connString,<user>,<password>,<service>,<destination>]
Parameters:
oracle.cdb.info[connString,<user>,<password>,<service>,<database>]
<br> The Container Databases (CDBs) information.<br> Return value: JSON object.
Parameters:
oracle.custom.query[connString,<user>,<password>,<service>,queryName,<args...>]
Parameters:
oracle.datafiles.stats[connString,<user>,<password>,<service>]
<br> Returns the data files statistics.<br> Return value: JSON object.
Parameters:
oracle.db.discovery[connString,<user>,<password>,<service>]
<br> Returns the list of databases. Used for low-level discovery.<br> Return value: JSON object.
Parameters:
oracle.fra.stats[connString,<user>,<password>,<service>]
<br> Returns the Fast Recovery Area (FRA) statistics.<br> Return value: JSON object.
Parameters:
oracle.instance.info[connString,<user>,<password>,<service>]
Parameters:
oracle.pdb.info[connString,<user>,<password>,<service>,<database>]
<br> The Pluggable Databases (PDBs) information.<br> Return value: JSON object.
Parameters:
oracle.pdb.discovery[connString,<user>,<password>,<service>]
<br> Returns the list of PDBs. Used for low-level discovery.<br> Return value: JSON object.
Parameters:
oracle.pga.stats[connString,<user>,<password>,<service>]
<br> Returns the Program Global Area (PGA) statistics.<br> Return value: JSON object.
Parameters:
oracle.ping[connString,<user>,<password>,<service>]
<br> Tests whether a connection to Oracle can be established.<br> Return value: 1 - the connection is successful; 0 - the connection is broken (returned on any error, including authentication and configuration issues).
Parameters:
oracle.proc.stats[connString,<user>,<password>,<service>]
Parameters:
oracle.redolog.info[connString,<user>,<password>,<service>]
<br> The log file information from the control file.<br> Return value: JSON object.
Parameters:
oracle.sga.stats[connString,<user>,<password>,<service>]
<br> Returns the System Global Area (SGA) statistics.<br> Return value: JSON object.
Parameters:
oracle.sessions.stats[connString,<user>,<password>,<service>,<lockMaxTime>]
Parameters:
oracle.sys.metrics[connString,<user>,<password>,<service>,<duration>]
<br> Returns a set of system metric values.<br> Return value: JSON object.
Parameters:
oracle.sys.params[connString,<user>,<password>,<service>]
<br> Returns a set of system parameter values.<br> Return value: JSON object.
Parameters:
• service - the Oracle service name.<br>
oracle.ts.stats[connString,<user>,<password>,<service>,<tablespace>,<type>,<conname>]
Parameters:
If tablespace, type, or conname is omitted, the item will return tablespace statistics for all matching containers (including PDBs
and CDB).
oracle.ts.discovery[connString,<user>,<password>,<service>]
<br> Returns a list of tablespaces. Used for low-level discovery.<br> Return value: JSON object.
Parameters:
oracle.user.info[connString,<user>,<password>,<service>,<username>]
Parameters:
oracle.version[connString,<user>,<password>,<service>]
Parameters:
pgsql.autovacuum.count[uri,<username>,<password>,<database name>]
Parameters:
pgsql.archive[uri,<username>,<password>,<database name>]
<br> The information about archived files.<br> Return value: JSON object.
Parameters:
• uri - the URI or session name;<br>
• username, password - the PostgreSQL credentials;<br>
• database name - the database name.<br>
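For example (the URI and credentials below are placeholders):
pgsql.archive[tcp://localhost:5432,zbx_monitor,zbx_password,postgres]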
pgsql.bgwriter[uri,<username>,<password>,<database name>]
<br> The combined number of checkpoints for the database cluster, broken down by checkpoint type.<br> Return value: JSON
object.
Parameters:
pgsql.cache.hit[uri,<username>,<password>,<database name>]
<br> The PostgreSQL buffer cache hit rate.<br> Return value: Float.
Parameters:
pgsql.connections[uri,<username>,<password>,<database name>]
Parameters:
pgsql.custom.query[uri,<username>,<password>,queryName,<args...>]
<br> Returns the result of a custom query.<br> Return value: JSON object.
Parameters:
pgsql.db.age[uri,<username>,<password>,<database name>]
<br> The age of the oldest FrozenXID of the database.<br> Return value: Integer.
Parameters:
pgsql.db.bloating_tables[uri,<username>,<password>,<database name>]
<br> The number of bloating tables per database.<br> Return value: Integer.
Parameters:
pgsql.db.discovery[uri,<username>,<password>,<database name>]
<br> The list of PostgreSQL databases. Used for low-level discovery.<br> Return value: JSON object.
Parameters:
pgsql.db.size[uri,<username>,<password>,<database name>]
Parameters:
pgsql.dbstat[uri,<username>,<password>,<database name>]
<br> Collects the statistics per database. Used for low-level discovery.<br> Return value: JSON object.
Parameters:
pgsql.dbstat.sum[uri,<username>,<password>,<database name>]
<br> The summarized data for all databases in a cluster.<br> Return value: JSON object.
Parameters:
pgsql.locks[uri,<username>,<password>,<database name>]
<br> The information about granted locks per database. Used for low-level discovery.<br> Return value: JSON object.
Parameters:
pgsql.oldest.xid[uri,<username>,<password>,<database name>]
Parameters:
pgsql.ping[uri,<username>,<password>,<database name>]
<br> Tests whether a connection is alive.<br> Return value: 1 - the connection is alive; 0 - the connection is broken (returned on any error, including authentication and configuration issues).
Parameters:
pgsql.replication.count[uri,<username>,<password>]
Parameters:
• uri - the URI or session name;<br>
• username, password - the PostgreSQL credentials.
pgsql.replication.process[uri,<username>,<password>]
<br> The flush lag, write lag and replay lag per each sender process.<br> Return value: JSON object.
Parameters:
pgsql.replication.process.discovery[uri,<username>,<password>]
<br> The replication process name discovery.<br> Return value: JSON object.
Parameters:
pgsql.replication.recovery_role[uri,<username>,<password>]
<br> The recovery status.<br> Return value: 0 - master mode; 1 - recovery is still in progress (standby mode).
Parameters:
pgsql.replication.status[uri,<username>,<password>]
<br> The status of replication.<br> Return value: 0 - streaming is down; 1 - streaming is up; 2 - master mode.
Parameters:
pgsql.replication_lag.b[uri,<username>,<password>]
Parameters:
pgsql.replication_lag.sec[uri,<username>,<password>]
Parameters:
pgsql.uptime[uri,<username>,<password>,<database name>]
Parameters:
pgsql.version[uri,<username>,<password>,<database name>]
Parameters:
pgsql.wal.stat[uri,<username>,<password>,<database name>]
Parameters:
redis.config[connString,<password>,<pattern>]
<br> Gets the configuration parameters of a Redis instance that match the pattern.<br> Return value: JSON - if a glob-style pattern
was used; single value - if a pattern did not contain any wildcard character.
Parameters:
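For illustration (the connection string is a placeholder; the pattern follows the glob syntax of the Redis CONFIG GET command):
redis.config[tcp://localhost:6379,,maxmemory*]
redis.config[tcp://localhost:6379,,maxmemory]
The first key matches several parameters and returns JSON; the second matches a single parameter and returns its value directly.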
redis.info[connString,<password>,<section>]
<br> Gets the output of the INFO command.<br> Return value: JSON - the output is serialized as JSON.
Parameters:
redis.ping[connString,<password>]
<br> Tests whether a connection is alive.<br> Return value: 1 - the connection is alive; 0 - the connection is broken (returned on any error, including authentication and configuration issues).
Parameters:
redis.slowlog.count[connString,<password>]
<br> The number of slow log entries since Redis was started.<br> Return value: Integer.
Parameters:
smart.attribute.discovery
<br> Returns a list of S.M.A.R.T. device attributes.<br> Return value: JSON object.
Comments:
• The following macros and their values are returned: {#NAME}, {#DISKTYPE}, {#ID}, {#ATTRNAME}, {#THRESH};
• HDD, SSD and NVME drive types are supported. Drives can be alone or combined in a RAID. {#NAME} will have an add-on
in case of RAID, e.g: {”{#NAME}”: ”/dev/sda cciss,2”}.
smart.disk.discovery
Comments:
• The following macros and their values are returned: {#NAME}, {#DISKTYPE}, {#MODEL}, {#SN}, {#PATH}, {#AT-
TRIBUTES}, {#RAIDTYPE};
• HDD, SSD and NVME drive types are supported. If a drive does not belong to a RAID, the {#RAIDTYPE} will be empty.
{#NAME} will have an add-on in case of RAID, e.g: {”{#NAME}”: ”/dev/sda cciss,2”}.
smart.disk.get[<path>,<raid type>]
<br> Returns all available properties of S.M.A.R.T. devices.<br> Return value: JSON object.
Parameters:
• path - the disk path, the {#PATH} macro may be used as a value;<br>
• raid_type - the RAID type, the {#RAID} macro may be used as a value
Comments:
• HDD, SSD and NVME drive types are supported. Drives can be alone or combined in a RAID;<br>
• The data includes smartctl version and call arguments, and additional fields:<br>disk_name - holds the name with the
required add-ons for RAID discovery, e.g: {”disk_name”: ”/dev/sda cciss,2”}<br>disk_type - holds the disk type HDD, SSD,
or NVME, e.g: {”disk_type”: ”ssd”};<br>
• If no parameters are specified, the item will return information about all disks.
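For example (the device path and RAID address below are hypothetical):
smart.disk.get
smart.disk.get["/dev/sda"]
smart.disk.get["/dev/sda","cciss,2"]
The first key returns information about all disks; the other two target a single disk, the last one addressing a disk inside a RAID.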
systemd.unit.get[unit name,<interface>]
<br> Returns all properties of a systemd unit.<br> Return value: JSON object.
Parameters:
• unit name - the unit name (you may want to use the {#UNIT.NAME} macro in item prototype to discover the name);<br>
• interface - the unit interface type, possible values: Unit (default), Service, Socket, Device, Mount, Automount, Swap, Target,
Path.
Comments:
systemd.unit.info[unit name,<property>,<interface>]
Parameters:
• unit name - the unit name (you may want to use the {#UNIT.NAME} macro in item prototype to discover the name);<br>
• property - unit property (e.g. ActiveState (default), LoadState, Description);
• interface - the unit interface type (e.g. Unit (default), Socket, Service).
Comments:
Examples:
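For illustration (the unit name is an assumption; any unit present on the host can be used):
systemd.unit.info[sshd.service]
systemd.unit.info[sshd.service,LoadState]
The first key returns the ActiveState property (the default); the second returns the LoadState property.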
systemd.unit.discovery[<type>]
<br> List of systemd units and their details. Used for low-level discovery.<br> Return value: JSON object.
Parameter:
• type - possible values: all, automount, device, mount, path, service (default), socket, swap, target.
web.certificate.get[hostname,<port>,<address>]
<br> Validates the certificates and returns certificate details.<br> Return value: JSON object.
Parameter:
• hostname - can be either IP or DNS.<br>May contain the URL scheme (https only), path (it will be ignored), and port.<br>If
a port is provided in both the first and the second parameters, their values must match.<br>If address (the 3rd parameter)
is specified, the hostname is only used for SNI and hostname verification;<br>
• port - the port number (default is 443 for HTTPS);<br>
• address - can be either IP or DNS. If specified, it will be used for the connection, and hostname (the 1st parameter) will be used for SNI and host verification. If the 1st parameter is an IP and the 3rd parameter is a DNS name, the 1st parameter will be used for the connection and the 3rd parameter for SNI and host verification.
Comments:
• This item becomes unsupported if the resource specified in host does not exist or is unavailable, or if the TLS handshake fails with any error except an invalid certificate;<br>
• Currently, AIA (Authority Information Access) X.509 extension, CRLs and OCSP (including OCSP stapling), Certificate Trans-
parency, and custom CA trust store are not supported.
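For example (host names, port and address below are placeholders):
web.certificate.get[example.com]
web.certificate.get[example.com,8443]
web.certificate.get[example.com,443,192.0.2.10]
In the last key the connection is made to 192.0.2.10, while example.com is used for SNI and host name verification.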
Overview
• Shared items - the item keys that are shared with the UNIX Zabbix agent;
• Windows-specific items - the item keys that are supported only on Windows.
Note that all item keys supported by Zabbix agent on Windows are also supported by the new generation Zabbix agent 2. See the
additional item keys that you can use with the agent 2 only.
Shared items
The table below lists Zabbix agent items that are supported on Windows and are shared with the UNIX Zabbix agent:
• The item key is a link to full details of the UNIX Zabbix agent item
• Windows-relevant item comments are included
Item key Description Item group
log The monitoring of a log file. This item is not supported for Windows Event Log. Log monitoring
The persistent_dir parameter is not supported on Windows.
log.count The count of matched lines in a monitored log file. This item is not supported for
Windows Event Log.
The persistent_dir parameter is not supported on Windows.
logrt The monitoring of a log file that is rotated. This item is not supported for Windows
Event Log.
The persistent_dir parameter is not supported on Windows.
logrt.count The count of matched lines in a monitored log file that is rotated. This item is not
supported for Windows Event Log.
The persistent_dir parameter is not supported on Windows.
modbus.get Reads Modbus data. Modbus
net.dns Checks if the DNS service is up. Network
The ip, timeout and count parameters are ignored on Windows unless using
Zabbix agent 2.
net.dns.perf Checks the performance of a DNS service.
The ip, timeout and count parameters are ignored on Windows unless using
Zabbix agent 2.
net.dns.record Performs a DNS query.
The ip, timeout and count parameters are ignored on Windows unless using
Zabbix agent 2.
net.if.discovery The list of network interfaces.
Some Windows versions (for example, Server 2008) might require the latest
updates installed to support non-ASCII characters in interface names.
net.if.in The incoming traffic statistics on a network interface.
On Windows, the item gets values from 64-bit counters if available. 64-bit
interface statistic counters were introduced in Windows Vista and Windows Server
2008. If 64-bit counters are not available, the agent uses 32-bit counters.
Multi-byte interface names on Windows are supported.
You may obtain network interface descriptions on Windows with net.if.discovery or
net.if.list items.
net.if.out The outgoing traffic statistics on a network interface.
On Windows, the item gets values from 64-bit counters if available. 64-bit
interface statistic counters were introduced in Windows Vista and Windows Server
2008. If 64-bit counters are not available, the agent uses 32-bit counters.
Multi-byte interface names on Windows are supported.
You may obtain network interface descriptions on Windows with net.if.discovery or
net.if.list items.
net.if.total The sum of incoming and outgoing traffic statistics on a network interface.
On Windows, the item gets values from 64-bit counters if available. 64-bit
interface statistic counters were introduced in Windows Vista and Windows Server
2008. If 64-bit counters are not available, the agent uses 32-bit counters.
You may obtain network interface descriptions on Windows with net.if.discovery or
net.if.list items.
net.tcp.listen Checks if this TCP port is in LISTEN state.
net.tcp.port Checks if it is possible to make a TCP connection to the specified port.
net.tcp.service Checks if a service is running and accepting TCP connections.
Checking of LDAP and HTTPS on Windows is only supported by Zabbix agent 2.
net.tcp.service.perf Checks the performance of a TCP service.
Checking of LDAP and HTTPS on Windows is only supported by Zabbix agent 2.
net.tcp.socket.count Returns the number of TCP sockets that match parameters.
This item is supported on Linux by Zabbix agent, but on Windows it is supported
only by Zabbix agent 2 on 64-bit Windows.
net.udp.service Checks if a service is running and responding to UDP requests.
net.udp.service.perf Checks the performance of a UDP service.
net.udp.socket.count Returns the number of UDP sockets that match parameters.
This item is supported on Linux by Zabbix agent, but on Windows it is supported
only by Zabbix agent 2 on 64-bit Windows.
proc.num The number of processes. Processes
On Windows, only the name and user parameters are supported.
system.cpu.discovery The list of detected CPUs/CPU cores. System
system.cpu.load The CPU load.
system.cpu.num The number of CPUs.
system.cpu.util The CPU utilization percentage.
The value is acquired using the Processor Time performance counter. Note that
since Windows 8 its Task Manager shows CPU utilization based on the Processor
Utility performance counter, while in previous versions it was the Processor Time
counter.
system is the only type parameter supported on Windows.
system.hostname The system host name.
The value is acquired by either GetComputerName() (for netbios),
GetComputerNameExA() (for fqdn), or gethostname() (for host) functions on
Windows.
See also a more detailed description.
system.localtime The system time.
system.run Run the specified command on the host.
system.sw.arch The software architecture information.
system.swap.size The swap space size in bytes or in percentage from total.
The pused type parameter is supported on Linux by Zabbix agent, but on
Windows it is supported only by Zabbix agent 2.
Note that this key might report incorrect swap space size/percentage on
virtualized (VMware ESXi, VirtualBox) Windows platforms. In this case you may use
the perf_counter[\700(_Total)\702] key to obtain correct swap space
percentage.
system.uname Identification of the system.
On Windows the value for this item is obtained from Win32_OperatingSystem and
Win32_Processor WMI classes. The OS name (including edition) might be
translated to the user’s display language. On some versions of Windows it
contains trademark symbols and extra spaces.
system.uptime The system uptime in seconds.
vfs.dir.count The directory entry count. Virtual file systems
On Windows, directory symlinks are skipped and hard links are counted only once.
vfs.dir.get The directory entry list.
On Windows, directory symlinks are skipped and hard links are counted only once.
vfs.dir.size The directory size.
On Windows any symlink is skipped and hard links are taken into account only
once.
vfs.file.cksum The file checksum, calculated by the UNIX cksum algorithm.
vfs.file.contents Retrieving the contents of a file.
Windows-specific items
The table provides details on the item keys that are supported only by the Windows Zabbix agent.
A Windows-specific item is sometimes an approximate counterpart of a similar agent item; for example, proc_info, supported on Windows, roughly corresponds to the proc.mem item, which is not supported on Windows.
The item key is a link to full item key details.
Item key Description Item group
registry.get The list of Windows Registry values or keys located at given key.
service.discovery The list of Windows services. Services
service.info Information about a service.
services The listing of services.
vm.vmemory.size The virtual memory size in bytes or in percentage from the total. Virtual memory
wmi.get Execute a WMI query and return the first selected object. WMI
wmi.getall Execute a WMI query and return the whole response.
Parameters without angle brackets are mandatory. Parameters marked with angle brackets < > are optional.
eventlog[name,<regexp>,<severity>,<source>,<eventid>,<maxlines>,<mode>]
Parameters:
Comments:
Examples:
eventlog[Application]
eventlog[Security,,"Failure Audit",,^(529|680)$]
eventlog[System,,"Warning|Error"]
eventlog[System,,,,^1$]
eventlog[System,,,,@TWOSHORT] #here a custom regular expression named `TWOSHORT` is referenced (defined as
eventlog.count[name,<regexp>,<severity>,<source>,<eventid>,<maxproclines>,<mode>]
<br> The count of lines in the Windows event log.<br> Return value: Integer.
Parameters:
Comments:
• Selecting a non-Log type of information for this item will lead to the loss of local timestamp, as well as log severity and
source information;
• See also additional information on log monitoring.
Example:
eventlog.count[System,,"Warning|Error"]
net.if.list
<br> The network interface list (includes interface type, status, IPv4 address, description).<br> Return value: Text.
Comments:
perf_counter[counter,<interval>]
<br> The value of any Windows performance counter.<br> Return value: Integer, float, string or text (depending on the request).
Parameters:
Comments:
• interval is used for counters that require more than one sample (like CPU utilization), so the check returns an average
value for last ”interval” seconds every time;
• Performance Monitor can be used to obtain the list of available counters.
• See also: Windows performance counters.
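For example (the counter path is an illustration for an English-language system; on localized Windows the counter names differ, which is what perf_counter_en below addresses):
perf_counter["\Processor(0)\% Processor Time",30]
This returns the processor time of CPU 0 averaged over the last 30 seconds.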
perf_counter_en[counter,<interval>]
<br> The value of any Windows performance counter in English.<br> Return value: Integer, float, string or text (depending on the
request).
Parameters:
Comments:
• interval is used for counters that require more than one sample (like CPU utilization), so the check returns an average
value for last ”interval” seconds every time;
• This item is only supported on Windows Server 2008/Vista and above;
• You can find the list of English strings by viewing the following registry key: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows
NT\CurrentVersion\Perflib\009.
perf_instance.discovery[object]
<br> The list of object instances of Windows performance counters. Used for low-level discovery.<br> Return value: JSON object.
Parameter:
perf_instance_en.discovery[object]
<br> The list of object instances of Windows performance counters, discovered using the object names in English. Used for low-level
discovery.<br> Return value: JSON object.
Parameter:
proc_info[process,<attribute>,<type>]
Parameters:
Comments:
• The following attributes are supported:<br>vmsize (default) - size of process virtual memory in Kbytes<br>wkset - size of
process working set (amount of physical memory used by process) in Kbytes<br>pf - number of page faults<br>ktime - pro-
cess kernel time in milliseconds<br>utime - process user time in milliseconds<br>io_read_b - number of bytes read by pro-
cess during I/O operations<br>io_read_op - number of read operations performed by process<br>io_write_b - number of bytes
written by process during I/O operations<br>io_write_op - number of write operations performed by process<br>io_other_b
- number of bytes transferred by process during operations other than read and write operations<br>io_other_op - number
of I/O operations performed by process, other than read and write operations<br>gdiobj - number of GDI objects used by
process<br>userobj - number of USER objects used by process;<br>
• Valid types are:<br>avg (default) - average value for all processes named <process><br>min - minimum value among all
processes named <process><br>max - maximum value among all processes named <process><br>sum - sum of values
for all processes named <process>;
• io_*, gdiobj and userobj attributes are available only on Windows 2000 and later versions of Windows, not on Windows NT
4.0;
• On a 64-bit system, a 64-bit Zabbix agent is required for this item to work correctly.<br>
Examples:
proc_info[iexplore.exe,wkset,sum] #retrieve the amount of physical memory taken by all Internet Explorer p
proc_info[iexplore.exe,pf,avg] #retrieve the average number of page faults for Internet Explorer processes
registry.data[key,<value name>]
<br> Return data for the specified value name in the Windows Registry key.<br> Return value: Integer, string or text (depending
on the value type)
Parameters:
• key - the registry key including the root key; root abbreviations (e.g. HKLM) are allowed;
• value name - the registry value name in the key (empty string ”” by default). The default value is returned if the value
name is not supplied.
Comments:
Examples:
registry.get[key,<mode>,<name regexp>]
<br> The list of Windows Registry values or keys located at given key.<br> Return value: JSON object.
Parameters:
• key - the registry key including the root key; root abbreviations (e.g. HKLM) are allowed (see comments for registry.data[]
to see full list of abbreviations);<br>
• mode - possible values:<br>values (default) or keys;<br>
• name regexp - only discover values with names that match the regexp (default - discover all values). Allowed only with
values as mode.
Keys with spaces must be double-quoted.
Examples:
registry.get[HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall,values,"^DisplayName|DisplayVersion$"]
registry.get[HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall,values] #return the data of all values of this key
registry.get[HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall,keys] #return all subkeys of this key
service.discovery
<br> The list of Windows services. Used for low-level discovery.<br> Return value: JSON object.
service.info[service,<param>]
<br> Information about a service.<br> Return value: Integer - with param as state, startup; String - with param as displayname,
path, user; Text - with param as description<br>Specifically for state: 0 - running, 1 - paused, 2 - start pending, 3 - pause pending,
4 - continue pending, 5 - stop pending, 6 - stopped, 7 - unknown, 255 - no such service<br>Specifically for startup: 0 - automatic,
1 - automatic delayed, 2 - manual, 3 - disabled, 4 - unknown, 5 - automatic trigger start, 6 - automatic delayed trigger start, 7 -
manual trigger start
Parameters:
• service - a real service name or its display name as seen in the MMC Services snap-in;
• param - state (default), displayname, path, user, startup, or description.
Comments:
• Items like service.info[service,state] and service.info[service] will return the same information;
• Only with param as state this item returns a value for non-existing services (255).
Examples:
services[<type>,<state>,<exclude>]
<br> The listing of services.<br> Return value: 0 - if empty; Text - the list of services separated by a newline.
Parameters:
Examples:
vm.vmemory.size[<type>]
<br> The virtual memory size in bytes or in percentage from the total.<br> Return value: Integer - for bytes; float - for percentage.
Parameter:
• type - possible values: available (available virtual memory), pavailable (available virtual memory, in percent), pused (used
virtual memory, in percent), total (total virtual memory, default), or used (used virtual memory)
Comments:
Example:
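For instance, a key such as the following (shown for illustration) returns the available virtual memory as a percentage of the total:
vm.vmemory.size[pavailable]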
wmi.get[<namespace>,<query>]
<br> Execute a WMI query and return the first selected object.<br> Return value: Integer, float, string or text (depending on the request).
Parameters:
WMI queries are performed with WQL.
Example:
wmi.get[root\cimv2,select status from Win32_DiskDrive where Name like '%PHYSICALDRIVE0%'] #returns the status of the PHYSICALDRIVE0 disk
wmi.getall[<namespace>,<query>]
<br> Execute a WMI query and return the whole response. Can be used for low-level discovery.<br> Return value: JSON object
Parameters:
Comments:
Example:
wmi.getall[root\cimv2,select * from Win32_DiskDrive where Name like '%PHYSICALDRIVE%'] #returns status information for all physical disk drives
Monitoring Windows services
This tutorial provides step-by-step instructions for setting up the monitoring of Windows services. It is assumed that Zabbix server
and agent are configured and operational.
Step 1
You can get the service name by going to the MMC Services snap-in and bringing up the properties of the service. In the General
tab you should see a field called ”Service name”. The value that follows is the name you will use when setting up an item for
monitoring. For example, if you wanted to monitor the ”workstation” service, then your service might be: lanmanworkstation.
Step 2
The item service.info[service,<param>] retrieves information about a particular service. Depending on the information
you need, specify the param option which accepts the following values: displayname, state, path, user, startup or description. The
default value is state if param is not specified (service.info[service]).
The type of return value depends on chosen param: integer for state and startup; character string for displayname, path and user;
text for description.
Example:
• Key: service.info[lanmanworkstation]
• Type of information: Numeric (unsigned)
The item service.info[lanmanworkstation] will retrieve information about the state of the service as a numerical value. To
map a numerical value to a text representation in the frontend (”0” as ”Running”, ”1” as ”Paused”, etc.), you can configure value
mapping on the host on which the item is configured. To do this, either link the template Windows services by Zabbix agent or
Windows services by Zabbix agent active to the host, or configure on the host a new value map that is based on the Windows
service state value map configured on the mentioned templates.
Note that both of the mentioned templates have a discovery rule configured that will discover services automatically. If you do not
want this, you can disable the discovery rule on the host level once the template has been linked to the host.
Low-level discovery provides a way to automatically create items, triggers, and graphs for different entities on a computer. Zabbix
can automatically start monitoring Windows services on your machine, without the need to know the exact name of a service or
create items for each service manually. A filter can be used to generate real items, triggers, and graphs only for services of interest.
2 SNMP agent
Overview
You may want to use SNMP monitoring on devices such as printers, network switches, routers or UPS that usually are SNMP-enabled
and on which it would be impractical to attempt setting up complete operating systems and Zabbix agents.
To be able to retrieve data provided by SNMP agents on these devices, Zabbix server must be initially configured with SNMP support
by specifying the --with-net-snmp flag.
SNMP checks are performed over the UDP protocol only.
Zabbix server and proxy daemons log lines similar to the following if they receive an incorrect SNMP response:
SNMP response from host "gateway" does not contain all of the requested variable bindings
While they do not cover all the problematic cases, they are useful for identifying individual SNMP devices for which combined
requests should be disabled.
Zabbix server/proxy will always retry at least one time after an unsuccessful query attempt: either through the SNMP library’s
retrying mechanism or through the internal combined processing mechanism.
Warning:
If monitoring SNMPv3 devices, make sure that msgAuthoritativeEngineID (also known as snmpEngineID or ”Engine ID”) is
never shared by two devices. According to RFC 2571 (section 3.1.1.1) it must be unique for each device.
Warning:
RFC3414 requires the SNMPv3 devices to persist their engineBoots. Some devices do not do that, which results in their
SNMP messages being discarded as outdated after being restarted. In such situation, SNMP cache needs to be manually
cleared on a server/proxy (by using -R snmp_cache_reload) or the server/proxy needs to be restarted.
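For example, assuming the default binary names, the cache can be reloaded with one of:
zabbix_server -R snmp_cache_reload
zabbix_proxy -R snmp_cache_reload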
To start monitoring a device through SNMP, the following steps have to be performed:
Step 1
Find out the SNMP string (or OID) of the item you want to monitor.
To get a list of SNMP strings, use the snmpwalk command (part of net-snmp software which you should have installed as part of
the Zabbix installation) or equivalent tool:
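A minimal sketch, assuming SNMPv2 and the default ”public” community (the host address is a placeholder):
snmpwalk -v 2c -c public 192.0.2.1 .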
This should give you a list of SNMP strings and their last value. If it doesn’t, it is possible that the SNMP ’community’ is different from the standard ’public’, in which case you will need to find out what it is.
You can then go through the list until you find the string you want to monitor, e.g. if you wanted to monitor the bytes coming in to
your switch on port 3 you would use the IF-MIB::ifHCInOctets.3 string from this line:
IF-MIB::ifHCInOctets.3 = Counter64: 3409739121
You may now use the snmpget command to find out the numeric OID for ’IF-MIB::ifHCInOctets.3’:
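One way to do this, assuming the same community and host as above, is to request the value with numeric OID output enabled:
snmpget -v 2c -c public -On 192.0.2.1 IF-MIB::ifHCInOctets.3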
Note:
Some of the most used SNMP OIDs are translated automatically to a numeric representation by Zabbix.
In the last example above value type is ”Counter64”, which internally corresponds to ASN_COUNTER64 type. The full list of sup-
ported types is ASN_COUNTER, ASN_COUNTER64, ASN_UINTEGER, ASN_UNSIGNED64, ASN_INTEGER, ASN_INTEGER64, ASN_FLOAT,
ASN_DOUBLE, ASN_TIMETICKS, ASN_GAUGE, ASN_IPADDRESS, ASN_OCTET_STR and ASN_OBJECT_ID. These types roughly corre-
spond to ”Counter32”, ”Counter64”, ”UInteger32”, ”INTEGER”, ”Float”, ”Double”, ”Timeticks”, ”Gauge32”, ”IpAddress”, ”OCTET
STRING”, ”OBJECT IDENTIFIER” in snmpget output, but might also be shown as ”STRING”, ”Hex-STRING”, ”OID” and other, de-
pending on the presence of a display hint.
Step 2
Add an SNMP interface for the host:
SNMPv3 parameter Description
In case of wrong SNMPv3 credentials (security name, authentication protocol/passphrase, privacy protocol):
• Zabbix receives an ERROR from net-snmp, except for wrong Privacy passphrase in which case Zabbix receives a TIMEOUT
error from net-snmp;
• SNMP interface availability will switch to red (unavailable).
Warning:
Changes in Authentication protocol, Authentication passphrase, Privacy protocol or Privacy passphrase, made without
changing the Security name, will take effect only after the cache on a server/proxy is manually cleared (by using -R
snmp_cache_reload) or the server/proxy is restarted. In cases, where Security name is also changed, all parameters will
be updated immediately.
You can use one of the provided SNMP templates that will automatically add a set of items. Before using a template, verify that it
is compatible with the host.
Step 3
So, now go back to Zabbix and click on Items for the SNMP host you created earlier. Depending on whether you used a template or
not when creating your host, you will have either a list of SNMP items associated with your host or just an empty list. We will work
on the assumption that you are going to create the item yourself using the information you have just gathered using snmpwalk
and snmpget, so click on Create item.
Parameter Description
SNMP OID Use one of the supported formats to enter OID value(s):
OID - (legacy) enter a single textual or numeric OID to retrieve a single value synchronously,
optionally combined with other values.
For example: 1.3.6.1.2.1.31.1.1.1.6.3.
For this option, the item check timeout will be equal to the value set in the server configuration
file.
It is recommended to use walk[OID] and get[OID] items for better performance. All
walk[OID] and get[OID] items are executed asynchronously - it is not required to receive the
response to one request before other checks are started. DNS resolving is asynchronous as well.
The maximum concurrency of asynchronous checks is 1000 (defined by
MaxConcurrentChecksPerPoller). The number of asynchronous SNMP pollers is defined by the
StartSNMPPollers parameter.
Note that for network traffic statistics, returned by any of the methods, a Change per second
step must be added in the Preprocessing tab; otherwise you will get the cumulative value from
the SNMP device instead of the latest change.
Now save the item and go to Monitoring → Latest data for your SNMP data.
Example 1
General example:
Parameter Description
Note that OID can be given in either numeric or string form. However, in some cases, string OID must be converted to numeric
representation. Utility snmpget may be used for this purpose:
Monitoring of uptime:
Parameter Description
OID MIB::sysUpTime.0
Key router.uptime
Value type Float
Units uptime
Preprocessing step: Custom multiplier 0.01
The walk[OID1,OID2,...] item allows using native SNMP functionality for bulk requests (GetBulkRequest-PDUs), available in SNMP versions 2/3.
A GetBulk request in SNMP executes multiple GetNext requests and returns the result in a single response. This may be used for
regular SNMP items as well as for SNMP discovery to minimize network roundtrips.
The SNMP walk[OID1,OID2,...] item may be used as the master item that collects data in one request with dependent items that
parse the response as needed using preprocessing.
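A sketch of this pattern (the OIDs are the standard ifInOctets/ifOutOctets columns; adjust them to the monitored device):
• master item: SNMP agent item with the key walk[1.3.6.1.2.1.2.2.1.10,1.3.6.1.2.1.2.2.1.16];
• dependent item: uses the ”SNMP walk value” preprocessing step with OID 1.3.6.1.2.1.2.2.1.10.3 to extract the value for a single interface, typically followed by a ”Change per second” step for traffic rates.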
Note that using native SNMP bulk requests is not related to the option of combining SNMP requests, which is Zabbix own way of
combining multiple SNMP requests (see next section).
Zabbix server and proxy may query SNMP devices for multiple values in a single request. This affects several types of SNMP items:
All SNMP items on a single interface with identical parameters are scheduled to be queried at the same time. The first two types of
items are taken by pollers in batches of at most 128 items, whereas low-level discovery rules are processed individually, as before.
On the lower level, there are two kinds of operations performed for querying values: getting multiple specified objects and walking
an OID tree.
For ”getting”, a GetRequest-PDU is used with at most 128 variable bindings. For ”walking”, a GetNextRequest-PDU is used for
SNMPv1 and GetBulkRequest with ”max-repetitions” field of at most 128 is used for SNMPv2 and SNMPv3.
Thus, the benefits of combined processing for each SNMP item type are outlined below:
However, not all devices are capable of returning 128 values per request. Some always return a proper response, but others either respond with a ”tooBig(1)” error or do not respond at all once the potential response exceeds a certain limit.
In order to find an optimal number of objects to query for a given device, Zabbix uses the following strategy. It starts cautiously
with querying 1 value in a request. If that is successful, it queries 2 values in a request. If that is successful again, it queries 3
values in a request and continues similarly by multiplying the number of queried objects by 1.5, resulting in the following sequence
of request sizes: 1, 2, 3, 4, 6, 9, 13, 19, 28, 42, 63, 94, 128.
However, once a device refuses to give a proper response (for example, for 42 variables), Zabbix does two things.
First, for the current item batch it halves the number of objects in a single request and queries 21 variables. If the device is alive,
then the query should work in the vast majority of cases, because 28 variables were known to work and 21 is significantly less than
that. However, if that still fails, then Zabbix falls back to querying values one by one. If it still fails at this point, then the device is
definitely not responding and request size is not an issue.
The second thing Zabbix does for subsequent item batches is it starts with the last successful number of variables (28 in our
example) and continues incrementing request sizes by 1 until the limit is hit. For example, assuming the largest response size is
32 variables, the subsequent requests will be of sizes 29, 30, 31, 32, and 33. The last request will fail and Zabbix will never issue
a request of size 33 again. From that point on, Zabbix will query at most 32 variables for this device.
If large queries fail with this number of variables, it can mean one of two things. The exact criteria that a device uses for limiting
response size cannot be known, but we try to approximate that using the number of variables. So the first possibility is that this
number of variables is around the device’s actual response size limit in the general case: sometimes response is less than the limit,
sometimes it is greater than that. The second possibility is that a UDP packet in either direction simply got lost. For these reasons,
if Zabbix gets a failed query, it reduces the maximum number of variables to try to get deeper into the device’s comfortable range,
but only up to two times.
In the example above, if a query with 32 variables happens to fail, Zabbix will reduce the count to 31. If that happens to fail, too,
Zabbix will reduce the count to 30. However, Zabbix will not reduce the count below 30, because it will assume that further failures
are due to UDP packets getting lost, rather than the device’s limit.
If, however, a device cannot handle combined requests properly for other reasons and the heuristic described above does not work, there is a ”Use combined requests” setting for each interface that allows disabling combined requests for that device.
1 Dynamic indexes
Overview
While you may find the required index number (for example, of a network interface) among the SNMP OIDs, sometimes you may
not completely rely on the index number always staying the same.
Index numbers may be dynamic - they may change over time and your item may stop working as a consequence.
To avoid this scenario, it is possible to define an OID which takes into account the possibility of an index number changing.
For example, if you need to retrieve the index value to append to ifInOctets that corresponds to the GigabitEthernet0/1 interface
on a Cisco device, use the following OID:
ifInOctets["index","ifDescr","GigabitEthernet0/1"]
The syntax:
OID of data["index","base OID of index","string to search for"]
Parameter Description
OID of data Main OID to use for data retrieval on the item.
index Method of processing. Currently one method is supported:
index – search for index and append it to the data OID
base OID of index This OID will be looked up to get the index value corresponding to the string.
string to search for The string to use for an exact match with a value when doing lookup. Case sensitive.
Example
HOST-RESOURCES-MIB::hrSWRunPerfMem["index","HOST-RESOURCES-MIB::hrSWRunPath", "/usr/sbin/apache2"]
the index number will be looked up here:
...
HOST-RESOURCES-MIB::hrSWRunPath.5376 = STRING: "/sbin/getty"
HOST-RESOURCES-MIB::hrSWRunPath.5377 = STRING: "/sbin/getty"
HOST-RESOURCES-MIB::hrSWRunPath.5388 = STRING: "/usr/sbin/apache2"
HOST-RESOURCES-MIB::hrSWRunPath.5389 = STRING: "/sbin/sshd"
...
Now we have the index, 5388. The index will be appended to the data OID in order to receive the value we are interested in:
HOST-RESOURCES-MIB::hrSWRunPerfMem.5388
When a dynamic index item is requested, Zabbix retrieves and caches the whole SNMP table under the base OID for the index, even if a match would be found sooner. This is done in case another item refers to the same base OID later; Zabbix will then look up the index in the cache instead of querying the monitored host again. Note that each poller process uses a separate cache.
In all subsequent value retrieval operations only the found index is verified. If it has not changed, the value is requested. If it has changed, the cache is rebuilt - each poller that encounters a changed index walks the index SNMP table again.
2 Special OIDs
Some of the most used SNMP OIDs are translated automatically to a numeric representation by Zabbix. For example, ifIndex is
translated to 1.3.6.1.2.1.2.2.1.1, ifIndex.0 is translated to 1.3.6.1.2.1.2.2.1.1.0.
3 MIB files
Introduction
MIB stands for the Management Information Base. MIB files allow using a textual representation of an OID (Object Identifier). It is
possible to use raw OIDs when monitoring SNMP devices with Zabbix, but if you feel more comfortable using textual representation,
you need to install MIB files.
For example,
ifHCOutOctets
is textual representation of the OID
1.3.6.1.2.1.31.1.1.1.10
Installing MIB files
On Debian-based systems:
apt install snmp-mibs-downloader
On RedHat-based systems, MIB files should be enabled by default. On Debian-based systems, you have to edit the file
/etc/snmp/snmp.conf and comment out the line that says mibs :
# As the snmp packages come without MIB files due to license reasons, loading
# of MIBs is disabled by default. If you added the MIBs you can re-enable
# loading them by commenting out the following line.
mibs :
Testing MIB files
Testing SNMP MIBs can be done using the snmpwalk utility. If you don’t have it installed, use the following instructions.
On Debian-based systems:
apt install snmp
The most important thing to keep in mind is that Zabbix processes are not informed of changes made to MIB files. So after every change you must restart Zabbix server or proxy, e.g.:
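For example, when Zabbix is installed from packages and runs under systemd (service names may differ on your system):
systemctl restart zabbix-server
systemctl restart zabbix-proxy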
Standard MIB files come with every GNU/Linux distribution, but some device vendors provide their own.
Let’s say you would like to use the CISCO-SMI MIB file. The following instructions will download and install it:
If you receive errors instead of the OID, ensure that none of the previous commands returned errors.
If the object name translation worked, you are ready to use the custom MIB file. Note the MIB name prefix (CISCO-SMI::) used in the query; you will need this prefix when using command-line tools as well as Zabbix.
Don’t forget to restart Zabbix server/proxy before using this MIB file in Zabbix.
Attention:
Keep in mind that MIB files can have dependencies. That is, one MIB may require another. In order to satisfy these
dependencies you have to install all the affected MIB files.
3 SNMP traps
Overview
With SNMP traps, the information is sent from an SNMP-enabled device to snmptrapd and is collected or ”trapped” by Zabbix server or Zabbix proxy from a file.
Usually, traps are sent upon some condition change, and the agent connects to the server on port 162 (as opposed to port 161 on the agent side, which is used for queries). Using traps may detect short-lived problems that occur within the query interval and would otherwise be missed by the polled data.
Receiving SNMP traps in Zabbix is designed to work with snmptrapd and one of the mechanisms for passing the traps to Zabbix
- either a Bash or Perl script or SNMPTT.
Note:
The simplest way to set up trap monitoring after configuring Zabbix is to use the Bash script solution, because Perl and SNMPTT are often missing in modern distributions and require more complex configuration. However, this solution uses a script configured as traphandle. For better performance on production systems, use the embedded Perl solution (either the Perl receiver script loaded via the ”perl do” option or SNMPTT).
Notes on HA failover
During high-availability (HA) node switch, Zabbix will continue processing after the last record within the last ISO 8601 timestamp;
if the same record is not found then only the timestamp will be used to identify last position.
Configuring the following fields in the frontend is specific to this item type:
In Data collection → Hosts, in the Host interface field set an SNMP interface with the correct IP or DNS address. The address from
each received trap is compared to the IP and DNS addresses of all SNMP interfaces to find the corresponding hosts.
snmptrap[regexp]
Catches all SNMP traps that match the regular expression specified in regexp. If regexp is unspecified, catches any trap. Item type: SNMP trap.
This item can be set only for SNMP interfaces. User macros and global regular expressions are supported in the parameter of this item key.
snmptrap.fallback
Catches all SNMP traps that were not caught by any of the snmptrap[] items for that interface. Item type: SNMP trap.
This item can be set only for SNMP interfaces.
Note:
Multiline regular expression matching is not supported at this time.
Set the Type of information to ’Log’ for the timestamps to be parsed. Note that other formats such as ’Numeric’ are also
acceptable but might require a custom trap handler.
Note:
For SNMP trap monitoring to work, it must first be set up correctly (see below).
To read the traps, Zabbix server or proxy must be configured to start the SNMP trapper process and point to the trap file that is being
written by SNMPTT or a Bash/Perl trap receiver. To do that, edit the configuration file (zabbix_server.conf or zabbix_proxy.conf):
StartSNMPTrapper=1
SNMPTrapperFile=[TRAP FILE]
Warning:
If systemd parameter PrivateTmp is used, this file is unlikely to work in /tmp.
A Bash trap receiver script can be used to pass traps to Zabbix server from snmptrapd using trapper file. To configure it, add the
traphandle option to snmptrapd configuration file (snmptrapd.conf), see example.
Note:
snmptrapd might need to be restarted to pick up changes to its configuration.
Requirements: Perl, Net-SNMP compiled with --enable-embedded-perl (done by default since Net-SNMP 5.4)
A Perl trap receiver (look for misc/snmptrap/zabbix_trap_receiver.pl) can be used to pass traps to Zabbix server directly from
snmptrapd. To configure it:
• add the Perl script to the snmptrapd configuration file (snmptrapd.conf), e.g.:
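A typical entry looks like the following (the script path is an assumption; use the location where zabbix_trap_receiver.pl was actually placed):
perl do "/usr/bin/zabbix_trap_receiver.pl";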
Note:
snmptrapd might need to be restarted to pick up changes to its configuration.
Note:
If the script name is not quoted, snmptrapd will refuse to start up with messages, similar to these:<br><br>
Regexp modifiers "/l" and "/a" are mutually exclusive at (eval 2) line 1, at end of line
Regexp modifier "/l" may not appear twice at (eval 2) line 1, at end of line
Configuring SNMPTT
Note:
For the best performance, SNMPTT should be configured as a daemon using snmptthandler-embedded to pass the traps
to it. See instructions for configuring SNMPTT.
In the SNMPTT configuration file (snmptt.ini), set the date and time format:
date_time_format = %Y-%m-%dT%H:%M:%S%z
Warning:
The ”net-snmp-perl” package has been removed in RHEL 8.0-8.2; re-added in RHEL 8.3. For more information, see the
known issues.
Now format the traps for Zabbix to recognize them (edit snmptt.conf):
1. Each FORMAT statement should start with ”ZBXTRAP [address]”, where [address] will be compared to IP and DNS addresses
of SNMP interfaces on Zabbix. E.g.:
EVENT coldStart .1.3.6.1.6.3.1.1.5.1 "Status Events" Normal
FORMAT ZBXTRAP $aA Device reinitialized (coldStart)
Attention:
Do not use unknown traps - Zabbix will not be able to recognize them. Unknown traps can be handled by defining a general
event in snmptt.conf:<br><br>
EVENT general .* "General event" Normal
All customized Perl trap receivers and SNMPTT trap configuration must format the trap in the following way:
where
Note that ”ZBXTRAP” and ”[address]” will be cut out from the message during processing. If the trap is formatted otherwise, Zabbix
might parse the traps unexpectedly.
Example trap:
2024-01-11T15:28:47+0200 .1.3.6.1.6.3.1.1.5.3 Normal "Status Events" localhost - ZBXTRAP 192.168.1.1 Link down on interface 2. Admin state: 1. Operational state: 2
This will result in the following trap for SNMP interface with IP=192.168.1.1:
2024-01-11T15:28:47+0200 .1.3.6.1.6.3.1.1.5.3 Normal "Status Events"
localhost - Link down on interface 2. Admin state: 1. Operational state: 2
System requirements
Zabbix has large file support for SNMP trapper files. The maximum file size that Zabbix can read is 2^63 (8 EiB). Note that the
filesystem may impose a lower limit on the file size.
Log rotation
Zabbix does not provide any log rotation system - that should be handled by the user. The log rotation should first rename the old
file and only later delete it so that no traps are lost:
1. Zabbix opens the trap file at the last known location and goes to step 3
2. Zabbix checks if the currently opened file has been rotated by comparing the inode number to the defined trap file’s inode
number. If there is no opened file, Zabbix resets the last location and goes to step 1.
3. Zabbix reads the data from the currently opened file and sets the new location.
4. The new data are parsed. If this was the rotated file, the file is closed and goes back to step 2.
5. If there was no new data, Zabbix sleeps for 1 second and goes back to step 2.
File system
Because of the trap file implementation, Zabbix needs the file system to support inodes to differentiate files (the information is
acquired by a stat() call).
This example uses snmptrapd and a Bash receiver script to pass traps to Zabbix server.
Setup:
1. Configure Zabbix to start SNMP trapper and set the trap file. Add to zabbix_server.conf:
StartSNMPTrapper=1
SNMPTrapperFile=/var/lib/zabbix/snmptraps/snmptraps.log
Note: Before 7.0 release use this link for script download.
If necessary, adjust the ZABBIX_TRAPS_FILE variable in the script. To use the default value, create the parent directory first:
mkdir -p /var/lib/zabbix/snmptraps
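Although the intermediate steps are not reproduced here, the receiver script is typically hooked into snmptrapd with a traphandle directive in snmptrapd.conf (the script location below is an assumption):
traphandle default /usr/local/bin/zabbix_trap_handler.sh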
Note:
snmptrapd might need to be restarted to pick up changes to its configuration.
Note that the ISO 8601 date and time format is used.
5. Next we will configure snmptrapd for our chosen SNMP protocol version and send test traps using the snmptrap utility.
SNMPv1, SNMPv2
SNMPv1 and SNMPv2 protocols rely on ”community string” authentication. In the example below we will use ”secret” as community
string. It must be set to the same value on SNMP trap senders.
Please note that while still widely used in production environments, SNMPv2 doesn’t offer any encryption or real sender authentication. The data is sent as plain text and therefore these protocol versions should only be used in secure environments, such as a private network, and should never be used over any public or third-party network.
SNMP version 1 isn’t really used these days since it doesn’t support 64-bit counters and is considered a legacy protocol.
To enable accepting SNMPv1 or SNMPv2 traps you should add the following line to snmptrapd.conf. Replace ”secret” with the
SNMP community string configured on SNMP trap senders:
authCommunity log,execute,net secret
Next we can send a test trap using snmptrap. We will use the common ”link up” OID in this example:
snmptrap -v 2c -c secret localhost '' linkUp.0
SNMPv3
SNMPv3 addresses SNMPv1/v2 security issues and provides authentication and encryption. You can use MD5 or one of the SHA authentication methods, and DES or one of the AES variants as the cipher.
Attention:
Please note the ”execute” keyword, which allows executing scripts for this user security model.
Warning:
If you wish to use strong encryption methods such as AES192 or AES256, please use net-snmp starting with version 5.8.
You might have to recompile it with configure option: --enable-blumenthal-aes. Older versions of net-snmp do not
support AES192/AES256. See also: http://www.net-snmp.org/wiki/index.php/Strong_Authentication_or_Encryption
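The exact configuration lines are not reproduced here; the following is a minimal sketch for snmptrapd.conf, assuming a user named ”traptest”, placeholder passphrases and a placeholder engine ID of the trap sender (all of these must be replaced with your own values):
createUser -e 0x8000000001020304 traptest SHA authpassphrase AES privpassphrase
authUser log,execute traptest
A test trap could then be sent with matching placeholder credentials:
snmptrap -v 3 -e 0x8000000001020304 -u traptest -l authPriv -a SHA -A authpassphrase -x AES -X privpassphrase localhost '' linkUp.0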
Verification
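One simple way to verify the whole chain (assuming the trap file path configured earlier) is to watch the trap file while sending a test trap:
tail -f /var/lib/zabbix/snmptraps/snmptraps.log
After re-sending the test trap, a formatted line similar to the example trap shown earlier should appear in the file and the value should reach the corresponding SNMP trap item in Zabbix.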
See also
4 IPMI checks
Overview
You can monitor the health and availability of Intelligent Platform Management Interface (IPMI) devices in Zabbix. To perform IPMI
checks Zabbix server must be initially configured with IPMI support.
IPMI is a standardized interface for remote ”lights-out” or ”out-of-band” management of computer systems. It allows monitoring hardware status directly from the so-called ”out-of-band” management cards, independently of the operating system and of whether the machine is powered on at all.
Zabbix IPMI monitoring works only for devices having IPMI support (HP iLO, DELL DRAC, IBM RSA, Sun SSP, etc).
An IPMI manager process schedules the IPMI checks by IPMI pollers. A host is always polled by only one IPMI poller at a time,
reducing the number of open connections to BMC controllers. Thus it’s safe to increase the number of IPMI pollers without worrying
about BMC controller overloading. The IPMI manager process is automatically started when at least one IPMI poller is started.
Configuration
Host configuration
A host must be configured to process IPMI checks. An IPMI interface must be added, with the respective IP and port numbers, and
IPMI authentication parameters must be defined.
Server configuration
By default, the Zabbix server is not configured to start any IPMI pollers, thus any added IPMI items won’t work. To change this,
open the Zabbix server configuration file (zabbix_server.conf) as root and look for the following line:
# StartIPMIPollers=0
Uncomment it and set poller count to, say, 3, so that it reads:
StartIPMIPollers=3
Save the file and restart zabbix_server afterwards.
Item configuration
Supported checks
The table below describes in-built items that are supported in IPMI agent checks.
ipmi.get IPMI sensor IDs and other sensor-related parameters. Returns JSON.
Timeout and session termination
IPMI message timeouts and retry counts are defined in the OpenIPMI library. Due to the current design of OpenIPMI, it is not possible to make these values configurable in Zabbix, either at the interface or at the item level.
The IPMI session inactivity timeout for LAN is 60 +/- 3 seconds. Currently it is not possible to implement periodic sending of the Activate Session command with OpenIPMI. If there are no IPMI item checks from Zabbix to a particular BMC for more than the session timeout configured in the BMC, the next IPMI check after the timeout expires will time out due to individual message timeouts, retries or a receive error. After that a new session is opened and a full rescan of the BMC is initiated. To avoid unnecessary rescans of the BMC, it is advised to set the IPMI item polling interval below the IPMI session inactivity timeout configured in the BMC.
To find sensors on a host, start Zabbix server with DebugLevel=4 enabled. Wait a few minutes and look for sensor discovery records in the Zabbix server logfile.
The first parameter to start with is ”reading_type”. Use ”Table 42-1, Event/Reading Type Code Ranges” from the specifications
to decode ”reading_type” code. Most of the sensors in our example have ”reading_type:0x1” which means ”threshold” sensor.
”Table 42-3, Sensor Type Codes” shows that ”type:0x1” means temperature sensor, ”type:0x2” - voltage sensor, ”type:0x4” - Fan
etc. Threshold sensors sometimes are called ”analog” sensors as they measure continuous parameters like temperature, voltage,
revolutions per minute.
Another example - a sensor with ”reading_type:0x3”. ”Table 42-1, Event/Reading Type Code Ranges” says that reading type codes
02h-0Ch mean ”Generic Discrete” sensor. Discrete sensors have up to 15 possible states (in other words - up to 15 meaningful bits).
For example, for sensor ’CATERR’ with ”type:0x7” the ”Table 42-3, Sensor Type Codes” shows that this type means ”Processor”
and the meaning of individual bits is: 00h (the least significant bit) - IERR, 01h - Thermal Trip etc.
There are few sensors with ”reading_type:0x6f” in our example. For these sensors the ”Table 42-1, Event/Reading Type Code
Ranges” advises to use ”Table 42-3, Sensor Type Codes” for decoding meanings of bits. For example, sensor ’Power Unit Stat’ has
type ”type:0x9” which means ”Power Unit”. Offset 00h means ”PowerOff/Power Down”. In other words, if the least significant bit is
1, then the server is powered off. To test this bit, the bitand function with mask ’1’ can be used; a trigger expression could look like the sketch below.
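A minimal sketch of such a trigger expression, assuming a hypothetical host named ”My host” with an IPMI item whose key is power_unit_stat and whose IPMI sensor field is set to ’Power Unit Stat’:
bitand(last(/My host/power_unit_stat),1)=1
The expression fires when the least significant bit of the latest sensor value is 1, i.e. when the power unit reports PowerOff/Power Down.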
Names of discrete sensors in OpenIPMI-2.0.16, 2.0.17 and 2.0.18 often have an additional ”0” (or some other digit or letter)
appended at the end. For example, while ipmitool and OpenIPMI-2.0.19 display sensor names as ”PhysicalSecurity” or
”CATERR”, in OpenIPMI-2.0.16, 2.0.17 and 2.0.18 the names are ”PhysicalSecurity0” or ”CATERR0”, respectively.
When configuring an IPMI item with Zabbix server using OpenIPMI-2.0.16, 2.0.17 and 2.0.18, use these names ending with ”0” in
the IPMI sensor field of IPMI agent items. When your Zabbix server is upgraded to a new Linux distribution, which uses OpenIPMI-
2.0.19 (or later), items with these IPMI discrete sensors will become ”NOT SUPPORTED”. You have to change their IPMI sensor
names (remove the ’0’ in the end) and wait for some time before they turn ”Enabled” again.
Notes on threshold and discrete sensor simultaneous availability
Some IPMI agents provide both a threshold sensor and a discrete sensor under the same name. The preference is always given to
the threshold sensor.
If IPMI checks are not performed (for any reason: all host IPMI items disabled/not supported, host disabled/deleted, host in maintenance, etc.), the IPMI connection will be terminated by Zabbix server or proxy in 3 to 4 hours, depending on the time when Zabbix server/proxy was started.
5 Simple checks
Overview
Simple checks are normally used for remote agent-less checks of services.
Note that Zabbix agent is not needed for simple checks. Zabbix server/proxy is responsible for the processing of simple checks
(making external connections, etc).
net.tcp.service[ftp,,155]
net.tcp.service[http]
net.tcp.service.perf[http,,8080]
net.udp.service.perf[ntp]
Note:
User name and Password fields (limited to 255 characters) in simple check item configuration are used for VMware moni-
toring items; ignored otherwise.
Supported checks
The item keys are listed without optional parameters and additional information. Click on the item key to see the full details.
Parameters without angle brackets are mandatory. Parameters marked with angle brackets < > are optional.
icmpping[<target>,<packets>,<interval>,<size>,<timeout>,<options>]
<br> The host accessibility by ICMP ping.<br> Return value: 0 - ICMP ping fails; 1 - ICMP ping successful.
Parameters:
Example:
icmpping[,4] #If at least one packet of the four is returned, the item will return 1.
icmppingloss[<target>,<packets>,<interval>,<size>,<timeout>,<options>]
The percentage of lost packets. Return value: Float.
Parameters:
icmppingsec[<target>,<packets>,<interval>,<size>,<timeout>,<mode>,<options>]
<br> The ICMP ping response time (in seconds).<br> Return value: Float.
Parameters:
Comments:
• Packets which are lost or timed out are not used in the calculation;
• If the host is not available (timeout reached), the item will return 0;
• If the return value is less than 0.0001 seconds, the value will be set to 0.0001 seconds;
• See also the table of default values.
net.tcp.service[service,<ip>,<port>]
<br> Checks if a service is running and accepting TCP connections.<br> Return value: 0 - the service is down; 1 - the service is
running.
Parameters:
• service - possible values: ssh, ldap, smtp, ftp, http, pop, nntp, imap, tcp, https, telnet (see details);
• ip - the IP address or DNS name (by default the host IP/DNS is used);
• port - the port number (by default the standard service port number is used).
Comments:
net.tcp.service[ftp,,45] #This item can be used to test the availability of FTP server on TCP port 45.
net.tcp.service.perf[service,<ip>,<port>]
<br> Checks the performance of a TCP service.<br> Return value: Float: 0.000000 - the service is down; seconds - the number of
seconds spent while connecting to the service.
Parameters:
• service - possible values: ssh, ldap, smtp, ftp, http, pop, nntp, imap, tcp, https, telnet (see details);
• ip - the IP address or DNS name (by default the host IP/DNS is used);
• port - the port number (by default the standard service port number is used).
Comments:
• Checking of encrypted protocols (like IMAP on port 993 or POP on port 995) is currently not supported. As a workaround,
please use net.tcp.service[tcp,<ip>,port] for checks like these.
Example:
net.tcp.service.perf[ssh] #This item can be used to test the speed of initial response from SSH server.
net.udp.service[service,<ip>,<port>]
<br> Checks if a service is running and responding to UDP requests.<br> Return value: 0 - the service is down; 1 - the service is
running.
Parameters:
Example:
net.udp.service[ntp,,45] #This item can be used to test the availability of NTP service on UDP port 45.
net.udp.service.perf[service,<ip>,<port>]
<br> Checks the performance of a UDP service.<br> Return value: Float: 0.000000 - the service is down; seconds - the number
of seconds spent waiting for response from the service.
Parameters:
Example:
net.udp.service.perf[ntp] #This item can be used to test the response time from NTP service.
Attention:
For SourceIP support in LDAP simple checks (e.g. net.tcp.service[ldap]), OpenLDAP version 2.6.1 or above is re-
quired.
Timeout processing
Zabbix will not process a simple check longer than the Timeout seconds defined in the item configuration form. For VMware items
and icmpping* items, Zabbix will not process a simple check longer than the Timeout seconds defined in the Zabbix server or
proxy configuration file.
ICMP pings
Zabbix uses an external utility fping to process ICMP pings (icmpping, icmppingloss, icmppingsec).
Installation
• Various Unix-based platforms have the fping package in their default repositories, but it is not pre-installed. In this case you
can use the package manager to install fping.
• Zabbix provides fping packages for RHEL. Please note that these packages are provided without official support.
Configuration
Specify fping location in the FpingLocation parameter of Zabbix server/proxy configuration file (or Fping6Location parameter for
using IPv6 addresses).
fping should be executable by the user Zabbix server/proxy run as and this user should have sufficient rights.
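As an illustration (the path is an assumption; check where your distribution installs fping), the server/proxy configuration could contain:
FpingLocation=/usr/sbin/fping
If fping is not installed setuid root on your system, one common way to grant it the required raw-socket privilege is, for example:
setcap cap_net_raw+ep /usr/sbin/fping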
See also: Known issues for processing simple checks with fping versions below 3.10.
Default values
The table of default values lists, for each icmpping* item parameter: its unit, description, the fping flag it is set by, the default value set by Zabbix, and the allowed limits.
The defaults may differ slightly depending on the platform and version.
In addition, Zabbix uses fping options -i interval ms (do not mix up with the item parameter interval mentioned in the table above,
which corresponds to fping option -p) and -S source IP address (or -I in older fping versions). These options are auto-detected by
running checks with different option combinations. Zabbix tries to detect the minimal value in milliseconds that fping allows to
use with -i by trying 3 values: 0, 1 and 10. The value that first succeeds is then used for subsequent ICMP checks. This process is
done by each ICMP pinger process individually.
Auto-detected fping options are invalidated every hour and detected again on the next attempt to perform ICMP check. Set
DebugLevel>=4 in order to view details of this process in the server or proxy log file.
Zabbix writes IP addresses to be checked by any of the three icmpping* keys to a temporary file, which is then passed to fping. If
items have different key parameters, only the ones with identical key parameters are written to a single file. All IP addresses written
to that single file will be checked by fping in parallel, so the Zabbix ICMP pinger process will spend a fixed amount of time regardless of
the number of IP addresses in the file.
The list of VMware monitoring item keys has been moved to the VMware monitoring section.
6 Log file monitoring
Overview
Zabbix can be used for centralized monitoring and analysis of log files with/without log rotation support.
Notifications can be used to warn users when a log file contains certain strings or string patterns.
Attention:
The size limit of a monitored log file depends on large file support.
Configuration
Make sure that in the agent configuration file:
Item configuration
Log time format - In this field you may optionally specify the pattern for parsing the log line timestamp.
If left blank the timestamp will not be parsed.
Supported placeholders:
* y: Year (0001-9999)
* M: Month (01-12)
* d: Day (01-31)
* h: Hour (00-23)
* m: Minute (00-59)
* s: Second (00-59)
For example, consider the following line from the Zabbix agent log file:
” 23480:20100328:154718.045 Zabbix agent started. Zabbix 1.8.2 (revision 11211).”
It begins with six character positions for PID, followed by date, time, and the rest of the line.
Log time format for this line would be ”pppppp:yyyyMMdd:hhmmss”.
Note that ”p” and ”:” chars are just placeholders and can be anything but ”yMdhms”.
Important notes
• The server and agent keep the trace of a monitored log’s size and last modification time (for logrt) in two counters. Addi-
tionally:
– The agent also internally uses inode numbers (on UNIX/GNU/Linux), file indexes (on Microsoft Windows) and MD5 sums
of the first 512 log file bytes for improving decisions when logfiles get truncated and rotated.
– On UNIX/GNU/Linux systems it is assumed that the file systems where log files are stored report inode numbers, which
can be used to track files.
– On Microsoft Windows Zabbix agent determines the file system type the log files reside on and uses:
∗ On NTFS file systems 64-bit file indexes.
∗ On ReFS file systems (only from Microsoft Windows Server 2012) 128-bit file IDs.
∗ On file systems where file indexes change (e.g. FAT32, exFAT) a fall-back algorithm is used to take a sensible
approach in uncertain conditions when log file rotation results in multiple log files with the same last modification
time.
– The inode numbers, file indexes and MD5 sums are internally collected by Zabbix agent. They are not transmitted to
Zabbix server and are lost when Zabbix agent is stopped.
– Do not modify the last modification time of log files with ’touch’ utility, do not copy a log file with later restoration of
the original name (this will change the file inode number). In both cases the file will be counted as different and will
be analyzed from the start, which may result in duplicated alerts.
– If there are several matching log files for logrt[] item and Zabbix agent is following the most recent of them and
this most recent log file is deleted, a warning message "there are no files matching "<regexp mask>" in
"<directory>" is logged. Zabbix agent ignores log files with modification time less than the most recent modification
time seen by the agent for the logrt[] item being checked.
• The agent starts reading the log file from the point it stopped the previous time.
• The number of bytes already analyzed (the size counter) and last modification time (the time counter) are stored in the
Zabbix database and are sent to the agent to make sure the agent starts reading the log file from this point in cases when
the agent is just started or has received items which were previously disabled or not supported. However, if the agent has
received a non-zero size counter from server, but the logrt[] or logrt.count[] item is unable to find matching files, the size
counter is reset to 0 to analyze from the start if the files appear later.
• Whenever the log file becomes smaller than the log size counter known by the agent, the counter is reset to zero and the
agent starts reading the log file from the beginning taking the time counter into account.
• If there are several matching files with the same last modification time in the directory, then the agent tries to correctly
analyze all log files with the same modification time and avoid skipping data or analyzing the same data twice, although it
cannot be guaranteed in all situations. The agent does not assume any particular log file rotation scheme nor determines
one. When presented multiple log files with the same last modification time, the agent will process them in a lexicographically
descending order. Thus, for some rotation schemes the log files will be analyzed and reported in their original order. For
other rotation schemes the original log file order will not be honored, which can lead to reporting matched log file records in
altered order (the problem does not happen if log files have different last modification times).
• Zabbix agent processes new records of a log file once per Update interval seconds.
• Zabbix agent does not send more than maxlines of a log file per second. The limit prevents overloading of network and
CPU resources and overrides the default value provided by MaxLinesPerSecond parameter in the agent configuration file.
• To find the required string Zabbix will process 10 times more new lines than set in MaxLinesPerSecond. Thus, for example, if
a log[] or logrt[] item has Update interval of 1 second, by default the agent will analyze no more than 200 log file records
and will send no more than 20 matching records to Zabbix server in one check. By increasing MaxLinesPerSecond in the
agent configuration file or setting maxlines parameter in the item key, the limit can be increased up to 10000 analyzed log
file records and 1000 matching records sent to Zabbix server in one check. If the Update interval is set to 2 seconds the
limits for one check would be set 2 times higher than with Update interval of 1 second.
• Additionally, log and log.count values are always limited to 50% of the agent send buffer size, even if there are no non-log
values in it. So for the maxlines values to be sent in one connection (and not in several connections), the agent BufferSize
parameter must be at least maxlines x 2. Zabbix agent can upload data during log gathering and thus free the buffer, whereas
Zabbix agent 2 will stop log gathering until the data is uploaded and the buffer is freed, which is performed asynchronously.
• In the absence of log items all agent buffer size is used for non-log values. When log values come in they replace the older
non-log values as needed, up to the designated 50%.
• For log file records longer than 256kB, only the first 256kB are matched against the regular expression and the rest of the
record is ignored. However, if Zabbix agent is stopped while it is dealing with a long record the agent internal state is lost
and the long record may be analyzed again and differently after the agent is started again.
• Special note for ”\” path separators: if file_format is ”file\.log”, then there should not be a ”file” directory, since it is not
possible to unambiguously define whether ”.” is escaped or is the first symbol of the file name.
• Regular expressions for logrt are supported in filename only, directory regular expression matching is not supported.
• On UNIX platforms a logrt[] item becomes NOTSUPPORTED if a directory where the log files are expected to be found
does not exist.
• On Microsoft Windows, if a directory does not exist the item will not become NOTSUPPORTED (for example, if directory is
misspelled in item key).
• An absence of log files for logrt[] item does not make it NOTSUPPORTED. Errors of reading log files for logrt[] item are
logged as warnings into Zabbix agent log file but do not make the item NOTSUPPORTED.
• Zabbix agent log file can be helpful to find out why a log[] or logrt[] item became NOTSUPPORTED. Zabbix can monitor
its agent log file, except when at DebugLevel=4 or DebugLevel=5.
• Searching for a question mark using a regular expression, e.g. \? may result in false positives if the text file contains NUL
symbols, as those are replaced with ”?” by Zabbix to continue processing the line until the newline character.
Sometimes we may want to extract only the interesting value from a target file instead of returning the whole line when a regular
expression match is found.
Log items have the ability to extract desired values from matched lines. This is accomplished by the additional output parameter
in log and logrt items.
Using the ’output’ parameter allows indicating the ”capturing group” of the match that we may be interested in.
And, with the ability to extract and return a number, the value can be used to define triggers.
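For example (a sketch with a hypothetical log file and pattern; ’output’ is the sixth parameter of the log[] key):
log[/var/log/app.log,"processed ([0-9]+) records",,,,\1]
For a line such as ”processed 1234 records” the item would return ”1234” instead of the whole line, and that number can then be used in trigger expressions.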
The ’maxdelay’ parameter in log items allows ignoring some older lines from log files in order to get the most recent lines analyzed
within the ’maxdelay’ seconds.
Warning:
Specifying ’maxdelay’ > 0 may lead to ignoring important log file records and missed alerts. Use it carefully at your
own risk only when necessary.
By default items for log monitoring follow all new lines appearing in the log files. However, there are applications which in some
situations start writing an enormous number of messages in their log files. For example, if a database or a DNS server is unavailable,
such applications flood log files with thousands of nearly identical error messages until normal operation is restored. By default,
all those messages will be dutifully analyzed and matching lines sent to server as configured in log and logrt items.
Built-in protection against overload consists of a configurable ’maxlines’ parameter (protects server from too many incoming
matching log lines) and a 10*’maxlines’ limit (protects host CPU and I/O from overloading by agent in one check). Still, there are
2 problems with the built-in protection. First, a large number of potentially not-so-informative messages are reported to server
and consume space in the database. Second, due to the limited number of lines analyzed per second the agent may lag behind
the newest log records for hours. Quite likely, you might prefer to be sooner informed about the current situation in the log files
instead of crawling through old records for hours.
The solution to both problems is using the ’maxdelay’ parameter. If ’maxdelay’ > 0 is specified, during each check the number of
processed bytes, the number of remaining bytes and processing time is measured. From these numbers the agent calculates an
estimated delay - how many seconds it would take to analyze all remaining records in a log file.
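As an illustration of the estimate: if the last check processed 10000 bytes in 0.5 seconds and 60000 bytes remain, the estimated delay is 60000 / (10000 / 0.5) = 3 seconds; with ’maxdelay’ set to 2 the agent would jump ahead, while with ’maxdelay’ set to 5 it would not.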
If the delay does not exceed ’maxdelay’ then the agent proceeds with analyzing the log file as usual.
If the delay is greater than ’maxdelay’ then the agent ignores a chunk of a log file by ”jumping” over it to a new estimated
position so that the remaining lines could be analyzed within ’maxdelay’ seconds.
Note that agent does not even read ignored lines into buffer, but calculates an approximate position to jump to in a file.
The fact of skipping log file lines is logged in the agent log file like this:
14287:20160602:174344.206 item:"logrt["/home/zabbix32/test[0-9].log",ERROR,,1000,,,120.0]"
logfile:"/home/zabbix32/test1.log" skipping 679858 bytes
(from byte 75653115 to byte 76332973) to meet maxdelay
The ”to byte” number is approximate because after the ”jump” the agent adjusts the position in the file to the beginning of a log
line which may be further in the file or earlier.
Depending on how the speed of growing compares with the speed of analyzing the log file you may see no ”jumps”, rare or often
”jumps”, large or small ”jumps”, or even a small ”jump” in every check. Fluctuations in the system load and network latency also
affect the calculation of delay and hence, ”jumping” ahead to keep up with the ”maxdelay” parameter.
Setting ’maxdelay’ < ’update interval’ is not recommended (it may result in frequent small ”jumps”).
logrt with the copytruncate option assumes that different log files have different records (at least their timestamps are differ-
ent), therefore MD5 sums of initial blocks (up to the first 512 bytes) will be different. Two files with the same MD5 sums of initial
blocks means that one of them is the original, another - a copy.
logrt with the copytruncate option makes effort to correctly process log file copies without reporting duplicates. However,
things like producing multiple log file copies with the same timestamp, log file rotation more often than logrt[] item update interval,
frequent restarting of agent are not recommended. The agent tries to handle all these situations reasonably well, but good results
cannot be guaranteed in all circumstances.
When Zabbix agent is started it receives a list of active checks from Zabbix server or proxy. For log*[] metrics it receives the
processed log size and the modification time for finding where to start log file monitoring from. Depending on the actual log file
size and modification time reported by file system the agent decides either to continue log file monitoring from the processed log
size or re-analyze the log file from the beginning.
A running agent maintains a larger set of attributes for tracking all monitored log files between checks. This in-memory state is
lost when the agent is stopped.
The new optional parameter persistent_dir specifies a directory for storing this state of log[], log.count[], logrt[] or logrt.count[]
item in a file. The state of log item is restored from the persistent file after the Zabbix agent is restarted.
The primary use-case is monitoring of log file located on a mirrored file system. Until some moment in time the log file is written
to both mirrors. Then mirrors are split. On the active copy the log file is still growing, getting new records. Zabbix agent analyzes
it and sends processed logs size and modification time to server. On the passive copy the log file stays the same, well behind
the active copy. Later the operating system and Zabbix agent are rebooted from the passive copy. The processed log size and
modification time the Zabbix agent receives from the server may not be valid for the situation on the passive copy. To continue log file
monitoring from the place the agent left off at the moment of file system mirror split the agent restores its state from the persistent
file.
On startup Zabbix agent knows nothing about persistent files. Only after receiving a list of active checks from Zabbix server (proxy)
the agent sees that some log items should be backed by persistent files under specified directories.
During agent operation the persistent files are opened for writing (with fopen(filename, ”w”)) and overwritten with the latest data.
The chance of losing persistent file data if the overwriting and a file system mirror split happen at the same time is very small, so there is no special handling for it. Writing into a persistent file is NOT followed by enforced synchronization to storage media (fsync() is not called).
Overwriting with the latest data is done after successful reporting of matching log file record or metadata (processed log size and
modification time) to Zabbix server. That may happen as often as every item check if log file keeps changing.
After receiving a list of active checks the agent marks obsolete persistent files for removal. A persistent file becomes obsolete if:
1) the corresponding log item is no longer monitored, 2) a log item is reconfigured with a different persistent_dir location than
before.
Removing is done with delay 24 hours because log files in NOTSUPPORTED state are not included in the list of active checks but
they may become SUPPORTED later and their persistent files will be useful.
If the agent is stopped before 24 hours expire, then the obsolete files will not be deleted as Zabbix agent is not getting info about
their location from Zabbix server anymore.
Warning:
Reconfiguring a log item’s persistent_dir back to the old persistent_dir location while the agent is stopped, without
deleting the old persistent file by user - will cause restoring the agent state from the old persistent file resulting in missed
messages or false alerts.
Zabbix agent distinguishes active checks by their keys. For example, logrt[/home/zabbix/test.log] and logrt[/home/zabbix/test.log,] are different items. Modifying the item logrt[/home/zabbix/test.log,,,10] in the frontend to logrt[/home/zabbix/test.log,,,20] will result in deleting the item logrt[/home/zabbix/test.log,,,10] from the agent’s list of active checks and creating the logrt[/home/zabbix/test.log,,,20] item (some attributes are carried across the modification in frontend/server, but not in the agent).
The persistent file name is composed of the MD5 sum of the item key with the item key length appended to reduce the possibility of collisions. For example, the state of the logrt[/home/zabbix50/test.log,,,,,,,,/home/zabbix50/agent_private] item will be kept in the persistent file c963ade4008054813bbc0a650bb8e09266.
persistent_dir is specified by taking into account specific file system layouts, mount points and mount options and storage
mirroring configuration - the persistent file should be on the same mirrored filesystem as the monitored log file.
If the persistent_dir directory cannot be created or does not exist, or access rights for Zabbix agent do not allow creating/writing/reading/deleting files, the log item becomes NOTSUPPORTED.
If access rights to persistent storage files are removed during agent operation or other errors occur (e.g. disk full) then errors are
logged into the agent log file but the log item does not become NOTSUPPORTED.
Load on I/O
Item’s persistent file is updated after successful sending of every batch of data (containing item’s data) to server. For example,
default ’BufferSize’ is 100. If a log item has found 70 matching records then the first 50 records will be sent in one batch, persistent
file will be updated, then remaining 20 records will be sent (maybe with some delay when more data is accumulated) in the 2nd
batch, and the persistent file will be updated again.
Each matching line from log[] and logrt[] item and a result of each log.count[] and logrt.count[] item check requires
a free slot in the designated 50% area in the agent send buffer. The buffer elements are regularly sent to server (or proxy) and the
buffer slots are free again.
While there are free slots in the designated log area in the agent send buffer and communication fails between agent and server
(or proxy) the log monitoring results are accumulated in the send buffer. This helps to mitigate short communication failures.
During longer communication failures all log slots get occupied and the following actions are taken:
• log[] and logrt[] item checks are stopped. When communication is restored and free slots in the buffer are available
the checks are resumed from the previous position. No matching lines are lost, they are just reported later.
• log.count[] and logrt.count[] checks are stopped if maxdelay = 0 (default). Behavior is similar to log[] and
logrt[] items as described above. Note that this can affect log.count[] and logrt.count[] results: for example,
one check counts 100 matching lines in a log file, but as there are no free slots in the buffer the check is stopped. When
communication is restored the agent counts the same 100 matching lines and also 70 new matching lines. The agent now
sends count = 170 as if they were found in one check.
• log.count[] and logrt.count[] checks with maxdelay > 0: if there was no ”jump” during the check, then behavior
is similar to described above. If a ”jump” over log file lines took place then the position after ”jump” is kept and the counted
result is discarded. So, the agent tries to keep up with a growing log file even in case of communication failure.
If a regular expression used in log[], logrt[], log.count[] or logrt.count[] item cannot be compiled by PCRE or PCRE2
library then the item goes into NOTSUPPORTED state with an error message. To continue monitoring the log item, the regular
expression should be fixed.
If the regular expression compiles successfully, but fails at runtime (on some or on all log records), then the log item remains
supported and monitoring continues. The runtime error is logged in the Zabbix agent log file (without the log file record).
The logging rate is limited to one runtime error per check to allow Zabbix agent to monitor its own log file. For example, if 10
records are analyzed and 3 records fail with a regexp runtime error, one record is produced in the agent log.
Exception: if MaxLinesPerSecond=1 and update interval=1 (only 1 record is allowed to analyze per check) then regexp runtime
errors are not logged.
zabbix_agentd logs the item key in case of a runtime error, zabbix_agent2 logs the item ID to help identify which log item has
runtime errors. It is recommended to redesign the regular expression in case of runtime errors.
7 Calculated items
Overview
A calculated item allows creating a calculation based on the values of some existing items. For example, you may want to calculate
the hourly average of some item value or to calculate the total value for a group of items. That is what a calculated item is for.
Calculated items are a way of creating virtual data sources. All calculations are done by Zabbix server only. The values are
periodically calculated based on the arithmetical expression used.
The resulting data is stored in the Zabbix database as for any other item; both history and trend values are stored and graphs can
be generated.
Note:
If the calculation result is a float value it will be trimmed to an integer if the calculated item type of information is Numeric
(unsigned).
Also, if there is no recent data in the cache and there is no defined querying period in the function, Zabbix will by default
go as far back in the past as one week to query the database for historical values.
Calculated items share their syntax with trigger expressions. Comparison to strings is allowed in calculated items. Calculated
items may be referenced by macros or other entities same as any other item type.
Configurable fields
The key is a unique item identifier (per host). You can create any key name using supported symbols.
The calculation definition should be entered in the Formula field. There is no connection between the formula and the key. The
key parameters are not used in the formula in any way.
A simple formula structure is:
function(/host/key,<parameter1>,<parameter2>,...)
where:
• function - one of the supported functions: last, min, max, avg, count, etc.;
• host - the host of the item that is used for calculation; the current host can be omitted (i.e. as in function(//key,parameter,...));
• key - the key of the item that is used for calculation;
• parameter(s) - the parameters of the function, if required.
Attention:
User macros in the formula will be expanded if used to reference a function parameter, item filter parameter, or a constant.
User macros will NOT be expanded if referencing a function, host name, item key, item key parameter or operator.
A more complex formula may use a combination of functions, operators and brackets. You can use all functions and operators
supported in trigger expressions. The logic and operator precedence are exactly the same.
Unlike trigger expressions, Zabbix processes calculated items according to the item update interval, not upon receiving a new
value.
All items that are referenced by history functions in the calculated item formula must exist and be collecting data. Also, if you
change the item key of a referenced item, you have to manually update any formulas using that key.
A calculated item may become unsupported in several cases:
• referenced item(s)
– is not found
– is disabled
– belongs to a disabled host
– is not supported (except with nodata() function and operators with unknown values)
• no data to calculate a function
• division by zero
• incorrect syntax used
Usage examples
Example 1
100*last(//vfs.fs.size[/,free])/last(//vfs.fs.size[/,total])
Zabbix will take the latest values for free and total disk spaces and calculate percentage according to the given formula.
Example 2
avg(/Zabbix Server/zabbix[wcache,values],10m)
Note that extensive use of calculated items with long time periods may affect performance of Zabbix server.
Example 3
last(//net.if.in[eth0,bytes])+last(//net.if.out[eth0,bytes])
Example 4
100*last(//net.if.in[eth0,bytes])/(last(//net.if.in[eth0,bytes])+last(//net.if.out[eth0,bytes]))
See also: Examples of aggregate calculations
1 Aggregate calculations
Overview
Aggregate calculations are a calculated item type that allows Zabbix server to collect information from several items and then calculate an aggregate, depending on the aggregate function used.
Aggregate calculations do not require any agent running on the host being monitored.
Syntax
To retrieve aggregates use one of the supported aggregate functions: avg, max, min, sum, etc. Then add the foreach function as
the only parameter and its item filter to select the required items:
aggregate_function(function_foreach(/host/key?[group="host group"],timeperiod))
A foreach function (e.g. avg_foreach, count_foreach, etc.) returns one aggregate value for each selected item. Items are selected
by using the item filter (/host/key?[group="host group"]), from item history. For more details, see foreach functions.
If some of the items have no data for the requested period, they are ignored in the calculation. If no items have data, the function
will return an error.
Alternatively you may list several items as parameters for aggregation:
aggregate_function(function(/host/key,parameter),function(/host2/key2,parameter),...)
Note that function here must be a history/trend function.
Note:
If the aggregate results in a float value it will be trimmed to an integer if the aggregated item type of information is Numeric
(unsigned).
An aggregate calculation may become unsupported if:
• none of the referenced items is found (which may happen if the item key is incorrect, none of the items exists or all included
groups are incorrect)
• no data to calculate a function
Usage examples
Example 1
The total disk space of host group ’MySQL Servers’.
sum(last_foreach(/*/vfs.fs.size[/,total]?[group="MySQL Servers"]))
Example 2
The sum of the latest values of all items matching net.if.in[*] on the host.
sum(last_foreach(/host/net.if.in[*]))
Example 3
The average processor load of host group ’MySQL Servers’.
avg(last_foreach(/*/system.cpu.load[,avg1]?[group="MySQL Servers"]))
Example 4
5-minute average of the number of queries per second for host group ’MySQL Servers’.
avg(avg_foreach(/*/mysql.qps?[group="MySQL Servers"],5m))
Example 5
Average CPU load on all hosts in multiple host groups that have the specific tags.
Example 6
Calculation used on the latest item value sums of a whole host group.
sum(last_foreach(/*/net.if.out[eth0,bytes]?[group="video"])) / sum(last_foreach(/*/nginx_stat.sh[active]?[group="video"]))
Example 7
The total number of unsupported items in host group ’Zabbix servers’.
sum(last_foreach(/*/zabbix[host,,items_unsupported]?[group="Zabbix servers"]))
Examples of correct/incorrect syntax
Expressions (including function calls) cannot be used as history, trend, or foreach function parameters. However, those functions
themselves can be used in other (non-historical) function parameters.
Valid expressions:
avg(last(/host/key1),last(/host/key2)*10,last(/host/key1)*100)
max(avg(avg_foreach(/*/system.cpu.load?[group="Servers A"],5m)),avg(avg_foreach(/*/system.cpu.load?[group="Servers B"],5m)),avg(avg_foreach(/*/system.cpu.load?[group="Servers C"],5m)))
Invalid expressions:
sum(/host/key,10+2)
sum(/host/key, avg(10,2))
sum(/host/key,last(/host/key2))
Note that in an expression like:
sum(sum_foreach(//resptime[*],5m))/sum(count_foreach(//resptime[*],5m))
it cannot be guaranteed that both parts of the equation will always have the same set of values. While one part of the expression
is evaluated, a new value for the requested period may arrive and then the other part of the expression will have a different set of
values.
8 Internal checks
Overview
Internal checks allow monitoring the internal processes of Zabbix. In other words, you can monitor what goes on with Zabbix server
or Zabbix proxy.
Internal checks are processed by server or proxy regardless of the host maintenance status.
Note:
Internal checks are processed by Zabbix pollers.
Performance
Using some internal items may negatively affect performance. These items are:
• zabbix[host,,items]
• zabbix[host,,items_unsupported]
• zabbix[hosts]
• zabbix[items]
• zabbix[items_unsupported]
• zabbix[queue]
• zabbix[requiredperformance]
• zabbix[stats,,,queue]
• zabbix[triggers]
The System information and Queue frontend sections are also affected.
Supported checks
The item keys are listed without optional parameters and additional information. Click on the item key to see the full details.
zabbix[boottime] The startup time of Zabbix server or Zabbix proxy process in seconds.
zabbix[cluster,discovery,nodes]
Discovers the high availability cluster nodes.
zabbix[connector_queue] The count of values enqueued in the connector queue.
zabbix[discovery_queue] The count of network checks enqueued in the discovery queue.
zabbix[host,,items] The number of enabled items (supported and not supported) on the host.
zabbix[host,,items_unsupported]
The number of enabled unsupported items on the host.
zabbix[host,,maintenance] The current maintenance status of the host.
zabbix[host,active_agent,available]
The availability of active agent checks on the host.
zabbix[host,discovery,interfaces]
The details of all configured interfaces of the host in Zabbix frontend.
zabbix[host,available] The availability of the main interface of a particular type of checks on the host.
zabbix[hosts] The number of monitored hosts.
zabbix[items] The number of enabled items (supported and not supported).
zabbix[items_unsupported] The number of unsupported items.
zabbix[java] The information about Zabbix Java gateway.
zabbix[lld_queue] The count of values enqueued in the low-level discovery processing queue.
zabbix[preprocessing_queue]
The count of values enqueued in the preprocessing queue.
zabbix[process] The percentage of time a particular Zabbix process or a group of processes (identified by <type>
and <mode>) spent in <state>.
zabbix[proxy] The information about Zabbix proxy.
• Parameters without angle brackets are constants - for example, ’host’ and ’available’ in zabbix[host,<type>,available].
Use them in the item key as is.
• Values for items and item parameters that are ”not supported on proxy” can only be retrieved if the host is monitored by
server. And vice versa, values ”not supported on server” can only be retrieved if the host is monitored by proxy.
zabbix[boottime]
<br> The startup time of Zabbix server or Zabbix proxy process in seconds.<br> Return value: Integer.
zabbix[cluster,discovery,nodes]
<br> Discovers the high availability cluster nodes.<br> Return value: JSON object.
zabbix[connector_queue]
<br> The count of values enqueued in the connector queue.<br> Return value: Integer.
zabbix[discovery_queue]
<br> The count of network checks enqueued in the discovery queue.<br> Return value: Integer.
zabbix[host,,items]
<br> The number of enabled items (supported and not supported) on the host.<br> Return value: Integer.
zabbix[host,,items_unsupported]
<br> The number of enabled unsupported items on the host.<br> Return value: Integer.
zabbix[host,,maintenance]
<br> The current maintenance status of the host.<br> Return values: 0 - normal state; 1 - maintenance with data collection; 2 -
maintenance without data collection.
Comments:
• This item is always processed by Zabbix server regardless of the host location (on server or proxy). The proxy will not receive
this item with configuration data.
• The second parameter must be empty and is reserved for future use.
zabbix[host,active_agent,available]
<br> The availability of active agent checks on the host.<br> Return values: 0 - unknown; 1 - available; 2 - not available.
zabbix[host,discovery,interfaces]
<br> The details of all configured interfaces of the host in Zabbix frontend.<br> Return value: JSON object.
Comments:
zabbix[host,<type>,available]
<br> The availability of the main interface of a particular type of checks on the host.<br> Return values: 0 - not available; 1 -
available; 2 - unknown.
Comments:
zabbix[hosts]
The number of monitored hosts. Return value: Integer.
zabbix[items]
<br> The number of enabled items (supported and not supported).<br> Return value: Integer.
zabbix[items_unsupported]
The number of unsupported items. Return value: Integer.
zabbix[java,,<param>]
<br> The information about Zabbix Java gateway.<br> Return values: 1 - if <param> is ping; Java gateway version - if <param>
is version (for example: ”7.0.0”).
Comments:
zabbix[lld_queue]
<br> The count of values enqueued in the low-level discovery processing queue.<br> Return value: Integer.
This item can be used to monitor the low-level discovery processing queue length.
zabbix[preprocessing_queue]
<br> The count of values enqueued in the preprocessing queue.<br> Return value: Integer.
zabbix[process,<type>,<mode>,<state>]
<br> The percentage of time a particular Zabbix process or a group of processes (identified by <type> and <mode>) spent in
<state>. It is calculated for the last minute only.<br> Return value: Float.
Parameters:
• type - for server processes: agent poller, alert manager, alert syncer, alerter, availability manager, configuration syncer,
configuration syncer worker, connector manager, connector worker, discovery manager, discovery worker, escalator, ha
manager, history poller, history syncer, housekeeper, http agent poller, http poller, icmp pinger, ipmi manager, ipmi poller,
java poller, lld manager, lld worker, odbc poller, poller, preprocessing manager, preprocessing worker, proxy group man-
ager, proxy poller, self-monitoring, service manager, snmp poller, snmp trapper, task manager, timer, trapper, trigger
housekeeper, unreachable poller, vmware collector;<br>for proxy processes: agent poller, availability manager, configura-
tion syncer, data sender, discovery manager, discovery worker, history syncer, housekeeper, http agent poller, http poller,
icmp pinger, ipmi manager, ipmi poller, java poller, odbc poller, poller, preprocessing manager, preprocessing worker, self-
monitoring, snmp poller, snmp trapper, task manager, trapper, unreachable poller, vmware collector
• mode - avg - average value for all processes of a given type (default)<br>count - returns number of forks for a given
process type, <state> should not be specified<br>max - maximum value<br>min - minimum value<br><process number>
- process number (between 1 and the number of pre-forked instances). For example, if 4 trappers are running, the value is
between 1 and 4.
• state - busy - process is in busy state, for example, the processing request (default); idle - process is in idle state doing
nothing.
Comments:
• If <mode> is a Zabbix process number that is not running (for example, with 5 pollers running the <mode> is specified to
be 6), such an item will turn unsupported;
• Minimum and maximum refers to the usage percentage for a single process. So if in a group of 3 pollers usage percentages
per process were 2, 18 and 66, min would return 2 and max would return 66.
• Processes report what they are doing in shared memory and the self-monitoring process summarizes that data each second.
State changes (busy/idle) are registered upon change - thus a process that becomes busy registers as such and doesn’t
change or update the state until it becomes idle. This ensures that even fully hung processes will be correctly registered as
100% busy.
• Currently, ”busy” means ”not sleeping”, but in the future additional states might be introduced - waiting for locks, performing
database queries, etc.
• On Linux and most other systems, resolution is 1/100 of a second.
Examples:
zabbix[process,poller,avg,busy] #the average time of poller processes spent doing something during the last minute
zabbix[process,"icmp pinger",max,busy] #the maximum time spent doing something by any ICMP pinger process during the last minute
zabbix[process,"history syncer",2,busy] #the time spent doing something by history syncer number 2 during the last minute
zabbix[process,trapper,count] #the amount of currently running trapper processes
zabbix[proxy,<name>,<param>]
The information about Zabbix proxy.
Parameters:
zabbix[proxy,discovery]
<br> The list of Zabbix proxies with name, mode, encryption, compression, version, last seen, host count, item count, required
values per second (vps), version status (current/outdated/unsupported), timeouts by item type, proxy group name (if proxy belongs
to group), state (unknown/offline/online).<br> Return value: JSON object.
zabbix[proxy group,<name>,available]
<br> The number of online proxies in a proxy group.<br> Return value: Integer.
Parameters:
zabbix[proxy group,<name>,pavailable]
<br> The percentage of online proxies in a proxy group.<br> Return value: Float.
Parameters:
zabbix[proxy group,<name>,proxies]
<br> The list of Zabbix proxies in a proxy group with name, mode, encryption, compression, version, last seen, host count, item
count, required values per second (vps), version status (current/outdated/unsupported), timeouts, proxy group name, state (un-
known/offline/online).<br> Return value: JSON.
Parameters:
zabbix[proxy group,<name>,state]
<br> The state of a proxy group.<br> Return value: 0 - unknown; 1 - offline; 2 - recovering; 3 - online; 4 - degrading.
Parameters:
• name - the proxy group name.
zabbix[proxy_buffer,buffer,<mode>]
<br> The proxy memory buffer usage statistics.<br> Return values: Integer (for size); Float (for percentage).
Parameters:
• mode:<br>total - the total size of buffer (can be used to check if memory buffer is enabled)<br>free - the size of free
buffer<br>pfree - the percentage of free buffer<br>used - the size of used buffer<br>pused - the percentage of used
buffer
Comments:
• Returns a ’Proxy memory buffer is disabled’ error when the memory buffer is disabled;<br>
• This item is not supported on Zabbix server.
zabbix[proxy_buffer,state,changes]
<br> Returns the number of state changes between disk/memory buffer modes since start.<br> Return values: Integer; 0 - the
memory buffer is disabled.
Comments:
• Frequent state changes indicate that either the memory buffer size or age must be increased;
• If the memory buffer state is monitored infrequently (for example, once a minute) then the buffer might flip its state without
it being registered.
zabbix[proxy_buffer,state,current]
<br> Returns the current working state where the new data are being stored.<br> Return values: 0 - disk; 1 - memory.
zabbix[proxy_history]
<br> The number of values in the proxy history table waiting to be sent to the server.<br> Return values: Integer.
zabbix[queue,<from>,<to>]
<br> The number of monitored items in the queue which are delayed at least by <from> seconds, but less than <to> seconds.<br>
Return value: Integer.
Parameters:
zabbix[rcache,<cache>,<mode>]
<br> The availability statistics of the Zabbix configuration cache.<br> Return values: Integer (for size); Float (for percentage).
Parameters:
• cache - buffer;
• mode - total - the total size of buffer<br>free - the size of free buffer<br>pfree - the percentage of free buffer<br>used -
the size of used buffer<br>pused - the percentage of used buffer
zabbix[requiredperformance]
<br> The required performance of Zabbix server or Zabbix proxy, in new values per second expected.<br> Return value: Float.
Approximately correlates with ”Required server performance, new values per second” in Reports → System information.
zabbix[stats,<ip>,<port>]
<br> The internal metrics of a remote Zabbix server or proxy.<br> Return values: JSON object.
Parameters:
Comments:
• The stats request will only be accepted from the addresses listed in the ’StatsAllowedIP’ server/proxy parameter on the
target instance;
• A selected set of internal metrics is returned by this item. For details, see Remote monitoring of Zabbix stats.
zabbix[stats,<ip>,<port>,queue,<from>,<to>]
<br> The internal queue metrics (see zabbix[queue,<from>,<to>]) of a remote Zabbix server or proxy.<br> Return values:
JSON object.
Parameters:
Comments:
• The stats request will only be accepted from the addresses listed in the ’StatsAllowedIP’ server/proxy parameter on the
target instance;
• A selected set of internal metrics is returned by this item. For details, see Remote monitoring of Zabbix stats.
zabbix[tcache,<cache>,<parameter>]
<br> The effectiveness statistics of the Zabbix trend function cache.<br> Return values: Integer (for size); Float (for percentage).
Parameters:
• cache - buffer;
• mode - all - total cache requests (default)<br>hits - cache hits<br>phits - percentage of cache hits<br>misses - cache
misses<br>pmisses - percentage of cache misses<br>items - the number of cached items<br>requests - the number of
cached requests<br>pitems - percentage of cached items from cached items + requests. Low percentage most likely means
that the cache size can be reduced.
zabbix[triggers]
<br> The number of enabled triggers in Zabbix database, with all items enabled on enabled hosts.<br> Return value: Integer.
zabbix[uptime]
<br> The uptime of the Zabbix server or proxy process in seconds.<br> Return value: Integer.
zabbix[vcache,buffer,<mode>]
<br> The availability statistics of the Zabbix value cache.<br> Return values: Integer (for size); Float (for percentage).
Parameter:
• mode - total - the total size of buffer<br>free - the size of free buffer<br>pfree - the percentage of free buffer<br>used -
the size of used buffer<br>pused - the percentage of used buffer
zabbix[vcache,cache,<parameter>]
<br> The effectiveness statistics of the Zabbix value cache.<br> Return values: Integer. With the mode parameter returns: 0 -
normal mode; 1 - low memory mode.
Parameters:
• parameter - requests - the total number of requests<br>hits - the number of cache hits (history values taken from the
cache)<br>misses - the number of cache misses (history values taken from the database)<br>mode - the value cache
operating mode
Comments:
• Once the low-memory mode has been switched on, the value cache will remain in this state for 24 hours, even if the problem
that triggered this mode is resolved sooner;
• You may use this key with the Change per second preprocessing step in order to get values-per-second statistics;
• This item is not supported on Zabbix proxy.
zabbix[version]
<br> The version of Zabbix server or proxy.<br> Return value: String. For example: 7.0.0.
zabbix[vmware,buffer,<mode>]
<br> The availability statistics of the Zabbix vmware cache.<br> Return values: Integer (for size); Float (for percentage).
Parameters:
• mode - total - the total size of buffer<br>free - the size of free buffer<br>pfree - the percentage of free buffer<br>used -
the size of used buffer<br>pused - the percentage of used buffer
zabbix[vps,written]
<br> The total number of history values written to database.<br> Return value: Integer.
zabbix[wcache,<cache>,<mode>]
<br> The statistics and availability of the Zabbix write cache.<br> Return values: Integer (for number/size); Float (for percentage).
Parameters:
Comments:
• Specifying <cache> is mandatory. The trend cache parameter is not supported with Zabbix proxy;
• The history cache is used to store item values. A low number indicates performance problems on the database side;
• The history index cache is used to index the values stored in the history cache;
• After the history cache is filled and then cleared, the history index cache will still keep some data. This behavior is expected
and helps the system run more efficiently by avoiding the extra processing required to constantly resize the memory;
• The trend cache stores the aggregate for the current hour for all items that receive data;
• You may use the zabbix[wcache,values] key with the Change per second preprocessing step in order to get values-per-second
statistics.
9 SSH checks
Overview
SSH checks are performed as agent-less monitoring. Zabbix agent is not needed for SSH checks.
To perform SSH checks, Zabbix server must initially be configured with SSH2 support (libssh or libssh2). See also: Requirements.
Attention:
Starting with RHEL 8, only libssh is supported. For other distributions, libssh is suggested over libssh2.
Configuration
Passphrase authentication
SSH checks provide two authentication methods - a user/password pair and key-file based.
If you do not intend to use keys, no additional configuration is required, besides linking libssh or libssh2 to Zabbix, if you’re building
from source.
To use key based authentication for SSH items, certain changes to the server configuration are required.
Open the Zabbix server configuration file (zabbix_server.conf) as root and look for the following line:
# SSHKeyLocation=
Uncomment it and set the full path to the folder where the public and private keys will be located:
SSHKeyLocation=/home/zabbix/.ssh
The path /home/zabbix here is the home directory of the zabbix user account, and .ssh is the directory inside it where, by default, the public and private keys will be generated by the ssh-keygen command.
Usually, installation packages of Zabbix server from different OS distributions create the zabbix user account with a home directory elsewhere, for example, /var/lib/zabbix (as is common for system accounts).
Before generating the keys, you could relocate the home directory to /home/zabbix, so that it corresponds to the SSHKeyLocation Zabbix server configuration parameter mentioned above.
Note:
The following steps can be skipped if zabbix account has been added manually according to the installation section. In
such a case the home directory for the zabbix account is most likely already /home/zabbix.
To change the home directory of the zabbix user account, all working processes which are using it have to be stopped:
service zabbix-agent stop
service zabbix-server stop
To change the home directory location with an attempt to move it (if it exists), a usermod command can be executed. If a home directory did not exist in the old location, it has to be created at the new location. Additionally, to be sure that all is secure, ownership and permissions can be set explicitly on the new home directory; a sketch of these commands is shown below.
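A minimal sketch of these commands, assuming the account is named zabbix and the target directory is /home/zabbix (adjust to your environment):
usermod -m -d /home/zabbix zabbix
test -d /home/zabbix || mkdir /home/zabbix
chown zabbix:zabbix /home/zabbix
chmod 700 /home/zabbix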
Now the steps to generate the public and private keys can be performed with the following commands (for better readability, the command output is shown as comments):
sudo -u zabbix ssh-keygen -t rsa
# Generating public/private rsa key pair.
# Enter file in which to save the key (/home/zabbix/.ssh/id_rsa):
/home/zabbix/.ssh/id_rsa
# Enter passphrase (empty for no passphrase):
<Leave empty>
# Enter same passphrase again:
<Leave empty>
# Your identification has been saved in /home/zabbix/.ssh/id_rsa.
# Your public key has been saved in /home/zabbix/.ssh/id_rsa.pub.
# The key fingerprint is:
# 90:af:e4:c7:e3:f0:2e:5a:8d:ab:48:a2:0c:92:30:b9 zabbix@it0
# The key's randomart image is:
# +--[ RSA 2048]----+
# | |
# | . |
# | o |
# | . o |
# |+ . S |
# |.+ o = |
# |E . * = |
# |=o . ..* . |
# |... oo.o+ |
# +-----------------+
Note:
The public and private keys (id_rsa.pub and id_rsa) have been generated by default in the /home/zabbix/.ssh directory,
which corresponds to the Zabbix server SSHKeyLocation configuration parameter.
Attention:
Key types other than ”rsa” may be supported by the ssh-keygen tool and SSH servers but they may not be supported by
libssh2 used by Zabbix.
This step should be performed only once for every host that will be monitored by SSH checks.
By using the following commands, the public key file can be installed on a remote host 10.10.10.10, so that the SSH checks can be performed with a root account (for better readability, the command output is shown as comments):
sudo -u zabbix ssh-copy-id [email protected]
# The authenticity of host '10.10.10.10 (10.10.10.10)' can't be established.
# RSA key fingerprint is 38:ba:f2:a4:b5:d9:8f:52:00:09:f7:1f:75:cc:0b:46.
# Are you sure you want to continue connecting (yes/no)?
yes
# Warning: Permanently added '10.10.10.10' (RSA) to the list of known hosts.
# [email protected]'s password:
<Enter root password>
# Now try logging into the machine, with "ssh '[email protected]'",
# and check to make sure that only the key(s) you wanted were added.
Now it is possible to check the SSH login using the default private key (/home/zabbix/.ssh/id_rsa) for the zabbix user account:
sudo -u zabbix ssh [email protected]
If the login is successful, then the configuration part in the shell is finished and the remote SSH session can be closed.
Item configuration
Actual command(s) to be executed must be placed in the Executed script field in the item configuration. Multiple commands can be executed one after another by placing them on a new line. In this case, the returned value will also be formatted as multiline.
All mandatory input fields are marked with a red asterisk.
The fields that require specific information for SSH items are:
Examples:
=>
ssh.run[KexAlgorithms,127.0.0.1,,,Ciphers=aes128-
=>
ssh.run[KexAlgorithms,,,,"KexAlgorithms=diffie-he
Authentication method - One of ”Password” or ”Public key”.
User name - User name (up to 255 characters) to authenticate on the remote host. Required.
Public key file - File name of the public key if Authentication method is ”Public key”. Required. Example: id_rsa.pub - the default public key file name generated by the ssh-keygen command.
Private key file - File name of the private key if Authentication method is ”Public key”. Required. Example: id_rsa - the default private key file name.
Password or Key passphrase - Password (up to 255 characters) to authenticate, or passphrase if it was used for the private key. Leave the Key passphrase field empty if a passphrase was not used. See also known issues regarding passphrase usage.
Executed script - Executed shell command(s) using the SSH remote session. The return value of the executed shell command(s) is limited to 16MB (including trailing whitespace that is truncated); database limits also apply.
Examples:
date +%s
service mysql-server status
ps auxww | grep httpd | wc -l
10 Telnet checks
Overview
Telnet checks are performed as agent-less monitoring. Zabbix agent is not needed for Telnet checks.
Configurable fields
Actual command(s) to be executed must be placed in the Executed script field in the item configuration.
Multiple commands can be executed one after another by placing them on a new line. In this case, the returned value will also be formatted as multiline.
Supported characters that the shell prompt can end with:
• $
• #
• >
• %
Note:
A telnet prompt line which ends with one of these characters will be removed from the returned value, but only for the first command in the commands list, i.e. only at the start of the telnet session.
Key: telnet.run[<unique short description>,<ip>,<port>,<encoding>] - run a command on a remote device using a telnet connection.
Attention:
If a telnet check returns a value with non-ASCII characters and in non-UTF8 encoding then the <encoding> parameter of
the key should be properly specified. See encoding of returned values page for more details.
11 External checks
Overview
External check is a check executed by Zabbix server by running a shell script or a binary. However, when hosts are monitored by
a Zabbix proxy, the external checks are executed by the proxy.
External checks do not require any agent running on a host being monitored.
script[<parameter1>,<parameter2>,...]
Where:
• script - the name of a shell script or a binary;
• parameter1, parameter2,... - optional command-line parameters.
If you don’t want to pass any parameters to the script you may use:
script[] or
script
Zabbix server will look in the directory defined as the location for external scripts (parameter ’ExternalScripts’ in Zabbix server
configuration file) and execute the command. The command will be executed as the user Zabbix server runs as, so any access
permissions or environment variables should be handled in a wrapper script, if necessary, and permissions on the command should
allow that user to execute it. Only commands in the specified directory are available for execution.
Warning:
Do not overuse external checks! As each script requires starting a fork process by Zabbix server, running many scripts
can decrease Zabbix performance a lot.
Usage example
Executing the script check_oracle.sh with the first parameter ’-h’. The second parameter will be replaced by the IP address or DNS name, depending on the selection in the host properties.
check_oracle.sh["-h","{HOST.CONN}"]
Assuming host is configured to use IP address, Zabbix will execute:
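For example, if the host interface is configured with the (hypothetical) IP address 192.168.1.4, the executed command would be:
check_oracle.sh '-h' '192.168.1.4'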
External check result
The return value of an external check is a standard output together with a standard error produced by the check.
Attention:
An item that returns text (character, log, or text type of information) will not become unsupported in case of a standard
error output.
The return value is limited to 16MB (including trailing whitespace that is truncated); database limits also apply.
If the requested script is not found or Zabbix server has no permissions to execute it, the item will become unsupported and a
corresponding error message will be displayed.
In case of a timeout, the item will become unsupported, a corresponding error message will be displayed, and the process forked
for the script will be terminated.
12 Trapper items
Overview
Trapper items accept incoming data instead of querying for it. This is useful for any data you want to send to Zabbix.
Configuration
The fields that require specific information for trapper items are:
Allowed hosts List of comma-delimited IP addresses (optionally in CIDR notation) or DNS names.
If specified, incoming connections will be accepted only from the hosts listed here.
If IPv6 support is enabled then ’127.0.0.1’, ’::127.0.0.1’, ’::ffff:127.0.0.1’ are treated equally and
’::/0’ will allow any IPv4 or IPv6 address. ’0.0.0.0/0’ can be used to allow any IPv4 address.
Note that ”IPv4-compatible IPv6 addresses” (0000::/96 prefix) are supported but deprecated by
RFC4291.
Spaces, user macros, and host macros {HOST.HOST}, {HOST.NAME}, {HOST.IP}, {HOST.DNS},
{HOST.CONN} are supported.
Note:
Before sending values, you may have to wait up to 60 seconds after saving the item until Zabbix server picks up the
changes from a configuration cache update.
Sending data
Sending data to Zabbix server or proxy is possible using the Zabbix sender utility or Zabbix sender protocol. Sending data to
Zabbix server is also possible using the history.push API method.
Zabbix sender
For sending data to Zabbix server or proxy using the Zabbix sender utility, you could run the following command to send the ”test
value”:
zabbix_sender -z <server IP address> -p 10051 -s "New host" -k trap -o "test value"
To send the ”test value”, the following command options are used:
• -z - the IP address or DNS name of Zabbix server (or proxy);
• -p - the server (or proxy) trapper port (10051 by default);
• -s - the technical name of the monitored host (”New host” in this example);
• -k - the item key (trap in this example);
• -o - the value to send.
Attention:
The Zabbix trapper process does not expand macros used in the item key to check the corresponding item key existence
for the targeted host.
For more information on the communication between Zabbix sender and Zabbix server or proxy, see Zabbix sender protocol.
history.push
For sending data to Zabbix server using the history.push API method, you could make the following HTTP POST request con-
taining some test values:
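A minimal sketch of such a request body, POSTed to api_jsonrpc.php with a Content-Type: application/json-rpc header and an Authorization: Bearer <API token> header (the item IDs and values here are illustrative only):
{
    "jsonrpc": "2.0",
    "method": "history.push",
    "params": [
        {"itemid": 10600, "value": 1},
        {"itemid": 10601, "value": 2},
        {"itemid": 10602, "value": 3}
    ],
    "id": 1
}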
If the request is correct, the response returned by API could look as follows:
{
"jsonrpc": "2.0",
"result": {
"response": "success",
"data": [
{
"itemid": "10600"
},
{
"itemid": "10601",
"error": "Item is disabled."
},
{
"error": "No permissions to referred object or it does not exist."
}
]
},
"id": 1
}
Errors in response data indicate that sending data for specific items has failed validation by Zabbix server. This can happen for the
following reasons:
• the user sending the data has no read permission to the item’s host;
• the host is disabled or in maintenance without data collection;
• the item does not exist or is not yet included in the server configuration cache;
• the item is disabled or its type is other than Zabbix trapper or HTTP agent (with trapping enabled);
• the user’s IP or DNS is not set in the item’s Allowed hosts list;
• another item has a value with a duplicate timestamp on the nanosecond level.
The absence of errors indicates that the values sent have been accepted for processing, which includes preprocessing (if any),
trigger processing, and saving to the database. Note that the processing of an accepted value may also fail (for example, during
preprocessing), resulting in the value being discarded.
For more information on how to work with Zabbix API, see API.
Displaying data
Once data is sent, you can navigate to Monitoring → Latest data to see the result:
Note:
If a single numeric value is sent, the data graph will show a horizontal line to the left and right of the value’s time point.
13 JMX monitoring
Overview
JMX monitoring has native support in Zabbix in the form of a Zabbix daemon called ”Zabbix Java gateway”.
To retrieve the value of a particular JMX counter on a host, Zabbix server queries the Zabbix Java gateway, which in turn uses the
JMX management API to query the application of interest remotely.
For more details and setup see the Zabbix Java gateway section.
Warning:
Communication between Java gateway and the monitored JMX application should not be firewalled.
A Java application does not need any additional software installed, but it needs to be started with the command-line options
specified below to have support for remote JMX monitoring.
As a bare minimum, if you just wish to get started by monitoring a simple Java application on a local host with no security enforced,
start it with these options:
java \
-Dcom.sun.management.jmxremote \
-Dcom.sun.management.jmxremote.port=12345 \
-Dcom.sun.management.jmxremote.authenticate=false \
-Dcom.sun.management.jmxremote.ssl=false \
-Dcom.sun.management.jmxremote.registry.ssl=false \
-jar /usr/share/doc/openjdk-6-jre-headless/demo/jfc/Notepad/Notepad.jar
This makes Java listen for incoming JMX connections on port 12345, from local host only, and tells it not to require authentication
or SSL.
If you want to allow connections on another interface, set the -Djava.rmi.server.hostname parameter to the IP of that interface.
If you wish to be more stringent about security, there are many other Java options available to you. For instance, the next example
starts the application with a more versatile set of options and opens it to a wider network, not just local host.
java \
-Djava.rmi.server.hostname=192.168.3.14 \
-Dcom.sun.management.jmxremote \
-Dcom.sun.management.jmxremote.port=12345 \
-Dcom.sun.management.jmxremote.authenticate=true \
-Dcom.sun.management.jmxremote.password.file=/etc/java-6-openjdk/management/jmxremote.password \
-Dcom.sun.management.jmxremote.access.file=/etc/java-6-openjdk/management/jmxremote.access \
-Dcom.sun.management.jmxremote.ssl=true \
-Dcom.sun.management.jmxremote.registry.ssl=true \
-Djavax.net.ssl.keyStore=$YOUR_KEY_STORE \
-Djavax.net.ssl.keyStorePassword=$YOUR_KEY_STORE_PASSWORD \
-Djavax.net.ssl.trustStore=$YOUR_TRUST_STORE \
-Djavax.net.ssl.trustStorePassword=$YOUR_TRUST_STORE_PASSWORD \
-Dcom.sun.management.jmxremote.ssl.need.client.auth=true \
-jar /usr/share/doc/openjdk-6-jre-headless/demo/jfc/Notepad/Notepad.jar
Most (if not all) of these settings can be specified in /etc/java-6-openjdk/management/management.properties (or wher-
ever that file is on your system).
Note that if you wish to use SSL, you have to modify startup.sh script by adding -Djavax.net.ssl.* options to Java gateway,
so that it knows where to find key and trust stores.
With the Java gateway running, the server knowing where to find it, and a Java application started with support for remote JMX monitoring, it is time to configure the interfaces and items in the Zabbix GUI.
All mandatory input fields are marked with a red asterisk.
For each JMX counter you are interested in, you add a JMX agent item attached to that interface.
The fields that require specific information for JMX items are:
If you wish to monitor a Boolean counter that is either ”true” or ”false”, then you specify type of information as ”Numeric (unsigned)”
and select ”Boolean to decimal” preprocessing step in the Preprocessing tab. Server will store Boolean values as 1 or 0, respectively.
Simple attributes
An MBean object name is nothing but a string which you define in your Java application. An attribute name, on the other hand,
can be more complex. In case an attribute returns primitive data type (an integer, a string etc.) there is nothing to worry about,
the key will look like this:
jmx[com.example:Type=Hello,weight]
In this example the object name is ”com.example:Type=Hello”, the attribute name is ”weight”, and the returned value type should
probably be ”Numeric (float)”.
It becomes more complicated when your attribute returns composite data. For example: your attribute name is ”apple” and it
returns a hash representing its parameters, like ”weight”, ”color” etc. Your key may look like this:
jmx[com.example:Type=Hello,apple.weight]
An attribute name and a hash key are separated by a dot symbol. In the same way, if an attribute returns nested composite data, the parts are separated by a dot:
jmx[com.example:Type=Hello,fruits.apple.weight]
Attributes returning tabular data
Tabular data attributes consist of one or multiple composite attributes. If such an attribute is specified in the attribute name
parameter then this item value will return the complete structure of the attribute in JSON format. The individual element values
inside the tabular data attribute can be retrieved using preprocessing.
jmx[com.example:type=Hello,foodinfo]
Item value:
[
{
"a": "apple",
"b": "banana",
"c": "cherry"
},
{
"a": "potato",
"b": "lettuce",
"c": "onion"
}
]
So far so good. But what if an attribute name or a hash key contains dot symbol? Here is an example:
jmx[com.example:Type=Hello,all.fruits.apple.weight]
That’s a problem. How do you tell Zabbix that the attribute name is ”all.fruits”, not just ”all”? How do you distinguish a dot that is part of the name from the dot that separates an attribute name and hash keys?
This is possible, all you need to do is to escape the dots that are part of the name with a backslash:
jmx[com.example:Type=Hello,all\.fruits.apple.weight]
Same way, if your hash key contains a dot you escape it:
jmx[com.example:Type=Hello,all\.fruits.apple.total\.weight]
Other issues
A backslash in an attribute name should also be escaped, for example:
jmx[com.example:type=Hello,c:\\documents]
For handling any other special characters in JMX item key, please see the item key format section.
It is possible to work with custom MBeans returning non-primitive data types, which override the toString() method.
Custom endpoints allow working with different transport protocols other than the default RMI.
To illustrate this possibility, let’s try to configure JBoss EAP 6.4 monitoring as an example. First, let’s make some assumptions:
• You have already installed Zabbix Java gateway. If not, then you can do it in accordance with the documentation.
• Zabbix server and Java gateway are installed with the prefix /usr/local/
• JBoss is already installed in /opt/jboss-eap-6.4/ and is running in standalone mode
• We shall assume that all these components work on the same host
• Firewall and SELinux are disabled (or configured accordingly)
In the Zabbix server configuration file (zabbix_server.conf), set the Java gateway parameters:
JavaGateway=127.0.0.1
StartJavaPollers=5
And in the zabbix_java/settings.sh configuration file (or zabbix_java_gateway.conf):
START_POLLERS=5
Check that JBoss listens to its standard management port:
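For example (a hypothetical quick check; in the default EAP 6.4 standalone configuration the native management interface, which remoting-jmx connects to, typically listens on port 9999):
netstat -natp | grep 9999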
As we know that this version of JBoss uses the JBoss Remoting protocol instead of RMI, we may mass update the JMX endpoint
parameter for items in our JMX template accordingly:
service:jmx:remoting-jmx://{HOST.CONN}:{HOST.PORT}
Then reload the server configuration cache so that the changes take effect:
/usr/local/sbin/zabbix_server -R config_cache_reload
Note that you may encounter an error first.
”Unsupported protocol: remoting-jmx” means that Java gateway does not know how to work with the specified protocol. That can
be fixed by creating a ~/needed_modules.txt file with the following content:
jboss-as-remoting
jboss-logging
jboss-logmanager
jboss-marshalling
jboss-remoting
jboss-sasl
jcl-over-slf4j
jul-to-slf4j-stub
log4j-jboss-logmanager
remoting-jmx
slf4j-api
xnio-api
xnio-nio
and then executing the command:
for i in $(cat ~/needed_modules.txt); do find /opt/jboss-eap-6.4 -iname "${i}*.jar" -exec cp '{}' /usr/loc
Thus, Java gateway will have all the necessary modules for working with jmx-remoting. What’s left is to restart the Java gateway,
wait a bit and if you did everything right, see that JMX monitoring data begin to arrive in Zabbix (see also: Latest data).
14 ODBC monitoring
Overview
ODBC monitoring corresponds to the Database monitor item type in the Zabbix frontend.
ODBC is a C programming language middle-ware API for accessing database management systems (DBMS). The ODBC concept
was developed by Microsoft and later ported to other platforms.
Zabbix may query any database that is supported by ODBC. To do so, Zabbix does not connect to the databases directly, but
uses the ODBC interface and drivers set up in ODBC. This allows for more efficient monitoring of different databases for multiple
purposes (for example, checking specific database queues, usage statistics, etc.).
Zabbix supports unixODBC, which is one of the most commonly used open source ODBC API implementations.
Attention:
See also: known issues for ODBC checks.
Installing unixODBC
The suggested way of installing unixODBC is to use the Linux operating system default package repositories. In the most popular
Linux distributions, unixODBC is included in the package repository by default. If packages are not available, the source files can
be obtained at the unixODBC homepage: http://www.unixodbc.org/download.html.
To install unixODBC, use the package manager for the system of your choice:
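For example (typical package names; adjust to your distribution):
# RHEL-based systems:
dnf install unixODBC unixODBC-devel
# Debian/Ubuntu systems:
apt install unixodbc unixodbc-dev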
Attention:
The unixodbc-dev or unixODBC-devel package is necessary to compile Zabbix with unixODBC support. To enable ODBC support, Zabbix should be compiled with the following configuration option:
--with-unixodbc[=ARG] # Use ODBC driver against unixODBC package.
The unixODBC database driver should be installed for the database that will be monitored. For a list of supported databases and
drivers, see the unixODBC homepage: http://www.unixodbc.org/drivers.html.
Note:
In some Linux distributions, database drivers are included in package repositories.
MySQL
To install the MySQL unixODBC database driver, use the package manager for the system of your choice:
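For example (typical package names; they differ between distributions and between the MySQL and MariaDB connectors):
# RHEL-based systems (MariaDB connector):
dnf install mariadb-connector-odbc
# Debian/Ubuntu systems (MariaDB connector):
apt install odbc-mariadb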
To install the database driver without a package manager, please refer to MySQL documentation for mysql-connector-odbc, or
MariaDB documentation for mariadb-connector-odbc.
PostgreSQL
To install the PostgreSQL unixODBC database driver, use the package manager for the system of your choice:
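For example (typical package names; adjust to your distribution):
# RHEL-based systems:
dnf install postgresql-odbc
# Debian/Ubuntu systems:
apt install odbc-postgresql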
To install the database driver without a package manager, please refer to PostgreSQL documentation.
Oracle
To install the Oracle unixODBC database driver, please refer to the Oracle documentation.
MSSQL
To install the MSSQL unixODBC database driver for Ubuntu/Debian systems, use the package manager:
# For Ubuntu/Debian systems:
apt install tdsodbc
To install the database driver without a package manager, please refer to FreeTDS user guide.
Configuring unixODBC
To configure unixODBC, you must edit the odbcinst.ini and odbc.ini files. You can verify the location of these files by executing
the following command:
odbcinst -j
The command result should contain information that is similar to the following:
unixODBC 2.3.9
DRIVERS............: /etc/odbcinst.ini
SYSTEM DATA SOURCES: /etc/odbc.ini
FILE DATA SOURCES..: /etc/ODBCDataSources
odbcinst.ini
The odbcinst.ini file lists the installed ODBC database drivers. If odbcinst.ini is missing, it is necessary to create it manually.
[TEST_MYSQL]
Description=ODBC for MySQL
Driver=/usr/lib/libmyodbc5.so
FileUsage=1
odbc.ini
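The odbc.ini file defines the data sources (DSNs) that point to the databases to be monitored through the drivers registered in odbcinst.ini. A minimal sketch for the MySQL driver defined above (the DSN name, credentials, and database are placeholders; adjust them to your environment):
[test]
Description=Connect to MySQL
Driver=TEST_MYSQL
Server=127.0.0.1
User=root
Password=
Port=3306
Database=zabbix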
Parameter Description
ReadOnly Specifies whether the database connection allows only read operations (SELECT queries) and
restricts modifications (INSERT, UPDATE, and DELETE statements); useful for scenarios where
data should remain unchanged.
Protocol PostgreSQL backend protocol version (ignored when using SSL connections).
ShowOidColumn Specifies whether to include Object ID (OID) in SQLColumns.
FakeOidIndex Specifies whether to create a fake unique index on OID.
RowVersioning Specifies whether to enable applications to detect if data has been modified by other users while
you are attempting to update a row. Note that this parameter can speed up the update process,
since, to update a row, every single column does not need to be specified in the WHERE clause.
ShowSystemTables Specifies whether the database driver should treat system tables as regular tables in SQLTables;
useful for accessibility, allowing visibility into system tables.
Fetch Specifies whether the driver should automatically use declare cursor/fetch to handle SELECT
statements and maintain a cache of 100 rows.
BoolsAsChar Controls the mapping of Boolean types.
If set to ”Yes”, Bools are mapped to SQL_CHAR; otherwise, they are mapped to SQL_BIT.
SSLmode Specifies the SSL mode for the connection.
ConnSettings Additional settings sent to the backend on connection.
To test if the ODBC connection is working successfully, you can use the isql utility (included in the unixODBC package):
isql test
+---------------------------------------+
| Connected! |
| |
| sql-statement |
| help [tablename] |
| quit |
| |
+---------------------------------------+
Configure a Database monitoring item.
Important notes
• Database monitoring items will become unsupported if no odbc poller processes are started in the server or proxy configu-
ration. To activate ODBC pollers, set the StartODBCPollers parameter in Zabbix server configuration file or, for checks
performed by proxy, in Zabbix proxy configuration file.
• The Timeout parameter value in the item configuration form is used as the ODBC login timeout and the query execution
timeout. Note that these timeout settings might be ignored if the installed ODBC driver does not support them.
• The SQL command must return a result set like any query using the select statement. The query syntax will depend on the RDBMS that will process it. The syntax of a request to a stored procedure must start with the call keyword.
Item key details
Parameters without angle brackets are mandatory. Parameters marked with angle brackets < > are optional.
db.odbc.select[<unique short description>,<dsn>,<connection string>]
Returns one value, that is, the first column of the first row of the SQL query result.
Return value: depending on the SQL query.
Parameters:
• unique short description - a unique short description to identify the item (for use in triggers, etc.);
• dsn - the data source name (as specified in odbc.ini);
• connection string - the connection string (may contain driver-specific arguments).
Comments:
• Although dsn and connection string are optional parameters, at least one of them is required; if both are defined, dsn
will be ignored.
• If a query returns more than one column, only the first column is read. If a query returns more than one line, only the first
line is read.
db.odbc.get[<unique short description>,<dsn>,<connection string>]
Transforms the SQL query result into a JSON array.
Return value: JSON object.
Parameters:
• unique short description - a unique short description to identify the item (for use in triggers, etc.);
• dsn - the data source name (as specified in odbc.ini);
• connection string - the connection string (may contain driver-specific arguments).
Comments:
• Although dsn and connection string are optional parameters, at least one of them is required; if both are defined, dsn
will be ignored.
• Multiple rows/columns in JSON format may be returned. This item may be used as a master item that collects all data in
one system call, while JSONPath preprocessing may be used in dependent items to extract individual values. For more
information, see an example of the returned format, used in low-level discovery.
Example:
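For instance (an illustrative sketch, not output from a real system), a query returning two columns over two rows could produce a value similar to:
[
    {"id":"1","name":"Zabbix server"},
    {"id":"2","name":"Web server"}
]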
db.odbc.discovery[<unique short description>,<dsn>,<connection string>]
Transforms the SQL query result into a JSON array, used for low-level discovery. The column names from the query result are turned into low-level discovery macro names, paired with the discovered field values. These macros can be used in creating item, trigger, etc. prototypes.
Return value: JSON object.
Parameters:
• unique short description - a unique short description to identify the item (for use in triggers, etc.);
• dsn - the data source name (as specified in odbc.ini);
• connection string - the connection string (may contain driver-specific arguments).
Comments:
• Although dsn and connection string are optional parameters, at least one of them is required; if both are defined, dsn
will be ignored.
Error messages
ODBC error messages are structured into fields to provide detailed information. For example:
Cannot execute ODBC query: [SQL_ERROR]:[42601][7][ERROR: syntax error at or near ";"; Error while executin
The message consists of the following parts: the Zabbix message, the ODBC return code (SQL_ERROR), the SQLState (42601), the native error code (7), and the native error message.
Note that the error message length is limited to 2048 bytes, so the message can be truncated. If there is more than one ODBC
diagnostic record, Zabbix tries to concatenate them (separated with |) as far as the length limit allows.
15 Dependent items
Overview
There are situations when one item gathers multiple metrics at a time, or when it simply makes more sense to collect related metrics simultaneously.
To allow for bulk metric collection and simultaneous use in several related items, Zabbix supports dependent items. Dependent
items depend on the master item that collects their data simultaneously, in one query. A new value for the master item auto-
matically populates the values of the dependent items. Dependent items cannot have a different update interval than the master
item.
Zabbix preprocessing options can be used to extract the part that is needed for the dependent item from the master item data.
Preprocessing is managed by a preprocessing manager process, along with worker threads that perform the preprocessing
steps. All values (with or without preprocessing) from different data gatherers pass through the preprocessing manager before
being added to the history cache. Socket-based IPC communication is used between data gatherers (pollers, trappers, etc) and
the preprocessing process.
Preprocessing steps and dependent item processing are performed by Zabbix server or by Zabbix proxy (if the host is monitored by proxy).
An item of any type, even a dependent item, can be set as a master item. Additional levels of dependent items can be used to extract smaller parts from the value of an existing dependent item.
Limitations
Item configuration
A dependent item depends on its master item for data. That is why the master item must be configured (or exist) first:
All mandatory input fields are marked with a red asterisk.
The fields that require specific information for dependent items are:
You may use item value preprocessing to extract the required part of the master item value.
Without preprocessing, the dependent item value will be exactly the same as the master item value.
A shortcut for creating a dependent item more quickly is available by clicking on the button in the item list and selecting Create dependent item.
Display
In the item list, dependent items are displayed with their master item name as a prefix.
16 HTTP agent
Overview
This item type allows data polling using the HTTP/HTTPS protocol. Trapping is also possible using the Zabbix sender utility or Zabbix
sender protocol (for sending data to Zabbix server or proxy), or using the history.push API method (for sending data to Zabbix
server).
HTTP item checks are executed by Zabbix server. However, when hosts are monitored by a Zabbix proxy, HTTP item checks are
executed by the proxy.
HTTP item checks do not require any agent running on a host being monitored.
HTTP agent supports both HTTP and HTTPS. Zabbix will optionally follow redirects (see the Follow redirects option below). The maximum number of redirects is hard-coded to 10 (using the cURL option CURLOPT_MAXREDIRS).
Attention:
Zabbix server/proxy must be initially configured with cURL (libcurl) support.
HTTP checks are executed asynchronously - it is not required to receive the response to one request before other checks are
started. DNS resolving is asynchronous as well.
The number of asynchronous HTTP agent pollers is defined by the StartHTTPAgentPollers parameter.
Since Zabbix 7.0, HTTP agent checks use the cURL persistent connections feature.
Configuration
All mandatory input fields are marked with a red asterisk.
The fields that require specific information for HTTP items are:
Parameter Description
Convert to JSON Headers are saved as attribute and value pairs under the ”header” key.
If ’Content-Type: application/json’ is encountered, then the body is saved as an object; otherwise it is stored as a string.
HTTP proxy You can specify an HTTP proxy to use, using the format
[protocol://][username[:password]@]proxy.example.com[:port].
The optional protocol:// prefix may be used to specify alternative proxy protocols (e.g. https, socks4, socks5; see documentation; the protocol prefix support was added in cURL 7.21.7). With no protocol specified, the proxy will be treated as an HTTP proxy. If you specify the wrong protocol, the connection will fail and the item will become unsupported.
By default, port 1080 will be used.
If specified, the proxy will overwrite proxy-related environment variables like http_proxy, HTTPS_PROXY. If not specified, the proxy will not overwrite proxy-related environment variables.
The entered value is passed on ”as is”; no sanity checking takes place.
Note that only simple authentication is supported with HTTP proxy.
Supported macros: {HOST.IP}, {HOST.CONN}, {HOST.DNS}, {HOST.HOST}, {HOST.NAME}, {ITEM.ID}, {ITEM.KEY}, {ITEM.KEY.ORIG}, user macros, low-level discovery macros.
This sets the CURLOPT_PROXY cURL option.
HTTP authentication Select the authentication option:
None - no authentication used;
Basic - basic authentication is used;
NTLM - NTLM (Windows NT LAN Manager) authentication is used;
Kerberos - Kerberos authentication is used (see also: Configuring Kerberos with Zabbix);
Digest - Digest authentication is used.
This sets the CURLOPT_HTTPAUTH cURL option.
Username Enter the user name (up to 255 characters).
This field is available if HTTP authentication is set to Basic, NTLM, Kerberos, or Digest. User
macros and low-level discovery macros are supported.
Password Enter the user password (up to 255 characters).
This field is available if HTTP authentication is set to Basic, NTLM, Kerberos, or Digest. User
macros and low-level discovery macros are supported.
SSL verify peer Mark the checkbox to verify the SSL certificate of the web server. The server certificate will be
automatically taken from system-wide certificate authority (CA) location. You can override the
location of CA files using Zabbix server or proxy configuration parameter SSLCALocation.
This sets the CURLOPT_SSL_VERIFYPEER cURL option.
SSL verify host Mark the checkbox to verify that the Common Name field or the Subject Alternate Name field of
the web server certificate matches.
This sets the CURLOPT_SSL_VERIFYHOST cURL option.
SSL certificate file Name of the SSL certificate file used for client authentication. The certificate file must be in PEM format [1]. If the certificate file also contains the private key, leave the SSL key file field empty. If the key is encrypted, specify the password in the SSL key password field. The directory containing this file is specified by the Zabbix server or proxy configuration parameter SSLCertLocation.
Supported macros: {HOST.IP}, {HOST.CONN}, {HOST.DNS}, {HOST.HOST}, {HOST.NAME}, {ITEM.ID}, {ITEM.KEY}, {ITEM.KEY.ORIG}, user macros, low-level discovery macros.
This sets the CURLOPT_SSLCERT cURL option.
SSL key file Name of the SSL private key file used for client authentication. The private key file must be in PEM format [1]. The directory containing this file is specified by the Zabbix server or proxy configuration parameter SSLKeyLocation.
Supported macros: {HOST.IP}, {HOST.CONN}, {HOST.DNS}, {HOST.HOST}, {HOST.NAME}, {ITEM.ID}, {ITEM.KEY}, {ITEM.KEY.ORIG}, user macros, low-level discovery macros.
This sets the CURLOPT_SSLKEY cURL option.
Note:
If the HTTP proxy field is left empty, another way for using an HTTP proxy is to set proxy-related environment variables.
For HTTP - set the http_proxy environment variable for the Zabbix server user. For example:
http_proxy=http://proxy_ip:proxy_port.
For HTTPS - set the HTTPS_PROXY environment variable. For example:
HTTPS_PROXY=http://proxy_ip:proxy_port. More details are available by running a shell command: # man curl.
Attention:
[1] Zabbix supports certificate and private key files in PEM format only. In case you have your certificate and private
key data in PKCS #12 format file (usually with extension *.p12 or *.pfx) you may generate the PEM file from it using the
following commands:
openssl pkcs12 -in ssl-cert.p12 -clcerts -nokeys -out ssl-cert.pem
openssl pkcs12 -in ssl-cert.p12 -nocerts -nodes -out ssl-cert.key
Examples
Example 1
Send simple GET requests to retrieve data from services such as Elasticsearch:
{
"name" : "YQ2VAY-",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "kH4CYqh5QfqgeTsjh2F9zg",
"version" : {
"number" : "6.1.3",
"build_hash" : "af51318",
"build_date" : "2018-01-26T18:22:55.523Z",
"build_snapshot" : false,
"lucene_version" : "7.1.0",
"minimum_wire_compatibility_version" : "5.6.0",
"minimum_index_compatibility_version" : "5.0.0"
},
"tagline" : "You know, for search"
}
• Now extract the version number using a JSONPath preprocessing step: $.version.number
Example 2
Send simple POST requests to retrieve data from services such as Elasticsearch:
{
"query": {
"bool": {
"must": [{
"match": {
"itemid": 28275
}
}],
"filter": [{
"range": {
"clock": {
"gt": 1517565836,
"lte": 1517566137
}
}
}]
}
}
}
• Received:
{
"_scroll_id": "DnF1ZXJ5VGhlbkZldGNoBQAAAAAAAAAkFllRMlZBWS1UU1pxTmdEeGVwQjRBTFEAAAAAAAAAJRZZUTJWQVk
"took": 18,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"skipped": 0,
"failed": 0
},
"hits": {
"total": 1,
"max_score": 1.0,
"hits": [{
"_index": "dbl",
"_type": "values",
"_id": "dqX9VWEBV6sEKSMyk6sw",
"_score": 1.0,
"_source": {
"itemid": 28275,
"value": "0.138750",
"clock": 1517566136,
"ns": 25388713,
"ttl": 604800
}
}]
}
}
• Now use a JSONPath preprocessing step to get the item value: $.hits.hits[0]._source.value
Example 3
• Item configuration:
Note the use of the POST method with JSON data, setting request headers and asking to return headers only:
Example 4
Note the usage of macros in query fields. Refer to the Openweathermap API for how to fill them.
{
"body": {
"coord": {
"lon": 40.01,
"lat": 56.11
},
"weather": [{
"id": 801,
"main": "Clouds",
"description": "few clouds",
"icon": "02n"
}],
"base": "stations",
"main": {
"temp": 15.14,
"pressure": 1012.6,
"humidity": 66,
"temp_min": 15.14,
"temp_max": 15.14,
"sea_level": 1030.91,
"grnd_level": 1012.6
},
"wind": {
"speed": 1.86,
"deg": 246.001
},
"clouds": {
"all": 20
},
"dt": 1526509427,
"sys": {
"message": 0.0035,
"country": "RU",
"sunrise": 1526432608,
"sunset": 1526491828
},
"id": 487837,
"name": "Stavrovo",
"cod": 200
}
}
The next task is to configure dependent items that extract data from the JSON.
Other weather metrics such as ’Temperature’ are added in the same manner.
Example 5
• Sample dependent item value preprocessing with regular expression server accepts handled requests\s+([0-9]+)
([0-9]+) ([0-9]+):
17 Prometheus checks
Overview
Prometheus checks are implemented in Zabbix as a combination of:
• an HTTP master item pointing to the appropriate data endpoint, e.g. https://<prometheus host>/metrics
• dependent items using a Prometheus preprocessing option to query required data from the metrics gathered by the master
item
Bulk processing
Bulk processing is supported for dependent items. To enable caching and indexing, the Prometheus pattern preprocessing must be the first preprocessing step. When Prometheus pattern is the first preprocessing step, the parsed Prometheus data is cached and indexed by the first <label>==<value> condition in the Prometheus pattern preprocessing step. This cache is reused when processing other dependent items in this batch. For optimal performance, the first label should be the one with the most different values.
If there is other preprocessing to be done before the first step, it should be moved either to the master item or to a new dependent
item which would be used as the master item for the dependent items.
Configuration
Providing you have the HTTP master item configured, you need to create a dependent item that uses a Prometheus preprocessing
step:
The following parameters are specific to the Prometheus pattern preprocessing option:
Pattern - To define the required data pattern you may use a query language that is similar to the Prometheus query language (see the comparison table below):
<metric name> - select by metric name
{__name__=”<metric name>”} - select by metric name
{__name__=~”<regex>”} - select by metric name matching a regular expression
{<label name>=”<label value>”,...} - select by label name
{<label name>=~”<regex>”,...} - select by label name matching a regular expression
{__name__=~”.*”}==<value> - select by metric value
or a combination of the above: <metric name>{<label1 name>=”<label1 value>”,<label2 name>=~”<regex>”,...}==<value>
Examples: wmi_os_physical_memory_free_bytes; cpu_usage_system{cpu=”cpu-total”}; cpu_usage_system{cpu=~”.*”}; cpu_usage_system{cpu=”cpu-total”,host=~”.*”}; wmi_service_state{name=”dhcp”}==1; wmi_os_timezone{timezone=~”.*”}==1
Result processing - Specify whether to return the value, the label, or to apply the appropriate function (if the pattern matches several lines and the result needs to be aggregated):
value - return the metric value (error if multiple lines are matched)
label - return the value of the label specified in the Label field (error if multiple metrics are matched)
sum - return the sum of values
min - return the minimum value
max - return the maximum value
avg - return the average value
count - return the count of values
This field is only available for the Prometheus pattern option. See also the examples of using parameters below.
Output - Define the label name (optional). In this case the value corresponding to the label name is returned. This field is only available for the Prometheus pattern option, if ’Label’ is selected in the Result processing field.
1. The most common use case is to return the value. To return the value of /var/db from:
node_disk_usage_bytes{path="/var/cache"} 2.1766144e+09
node_disk_usage_bytes{path="/var/db"} 20480
node_disk_usage_bytes{path="/var/dpkg"} 8192
node_disk_usage_bytes{path="/var/empty"} 4096
use the following parameters:
• Pattern - node_disk_usage_bytes{path="/var/db"}
• Result processing - select ’value’
2. You may also be interested in the average value of all node_disk_usage_bytes parameters:
• Pattern - node_disk_usage_bytes
• Result processing - select ’avg’
3. While Prometheus supports only numerical data, it is popular to use a workaround that allows the relevant textual description to be returned as well. This can be accomplished with a filter and by specifying the label whose value should be returned (see the sketch after this paragraph). The filter (based on the numeric value ’1’) will match the appropriate row, while the label will return the health status description (currently ’green’; but potentially also ’red’ or ’yellow’).
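A minimal sketch of such data and parameters (the metric, labels, and values below are hypothetical, not taken from this manual):
cluster_health_status{cluster="main",color="green"} 1
cluster_health_status{cluster="main",color="red"} 0
cluster_health_status{cluster="main",color="yellow"} 0
use the following parameters:
• Pattern - cluster_health_status{cluster="main"}==1
• Result processing - select ’label’
• Output - color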
Prometheus to JSON
Data from Prometheus can be used for low-level discovery. In this case data in JSON format are needed and the Prometheus to
JSON preprocessing option will return exactly that.
The following table lists differences and similarities between PromQL and Zabbix Prometheus preprocessing query language.
Differences
PromQL instant vector selector Zabbix Prometheus preprocessing
18 Script items
Overview
Script items can be used to collect data by executing a user-defined JavaScript code with the ability to retrieve data over
HTTP/HTTPS. In addition to the script, an optional list of parameters (pairs of name and value) and timeout can be specified.
This item type may be useful in data collection scenarios that require multiple steps or complex logic. As an example, a Script item can be configured to make an HTTP call, then process the data received in the first step in some way, and pass the transformed value to a second HTTP call.
Configuration
In the Type field of item configuration form select Script then fill out required fields.
All mandatory input fields are marked with a red asterisk.
The fields that require specific information for Script items are:
Field Description
Key Enter a unique key that will be used to identify the item.
Parameters Specify the variables to be passed to the script as the attribute and value pairs.
User macros are supported. To see which built-in macros are supported, do a search for
”Script-type item” in the supported macro table.
Script Enter JavaScript code in the block that appears when clicking in the parameter field (or on the
view/edit button next to it). This code must provide the logic for returning the metric value.
The code has access to all parameters and all additional JavaScript objects added by Zabbix.
See also: JavaScript Guide.
Timeout JavaScript execution timeout (1-600s; exceeding it will return an error).
Note that depending on the script, it might take longer for the timeout to trigger.
For more information on the Timeout parameter, see general item attributes.
Examples
Collect the content of a specific page and make use of parameters:
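A minimal sketch of such a script is shown below. It assumes a single item parameter named url and that, as with webhook scripts, the parameters reach the script as a JSON string in the built-in value variable; the exact mechanism is described in the JavaScript Guide referenced above.
// Parse the item parameters (assumed to arrive as a JSON string in 'value').
var params = JSON.parse(value);
// Fetch the page and return its raw content as the item value.
var request = new HttpRequest();
return request.get(params.url);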
Logging
Add the ”Log test” entry to the Zabbix server log and receive the item value ”1” in return:
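A minimal sketch of such a script (Zabbix.log() writes to the server log; the numeric return becomes the item value):
// Write the entry to the Zabbix server log.
Zabbix.log(3, 'Log test');
// Return "1" as the item value.
return 1;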
19 Browser items
Overview
Browser items allow monitoring complex websites and web applications using a browser.
Attention:
The support of Browser items is currently experimental.
Browser items collect data by executing a user-defined JavaScript code and retrieving data over HTTP/HTTPS. This item can simulate
such browser-related actions as clicking, entering text, navigating through web pages, and other user interactions with websites
or web applications.
In addition to the script, an optional list of parameters (pairs of name and value) and timeout can be specified.
Attention:
The item partially implements the W3C WebDriver standard with either Selenium Server or a plain WebDriver (for example,
ChromeDriver) as a web testing endpoint. For the item to work, set the endpoint in the Zabbix server/proxy configuration
file WebDriverURL parameter.
Browser items are executed and processed by Zabbix server or proxy browser pollers. If necessary, you can adjust the number of
pre-forked instances of Browser item pollers in the Zabbix server/proxy configuration file StartBrowserPollers parameter.
For monitoring complex websites and web applications, the Website by Browser template is available as an out-of-the-box template.
Configuration
Attention:
If using ChromeDriver as the web testing endpoint, see Security Considerations.
In the Type field of item configuration form, select Browser and then fill out the required fields.
The fields that require specific information for Browser items are:
Field Description
Key Enter a unique key that will be used to identify the item.
Parameters Specify the variables to be passed to the script as the attribute and value pairs.
User macros are supported. To see which built-in macros are supported, do a search for
”Browser-type item” in the supported macro table.
Script Enter JavaScript code in the block that appears when clicking in the parameter field (or on the
view/edit button next to it). This code must provide the logic for returning the metric value.
The code has access to all parameters, all additional JavaScript objects and Browser item
JavaScript objects added by Zabbix.
See also: JavaScript Guide.
Timeout JavaScript execution timeout (1-600s; exceeding it will return an error).
Note that depending on the script, it might take longer for the timeout to trigger.
For more information on the Timeout parameter, see general item attributes.
Examples
Default script
try {
browser.navigate("http://example.com");
browser.collectPerfEntries();
}
finally {
return JSON.stringify(browser.getResult());
}
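The following script logs into a Zabbix frontend, collects performance entries after each step, and captures a screenshot if an error occurs: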
try {
browser.navigate("http://example.com/zabbix/index.php");
browser.collectPerfEntries("open page");
el = browser.findElement("xpath", "//input[@id='password']");
if (el === null) {
throw Error("cannot find password input field");
}
el.sendKeys("zabbix");
el = browser.findElement("xpath", "//button[@id='enter']");
if (el === null) {
throw Error("cannot find login button");
}
el.click();
browser.collectPerfEntries("login");
browser.collectPerfEntries("logout");
result = browser.getResult();
}
catch (err) {
if (!(err instanceof BrowserError)) {
browser.setError(err.message);
}
result = browser.getResult();
result.error.screenshot = browser.getScreenshot();
}
finally {
return JSON.stringify(result);
}
Initialize browser
1. Initializes a browser session for the available browser based on the first matching browser in the order specified within the
script.
2. Defines browser capabilities, including page load strategy and options specific to each browser, such as the headless mode
for Chrome, Firefox, and Microsoft Edge browsers.
Note that to set up WebDriver instances, you’ll need additional test scripts or scenarios to interact with the websites or web
application under test.
"--headless"
]
}
},
{
"browserName":"MicrosoftEdge",
"pageLoadStrategy":"normal",
"ms:edgeOptions":{
"args":[
"--headless=new"
]
}
},
{
"browserName":"safari",
"pageLoadStrategy":"normal"
}
]
}
});
Overview
History and trends are the two ways of storing collected data in Zabbix.
Whereas history keeps each collected value, trends keep averaged information on an hourly basis and are therefore less resource-hungry.
Keeping history
You can set for how many days history will be kept:
The general strong advice is to keep history for the smallest possible number of days, so that the database is not overloaded with lots of historical values.
Instead of keeping a long history, you can keep trends for longer. For example, you could keep history for 14 days and trends for 5 years.
You can get a good idea of how much space is required by history versus trends data by referring to the database sizing page.
While keeping shorter history, you will still be able to review older data in graphs, as graphs will use trend values for displaying
older data.
Attention:
If history is set to ’0’, the item will update only dependent items and inventory. No trigger functions will be evaluated
because trigger evaluation is based on history data only.
Note:
As an alternative way to preserve history, consider using the history export functionality of loadable modules.
Keeping trends
Trends is a built-in historical data reduction mechanism which stores the minimum, maximum, average, and the total number of values per hour for numeric data types.
You can set for how many days trends will be kept:
• when setting up Housekeeper tasks
Trends usually can be kept for much longer than history. Any older data will be removed by the housekeeper.
Zabbix server accumulates trend data in runtime in the trend cache, as the data flows in. Server flushes previous hour trends of
every item into the database (where frontend can find them) in these situations:
To see trends on a graph you need to wait at least until the beginning of the next hour (if the item is updated frequently) and at most until the end of the next hour (if the item is updated rarely), which is 2 hours maximum.
When the server flushes the trend cache and there are already trends in the database for this hour (for example, the server has been restarted mid-hour), the server needs to use update statements instead of simple inserts. Therefore, on a bigger installation, if a restart is needed, it is desirable to stop the server at the end of one hour and start it at the beginning of the next hour to avoid trend data overlap.
Attention:
If trends are set to ’0’, Zabbix server does not calculate or store trends at all.
Note:
The trends are calculated and stored with the same data type as the original values. As a result, the average value calculations for unsigned data type values are rounded, and the smaller the value interval, the less precise the result will be. For example, if an item has values 0 and 1, the average value will be 0, not 0.5.
Also, restarting the server might result in precision loss of unsigned data type average value calculations for the current hour.
5 User parameters
Overview
Sometimes you may want to run an agent check that does not come predefined with Zabbix. This is where user parameters come
to help.
You may write a command that retrieves the data you need and include it in the user parameter in the agent configuration file
(’UserParameter’ configuration parameter).
UserParameter=<key>,<command>
As you can see, a user parameter also contains a key. The key will be necessary when configuring an item. Enter a key of your
choice that will be easy to reference (it must be unique within a host).
Restart the agent or use the agent runtime control option to pick up the new parameter, e.g.:
zabbix_agentd -R userparameter_reload
Then, when configuring an item, enter the key to reference the command from the user parameter you want executed.
User parameters are commands executed by Zabbix agent. Note that up to 16MB of data can be returned before item value
preprocessing steps.
/bin/sh is used as a command line interpreter under UNIX operating systems. User parameters obey the agent check timeout; if
timeout is reached the forked user parameter process is terminated.
A simple command:
UserParameter=ping,echo 1
The agent will always return ’1’ for an item with ’ping’ key.
UserParameter=mysql.ping,mysqladmin -uroot ping | grep -c alive
The agent will return ’1’ if the MySQL server is alive, and ’0’ otherwise.
Flexible user parameters accept parameters with the key. This way a flexible user parameter can be the basis for creating several
items.
UserParameter=key[*],command
Parameter Description
Key Unique item key. The [*] defines that this key accepts parameters within the brackets.
Parameters are given when configuring the item.
Command Command to be executed to evaluate value of the key.
For flexible user parameters only:
You may use positional references $1…$9 in the command to refer to the respective parameter
in the item key.
Zabbix parses the parameters enclosed in [ ] of the item key and substitutes $1,...,$9 in the
command accordingly.
$0 will be substituted by the original command (prior to expansion of $0,...,$9) to be run.
Positional references are interpreted regardless of whether they are enclosed between double (”)
or single (’) quotes.
To use positional references unaltered, specify a double dollar sign - for example, awk ’{print
$$2}’. In this case $$2 will actually turn into $2 when executing the command.
Attention:
Positional references with the $ sign are searched for and replaced by Zabbix agent only for flexible user parameters. For
simple user parameters, such reference processing is skipped and, therefore, any $ sign quoting is not necessary.
Attention:
Certain symbols are not allowed in user parameters by default. See UnsafeUserParameters documentation for a full list.
Example 1
UserParameter=ping[*],echo $1
We may define an unlimited number of items for monitoring, all having the format ping[something].
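For example (illustrative), an item with the key ping[0] will return ’0’, while ping[aaa] will return ’aaa’.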
Example 2
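A sketch of a flexible user parameter that such a key could map to (assuming the MySQL user name and password are passed as the first and second key parameters; adjust the mysqladmin invocation to your setup):
UserParameter=mysql.ping[*],mysqladmin -u$1 -p$2 ping | grep -c alive
An item using this parameter could then be configured with a key such as: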
mysql.ping[zabbix,our_password]
Example 3
UserParameter=wc[*],grep -c "$2" $1
This parameter can be used to count the number of lines in a file that contain a given string.
wc[/etc/passwd,root]
wc[/etc/services,zabbix]
Command result
The return value of the command is its standard output together with the standard error it produces.
Attention:
An item that returns text (character, log, or text type of information) will not become unsupported in case of a standard
error output.
The return value is limited to 16MB (including trailing whitespace that is truncated); database limits also apply.
User parameters that return text (character, log, or text type of information) can also return whitespace. In case of an invalid
result, the item will become unsupported.
This tutorial provides step-by-step instructions on how to extend the functionality of Zabbix agent with the use of a user parameter.
Step 1
For example, we may write the following command in order to get total number of queries executed by a MySQL server:
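For illustration, assuming the MySQL client tools are installed and mysqladmin can connect as root, a command along these lines could be used:
mysqladmin -uroot status | cut -f4 -d":" | cut -f1 -d"S"
It would then be added to the agent configuration file under a key of your choice, for example:
UserParameter=mysql.questions,mysqladmin -uroot status | cut -f4 -d":" | cut -f1 -d"S"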
Step 2
Test this parameter by using Zabbix agent with ”-t” flag (if running under root, however, note that the agent may have different
permissions when launched as a daemon):
zabbix_agentd -t mysql.questions
Step 3
Reload user parameters:
zabbix_agentd -R userparameter_reload
You may also restart the agent instead of the runtime control command.
Step 4
Add a new item with Key=mysql.questions to the monitored host. The type of the item must be either Zabbix agent or Zabbix agent
(active).
Be aware that the type of returned values must be set correctly on Zabbix server; otherwise Zabbix won't accept them.
6 Windows performance counters
Overview
You can effectively monitor Windows performance counters using the perf_counter[] key.
For example:
perf_counter["\Processor(0)\Interrupts/sec"]
or
perf_counter["\Processor(0)\Interrupts/sec", 10]
For more information on using this key or its English-only equivalent perf_counter_en, see Windows-specific item keys.
In order to get a full list of performance counters available for monitoring, you may run:
typeperf -qx
You may also use low-level discovery to discover multiple object instances of Windows performance counters and automate the
creation of perf_counter items for multiple instance objects.
Numeric representation
Windows maintains numeric representations (indexes) for object and performance counter names. Zabbix supports these numeric
representations as parameters to the perf_counter, perf_counter_en item keys and in PerfCounter, PerfCounterEn
configuration parameters.
However, it’s not recommended to use them unless you can guarantee your numeric indexes map to correct strings on specific
hosts. If you need to create portable items that work across different hosts with various localized Windows versions, you can use
the perf_counter_en key or PerfCounterEn configuration parameter which allow to use English names regardless of system
locale.
To find out the numeric equivalents, run regedit, then find HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Perflib\009. The registry entry contains information like this:
1
1847
2
System
4
Memory
6
% Processor Time
10
File Read Operations/sec
12
File Write Operations/sec
14
File Control Operations/sec
16
File Read Bytes/sec
18
File Write Bytes/sec
....
Here you can find the corresponding numbers for each string part of the performance counter, like in ’\System\% Processor Time’:
System → 2
% Processor Time → 6
Then you can use these numbers to represent the path in numbers:
\2\6
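For instance, such a numeric path can be used in an item key in the same way as a string path (the update interval shown is arbitrary):
perf_counter["\2\6",60]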
Performance counter parameters
You can deploy some PerfCounter parameters for the monitoring of Windows performance counters.
For example, you can add these to the Zabbix agent configuration file:
PerfCounter=UserPerfCounter1,"\Memory\Page Reads/sec",30
or
PerfCounter=UserPerfCounter2,"\4\24",30
With such parameters in place, you can then simply use UserPerfCounter1 or UserPerfCounter2 as the keys for creating the respec-
tive items.
Remember to restart Zabbix agent after making changes to the configuration file.
7 Mass update
Overview
Sometimes you may want to change some attribute for a number of items at once. Instead of opening each individual item for
editing, you may use the mass update function for that.
To mass-update some items, do the following:
• Mark the checkboxes of the items to update in the list
• Click on Mass update below the list
• Navigate to the tab with required attributes (Item, Tags or Preprocessing)
• Mark the checkboxes of the attributes to update
• Enter new values for the attributes
The Tags option allows you to:
• Add - add specified tags to the items (tags with the same name, but different values are not considered ’duplicates’ and can
be added to the same item).
• Replace - remove the specified tags and add tags with new values
• Remove - remove specified tags from the items
User macros, {INVENTORY.*} macros, {HOST.HOST}, {HOST.NAME}, {HOST.CONN}, {HOST.DNS}, {HOST.IP}, {HOST.PORT} and
{HOST.ID} macros are supported in tags.
Mark the checkbox next to Preprocessing steps to activate mass update for preprocessing steps.
• Replace - replace the existing preprocessing steps on the items with the preprocessing steps specified below
• Remove all - remove all preprocessing steps from the items
8 Value mapping
Overview
Value mapping allows configuring a more user-friendly representation of received values using mappings between numeric/string
values and string representations.
For example, when an item’s value is ”0” or ”1,” value mappings can be used to represent these values in a more user-friendly
manner:
• 0 → Not Available
• 1 → Available
Similarly, string values can be mapped, for example, backup levels:
• F → Full
• D → Differential
• I → Incremental
Ranges of numeric values can also be mapped, for example:
• <=209 → Low
• 210-230 → OK
• >=231 → High
Value mapping is used in Zabbix frontend and notifications sent by media types.
Attention:
Substitution of the received value with the configured representation is performed both in Zabbix frontend and server;
however, the server handles substitution only in the following cases:<br><br>
• when populating host inventory;
• when expanding supported macros {ITEM.VALUE}, {ITEM.LASTVALUE}, {EVENT.OPDATA}, and
{EVENT.CAUSE.OPDATA}.
Value mappings are set up on templates or hosts. Once configured, they are available for all items within the respective template
or host. When configuring items, specify the name of a previously configured value mapping in the Value mapping parameter.
Note:
There is no value map inheritance - hosts and templates do not inherit value mappings from linked templates. Template
items on a host will continue to use the value mappings configured on the template.
Note:
Value mappings can be used with items having Numeric (unsigned), Numeric (float), and Character types of information.
Value mappings are exported/imported with the respective template or host. They can also be mass-updated using the host and
template mass update forms.
Configuration
To define a value map:
1. Open the host or template configuration form.
2. In the Value mapping tab, click Add to add a new value mapping, or click on the name of an existing mapping to edit it.
Parameter Description
Mapping is applied in the order of the rules that can be reordered by dragging.
Type Mapping type:
equals - equal values will be mapped;
is greater than or equals - equal or greater values will be mapped;
is less than or equals - equal or smaller values will be mapped;
in range - values in range will be mapped; the range is expressed as
<number1>-<number2> or <number>; multiple ranges are supported (for example,
1-10,101-110,201);
regexp - values corresponding to the regular expression will be mapped (global regular
expressions are not supported);
default - all outstanding values will be mapped, other than those with specific mappings.
For mapping ranges, only numeric value types (is greater than or equals, is less than or
equals, in range) are supported.
Value Incoming value (may contain a range or regular expression, depending on the mapping type).
Mapped to String representation (up to 64 characters) for the incoming value.
When viewing the value mapping in the list, only the first three mappings are visible, with three dots indicating that more mappings
exist.
One of the predefined agent items Zabbix agent ping uses a template-level value mapping ”Zabbix agent ping status” to display
its values.
In the item configuration form, you can find a reference to this value mapping in the Value mapping field:
This mapping is used in the Monitoring → Latest data section to display ”Up” (with the raw value in parentheses).
Note:
In the Latest data section, displayed values are shortened to 20 symbols. If value mapping is used, this shortening is not
applied to the mapped value but only to the raw value (displayed in parentheses).
Without a predefined value mapping, you would only see ”1”, which might be challenging to understand.
9 Queue
Overview
The queue displays items that are waiting for a refresh. The queue is just a logical representation of data. There is no IPC queue
or any other queue mechanism in Zabbix.
Items monitored by proxies are also included in the queue - they will be counted as queued for the proxy history data update
period.
Only items with scheduled refresh times are displayed in the queue. This means that the following item types are excluded from
the queue:
Statistics shown by the queue are a good indicator of the performance of Zabbix server.
The queue is retrieved directly from Zabbix server using JSON protocol. The information is available only if Zabbix server is running.
The picture here is generally ”ok” so we may assume that the server is doing fine.
The queue shows some items waiting up to 30 seconds. It would be great to know what items these are.
To do just that, select Queue details in the title dropdown. Now you can see a list of those delayed items.
With these details provided it may be possible to find out why these items might be delayed.
With one or two delayed items there perhaps is no cause for alarm. They might get updated in a second. However, if you see a
bunch of items getting delayed for too long, there might be a more serious problem.
Queue item
A special internal item zabbix[queue,<from>,<to>] can be used to monitor the health of the queue in Zabbix. It will return the
number of items delayed by the set amount of time. For more information see Internal items.
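For example, an internal item with the following key (the interval boundaries are chosen arbitrarily) would return the number of items delayed by at least 10 minutes but less than 30 minutes:
zabbix[queue,10m,30m]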
10 Value cache
Overview
To make the calculation of trigger expressions, calculated items and some macros much faster, a value cache option is supported
by the Zabbix server.
This in-memory cache can be used for accessing historical data, instead of making direct SQL calls to the database. If historical
values are not present in the cache, the missing values are requested from the database and the cache updated accordingly.
Cached values are removed from the value cache in the following cases:
• the item is deleted (cached values are deleted after the next configuration sync);
• the item value is outside the time or count range specified in the trigger/calculated item expression (cached value is removed
when a new value is received);
• the time or count range specified in the trigger/calculated item expression is changed so that less data is required for
calculation (unnecessary cached values are removed after 24 hours).
Note:
Value cache status can be observed by using the server runtime control option diaginfo (or diaginfo=valuecache)
and inspecting the section for value cache diagnostic information. This can be useful for determining misconfigured triggers
or calculated items.
To enable the value cache functionality, an optional ValueCacheSize parameter is supported by the Zabbix server configuration
file.
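For example, in zabbix_server.conf (the size shown is only an illustration; setting the parameter to 0 disables the value cache):
ValueCacheSize=256M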
Two internal items are supported for monitoring the value cache: zabbix[vcache,buffer,<mode>] and zabbix[vcache,cache,<parameter>].
See more details with internal items.
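For instance, internal items with keys such as the following (parameter values chosen for illustration) can be used to watch how effective the cache is:
zabbix[vcache,buffer,pfree]
zabbix[vcache,cache,hits]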
11 Execute now
Overview
Checking for a new item value in Zabbix is a cyclic process that is based on configured update intervals. While for many items
the update intervals are quite short, there are others (including low-level discovery rules) for which the update intervals are quite
long, so in real-life situations there may be a need to check for a new value quicker - to pick up changes in discoverable resources,
for example. To accommodate such a necessity, it is possible to reschedule a passive check and retrieve a new value immediately.
This functionality is supported for passive checks only. The following item types are supported:
Attention:
The check must be present in the configuration cache in order to get executed; for more information see CacheUpdateFre-
quency. Before executing the check, the configuration cache is not updated, thus very recent changes to item/discovery
rule configuration will not be picked up. Therefore, it is also not possible to check for a new value for an item/rule that is
being created or has been created just now; use the Test option while configuring an item for that.
Configuration
To execute a check immediately, do one of the following:
• click on Execute now for selected items in the list of latest data:
Several items can be selected and ”executed now” at once.
In latest data this option is available only for hosts with read-write access. Accessing this option for hosts with read-only permissions
depends on the user role option called Invoke ”Execute now” on read-only hosts.
• click on Execute now in an existing item (or discovery rule) configuration form:
• click on Execute now for selected items/rules in the list of items/discovery rules:
12 Restricting agent checks
Overview
It is possible to restrict checks on the agent side by creating an item blacklist, a whitelist, or a combination of whitelist/blacklist.
To do that use a combination of two agent configuration parameters:
• AllowKey=<pattern> - which checks are allowed; <pattern> is specified using a wildcard (*) expression
• DenyKey=<pattern> - which checks are denied; <pattern> is specified using a wildcard (*) expression
Note that:
• All system.run[*] items (remote commands, scripts) are disabled by default, even when no deny keys are specified; it should
be assumed that a DenyKey=system.run[*] parameter is implicitly appended.
• Since Zabbix 5.0.2 the EnableRemoteCommands agent parameter is:
– deprecated by Zabbix agent
– unsupported by Zabbix agent2
Therefore, to allow remote commands, specify an AllowKey=system.run[<command>,*] for each allowed command, * stands
for wait and nowait mode. It is also possible to specify AllowKey=system.run[*] parameter to allow all commands with wait
and nowait modes. To disallow specific remote commands, add DenyKey parameters with system.run[] commands before the
AllowKey=system.run[*] parameter.
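As an illustration (the denied command is hypothetical), remote commands could be restricted in the agent configuration file like this:
# Deny one specific command first (order matters):
DenyKey=system.run[rm *]
# Then allow all other remote commands in wait and nowait modes:
AllowKey=system.run[*]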
Important rules
• A whitelist without a deny rule is only allowed for system.run[*] items. For all other items, AllowKey parameters are not
allowed without a DenyKey parameter; in this case Zabbix agent will not start with only AllowKey parameters.
• The order matters. The specified parameters are checked one by one according to their appearance order in the configuration
file:
– As soon as an item key matches an allow/deny rule, the item is either allowed or denied; and rule checking stops. So
if an item matches both an allow rule and a deny rule, the result will depend on which rule comes first.
– The order affects also EnableRemoteCommands parameter (if used).
• An unlimited number of AllowKey/DenyKey parameters is supported.
• AllowKey, DenyKey rules do not affect HostnameItem, HostMetadataItem, HostInterfaceItem configuration parameters.
• Key pattern is a wildcard expression where the wildcard (*) character matches any number of any characters in certain
position. It might be used in both the key name and parameters.
• If a specific item key is disallowed in the agent configuration, the item will be reported as unsupported (no hint is given as
to the reason);
• Zabbix agent with --print (-p) command line option will not show keys that are not allowed by configuration;
• Zabbix agent with --test (-t) command line option will return ”Unsupported item key.” status for keys that are not allowed by
configuration;
• Denied remote commands will not be logged in the agent log (if LogRemoteCommands=1).
Use cases
• Blacklist a specific check with DenyKey parameter. Matching keys will be disallowed. All non-matching keys will be allowed,
except system.run[] items.
For example:
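An illustrative blacklist entry (the key shown is only an example) could be:
DenyKey=vfs.file.contents[/etc/passwd]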
Attention:
A blacklist may not be a good choice, because a new Zabbix version may have new keys that are not explicitly restricted
by the existing configuration. This could cause a security flaw.
• Blacklist a specific command with DenyKey parameter. Whitelist all other commands, with the AllowKey parameter.
• Whitelist specific checks with AllowKey parameters, deny others with DenyKey=*
For example:
# Allow reading logs:
AllowKey=vfs.file.*[/var/log/*]
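# Deny all other keys, as required by this use case:
DenyKey=*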
A hypothetical script like ’myscript.sh’ may be executed on a host via Zabbix agent in several ways:
• system.run[myscript.sh]
• system.run[myscript.sh,wait]
• system.run[myscript.sh,nowait]
Here the user may add ”wait”, ”nowait” or omit the 2nd argument to use its default value in system.run[].
A user configures this script in Alerts → Scripts, sets ”Execute on: Zabbix agent” and puts ”myscript.sh” into the script’s ”Com-
mands” input field. When invoked from frontend or API the Zabbix server sends to agent:
• system.run[myscript.sh,nowait]
Here again the user does not control the ”wait”/”nowait” parameters.
If the agent configuration contains only
AllowKey=system.run[myscript.sh]
then the command sent by the server (system.run[myscript.sh,nowait]) will be denied, because it does not match that key. To allow both invocations, add
AllowKey=system.run[myscript.sh,*]
DenyKey=system.run[*]
to the agent/agent2 parameters.
3 Triggers
Overview
Triggers are logical expressions that ”evaluate” data gathered by items and represent the current system state.
While items are used to gather system data, it is highly impractical to follow these data all the time waiting for a condition that is
alarming or deserves attention. The job of ”evaluating” data can be left to trigger expressions.
Trigger expressions allow to define a threshold of what state of data is ”acceptable”. Therefore, should the incoming data surpass
the acceptable state, a trigger is ”fired” - or changes its status to PROBLEM.
Status Description
OK This is the normal trigger state.
PROBLEM Something has happened (for example, the CPU load is too high).
In a simple trigger we may want to set a threshold for a five-minute average of some data, for example, the CPU load. This is
accomplished by defining a trigger expression where:
• the ’avg’ function is applied to the value received in the item key
• a five minute period for evaluation is used
• a threshold of ’2’ is set
avg(/host/key,5m)>2
This trigger will ”fire” (become PROBLEM) if the five-minute average is over 2.
In a more complex trigger, the expression may include a combination of multiple functions and multiple thresholds. See also:
Trigger expression.
Note:
After enabling a trigger (changing its configuration status from Disabled to Enabled), the trigger expression is evaluated
as soon as an item in it receives a value or the time to handle a time-based function comes.
Most trigger functions are evaluated based on item value history data, while some trigger functions for long-term analytics, e.g.
trendavg(), trendcount(), etc, use trend data.
Calculation time
A trigger is recalculated every time Zabbix server receives a new value that is part of the expression. When a new value is received,
each function that is included in the expression is recalculated (not just the one that received the new value).
Additionally, a trigger is recalculated each time when a new value is received and every 30 seconds if time-based functions are
used in the expression.
Time-based functions are nodata(), date(), dayofmonth(), dayofweek(), time(), now(); they are recalculated every 30 seconds
by the Zabbix history syncer process.
Triggers that reference trend functions only are evaluated once per the smallest time period in the expression. See also trend
functions.
Evaluation period
An evaluation period is used in functions referencing the item history. It allows to specify the interval we are interested in. It can
be specified as time period (30s, 10m, 1h) or as a value range (#5 - for five latest values).
The evaluation period is measured up to ”now” - where ”now” is the latest recalculation time of the trigger (see Calculation time
above); ”now” is not the ”now” time of the server.
• To consider all values between ”now-time period” and ”now” (or, with time shift, between ”now-time shift-time period” and
”now-time_shift”)
• To consider no more than the num count of values from the past, up to ”now”
– If there are 0 available values for the time period or num count specified - then the trigger or calculated item that uses
this function becomes unsupported
Note that:
• If only a single function (referencing data history) is used in the trigger, ”now” is always the latest received value. For
example, if the last value was received an hour ago, the evaluation period will be regarded as up to the latest value an hour
ago.
• A new trigger is calculated as soon as the first value is received (history functions); it will be calculated within 30 seconds for
time-based functions. Thus the trigger will be calculated even though perhaps the set evaluation period (for example, one
hour) has not yet passed since the trigger was created. The trigger will also be calculated after the first value, even though
the evaluation range was set, for example, to ten latest values.
Unknown status
If an operand cannot be evaluated (for example, when the referenced item is unsupported or has no data), a trigger generally
evaluates to ”unknown” (although there are some exceptions). For more details, see Expressions with unknown operands.
1 Configuring a trigger
Overview
Configuration
All mandatory input fields are marked with a red asterisk.
Parameter Description
Recovery expression Logical expression (optional) defining additional conditions that have to be met before the
problem is resolved, after the original problem expression has already been evaluated as FALSE.
Recovery expression is useful for trigger hysteresis. It is not possible to resolve a problem by
recovery expression alone if the problem expression is still TRUE.
This field is only available if ’Recovery expression’ is selected for OK event generation.
PROBLEM event generation mode Mode for generating problem events:
Single - a single event is generated when a trigger goes into the ’Problem’ state for the first time;
Multiple - an event is generated upon every ’Problem’ evaluation of the trigger.
OK event closes Select if OK event closes:
All problems - all problems of this trigger
All problems if tag values match - only those trigger problems with matching event tag values
Tag for matching Enter event tag name to use for event correlation.
This field is displayed if ’All problems if tag values match’ is selected for the OK event closes
property and is mandatory in this case.
Allow manual close Check to allow manual closing of problem events generated by this trigger. Manual closing is
possible when acknowledging problem events.
Menu entry name If not empty, the name entered here (up to 64 characters) is used in several frontend locations
as a label for the trigger URL specified in the Menu entry URL parameter. If empty, the default
name Trigger URL is used.
The same set of macros is supported as in the trigger URL.
Menu entry URL If not empty, the URL entered here (up to 2048 characters) is available as a link in the event
menu in several frontend locations, for example, when clicking on the problem name in
Monitoring → Problems or Problems dashboard widget.
The same set of macros is supported as in the trigger name, plus {EVENT.ID}, {HOST.ID} and
{TRIGGER.ID}. Note: user macros with secret values will not be resolved in the URL.
Description Text field used to provide more information about this trigger. May contain instructions for fixing
specific problem, contact detail of responsible staff, etc.
The same set of macros is supported as in the trigger name.
Enabled Unchecking this box will disable the trigger if required.
Problems of a disabled trigger are no longer displayed in the frontend, but are not deleted.
The Tags tab allows you to define trigger-level tags. All problems of this trigger will be tagged with the values entered here.
In addition the Inherited and trigger tags option allows to view tags defined on template level, if the trigger comes from that
template. If there are multiple templates with the same tag, these tags are displayed once and template names are separated
with commas. A trigger does not ”inherit” and display host-level tags.
Note:
You can also configure a trigger by opening an existing one, pressing the Clone button and then saving under a different
name.
Testing expressions
It is possible to test the configured trigger expression as to what the expression result would be depending on the received value.
In the Expression constructor, all individual expressions are listed. To open the testing window, click on Test below the expression
list.
In the testing window you can enter sample values (’80’, ’70’, ’0’, ’1’ in this example) and then see the expression result, by clicking
on the Test button.
The result of the individual expressions as well as the whole expression can be seen.
”TRUE” means that the specified expression is correct. In this particular case (A), ”80” is greater than the specified {$TEMP_WARN}
value, ”70” in this example. As expected, a ”TRUE” result appears.
”FALSE” means that the specified expression is incorrect. In this particular case (B), {$TEMP_WARN_STATUS} ”1” needs to be equal
to the specified value, ”0” in this example. As expected, a ”FALSE” result appears.
The chosen expression type is ”OR”. If at least one of the specified conditions (A or B in this case) is TRUE, the overall result will
be TRUE as well. Meaning that the current value exceeds the warning value and a problem has occurred.
2 Trigger expression
Overview
The expressions used in triggers are very flexible. You can use them to create complex logical tests regarding monitored statistics.
A simple expression uses a function that is applied to the item with some parameters. The function returns a result that is
compared to the threshold, using an operator and a constant.
min(/Zabbix server/net.if.in[eth0,bytes],5m)>100K
will trigger if the number of received bytes during the last five minutes was always over 100 kilobytes.
While the syntax is exactly the same, from the functional point of view there are two types of trigger expressions: the problem expression and the (optional) recovery expression.
When defining a problem expression alone, this expression will be used both as the problem threshold and the problem recovery
threshold. As soon as the problem expression evaluates to TRUE, there is a problem. As soon as the problem expression evaluates
to FALSE, the problem is resolved.
When defining both problem expression and the supplemental recovery expression, problem resolution becomes more complex:
not only the problem expression has to be FALSE, but also the recovery expression has to be TRUE. This is useful to create hysteresis
and avoid trigger flapping.
Functions
Functions allow to calculate the collected values (average, minimum, maximum, sum), find strings, reference current time and
other factors.
Typically functions return numeric values for comparison. When returning strings, comparison is possible with the = and <>
operators (see example).
Function parameters
Function parameters may include:
• host and item key (functions referencing the host item history only)
• function-specific parameters
• other expressions (not available for functions referencing the host item history, see other expressions for examples)
The host and item key can be specified as /host/key. The referenced item must be in a supported state (except for nodata()
function, which is calculated for unsupported items as well).
While other trigger expressions as function parameters are limited to non-history functions in triggers, this limitation does not
apply in calculated items.
Function-specific parameters
Function-specific parameters are placed after the item key and are separated from the item key by a comma. See the supported
functions for a complete list of these parameters.
Most numeric functions accept time as a parameter. You may use seconds or time suffixes to indicate time. Preceded by a hash
mark, the parameter has a different meaning:
Expression Description
sum(/host/key,10m) Sum of values during the last 10 minutes
sum(/host/key,#10) Sum of the ten latest values
Parameters with a hash mark have a different meaning with the function last - they denote the Nth previous value, so given the
values 3, 7, 2, 6, 5 (from the most recent to the least recent):
last(/host/key,#1) would return 3
last(/host/key,#2) would return 7
last(/host/key,#5) would return 5
An optional time shift is supported with time or value count as the function parameter. This parameter allows to reference data
from a period of time in the past.
Time shift starts with now - specifying the current time, and is followed by +N<time unit> or -N<time unit> - to add or subtract
N time units.
For example, avg(/host/key,1h:now-1d) will return the average value for an hour one day ago.
Attention:
Time shift specified in months (M) and years (y) is only supported for trend functions. Other functions support seconds (s),
minutes (m), hours (h), days (d), and weeks (w).
Absolute time periods are supported in the time shift parameter, for example, midnight to midnight for a day, Monday-Sunday for
a week, first day-last day of the month for a month.
Time shift for absolute time periods starts with now - specifying the current time, and is followed by any number of time operations:
/<time unit> - defines the beginning and end of the time unit, for example, midnight to midnight for a day, +N<time unit>
or -N<time unit> - to add or subtract N time units.
Please note that the value of time shift can be greater or equal to 0, while the time period minimum value is 1.
Parameter Description
1d:now/d Yesterday
1d:now/d+1d Today
Other expressions
min(min(/host/key,1h),min(/host2/key2,1h)*10)
Note that other expressions may not be used, if the function references item history. For example, the following syntax is not
allowed:
min(/host/key,#5*10)
Operators
The following operators are supported for triggers (in descending priority of execution):
Priority Operator Definition Notes for unknown values Force cast operand to float¹
5 < Less than. The operator is defined as: A<B ⇔ (A<B-0.000001) Yes
  <= Less than or equal to. The operator is defined as: A<=B ⇔ (A≤B+0.000001) Unknown <= Unknown → Unknown Yes
  > More than. The operator is defined as: A>B ⇔ (A>B+0.000001) Yes
  >= More than or equal to. The operator is defined as: A>=B ⇔ (A≥B-0.000001) Yes
6 = Is equal. The operator is defined as: A=B ⇔ (A≥B-0.000001) and (A≤B+0.000001) No¹
  <> Not equal. The operator is defined as: A<>B ⇔ (A<B-0.000001) or (A>B+0.000001) No¹
7 and Logical AND 0 and Unknown → 0; 1 and Unknown → Unknown; Unknown and Unknown → Unknown Yes
8 or Logical OR 1 or Unknown → 1; 0 or Unknown → Unknown; Unknown or Unknown → Unknown Yes
¹ A string operand is still cast to numeric if the other operand is numeric. (If the cast fails - the numeric operand is cast to a string operand and both operands get compared as strings.)
not, and and or operators are case-sensitive and must be in lowercase. They also must be surrounded by spaces or parentheses.
All operators, except unary - and not, have left-to-right associativity. Unary - and not are non-associative (meaning -(-1) and not
(not 1) should be used instead of --1 and not not 1).
Evaluation result:
• <, <=, >, >=, =, <> operators shall yield ’1’ in the trigger expression if the specified relation is true and ’0’ if it is false. If
at least one operand is Unknown the result is Unknown;
• and for known operands shall yield ’1’ if both of its operands compare unequal to ’0’; otherwise, it yields ’0’; for unknown
operands and yields ’0’ only if one operand compares equal to ’0’; otherwise, it yields ’Unknown’;
• or for known operands shall yield ’1’ if either of its operands compare unequal to ’0’; otherwise, it yields ’0’; for unknown
operands or yields ’1’ only if one operand compares unequal to ’0’; otherwise, it yields ’Unknown’;
• The result of the logical negation operator not for a known operand is ’0’ if the value of its operand compares unequal to
’0’; ’1’ if the value of its operand compares equal to ’0’. For unknown operand not yields ’Unknown’.
Value caching
Values required for trigger evaluation are cached by Zabbix server. Because of this trigger evaluation causes a higher database
load for some time after the server restarts. The value cache is not cleared when item history values are removed (either manually
or by housekeeper), so the server will use the cached values until they are older than the time periods defined in trigger functions
or server is restarted.
Note:
If there is no recent data in the cache and there is no defined querying period in the function, Zabbix will by default go as
far back in the past as one week to query the database for historical values.
Examples of triggers
Example 1
last(/Zabbix server/system.cpu.load[all,avg1])>5
By using the function ’last()’, we are referencing the most recent value. /Zabbix server/system.cpu.load[all,avg1]
gives a short name of the monitored parameter. It specifies that the host is ’Zabbix server’ and the key being monitored is
’system.cpu.load[all,avg1]’. Finally, >5 means that the trigger is in the PROBLEM state whenever the most recent processor load
measurement from Zabbix server is greater than 5.
Example 2
www.example.com is overloaded.
last(/www.example.com/system.cpu.load[all,avg1])>5 or min(/www.example.com/system.cpu.load[all,avg1],10m)>2
The expression is true when either the current processor load is more than 5 or the processor load was more than 2 during the last 10
minutes.
Example 3
last(/www.example.com/vfs.file.cksum[/etc/passwd],#1)<>last(/www.example.com/vfs.file.cksum[/etc/passwd],#2)
The expression is true when the previous value of /etc/passwd checksum differs from the most recent one.
Similar expressions could be useful to monitor changes in important files, such as /etc/passwd, /etc/inetd.conf, /kernel, etc.
Example 4
min(/www.example.com/net.if.in[eth0,bytes],5m)>100K
The expression is true when number of received bytes on eth0 is more than 100 KB within last 5 minutes.
Example 5
Example 6
find(/example.example.com/agent.version,,"like","beta8")=1
The expression is true if Zabbix agent has version beta8.
Example 7
Server is unreachable.
count(/example.example.com/icmpping,30m,,"0")>5
The expression is true if host ”example.example.com” is unreachable more than 5 times in the last 30 minutes.
Example 8
nodata(/example.example.com/tick,3m)=1
To make use of this trigger, ’tick’ must be defined as a Zabbix trapper item. The host should periodically send data for this item
using zabbix_sender. If no data is received within 180 seconds, the trigger value becomes PROBLEM.
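For instance, the host could push a heartbeat value with zabbix_sender (the server address below is a placeholder):
zabbix_sender -z zabbix.server.example -s "example.example.com" -k tick -o 1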
Example 9
min(/Zabbix server/system.cpu.load[all,avg1],5m)>2 and time()<060000
The trigger may change its state to problem only at night time (00:00 - 06:00).
Example 10
min(/zabbix/system.cpu.load[all,avg1],5m)>2
and not (dayofweek()=7 and time()>230000)
and not (dayofweek()=1 and time()<010000)
The trigger may change its state to problem at any time, except for 2 hours on a week change (Sunday, 23:00 - Monday, 01:00).
Example 11
fuzzytime(/MySQL_DB/system.localtime,10s)=0
The trigger will change to the problem state when the local time on the MySQL_DB server and the Zabbix server differ by more than
10 seconds. Note that ’system.localtime’ must be configured as a passive check.
Example 12
Comparing average load today with average load of the same time yesterday (using time shift as now-1d).
avg(/server/system.cpu.load,1h)/avg(/server/system.cpu.load,1h:now-1d)>2
The trigger will fire if the average load of the last hour tops the average load of the same hour yesterday more than two times.
Example 13
Example 14
Example 15
Comparing string values of two items - operands here are functions that return strings.
net.dns.record[192.0.2.0,{$WEBSITE_NAME},{$DNS_RESOURCE_RECORD_TYPE},2,1]
with macros defined as
{$WEBSITE_NAME} = example.com
{$DNS_RESOURCE_RECORD_TYPE} = MX
and normally returns:
example.com MX 0 mail.example.com
So our trigger expression to detect if the DNS query result deviated from the expected result is:
last(/Zabbix server/net.dns.record[192.0.2.0,{$WEBSITE_NAME},{$DNS_RESOURCE_RECORD_TYPE},2,1])<>"{$WEBSITE
Notice the quotes around the second operand.
Example 17
last(/Zabbix server/vfs.file.contents[/tmp/hello])={$HELLO_MACRO}
Example 18
Problem: Load of Exchange server increased by more than 10% last month
trendavg(/Exchange/system.cpu.load,1M:now/M)>1.1*trendavg(/Exchange/system.cpu.load,1M:now/M-1M)
You may also use the Event name field in trigger configuration to build a meaningful alert message, for example to receive some-
thing like
"Load of Exchange server increased by 24% in July (0.69) comparing to June (0.56)"
the event name must be defined as:
Hysteresis
Sometimes an interval is needed between problem and recovery states, rather than a simple threshold. For example, if we want
to define a trigger that reports a problem when server room temperature goes above 20°C and we want it to stay in the problem
state until the temperature drops below 15°C, a simple trigger threshold at 20°C will not be enough.
Instead, we need to define a trigger expression for the problem event first (temperature above 20°C). Then we need to define
an additional recovery condition (temperature below 15°C). This is done by defining an additional Recovery expression parameter
when defining a trigger.
With such a configuration, the problem is resolved only when two conditions are met:
• First, the problem expression (temperature above 20°C) will have to evaluate to FALSE
• Second, the recovery expression (temperature below 15°C) will have to evaluate to TRUE
The recovery expression will be evaluated only after the problem event has been created.
Warning:
The recovery expression being TRUE alone does not resolve a problem if the problem expression is still TRUE!
Example 1
Problem expression:
last(/server/temp)>20
Recovery expression:
last(/server/temp)<=15
Example 2
Problem expression: the free disk space is less than 10GB during the last 5 minutes:
max(/server/vfs.fs.size[/,free],5m)<10G
Recovery expression: the free disk space is more than 40GB during the last 10 minutes:
min(/server/vfs.fs.size[/,free],10m)>40G
Expressions with unknown operands
Generally an unknown operand (such as an unsupported item) in the expression will immediately render the trigger value to
Unknown.
However, in some cases unknown operands (unsupported items, function errors) are admitted into expression evaluation:
• The nodata() function is evaluated regardless of whether the referenced item is supported or not.
• Logical expressions with OR and AND can be evaluated to known values in two cases regardless of unknown operands:
– Case 1: ”1 or some_function(unsupported_item1) or some_function(unsupported_item2) or ...”
can be evaluated to known result (’1’ or ”Problem”),
– Case 2: ”0 and some_function(unsupported_item1) and some_function(unsupported_item2) and
...” can be evaluated to known result (’0’ or ”OK”).
Zabbix tries to evaluate such logical expressions by taking unsupported items as unknown operands. In the two cases
above a known value will be produced (”Problem” or ”OK”, respectively); in all other cases the trigger will evaluate
to Unknown.
• If the function evaluation for a supported item results in error, the function value becomes Unknown and it takes part as
unknown operand in further expression evaluation.
Note that unknown operands may ”disappear” only in logical expressions as described above. In arithmetic expressions unknown
operands always lead to the result Unknown (except division by 0).
Attention:
An expression that results in Unknown does not change the trigger state (”Problem/OK”). So, if it was ”Problem” (see Case
1), it stays in the same problem state even if the known part is resolved (’1’ becomes ’0’), because the expression is now
evaluated to Unknown and that does not change the trigger state.
If a trigger expression with several unsupported items evaluates to Unknown the error message in the frontend refers to the last
unsupported item evaluated.
3 Trigger dependencies
Overview
Sometimes the availability of one host depends on another. A server that is behind a router will become unreachable if the router
goes down. With triggers configured for both, you might get notifications about two hosts down - while only the router was the
guilty party.
This is where some dependency between hosts might be useful. With dependency set, notifications of the dependents could be
withheld and only the notification on the root problem sent.
While Zabbix does not support dependencies between hosts directly, they may be defined with another, more flexible method -
trigger dependencies. A trigger may have one or more triggers it depends on.
So in our simple example we open the server trigger configuration form and set that it depends on the respective trigger of the
router. With such dependency, the server trigger will not change its state as long as the trigger it depends on is in the ’PROBLEM’
state - and thus no dependent actions will be taken and no notifications sent.
If both the server and the router are down and dependency is there, Zabbix will not execute actions for the dependent trigger.
While the parent trigger is in the PROBLEM state, its dependents may report values that cannot be trusted. Therefore dependent
triggers will not be re-evaluated until the parent trigger (the router in the example above):
In all of the cases mentioned above, the dependent trigger (server) will be re-evaluated only when a new metric for it is received.
This means that the dependent trigger may not be updated immediately.
Also:
• Trigger dependency may be added from any host trigger to any other host trigger, as long as it doesn’t result in a circular
dependency.
• Trigger dependency may be added from one template to another. If some trigger from template A depends on some trigger
from template B, template A may only be linked to a host (or another template) together with template B, but template B
may be linked to a host (or another template) alone.
• Trigger dependency may be added from a template trigger to a host trigger. In this case, linking such a template to a host
will create a host trigger that depends on the same trigger the template trigger was depending on. This allows to, for
example, have a template where some triggers depend on the router (host) triggers. All hosts linked to this template will
depend on that specific router.
• Trigger dependency may not be added from a host trigger to a template trigger.
• Trigger dependency may be added from a trigger prototype to another trigger prototype (within the same low-level discovery
rule) or a real trigger. A trigger prototype may not depend on a trigger prototype from a different LLD rule or on a trigger
created from trigger prototype. A host trigger prototype cannot depend on a trigger from a template.
Configuration
To define a dependency, open the Dependencies tab in the trigger configuration form. Click on Add in the ’Dependencies’ block
and select one or more triggers that the trigger will depend on.
Click Update. Now the trigger has the indication of its dependency in the list.
For example, the Host is behind the Router2 and the Router2 is behind the Router1.
Zabbix performs this check recursively. If the Router1 or the Router2 is unreachable, the Host trigger won’t be updated.
4 Trigger severity
Zabbix supports the following default trigger severities.
Severity Color Description
Not classified Gray Can be used where the severity level of an event is unknown, has not been determined, is not part of the regular monitoring scope, etc., for example, during initial configuration, as a placeholder for future assessment, or as part of an integration process.
Information Light blue Can be used for informational events that do not require immediate attention, but can still provide valuable insights.
Warning Yellow Can be used to indicate a potential issue that might require investigation or action, but that is not critical.
Average Orange Can be used to indicate a significant issue that should be addressed relatively soon to prevent further problems.
High Light red Can be used to indicate critical issues that need immediate attention to avoid significant disruptions.
Disaster Red Can be used to indicate a severe incident that requires immediate action to prevent, for example, system outages or data loss.
Note:
Trigger severity names and colors can be customized.
Trigger severity names and colors for severity related GUI elements can be configured in Administration → General → Trigger
displaying options. Colors are shared among all GUI themes.
Attention:
If Zabbix frontend translations are used, custom severity names will override translated names by default.
Default trigger severity names are available for translation in all locales. If a severity name is changed, a custom name is used in
all locales and additional manual translation is needed.
To translate a custom severity name (for example, ”Important”) for a particular locale, open the corresponding frontend .po translation file and add two lines:
msgid "Important"
msgstr "<translation string>"
and save file.
Here msgid should match the new custom severity name and msgstr should be the translation for it in the specific language.
6 Mass update
Overview
With mass update you may change some attribute for a number of triggers at once, saving you the need to open each individual
trigger for editing.
Using mass update
• Mark the checkboxes of the triggers you want to update in the list
• Click on Mass update below the list
• Navigate to the tab with required attributes (Trigger, Tags or Dependencies)
• Mark the checkboxes of any attribute to update
The following options are available when selecting the respective button for tag update:
• Add - add the specified tags to the triggers
• Replace - remove the specified tags and add tags with new values
• Remove - remove the specified tags from the triggers
Note that tags with the same name but different values are not considered ’duplicates’ and can be added to the same trigger.
Replace dependencies - will remove any existing dependencies from the trigger and replace them with the one(s) specified.
7 Predictive trigger functions
Overview
Sometimes there are signs of the upcoming problem. These signs can be spotted so that actions may be taken in advance to
prevent or at least minimize the impact of the problem.
Zabbix has tools to predict the future behavior of the monitored system based on historic data. These tools are realized through
predictive trigger functions.
1 Functions
Before setting a trigger, it is necessary to define what a problem state is and how much time is needed to take action. Then there
are two ways to set up a trigger signaling about a potential unwanted situation. First: the trigger must fire when the system is
expected to be in a problem state after ”time to act”. Second: the trigger must fire when the system is going to reach the problem
state in less than ”time to act”. Corresponding trigger functions to use are forecast and timeleft. Note that underlying statistical
analysis is basically identical for both functions. You may set up a trigger whichever way you prefer with similar results.
2 Parameters
Both functions use almost the same set of parameters. Use the list of supported functions for reference.
2.1 Time interval
First of all, you should specify the historic period Zabbix should analyze to come up with the prediction. You do it in a familiar
way by means of the time period parameter and optional time shift like you do it with avg, count, delta, max, min and sum
functions.
2.2 Forecasting horizon
(forecast only)
Parameter time specifies how far in the future Zabbix should extrapolate dependencies it finds in historic data. No matter if you
use time_shift or not, time is always counted starting from the current moment.
2.3 Threshold to reach
(timeleft only)
Parameter threshold specifies a value the analyzed item has to reach, no matter whether from above or from below. Once we have
determined f(t) (see below), we should solve the equation f(t) = threshold and return the root which is closest to now and to the right
of now, or 1.7976931348623158E+308 if there is no such root.
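As a sketch, with the default linear fit x = a + b*t the equation a + b*t = threshold has the single root t = (threshold - a)/b; timeleft then returns t - now if that point lies in the future, and 1.7976931348623158E+308 otherwise.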
Note:
When item values approach the threshold and then cross it, timeleft assumes that intersection is already in the past and
therefore switches to the next intersection with the threshold level, if any. Best practice is to use predictions as a
complement to ordinary problem diagnostics, not as a substitution.
2.4 Fit functions
The default fit is the linear function. But if your monitored system is more complicated, you have more options to choose from.
fit x = f(t)
linear x = a + b*t
polynomialN x = a0 + a1*t + a2*t^2 + ... + an*t^n
exponential x = a*exp(b*t)
logarithmic x = a + b*log(t)
power x = a*t^b
2.5 Modes
(forecast only)
Every time a trigger function is evaluated, it gets data from the specified history period and fits a specified function to the data.
So, if the data is slightly different, the fitted function will be slightly different. If we simply calculate the value of the fitted function
at a specified time in the future, you will know nothing about how the analyzed item is expected to behave between now and that
moment in the future. For some fit options (like polynomial) a simple value from the future may be misleading.
mode forecast result
value f(now + time)
max maximum of f(t) on [now, now + time]
min minimum of f(t) on [now, now + time]
delta max - min
avg average of f(t) on [now, now + time]
3 Details
To avoid calculations with huge numbers, we consider the timestamp of the first value in the specified period plus 1 ns as a new
zero-time (current epoch time is of order 10^9, epoch squared is 10^18, double precision is about 10^-16). 1 ns is added to provide all
positive time values for logarithmic and power fits which involve calculating log(t). Time shift does not affect linear, polynomial,
exponential (apart from easier and more precise calculations) but changes the shape of logarithmic and power functions.
4 Potential errors
Note:
No warnings or errors are flagged if the chosen fit poorly describes the provided data or there is just too little data for an accurate
prediction.
To get a warning when you are about to run out of free disk space on your host, you may use a trigger expression like this:
timeleft(/host/vfs.fs.size[/,free],1h,0)<1h
However, error code -1 may come into play and put your trigger in a problem state. Generally it’s good because you get a warning
that your predictions don’t work correctly and you should look at them more thoroughly to find out why. But sometimes it’s bad
because -1 can simply mean that there was no data about the host free disk space obtained in the last hour. If you are getting too
many false positive alerts, consider using a more complicated trigger expression:
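One possible refinement (a sketch, not the only option) is to also require that data has actually been received recently, by combining timeleft with nodata():
timeleft(/host/vfs.fs.size[/,free],1h,0)<1h and nodata(/host/vfs.fs.size[/,free],1h)=0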
4 Events
Overview
Events are time-stamped and can be the basis of actions such as sending notification email etc.
To view details of events in the frontend, go to Monitoring → Problems. There you can click on the event date and time to view
details of an event.
• trigger events
• other event sources
1 Trigger events
Overview
Change of trigger status is the most frequent and most important source of events. Each time the trigger changes its state, an
event is generated. The event contains details of the trigger state’s change - when it happened and what the new state is.
Problem events
OK events
An OK event closes the related problem event(s) and may be created by 3 components:
• triggers - based on ’OK event generation’ and ’OK event closes’ settings;
• event correlation
• task manager – when an event is manually closed
Triggers
Triggers have an ’OK event generation’ setting that controls how OK events are generated:
• Expression - an OK event is generated for a trigger in problem state when its expression evaluates to FALSE. This is the
simplest setting, enabled by default.
• Recovery expression - an OK event is generated for a trigger in problem state when its expression evaluates to FALSE and
the recovery expression evaluates to TRUE. This can be used if trigger recovery criteria is different from problem criteria.
• None - an OK event is never generated. This can be used in conjunction with multiple problem event generation to simply
send a notification when something happens.
Additionally triggers have an ’OK event closes’ setting that controls which problem events are closed:
• All problems - an OK event will close all open problems created by the trigger
• All problems if tag values match - an OK event will close open problems created by the trigger and having at least one
matching tag value. The tag is defined by ’Tag for matching’ trigger setting. If there are no problem events to close then OK
event is not generated. This is often called trigger level event correlation.
Event correlation
Event correlation (also called global event correlation) is a way to set up custom event closing (resulting in OK event generation)
rules.
The rules define how the new problem events are paired with existing problem events and allow to close the new event or the
matched events by generating corresponding OK events.
However, event correlation must be configured very carefully, as it can negatively affect event processing performance or, if
misconfigured, close more events than intended (in the worst case even all problem events could be closed). A few configuration
tips:
1. always reduce the correlation scope by setting a unique tag for the control event (the event that is paired with old events)
and use the ’new event tag’ correlation condition
2. don’t forget to add a condition based on the old event when using ’close old event’ operation, or all existing problems could
be closed
3. avoid using common tag names used by different correlation configurations
Task manager
If the ’Allow manual close’ setting is enabled for trigger, then it’s possible to manually close problem events generated by the
trigger. This is done in the frontend when updating a problem. The event is not closed directly – instead a ’close event’ task is
created, which is handled by the task manager shortly. The task manager will generate a corresponding OK event and the problem
event will be closed.
2 Other event sources
Service events
Service events are generated only if service actions for these events are enabled. In this case, each service status change creates
a new event:
The event contains details of the service state change - when it happened and what the new state is.
Discovery events
Zabbix periodically scans the IP ranges defined in network discovery rules. Frequency of the check is configurable for each rule
individually. Once a host or a service is discovered, a discovery event (or several events) are generated.
Active agent autoregistration events
If configured, an active agent autoregistration event is created when a previously unknown active agent asks for checks or if the host
metadata has changed. The server adds a new auto-registered host, using the received IP address and port of the agent.
Internal events
The aim of introducing internal events is to allow users to be notified when any internal event takes place, for example, an item
becomes unsupported and stops gathering data.
Internal events are only created when internal actions for these events are enabled. To stop generation of internal events (for
example, for items becoming unsupported), disable all actions for internal events in Alerts → Actions → Internal actions.
Note:
If internal actions are disabled, while an object is in the ’unsupported’ state, recovery event for this object will still be
created.
If internal actions are enabled, while an object is in the ’unsupported’ state, recovery event for this object will be
created, even though ’problem event’ has not been created for the object.
3 Manual closing of problems
Overview
While generally problem events are resolved automatically when trigger status goes from ’Problem’ to ’OK’, there may be cases
when it is difficult to determine if a problem has been resolved by means of a trigger expression. In such cases, the problem needs
to be resolved manually.
For example, syslog may report that some kernel parameters need to be tuned for optimal performance. In this case the issue is
reported to Linux administrators, they fix it and then close the problem manually.
Problems can be closed manually only for triggers with the Allow manual close option enabled.
When a problem is ”manually closed”, Zabbix generates a new internal task for Zabbix server. Then the task manager process
executes this task and generates an OK event, therefore closing problem event.
A manually closed problem does not mean that the underlying trigger will never go into a ’Problem’ state again. The trigger
expression is re-evaluated and may result in a problem:
• When new data arrive for any item included in the trigger expression (note that the values discarded by a throttling prepro-
cessing step are not considered as received and will not cause trigger expression to be re-evaluated);
• When time-based functions are used in the expression. Complete time-based function list can be found on Triggers page.
Configuration
Trigger configuration
Enable the Allow manual close option in the trigger configuration.
If a problem arises for a trigger with the Allow manual close flag, you can open the problem update popup window of that problem and
close the problem manually.
To close the problem, check the Close problem option in the form and click on Update.
All mandatory input fields are marked with a red asterisk.
The request is processed by Zabbix server. Normally it will take a few seconds to close the problem. During that process CLOSING
is displayed in Monitoring → Problems as the status of the problem.
Verification
5 Event correlation
Overview
Event correlation allows correlating problem events with their resolution in a precise and flexible manner.
Event correlation can be configured:
• on trigger level - one trigger may be used to relate separate problems to their solution
• globally - problems can be correlated to their solution from a different trigger/polling method using global correlation rules
1 Trigger-based event correlation
Overview
Trigger-based event correlation allows correlating separate problems reported by one trigger.
While generally an OK event can close all problem events created by one trigger, there are cases when a more detailed approach is
needed. For example, when monitoring log files you may want to discover certain problems in a log file and close them individually
rather than all together.
This is the case with triggers that have the Problem event generation mode parameter set to Multiple. Such triggers are normally used for log monitoring, trap processing, etc.
It is possible in Zabbix to relate problem events based on tagging. Tags are used to extract values and create identification for problem events. Taking advantage of that, problems can also be closed individually based on a matching tag.
In other words, the same trigger can create separate events identified by the event tag. Therefore, problem events can be identified individually and closed separately based on the event tag.
How it works
Configuration
Item
To begin with, you may want to set up an item that monitors a log file, for example:
log[/var/log/syslog]
With the item set up, wait a minute for the configuration changes to be picked up and then go to Latest data to make sure that the
item has started collecting data.
Trigger
With the item working you need to configure the trigger. It’s important to decide what entries in the log file are worth paying
attention to. For example, the following trigger expression will search for a string like ’Stopping’ to signal potential problems:
find(/My host/log[/var/log/syslog],,"regexp","Stopping")=1
Attention:
To make sure that each line containing the string "Stopping" is considered a problem, also set the Problem event generation mode in the trigger configuration to 'Multiple'.
Then define a recovery expression. The following recovery expression will resolve all problems if a log line is found containing the
string ”Starting”:
find(/My host/log[/var/log/syslog],,"regexp","Starting")=1
Since we do not want all problems to be closed at once, it is important to make sure that only the corresponding root problems are closed. That's where tagging can help.
Problems and resolutions can be matched by specifying a tag in the trigger configuration. The following settings have to be made:
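A minimal sketch of such a trigger configuration is shown below; the tag name (Application) and the regular expression are illustrative assumptions and should be adapted to the actual log format:
Problem event generation mode: Multiple
OK event closes: All problems if tag values match
Tag for matching: Application
Tags: Application = {{ITEM.VALUE}.regsub("(Starting|Stopping) (.+)", "\2")}
With such a setup, a log line like 'Starting MySQL' produces an OK event tagged Application:MySQL, which closes only the problem events carrying the same Application tag value.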
If configured successfully you will be able to see problem events tagged by application and matched to their resolution in Monitoring
→ Problems.
Warning:
Because misconfiguration is possible (similar event tags may be created for unrelated problems), please review the cases outlined below!
• With two applications writing error and recovery messages to the same log file a user may decide to use two Application tags
in the same trigger with different tag values by using separate regular expressions in the tag values to extract the names
of, say, application A and application B from the {ITEM.VALUE} macro (e.g. when the message formats differ). However,
this may not work as planned if there is no match to the regular expressions. Non-matching regexps will yield empty tag
values and a single empty tag value in both problem and OK events is enough to correlate them. So a recovery message
from application A may accidentally close an error message from application B.
• Actual tags and tag values only become visible when a trigger fires. If the regular expression used is invalid, it is silently
replaced with an *UNKNOWN* string. If the initial problem event with an *UNKNOWN* tag value is missed, there may appear
subsequent OK events with the same *UNKNOWN* tag value that may close problem events which they shouldn’t have
closed.
• If a user uses the {ITEM.VALUE} macro without macro functions as the tag value, the 255-character limitation applies. When
log messages are long and the first 255 characters are non-specific, this may also result in similar event tags for unrelated
problems.
2 Global event correlation
Overview
Global event correlation allows reaching across all metrics monitored by Zabbix and creating correlations.
It is possible to correlate events created by completely different triggers and apply the same operations to them all. By creating
intelligent correlation rules it is actually possible to save yourself from thousands of repetitive notifications and focus on root causes
of a problem!
Global event correlation is a powerful mechanism, which allows you to untie yourself from one-trigger based problem and resolution
logic. So far, a single problem event was created by one trigger and we were dependent on that same trigger for the problem
resolution. We could not resolve a problem created by one trigger with another trigger. But with event correlation based on event
tagging, we can.
For example, a log trigger may report application problems, while a polling trigger may report the application to be up and running.
Taking advantage of event tags, you can tag the log trigger as Status: Down and the polling trigger as Status: Up. Then, in a
global correlation rule you can relate these triggers and assign an appropriate operation to this correlation such as closing the old
events.
In another use, global correlation can identify similar triggers and apply the same operation to them. What if we could get only
one problem report per network port problem? No need to report them all. That is also possible with global event correlation.
Global event correlation is configured in correlation rules. A correlation rule defines how the new problem events are paired
with existing problem events and what to do in case of a match (close the new event, close matched old events by generating
corresponding OK events). If a problem is closed by global correlation, it is reported in the Info column of Monitoring → Problems.
Configuring global correlation rules is available to Super Admin level users only.
Attention:
Event correlation must be configured very carefully, as it can negatively affect event processing performance or, if misconfigured, close more events than was intended (in the worst case even all problem events could be closed).
• Reduce the correlation scope. Always set a unique tag for the new event that is paired with old events and use the New
event tag correlation condition;
• Add a condition based on the old event when using the Close old event operation (or else all existing problems could be
closed);
• Avoid using common tag names that may end up being used by different correlation configurations;
• Keep the number of correlation rules limited to the ones you really need.
Configuration
Parameter Description
Operations Mark the checkbox of the operation to perform when event is correlated. The following
operations are available:
Close old events - close old events when a new event happens. Always add a condition based
on the old event when using the Close old events operation or all existing problems could be
closed.
Close new event - close the new event when it happens
Enabled If you mark this checkbox, the correlation rule will be enabled.
To configure details of a new condition, click on Add in the Conditions block. A popup window will open where you can edit the condition details.
Parameter Description
Warning:
Because misconfiguration is possible (similar event tags may be created for unrelated problems), please review the cases outlined below!
• Actual tags and tag values only become visible when a trigger fires. If the regular expression used is invalid, it is silently
replaced with an *UNKNOWN* string. If the initial problem event with an *UNKNOWN* tag value is missed, there may appear
subsequent OK events with the same *UNKNOWN* tag value that may close problem events which they shouldn’t have
closed.
• If a user uses the {ITEM.VALUE} macro without macro functions as the tag value, the 255-character limitation applies. When
log messages are long and the first 255 characters are non-specific, this may also result in similar event tags for unrelated
problems.
Example
This global correlation rule will correlate problems if Host and Port tag values exist on the trigger and they are the same in the
original event and the new one.
The operation will close new problem events on the same network port, keeping only the original problem open.
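A hedged sketch of how such a rule could be defined (the rule name and the exact wording of the condition types are assumptions; the Host and Port tags come from the description above):
Name: Close new problems on the same network port
Conditions (And): value of old event tag Host matches value of new event tag Host; value of old event tag Port matches value of new event tag Port
Operation: Close new event
Because both conditions pair the new event with old events by specific tag values, only problems that share the same Host and Port tag values are correlated, which keeps the correlation scope narrow, as recommended above.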
6 Tagging
Overview
There is an option to tag various entities in Zabbix. Tags can be defined for:
• templates
• hosts
• items
• web scenarios
• triggers
• services
• template items and triggers
• host, item and trigger prototypes
Tags have several uses, most notably, to mark events. If entities are tagged, the corresponding new events get marked accordingly:
• with tagged templates - any host problems created by relevant entities (items, triggers, etc) from this template will be
marked
• with tagged hosts - any problem of the host will be marked
• with tagged items, web scenarios - any data/problem of this item or web scenario will be marked
• with tagged triggers - any problem of this trigger will be marked
A problem event inherits all tags from the whole chain of templates, hosts, items, web scenarios, and triggers. Completely identical tag:value combinations (after macros are resolved) are merged into one rather than being duplicated when marking the event.
Having custom event tags allows for more flexibility. Importantly, events can be correlated based on event tags. In other uses,
actions can be defined based on tagged events. Item problems can be grouped based on tags. Problem tags can also be used to
map problems to services.
Tagging is realized as a pair of tag name and value. You can use only the name or pair it with a value:
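For example (illustrative values only):
target
target:MySQL
error:connection
Here target is a tag used without a value, while target:MySQL and error:connection are name:value pairs.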
Use cases
• See these tags on all triggers created from trigger prototypes.
11. Match services using service tags:
• Define service actions for services with matching tags;
• Use service tags to map a service to an SLA for SLA calculations.
12. Map services to problems using problem tags:
• In the service configuration, specify problem tag, for example target:MySQL;
• Problems with the matching tag will be automatically correlated to the service;
• Service status will change to the status of the problem with the highest severity.
13. Suppress problems when a host is in maintenance mode:
• Define tags in Maintenance periods to suppress only problems with matching tags.
14. Grant access to user groups:
• Specify tags in the user group configuration to allow viewing only problems with matching tags.
Configuration
Macro support
Built-in and user macros in tags are resolved at the time of the event. Until the event has occurred, these macros will be shown unresolved in the Zabbix frontend.
• {EVENT.TAGS} and {EVENT.RECOVERY.TAGS} built-in macros will resolve to a comma separated list of event tags or recovery
event tags.
• {EVENT.TAGSJSON} and {EVENT.RECOVERY.TAGSJSON} built-in macros will resolve to a JSON array containing event tag
objects or recovery event tag objects.
The following macros may be used in template, host, item, and web scenario tags:
• {HOST.HOST}, {HOST.NAME}, {HOST.CONN}, {HOST.DNS}, {HOST.IP}, {HOST.PORT} and {HOST.ID} built-in macros.
• {INVENTORY.*} built-in macros.
• User macros.
• Low-level discovery macros can be used in item prototype tags.
The following macros may be used in host prototype tags:
• {HOST.HOST}, {HOST.NAME}, {HOST.CONN}, {HOST.DNS}, {HOST.IP}, {HOST.PORT} and {HOST.ID} built-in macros.
• {INVENTORY.*} built-in macros.
• User macros.
• Low-level discovery macros will be resolved during discovery process and then added to the discovered host.
Substring extraction is supported for populating the tag name or tag value, using a macro function - applying a regular expression
to the value obtained by the supported macro. For example:
{{ITEM.VALUE}.regsub(pattern, output)}
{{ITEM.VALUE}.iregsub(pattern, output)}
{{#LLDMACRO}.regsub(pattern, output)}
{{#LLDMACRO}.iregsub(pattern, output)}
Tag name and value will be cut to 255 characters if their length exceeds 255 characters after macro resolution.
See also: Using macro functions in low-level discovery macros for event tagging.
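As an illustration (the discovery rule and macro values are assumptions), an item prototype tag with the name component and the value
{{#FSNAME}.regsub("^/(\w+)", "\1")}
would resolve to component:data for a discovered /data filesystem, producing a short, specific tag value even where the raw macro value is longer.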
Viewing event tags
Tags, if present, are displayed in:
• Monitoring → Problems
• Monitoring → Problems → Event details
• Dashboards → Problems widget
Only the first three tag entries can be displayed. If there are more than three tag entries, it is indicated by three dots. If you roll
your mouse over these three dots, all tag entries are displayed in a pop-up window.
Note that the order in which tags are displayed is affected by tag filtering and the Tag display priority option in the filter of Monitoring
→ Problems or the Problems dashboard widget.
7 Visualization
1 Graphs
Overview
With lots of data flowing into Zabbix, it becomes much easier for the users if they can look at a visual representation of what is
going on rather than only numbers.
This is where graphs come in. Graphs allow grasping the data flow at a glance, correlating problems, discovering when something started, or presenting when something might turn into a problem.
1 Simple graphs
Overview
Simple graphs are provided for the visualization of data gathered by items.
No configuration effort is required on the user's part to view simple graphs. They are freely made available by Zabbix.
Just go to Monitoring → Latest data and click on the Graph link for the respective item and a graph will be displayed.
Note:
Simple graphs are provided for all numeric items. For textual items, a link to History is available in Monitoring → Latest
data.
The Time period selector above the graph allows selecting often-required periods with one mouse click. For more information, see Time period selector.
For very recent data a single line is drawn connecting each received value. The single line is drawn as long as there is at least
one horizontal pixel available for one value.
For data that show a longer period, three lines are drawn - a dark green one shows the average, while a light pink and a light green line show the maximum and minimum values at that point in time. The space between the highs and the lows is filled with a yellow background.
Working time (working days) is displayed in graphs as a white background, while non-working time is displayed in gray (with the
Original blue default frontend theme).
Working time is always displayed in simple graphs, whereas displaying it in custom graphs is a user preference.
Working time is not displayed if the graph shows more than 3 months.
Trigger lines
Simple triggers are displayed as lines with black dashes over trigger severity color -- take note of the blue line on the graph and
the trigger information displayed in the legend. Up to 3 trigger lines can be displayed on the graph; if there are more triggers then
the triggers with lower severity are prioritized. Triggers are always displayed in simple graphs, whereas displaying them in custom
graphs is a user preference.
Generating graphs from history/trends
Graphs can be drawn based on item history or trends. For users who have frontend debug mode activated, a gray, vertical caption is displayed at the bottom right of a graph, indicating where the data come from.
Which data source is used depends on the following:
• longevity of item history. For example, item history can be kept for 14 days. In that case, any data older than the fourteen
days will be coming from trends.
• data congestion in the graph. If the amount of seconds to display in a horizontal graph pixel exceeds 3600/16, trend data are displayed (even if item history is still available for the same period) - see the example below;
• if trends are disabled, item history is used for graph building - if available for that period.
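As a rough illustration of the congestion threshold (the graph width used here is an assumption): 3600/16 equals 225 seconds per horizontal pixel, so a graph 900 pixels wide switches to trend data once the displayed period exceeds about 900 × 225 s ≈ 56 hours, even if history for that period is still stored.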
Absence of data
For items with a regular update interval, nothing is displayed in the graph if item data are not collected.
However, for trapper items and items with a scheduled update interval (and regular update interval set to 0), a straight line is
drawn leading up to the first collected value and from the last collected value to the end of graph; the line is on the level of the
first/last value respectively.
A dropdown in the upper right allows switching from the simple graph to the Values/500 latest values listings. This can be useful for viewing the numeric values making up the graph.
The values represented here are raw, i.e. no units or postprocessing of values is used. Value mapping, however, is applied.
Known issues
2 Custom graphs
Overview
While simple graphs are good for viewing data of a single item, they do not offer configuration capabilities.
Thus, if you want to change graph style or the way lines are displayed or compare several items, for example, incoming and
outgoing traffic in a single graph, you need a custom graph.
They can be created for a host or several hosts or for a single template.
Graph attributes:
Parameter Description
This parameter is not available for pie and exploded pie graphs.
Y axis MAX value Maximum value of the Y-axis:
Calculated - the Y axis maximum value will be calculated automatically.
Fixed - fixed maximum value for the Y-axis.
Item - the last value of the selected item will be the maximum value.
This parameter is not available for pie and exploded pie graphs.
3D view Enable 3D style. For pie and exploded pie graphs only.
Items Items, data of which are to be displayed in this graph. Click on Add to select items. You can
also select various displaying options (function, draw style, left/right axis display, color).
Sort order (0→100) Draw order. 0 will be processed first. Can be used to draw lines or regions behind (or in front
of) another.
You can drag and drop items using the icon at the beginning of a line to set the sort order or
which item is displayed in front of the other.
Name Name of the selected item is displayed as a link. Clicking on the link opens the list of other
available items.
Type Type (only available for pie and exploded pie graphs):
Simple - the value of the item is represented proportionally on the pie
Graph sum - the value of the item represents the whole pie
Note that coloring of the ”graph sum” item will only be visible to the extent that it is not
taken up by ”proportional” items.
Function Select what values will be displayed when more than one value exists per vertical graph pixel
for an item:
all - display all possible values (minimum, maximum, average) in the graph. Note that for
shorter periods this setting has no effect; only for longer periods, when data congestion in a
vertical graph pixel increases, ’all’ starts displaying minimum, maximum, and average
values. This function is only available for Normal graph type. See also: Generating graphs
from history/trends.
avg - display the average values
last - display the latest values. This function is only available if either Pie/Exploded pie is
selected as graph type.
max - display the maximum values
min - display the minimum values
Draw style Select the draw style (only available for normal graphs; for stacked graphs filled region is
always used) to apply to the item data - Line, Bold line, Filled region, Dot, Dashed line,
Gradient line.
Y axis side Select the Y axis side to show the item data - Left, Right.
Color Select the color to apply to the item data.
Graph preview
In the Preview tab, a preview of the graph is displayed so you can immediately see what you are creating.
Note that the preview will not show any data for template items.
In this example, pay attention to the dashed bold line displaying the trigger level and the trigger information displayed in the
legend.
Note:
No more than 3 trigger lines can be displayed. If there are more triggers, the triggers with lower severity are prioritized for display.
If the graph height is set to less than 120 pixels, no trigger will be displayed in the legend.
3 Ad-hoc graphs
Overview
While a simple graph is great for accessing the data of one item and custom graphs offer customization options, neither of the two allows quickly creating a comparison graph for multiple items with little effort and no maintenance.
To address this issue, it is possible to create ad-hoc graphs for several items in a very quick way.
Configuration
Note that to avoid displaying too many lines in the graph, only the average value for each item is displayed (min/max value lines are not displayed). Triggers and trigger information are not displayed in the graph.
In the created graph window you have the Time period selector available and the possibility to switch from the ”normal” line graph
to a stacked one (and back).
4 Aggregation in graphs
Overview
Aggregation functions, available in the graph and pie chart widgets of the dashboard, allow displaying an aggregated value for the
chosen interval (5 minutes, an hour, a day), instead of all values.
This section provides more detail on aggregation options in the graph widget.
The following aggregation functions are available:
• min
• max
• avg
• count
• sum
• first (first value displayed)
• last (last value displayed)
The most exciting use of data aggregation is the possibility to create nice side-by-side comparisons of data for some period:
When hovering over a point in time in the graph, date and time is displayed in addition to items and their aggregated values. Items
are displayed in parentheses, prefixed by the aggregation function used. If the graph widget has a Data set label configured, the
label is displayed in parentheses, prefixed by the aggregation function used. Note that this is the date and time of the point in the
graph, not of the actual values.
Configuration
The options for aggregation are available in data set settings when configuring a graph widget.
You may pick the aggregation function and the time interval. As the data set may comprise several items, there is also another
option allowing to show aggregated data for each item separately or for all data set items as one aggregated value.
Use cases
View the average request count per second per day to the Nginx server:
• add the request count per second item to the data set
• select the aggregate function avg and specify interval 1d
• a bar graph is displayed, where each bar represents the average number of requests per second per day
View the weekly minimum of free disk space on /data across a cluster:
• add to the data set: hosts cluster*, key "Free disk space on /data"
• select the aggregate function min and specify interval 1w
• a bar graph is displayed, where each bar represents the minimum disk space per week for each /data volume of the cluster
2 Network maps
Overview
If you have a network to look after, you may want to have an overview of your infrastructure somewhere. For that purpose, you
can create maps in Zabbix - of networks and of anything you like.
All users can create network maps. The maps can be public (available to all users) or private (available to selected users).
1 Configuring a network map
Overview
Configuring a map in Zabbix requires that you first create a map by defining its general parameters and then you start filling the
actual map with elements and their links.
You can populate the map with elements that are a host, a host group, a trigger, an image, or another map.
Icons are used to represent map elements. You can define the information that will be displayed with the icons and set that recent
problems are displayed in a special way. You can link the icons and define information to be displayed on the links.
You can add custom URLs to be accessible by clicking on the icons. Thus you may link a host icon to host properties or a map icon
to another map.
Maps are managed in Monitoring → Maps, where they can be configured, managed and viewed. In the monitoring view, you can
click on the icons and take advantage of the links to some scripts and URLs.
Network maps are based on vector graphics (SVG).
All users in Zabbix (including non-admin users) can create network maps. Maps have an owner - the user who created them. Maps
can be made public or private.
• Public maps are visible to all users, although to see a map the user must have read access to at least one map element. Public maps can be edited if a user/user group has read-write permissions for this map and at least read permissions to all elements of the corresponding map, including triggers in the links.
• Private maps are visible only to their owner and the users/user groups the map is shared with by the owner. Regular (non-
Super admin) users can only share with the groups and users they are members of. Admin level users can see private maps
regardless of being the owner or belonging to the shared user list. Private maps can be edited by the owner of the map
and in case a user/ user group has read-write permissions for this map and at least read permissions to all elements of the
corresponding map including triggers in the links.
Map elements that the user does not have read permission to are displayed with a grayed-out icon and all textual information on
the element is hidden. However, the trigger label is visible even if the user has no permission to the trigger.
To add an element to the map the user must also have at least read permission to the element.
Creating a map
• Go to Monitoring → Maps
• Go to the view with all maps
• Click on Create map
You can also use the Clone button in the configuration form of an existing map to create a new map. This map will have all of the
properties of the existing map, including general layout attributes, as well as the elements of the existing map.
All mandatory input fields are marked with a red asterisk.
Parameter Description
Sharing
The Sharing tab contains the map type as well as sharing options (user groups, users) for private maps:
Parameter Description
When you click on Add to save this map, you have created an empty map with a name, dimensions, and certain preferences. Now
you need to add some elements. For that, click on Edit in the map list to open the editable area.
Adding elements
To add an element, click on Add next to Map element. The new element will appear at the top left corner of the map. Drag and
drop it wherever you like.
Note that with the Grid option ”On”, elements will always align to the grid (you can pick various grid sizes from the dropdown, also
hide/show the grid). If you want to put elements anywhere without alignment, turn the option to ”Off”. (You can align random
elements to the grid later, by clicking on Align map elements.)
Now that you have some elements in place, you may want to start differentiating them by giving names, etc. By clicking on the
element, a form is displayed and you can set the element type, give a name, choose a different icon, etc.
Map element attributes:
Parameter Description
Triggers If the element type is ’Trigger’, select one or more triggers in the New triggers field below and
click on Add.
The order of selected triggers can be changed, but only within the same severity of triggers.
Multiple trigger selection also affects {HOST.*} macro resolution both in the editing and view
modes.
In editing mode, the first displayed {HOST.*} macros will be resolved depending on the first trigger in the list (based on trigger severity).
In view mode, resolution depends on the Display problems parameter in General map attributes:
* If Expand single problem mode is chosen, the first displayed {HOST.*} macros will be resolved
depending on the latest detected problem trigger (not mattering the severity) or the first trigger
in the list (in case no problem detected);
* If Number of problems and expand most critical one mode is chosen, the first displayed
{HOST.*} macros will be resolved depending on the trigger severity.
Host group Enter the host group if the element type is ’Host group’. This field is auto-complete so starting to
type the name of a group will offer a dropdown of matching groups. Scroll down to select. Click
on ’x’ to remove the selected.
Tags Specify tags to limit the number of problems displayed in the widget. It is possible to include as
well as exclude specific tags and tag values. Several conditions can be set. Tag name matching
is always case-sensitive.
There are several operators available for each condition:
Exists - include the specified tag names
Equals - include the specified tag names and values (case-sensitive)
Contains - include the specified tag names where the tag values contain the entered string
(substring match, case-insensitive)
Does not exist - exclude the specified tag names
Does not equal - exclude the specified tag names and values (case-sensitive)
Does not contain - exclude the specified tag names where the tag values contain the entered
string (substring match, case-insensitive)
There are two calculation types for conditions:
And/Or - all conditions must be met, conditions having the same tag name will be grouped by
the Or condition
Or - enough if one condition is met
This field is available for host and host group element types.
Automatic icon selection In this case an icon mapping will be used to determine which icon to display.
Icons You can choose to display different icons for the element in these cases: default, problem,
maintenance, disabled.
Coordinate X X coordinate of the map element.
Coordinate Y Y coordinate of the map element.
URLs Element-specific URLs (up to 2048 characters) can be set for the element. A label for the URL
can also be defined. These will be displayed as links when a user clicks on the element in the
map viewing mode. If the element has its own URLs and there are map level URLs for its type
defined, they will be combined in the same menu.
Macros can be used in map element names and values. For a full list, see supported macros and
search for ’map URL names and values’.
Attention:
Added elements are not automatically saved. If you navigate away from the page, all changes may be lost.
Therefore it is a good idea to click on the Update button in the top right corner. Once clicked, the changes are saved
regardless of what you choose in the following popup.
Selected grid options are also saved with each map.
Selecting elements
To select elements, select one and then hold down Ctrl to select the others.
You can also select multiple elements by dragging a rectangle in the editable area and selecting all elements in it.
Once you select more than one element, the element property form shifts to the mass-update mode so you can change attributes
of selected elements in one go. To do so, mark the attribute using the checkbox and enter a new value for it. You may use macros
here (for example, {HOST.NAME} for the element label).
Linking elements
Once you have put some elements on the map, it is time to start linking them. To link two elements you must first select them.
With the elements selected, click on Add next to Link.
With a link created, the single element form now contains an additional Links section. Click on Edit to edit link attributes.
Link attributes:
Parameter Description
Several selected elements can be moved to another place in the map by clicking on one of the selected elements, holding down
the mouse button, and moving the cursor to the desired location.
One or more elements can be copied by selecting the elements, then clicking on a selected element with the right mouse button
and selecting Copy from the menu.
To paste the elements, click on a map area with the right mouse button and select Paste from the menu. The Paste without external
links option will paste the elements retaining only the links that are between the selected elements.
Copy-pasting works within the same browser window. Keyboard shortcuts are not supported.
Adding shapes
In addition to map elements, it is also possible to add some shapes. Shapes are not map elements; they are just a visual representation. For example, a rectangle shape can be used as a background to group some hosts. Rectangle and ellipse shapes can be
added.
To add a shape, click on Add next to Shape. The new shape will appear at the top left corner of the map. Drag and drop it wherever
you like.
A new shape is added with default colors. By clicking on the shape, a form is displayed and you can customize the way a shape
looks, add text, etc.
To select shapes, select one and then hold down Ctrl to select the others. With several shapes selected, common properties can
be mass updated, similarly as with elements.
Text can be added in the shapes. Expression macros are supported in the text, but only with avg, last, min and max functions,
with time as parameter (for example, {?avg(/host/key,1h)}).
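For example (the host name and item key are placeholders), a shape could contain the text:
Average CPU load (1h): {?avg(/Zabbix server/system.cpu.load,1h)}
When the map is viewed, the expression macro part is replaced by the calculated value, while the rest of the text is displayed as-is.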
To display text only, the shape can be made invisible by removing the shape border (select ’None’ in the Border field). For example,
take note of how the {MAP.NAME} macro, visible in the screenshot above, is actually a rectangle shape with text, which can be
seen when clicking on the macro:
{MAP.NAME} resolves to the configured map name when viewing the map.
If hyperlinks are used in the text, they become clickable when viewing the map.
Line wrapping for text is always ”on” within shapes. However, within an ellipse, the lines are wrapped as though the ellipse were
a rectangle. Word wrapping is not implemented, so long words (words that do not fit the shape) are not wrapped, but are masked
(on map editing page) or clipped (other pages with maps).
Adding lines
In addition to shapes, it is also possible to add some lines. Lines can be used to link elements or shapes in a map.
To add a line, click on Add next to Shape. A new shape will appear at the top left corner of the map. Select it and click on Line in
the editing form to change the shape into a line. Then adjust line properties, such as line type, width, color, etc.
To bring one shape in front of the other (or vice versa) click on the shape with the right mouse button bringing up the map shape
menu.
2 Host group elements
Overview
This section explains how to add a “Host group” type element when configuring a network map.
Configuration
This table consists of parameters typical for Host group element type:
Parameter Description
Area type This setting is available if the “Host group elements” parameter is selected:
Fit to map - all host group elements are equally placed within the map
Custom size - a manual setting of the map area for all the host group elements to be displayed
Area size This setting is available if “Host group elements” parameter and “Area type” parameter are
selected:
Width - numeric value to be entered to specify map area width
Height - numeric value to be entered to specify map area height
Placing algorithm Grid – only available option of displaying all the host group elements
Label Icon label, any string.
Macros and multiline strings can be used in labels.
If the type of the map element is “Host group”, specifying certain macros has an impact on the map view, displaying corresponding information about every single host. For example, if the {HOST.IP} macro is used, the edit map view will only display the macro {HOST.IP} itself, while the map view will include and display each host's unique IP address.
This option is available if the ”Host group elements” show option is chosen. When selecting ”Host group elements” as the show
option, you will at first see only one icon for the host group. However, when you save the map and then go to the map view, you
will see that the map includes all the elements (hosts) of the certain host group:
Notice how the {HOST.NAME} macro is used. In map editing, the macro name is unresolved, while in map view all the unique
names of the hosts are displayed.
3 Link indicators
Overview
You can assign some triggers to a link between elements in a network map. When these triggers go into a problem state, the link
can reflect that.
When you configure a link, you set the default link type and color. When you assign triggers to a link, you can assign different link
types and colors with these triggers.
Should any of these triggers go into a problem state, their link style and color will be displayed on the link. So maybe your default
link was a green line. Now, with the trigger in the problem state, your link may become bold red (if you have defined it so).
Configuration
You can set the link type and color for each trigger directly from the list. When done, click on Apply, close the form and click on
Update to save the map changes.
Display
In Monitoring → Maps the respective color will be displayed on the link if the trigger goes into a problem state.
Note:
If multiple triggers go into a problem state, the problem with the highest severity will determine the link style and color. If
multiple triggers with the same severity are assigned to the same map link, the one with the lowest ID takes precedence.
Note also that:
1. Minimum trigger severity and Show suppressed problem settings from map configuration affect which problems
are taken into account.
2. In the case of triggers with multiple problems (multiple problem generation), each problem may have a severity that
differs from trigger severity (changed manually), may have different tags (due to macros), and may be suppressed.
3 Dashboards
Dashboards - both global dashboards and host dashboards - provide a strong visualization platform with such widgets and tools as
modern graphs, maps, slideshows, and more.
8 Templates and template groups
Overview
The use of templates is an excellent way of reducing one’s workload and streamlining the Zabbix configuration. A template is a
set of entities that can be conveniently applied to multiple hosts.
These entities include:
• items
• triggers
• graphs
• dashboards
• low-level discovery rules
• web scenarios
As many hosts in real life are identical or fairly similar, it naturally follows that the set of entities (items, triggers, graphs,...) you have created for one host may be useful for many. Of course, you could copy them to each new host, but that would be a lot of manual work. Instead, with templates you can copy them to one template and then apply the template to as many hosts as needed.
When a template is linked to a host, all entities (items, triggers, graphs,...) of the template are added to the host. Templates are
assigned to each individual host directly (and not to a host group).
Templates are often used to group entities for particular services or applications (like Apache, MySQL, PostgreSQL, Postfix...) and
then applied to hosts running those services.
Another benefit of using templates is when something has to be changed for all the hosts. Changing something on the template
level once will propagate the change to all the linked hosts.
9 Templates out of the box
Overview
Zabbix strives to provide a growing list of useful out-of-the-box templates. Out-of-the-box templates come preconfigured and thus are a useful way of speeding up the deployment of monitoring jobs.
Please use the sidebar to access information about specific template types and operation requirements.
See also:
• Template import
• Linking a template
• Known issues
Steps to ensure correct operation of templates that collect metrics with Zabbix agent:
1. Make sure that Zabbix agent is installed on the host. For active checks, also make sure that the host is added to the ’ServerActive’
parameter of the agent configuration file.
2. Link the template to a target host (if the template is not available in your Zabbix installation, you may need to import the
template’s .xml file first - see Templates out-of-the-box section for instructions).
3. If necessary, adjust the values of template macros.
4. Configure the instance being monitored to allow sharing data with Zabbix.
A detailed description of a template, including the full list of macros, items and triggers is available in the template’s Readme.md
file (accessible by clicking on a template name).
• Apache by Zabbix agent
• HAProxy by Zabbix agent
• IIS by Zabbix agent
• IIS by Zabbix agent active
• Microsoft Exchange Server 2016 by Zabbix agent
• Microsoft Exchange Server 2016 by Zabbix agent active
• MySQL by Zabbix agent
• Nginx by Zabbix agent
• PHP-FPM by Zabbix agent
• PostgreSQL by Zabbix agent
• RabbitMQ cluster by Zabbix agent
Steps to ensure correct operation of templates that collect metrics with Zabbix agent 2:
1. Make sure that the agent 2 is installed on the host, and that the installed version contains the required plugin. In some cases,
you may need to upgrade the agent 2 first.
2. Link the template to a target host (if the template is not available in your Zabbix installation, you may need to import the
template’s import file first - see Templates out-of-the-box section for instructions).
3. If necessary, adjust the values of template macros. Note that user macros can be used to override configuration parameters.
4. Configure the instance being monitored to allow sharing data with Zabbix.
Attention:
Zabbix agent 2 templates work in conjunction with plugins. While the basic configuration can be done by simply adjusting user macros, deeper customization can be achieved by configuring the plugin itself. For example, if a plugin supports named sessions, it is possible to monitor several entities of the same kind (e.g., MySQL1 and MySQL2) by specifying a named session with its own URI, username, and password for each entity in the configuration file (see the sketch below).
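A hedged sketch of what such a configuration could look like for the MySQL plugin is shown below; the parameter names follow the Plugins.<PluginName>.Sessions.<SessionName>.* pattern, and the session names, addresses, and credentials are assumptions:
# zabbix_agent2.conf - illustrative named sessions for two MySQL instances
Plugins.Mysql.Sessions.MySQL1.Uri=tcp://192.0.2.21:3306
Plugins.Mysql.Sessions.MySQL1.User=zbx_monitor
Plugins.Mysql.Sessions.MySQL1.Password=<password1>
Plugins.Mysql.Sessions.MySQL2.Uri=tcp://192.0.2.22:3306
Plugins.Mysql.Sessions.MySQL2.User=zbx_monitor
Plugins.Mysql.Sessions.MySQL2.Password=<password2>
Each named session can then be referenced by its name from item keys or template macros instead of passing the URI and credentials directly.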
A detailed description of a template, including the full list of macros, items and triggers is available in the template’s Readme.md
file (accessible by clicking on a template name).
Steps to ensure correct operation of templates that collect metrics with HTTP agent:
1. Create a host in Zabbix and specify an IP address or DNS name of the monitoring target as the main interface. This is needed
for the {HOST.CONN} macro to resolve properly in the template items.
2. Link the template to the host created in step 1 (if the template is not available in your Zabbix installation, you may need to
import the template’s .xml file first - see Templates out-of-the-box section for instructions).
3. If necessary, adjust the values of template macros.
4. Configure the instance being monitored to allow sharing data with Zabbix.
A detailed description of a template, including the full list of macros, items and triggers is available in the template’s Readme.md
file (accessible by clicking on a template name).
• Apache by HTTP
• Asterisk by HTTP
• AWS by HTTP
• AWS Cost Explorer by HTTP
• AWS EC2 by HTTP
• AWS ECS Cluster by HTTP
• AWS ECS Serverless Cluster by HTTP
• AWS ELB Application Load Balancer by HTTP
• AWS ELB Network Load Balancer by HTTP
• AWS RDS instance by HTTP
• AWS S3 bucket by HTTP
• Azure by HTTP
• Cisco Meraki organization by HTTP
• Cisco SD-WAN by HTTP
• ClickHouse by HTTP
• Cloudflare by HTTP
• CockroachDB by HTTP
• Control-M enterprise manager by HTTP
• Control-M server by HTTP
• DELL PowerEdge R720 by HTTP
• DELL PowerEdge R740 by HTTP
• DELL PowerEdge R820 by HTTP
• DELL PowerEdge R840 by HTTP
• Elasticsearch Cluster by HTTP
• Envoy Proxy by HTTP
• Etcd by HTTP
• FortiGate by HTTP
• GitLab by HTTP
• Google Cloud Platform (GCP) by HTTP
• Hadoop by HTTP
• HAProxy by HTTP
• HashiCorp Consul Cluster by HTTP
• HashiCorp Consul Node by HTTP
• HashiCorp Nomad by HTTP
• HashiCorp Vault by HTTP
• Hikvision camera by HTTP
• HPE iLO by HTTP
• HPE MSA 2040 Storage by HTTP
• HPE MSA 2060 Storage by HTTP
• HPE Primera by HTTP
• HPE Synergy by HTTP
• InfluxDB by HTTP
• Jenkins by HTTP
• Kubernetes API server by HTTP
• Kubernetes cluster state by HTTP
• Kubernetes Controller manager by HTTP
• Kubernetes kubelet by HTTP
• Kubernetes nodes by HTTP
• Kubernetes Scheduler by HTTP
• MantisBT by HTTP
• Microsoft SharePoint by HTTP
• NetApp AFF A700 by HTTP
• Nextcloud by HTTP
• NGINX by HTTP
• NGINX Plus by HTTP
• OpenStack by HTTP
• OpenWeatherMap by HTTP
• Oracle Cloud by HTTP
• PHP-FPM by HTTP
• Proxmox VE by HTTP
• RabbitMQ cluster by HTTP
• TiDB by HTTP
• TiDB PD by HTTP
• TiDB TiKV by HTTP
• Travis CI by HTTP
• Veeam Backup Enterprise Manager by HTTP
• Veeam Backup and Replication by HTTP
• VMware SD-WAN VeloCloud by HTTP
• YugabyteDB by HTTP
• ZooKeeper by HTTP
IPMI templates do not require any specific setup. To start monitoring, link the template to a target host (if the template is not
available in your Zabbix installation, you may need to import the template’s .xml file first - see Templates out-of-the-box section
for instructions).
A detailed description of a template, including the full list of macros, items and triggers is available in the template’s Readme.md
file (accessible by clicking on a template name).
Available template:
• Chassis by IPMI
Steps to ensure correct operation of templates that collect metrics via ODBC monitoring:
1. Make sure that the required ODBC driver is installed on Zabbix server or proxy.
2. Link the template to a target host (if the template is not available in your Zabbix installation, you may need to import the
template’s .xml file first - see Templates out-of-the-box section for instructions).
3. If necessary, adjust the values of template macros.
4. Configure the instance being monitored to allow sharing data with Zabbix.
A detailed description of a template, including the full list of macros, items and triggers is available in the template’s Readme.md
file (accessible by clicking on a template name).
• MSSQL by ODBC
• MySQL by ODBC
• Oracle by ODBC
• PostgreSQL by ODBC
7 Standardized templates for network devices
Overview
In order to provide monitoring for network devices such as switches and routers, we have created two so-called models: one for the network device itself (basically its chassis) and one for the network interface.
Templates for many families of network devices are provided. All templates cover (where possible to get these items from the
device):
• Chassis fault monitoring (power supplies, fans and temperature, overall status)
• Chassis performance monitoring (CPU and memory items)
• Chassis inventory collection (serial numbers, model name, firmware version)
• Network interface monitoring with IF-MIB and EtherLike-MIB (interface status, interface traffic load, duplex status for Ether-
net)
If you are importing the new out-of-the-box templates, you may want to also update the @Network interfaces for discovery
global regular expression to:
Devices
Device templates are provided for many vendor families. For each template, the following details are documented: template name, vendor, device family, known models, OS, MIBs used, and tags.
Template design
• User macros are used as much as possible so triggers can be tuned by the user;
• Low-level discovery is used as much as possible to minimize the number of unsupported items;
• All templates depend on Template ICMP Ping so all devices are also checked by ICMP;
• Items don’t use any MIBs - SNMP OIDs are used in items and low-level discoveries. So it’s not necessary to load any MIBs
into Zabbix for templates to work;
• Loopback network interfaces are filtered out during discovery, as well as interfaces with ifAdminStatus = down(2);
• 64bit counters are used from IF-MIB::ifXTable where possible. If it is not supported, default 32bit counters are used instead.
All discovered network interfaces have a trigger that monitors their operational status (link). If you do not want this trigger to fire for a specific interface, it can be turned off for that interface by setting a user macro with context (with the interface name, e.g. Gi0/0, where Gi0/0 is {#IFNAME}) to the value 0. That way the trigger is not used any more for this specific interface.
• You can also change the default behavior so that this trigger does not fire for any interface, and then activate it only for a limited number of interfaces, such as uplinks (see the sketch below).
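A hedged sketch of what such a trigger and its per-interface control could look like; the template name, item key, and the {$IFCONTROL} macro name are assumptions modeled on the generic network templates rather than exact definitions:
{$IFCONTROL:"{#IFNAME}"}=1 and last(/Network device/net.if.status[ifOperStatus.{#SNMPINDEX}])=2
Setting the context macro {$IFCONTROL:"Gi0/0"} to 0 on the host would disable the trigger for interface Gi0/0 only; conversely, setting {$IFCONTROL} to 0 on the host and to 1 only for uplink interfaces would limit the trigger to those uplinks.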
Tags
• Performance – device family MIBs provide a way to monitor CPU and memory items;
• Fault - device family MIBs provide a way to monitor at least one temperature sensor;
• Inventory – device family MIBs provide a way to collect at least the device serial number and model name;
• Certified – all three main categories above are covered.
10 Notifications upon events
Overview
Assuming that we have configured some items and triggers and now are getting some events happening as a result of triggers
changing state, it is time to consider some actions.
To begin with, we would not want to stare at the trigger or event list all the time. It would be much better to receive a notification if something significant (such as a problem) has happened. Also, when problems occur, we would like to see that all the people concerned are informed.
That is why sending notifications is one of the primary actions offered by Zabbix. Who should be notified, and when, upon a certain event can be defined.
To be able to send and receive notifications from Zabbix you have to:
• define the media to send a message to;
• configure an action that sends a message to one of the defined media.
Actions consist of conditions and operations. Basically, when conditions are met, operations are carried out. The two principal
operations are sending a message (notification) and executing a remote command.
For events created by discovery and autoregistration, some additional operations are available. Those include adding or removing a host, linking a template, etc.
1 Media types
Overview
Media are the delivery channels used for sending notifications and alerts from Zabbix.
The following media types are available:
• Email
• SMS
• Custom alertscripts
• Webhook
Some media types come predefined in the default dataset. You just need to fine-tune their parameters to get them working.
Gmail or Office365 users may benefit from easier media type configuration. The Email provider field in the mail media type configuration allows selecting pre-configured options for Gmail and Office 365.
When selecting the Gmail/Office365 related options, it is only required to supply the sender email address/password to create a
working media type.
As soon as the email address/password is supplied, Zabbix will be able to automatically fill all required settings for Gmail/Office365
media types with the actual/recommended values, i.e., SMTP server, SMTP server port, SMTP helo, and Connection security.
Because of this automation, these fields are not even shown; however, it is possible to see the SMTP server and email details in the media type list (see the Details column).
• For Office365 relay, the domain name of the provided email address will be used to dynamically fill the SMTP server (i.e.,
replace ”example.com” in example-com.mail.protection.outlook.com with the real value).
To test if a configured media type works, click on the Test link in the last column (see media type testing for Email, Webhook, or
Script for more details).
To create a new media type, click on the Create media type button. A media type configuration form is opened.
Common parameters
Parameter Description
The Message templates tab allows setting default notification messages for all or some of the following event types:
• Problem
• Problem recovery
• Problem update
• Service
• Service recovery
• Service update
• Discovery
• Autoregistration
• Internal problem
• Internal problem recovery
To customize message templates:
1. In the Message templates tab, click on Add: a Message template popup window will open.
2. Select required Message type and edit Subject and Message texts.
3. Click on Add to save the message template.
Parameter Description
Message type Type of an event for which the default message should be used.
Only one default message can be defined for each event type.
Subject Subject of the default message. The subject may contain macros. It is limited to 255 characters.
Subject is not available for SMS media type.
Message The default message. It is limited to certain amount of characters depending on the database type (see
Sending messages for more information).
The message may contain supported macros.
In problem and problem update messages, expression macros are supported (for example,
{?avg(/host/key,1h)}).
To make changes to an existing message template: in the Actions column, click on the edit icon to edit the template, or click on the remove icon to delete the message template.
It is possible to define a custom message template for a specific action (see action operations for details). Custom messages defined in the action configuration will override the default media type message template.
Warning:
Defining message templates is mandatory for all media types, including webhooks or custom alert scripts that do not use
default messages for notifications. For example, an action ”Send message to Pushover webhook” will fail to send problem
notifications, if the Problem message for the Pushover webhook is not defined.
The Options tab contains alert processing settings. The same set of options is configurable for each media type.
All media types are processed in parallel. While the maximum number of concurrent sessions is configurable per media type, the
total number of alerter processes on the server can only be limited by the StartAlerters parameter. Alerts generated by one trigger
are processed sequentially. So multiple notifications may be processed simultaneously only if they are generated by multiple
triggers.
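For instance, the overall cap on parallel alert delivery can be raised by increasing the StartAlerters parameter in the server configuration file; the value below is an arbitrary example, not a recommendation:
# zabbix_server.conf
StartAlerters=10
Zabbix server must be restarted for the change to take effect.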
Parameter Description
Concurrent sessions Select the number of parallel alerter sessions for the media type:
One - one session
Unlimited - unlimited number of sessions
Custom - select a custom number of sessions
Unlimited/high values mean more parallel sessions and increased capacity for sending
notifications. Unlimited/high values should be used in large environments where lots of
notifications may need to be sent simultaneously.
If more notifications need to be sent than there are concurrent sessions, the remaining
notifications will be queued; they will not be lost.
Attempts Number of attempts for trying to send a notification. Up to 100 attempts can be specified; the
default value is ’3’. If ’1’ is specified, Zabbix will send the notification only once and will not retry
if the sending fails.
Attempt interval Frequency of trying to resend a notification in case the sending failed, in seconds (0-3600). If ’0’
is specified, Zabbix will retry immediately.
Time suffixes are supported, e.g., 5s, 3m, 1h.
User media
To receive notifications of a media type, a medium (email address/phone number/webhook user ID, etc.) for this media type must
be defined in the user profile. For example, an action sending messages to user ”Admin” using webhook ”X” will always fail to
send anything if the webhook ”X” medium is not defined in the user profile.
1. Go to your user profile, or go to Users → Users and open the user properties form.
2. In the Media tab, click on Add.
User media attributes:
Parameter Description
Type The drop-down list contains the names of enabled media types.
Note that when editing a medium of a disabled media type, the type will be displayed in red.
Send to Provide required contact information where to send messages.
For an email media type it is possible to add several addresses by clicking on the add link below the
address field. In this case, the notification will be sent to all email addresses provided. It is also
possible to specify recipient name in the Send to field of the email recipient in a format ’Recipient
name <[email protected]>’. Note that if a recipient name is provided, an email address
should be wrapped in angle brackets (<>). UTF-8 characters in the name are supported, quoted
pairs and comments are not. For example: John Abercroft <[email protected]> and
[email protected] are both valid formats. Incorrect examples: John Doe
[email protected], %%”Zabbix\@\<H(comment)Q\>” [email protected] %%.
When active You can limit the time when messages are sent, for example, set the working days only
(1-5,09:00-18:00). Note that this limit is based on the user time zone. If the user time zone is
changed and is different from the system time zone this limit may need to be adjusted
accordingly so as not to miss important messages.
See the Time period specification page for description of the format.
User macros are supported.
Use if severity Mark the checkboxes of trigger severities that you want to receive notifications for.
Note that the default severity (’Not classified’) must be checked if you want to receive
notifications for non-trigger events.
After saving, the selected trigger severities will be displayed in the corresponding severity colors,
while unselected ones will be grayed out.
Status Status of the user media.
Enabled - is in use.
Disabled - is not being used.
1 Email
Overview
To configure email as the delivery channel for messages, you need to configure email as the media type and assign specific
addresses to users.
Note:
Multiple notifications for a single event will be grouped together on the same email thread.
Configuration
The following parameters are specific for the email media type:
Parameter Description
Email provider Select the email provider: Generic SMTP, Gmail, Gmail relay, Office365, or Office365 relay.
If you select the Gmail/Office365-related options, you will only need to supply the sender email
address and password; such options as SMTP server, SMTP server port, SMTP helo, and
Connection security will be automatically filled by Zabbix. See also: Automated Gmail/Office365
media types.
SMTP server Set an SMTP server to handle outgoing messages.
This field is available if Generic SMTP is selected as the email provider.
SMTP server port Set the SMTP server port to handle outgoing messages.
This field is available if Generic SMTP is selected as the email provider.
Email The address entered here will be used as the From address for the messages sent.
Adding a sender display name (like ”Zabbix_info” in Zabbix_info <[email protected]> in the
screenshot above) with the actual email address is supported.
There are some restrictions on display names in Zabbix emails in comparison to what is allowed
by RFC 5322, as illustrated by examples:
Valid examples:
[email protected] (only email address, no need to use angle brackets)
Zabbix_info <[email protected]> (display name and email address in angle brackets)
∑Ω-monitoring <[email protected]> (UTF-8 characters in display name)
Invalid examples:
Zabbix HQ [email protected] (display name present but no angle brackets around email
address)
”Zabbix\@\<H(comment)Q\>” <[email protected]> (although valid by RFC 5322, quoted
pairs and comments are not supported in Zabbix emails)
SMTP helo Set a correct SMTP helo value, normally a domain name.
If empty, the domain name of the email will be sent (i.e., what comes after @ in the Email field).
If it is impossible to fetch the domain name, a debug-level warning will be logged and the server
hostname will be sent as the domain for HELO command.
This field is available if Generic SMTP is selected as the email provider.
Connection security Select the level of connection security:
None - do not use the CURLOPT_USE_SSL option
STARTTLS - use the CURLOPT_USE_SSL option with CURLUSESSL_ALL value
SSL/TLS - use of CURLOPT_USE_SSL is optional
SSL verify peer Mark the checkbox to verify the SSL certificate of the SMTP server.
The value of ”SSLCALocation” server configuration directive should be put into CURLOPT_CAPATH
for certificate validation.
This sets cURL option CURLOPT_SSL_VERIFYPEER.
SSL verify host Mark the checkbox to verify that the Common Name field or the Subject Alternate Name field of
the SMTP server certificate matches.
This sets cURL option CURLOPT_SSL_VERIFYHOST.
Authentication Select the level of authentication:
None - no cURL options are set
Username and password - implies ”AUTH=*” leaving the choice of authentication mechanism
to cURL
Username User name to use in authentication.
This sets the value of CURLOPT_USERNAME.
User macros supported.
Password Password to use in authentication.
This sets the value of CURLOPT_PASSWORD.
User macros supported.
Message format Select message format:
HTML - send as HTML
Plain text - send as plain text
Attention:
To enable SMTP authentication options, Zabbix server must be both compiled with the --with-libcurl compilation
option (with cURL 7.20.0 or higher) and use the libcurl-full packages during runtime.
See also common media type parameters for details on how to configure default messages and alert processing options.
User media
Once the email media type is configured, go to the Users → Users section and edit user profile to assign email media to the user.
Steps for setting up user media, being common for all media types, are described on the Media types page.
2 SMS
Overview
Zabbix supports the sending of SMS messages using a serial GSM modem connected to Zabbix server’s serial port. Make sure that:
• The speed of the serial device (normally /dev/ttyS0 under Linux) matches that of the GSM modem. Zabbix does not set the speed of the serial link; it uses default settings.
• The ’zabbix’ user has read/write access to the serial device. Run the command ls -l /dev/ttyS0 to see current permissions of the serial device.
• The GSM modem has the PIN entered and preserves it after power reset. Alternatively, you may disable PIN on the SIM card. The PIN can be entered by issuing the command AT+CPIN=”NNNN” (NNNN is your PIN number, the quotes must be present) in terminal software, such as Unix minicom or Windows HyperTerminal.
GSM modems that have been tested with Zabbix include:
• Siemens MC35
• Teltonika ModemCOM/G10
To configure SMS as the delivery channel for messages, you also need to configure SMS as the media type and enter the respective
phone numbers for the users.
Configuration
The following parameters are specific for the SMS media type:
Parameter Description
GSM modem Set the serial device name of the GSM modem.
See common media type parameters for details on how to configure default messages and alert processing options. Note that
parallel processing of sending SMS notifications is not possible.
User media
Once the SMS media type is configured, go to the Users → Users section and edit user profile to assign SMS media to the user.
Steps for setting up user media, being common for all media types, are described on the Media types page.
3 Custom alert scripts
Overview
If you are not satisfied with the existing media types for sending alerts, there is an alternative way to do that. You can create a script that will handle the notification your way.
Custom alert scripts are executed on Zabbix server. These scripts must be located in the directory specified in the server configuration file AlertScriptsPath parameter.
Here is an example of a custom alert script:
#!/bin/bash

to=$1
subject=$2
body=$3
host=$4
value=$5

# Deliver the notification by email (assumes a local 'mail' command is available)
cat <<EOF | mail -s "$subject" "$to"
$body
Host: $host
Value: $value
EOF
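Note that the script file must be executable by the user running the Zabbix server process (for example, chmod +x /usr/lib/zabbix/alertscripts/notification.sh; the path here is only an assumed AlertScriptsPath value).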
Attention:
Zabbix checks the exit code of the executed commands and scripts. Any exit code different from 0 is considered a command execution error. In such cases, Zabbix will try to repeat the failed execution.
Environment variables are not preserved or created for the script, so they should be handled explicitly.
Configuration
All mandatory input fields are marked with a red asterisk.
The following parameters are specific for the script media type:
Parameter Description
Script name Enter the name of the script file (e.g., notification.sh) that is located in the directory specified in
the server configuration file AlertScriptsPath parameter.
Script parameters Add optional script parameters that will be passed to the script as command-line arguments in
the order in which they are defined.
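For illustration, the parameters could be defined (in this order) as {ALERT.SENDTO}, {ALERT.SUBJECT}, {ALERT.MESSAGE}, {HOST.NAME}, and {ITEM.LASTVALUE}, so that they reach the example script above as $1 through $5; this particular macro selection is an assumption and should be adapted to what the script actually needs.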
See common media type parameters for details on how to configure default messages and alert processing options.
Warning:
Even if an alert script does not use default messages, the message templates for operation types used by this media type
must still be defined. Otherwise, a notification will not be sent.
Attention:
If more than one script media type is configured, these scripts may be processed in parallel by the alerter processes. The
total number of alerter processes is limited by the server configuration file StartAlerters parameter.
Media type testing
To test a configured script media type:
1. Locate the relevant script in the list of media types.
2. Click on Test in the last column of the list; a testing form will open in a pop-up window. The testing form will contain the same number of parameters that are configured for the script media type.
3. Edit the script parameter values if needed. Editing only affects the test procedure; the actual values will not be changed.
4. Click on Test.
Note:
When testing a configured script media type, {ALERT.SENDTO}, {ALERT.SUBJECT}, {ALERT.MESSAGE} and user macros
will resolve to their values, but macros that are related to events (e.g., {HOST.HOST}, {ITEM.LASTVALUE}, etc.) will not
resolve, as during testing there is no related event to get the details from. Note that macros within {ALERT.SUBJECT} and
{ALERT.MESSAGE} macros will also not resolve. For example, if the value of {ALERT.SUBJECT} is composed of ”Problem:
{EVENT.NAME}” then the {EVENT.NAME} macro will not be resolved.
User media
Once the media type is configured, go to the Users → Users section and edit a user profile by assigning this media type to the user.
Steps for setting up user media, being common for all media types, are described on the Media types page.
Note that when defining the user media, the Send to field cannot be empty. If this field is not used in the alert script, enter any
combination of supported characters to bypass validation requirements.
4 Webhook
Overview
The webhook media type is useful for making HTTP calls using custom JavaScript code for straightforward integration with external
software such as helpdesk systems, chats, or messengers. You may choose to import an integration provided by Zabbix or create
a custom integration from scratch.
Integrations
The following integrations are available, allowing you to use predefined webhook media types for pushing Zabbix notifications to:
• brevis.one
• Discord
• Event-Driven Ansible
• Express.ms messenger
• Github issues
• GLPi
• iLert
• iTop
• Jira
• Jira Service Desk
• ManageEngine ServiceDesk
• Mantis Bug Tracker
• Mattermost
• Microsoft Teams
• LINE
• Opsgenie
• OTRS
• Pagerduty
• Pushover
• Redmine
• Rocket.Chat
• ServiceNow
• SIGNL4
• Slack
• SolarWinds
• SysAid
• Telegram
• TOPdesk
• VictorOps
• Zammad
• Zendesk
Note:
In addition to the services listed here, Zabbix can be integrated with Spiceworks (no webhook is required). To convert
Zabbix notifications into Spiceworks tickets, create an email media type and enter Spiceworks helpdesk email address
(e.g. [email protected]) in the profile settings of a designated Zabbix user.
Configuration
1. Locate the required .xml file in the templates/media directory of the downloaded Zabbix version or download it from the Zabbix git repository.
2. Import the file into your Zabbix installation. The webhook will appear in the list of media types.
3. Configure the webhook according to instructions in the Readme.md file (you may click on a webhook’s name above to quickly
access Readme.md).
The Media type tab contains various attributes specific for this media type:
All mandatory input fields are marked with a red asterisk.
The following parameters are specific for the webhook media type:
Parameter Description
Parameters Specify the webhook variables as the attribute and value pairs.
For preconfigured webhooks, a list of parameters varies, depending on the service. Check the
webhook’s Readme.md file for parameter description.
For new webhooks, several common variables are included by default (URL:<empty>,
HTTPProxy:<empty>, To:{ALERT.SENDTO}, Subject:{ALERT.SUBJECT},
Message:{ALERT.MESSAGE}), feel free to keep or remove them.
Webhook parameters support user macros, all macros that are supported in problem notifications
and, additionally, {ALERT.SENDTO}, {ALERT.SUBJECT}, and {ALERT.MESSAGE} macros.
If you specify an HTTP proxy, the field supports the same functionality as in the item
configuration HTTP proxy field. The proxy string may be prefixed with [scheme]:// to specify
which kind of proxy is used (e.g., https, socks4, socks5; see documentation).
Script Enter JavaScript code in the block that appears when clicking in the parameter field (or on the
view/edit button next to it). This code will perform the webhook operation.
The script is function code that accepts parameter-value pairs. The values should be converted into a JSON object using the JSON.parse() method, for example: var params = JSON.parse(value);.
The code has access to all parameters, it may perform HTTP GET, POST, PUT and DELETE
requests and has control over HTTP headers and request body.
The script must contain a return operator, otherwise it will not be valid. It may return OK status
along with an optional list of tags and tag values (see Process tags option) or an error string.
Note that the script is executed only after an alert is created. If the script is configured to return
and process tags, these tags will not get resolved in {EVENT.TAGS} and
{EVENT.RECOVERY.TAGS} macros in the initial problem message and recovery messages because
the script has not had the time to run yet.
See also: Webhook development guidelines, Webhook script examples, Additional JavaScript
objects.
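For orientation, a minimal Script body might look like the sketch below. It only illustrates the expected structure (parse the parameters, perform an HTTP call, return a result); the parameter names URL, Subject, Message and HTTPProxy are the default ones mentioned above, while the payload format and status check are assumptions that must be adapted to the target service.
try {
    // Parameters arrive in 'value' as a JSON string; parse them into an object.
    var params = JSON.parse(value),
        req = new HttpRequest(),
        resp;

    if (params.HTTPProxy) {
        req.setProxy(params.HTTPProxy);
    }
    req.addHeader('Content-Type: application/json');

    // Post the alert subject and message to the receiving service.
    // The payload shape ({"text": ...}) is an assumption; adapt it to the target API.
    resp = req.post(params.URL, JSON.stringify({text: params.Subject + '\n' + params.Message}));

    if (req.getStatus() !== 200) {
        throw 'Response code: ' + req.getStatus();
    }

    // Returning 'OK' (or a JSON string with a 'tags' object) tells Zabbix the alert was delivered.
    return 'OK';
}
catch (error) {
    Zabbix.log(3, '[ Example webhook ] Sending failed: ' + error);
    throw 'Sending failed: ' + error;
}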
See common media type parameters for details on how to configure default messages and alert processing options.
Warning:
Even if a webhook doesn’t use default messages, message templates for operation types used by this webhook must still
be defined.
Media type testing
To test a configured webhook media type:
1. Locate the relevant webhook in the list of media types.
2. Click on Test in the last column of the list (a testing window will open).
3. Edit the webhook parameter values, if needed.
4. Click on Test.
By default, webhook tests are performed with the parameters entered during configuration. However, it is possible to change attribute values for testing. Replacing or deleting values in the testing window affects the test procedure only; the actual webhook attribute values will remain unchanged.
To view media type test log entries without leaving the test window, click on Open log (a new pop-up window will open).
• ”Media type test failed.” message is displayed, followed by additional failure details.
User media
Once the media type is configured, go to the Users → Users section and assign the webhook media to an existing user or create a
new user to represent the webhook. Steps for setting up user media for an existing user, being common for all media types, are
described on the Media types page.
If a webhook uses tags to store the ticket/message ID, avoid assigning the same webhook as a media to different users, as doing so may cause webhook errors (this applies to the majority of webhooks that utilize the Include event menu entry option). In this case, the best practice is to create a dedicated user to represent the webhook:
1. After configuring the webhook media type, go to the Users → Users section and create a dedicated Zabbix user to represent
the webhook - for example, with a username Slack for the Slack webhook. All settings, except media, can be left at their
defaults as this user will not be logging into Zabbix.
2. In the user profile, go to a tab Media and add a webhook with the required contact information. If the webhook does not use
a Send to field, enter any combination of supported characters to bypass validation requirements.
3. Grant this user at least read permissions to all hosts for which it should send the alerts.
When configuring alert action, add this user in the Send to users field in Operation details - this will tell Zabbix to use the webhook
for notifications from this action.
Actions determine which notifications should be sent via the webhook. Steps for configuring actions involving webhooks are the
same as for all other media types with these exceptions:
• If a webhook uses tags to store the ticket/message ID and to follow up with update/resolve operations, this webhook should not be used in several alert actions for a single problem event. If {EVENT.TAGS.<name>} already exists, and is updated in the
webhook, then its resulting value is not defined. For such a case, a new tag name should be used in the webhook to store
updated values. This applies to Jira, Jira Service Desk, Mattermost, Opsgenie, OTRS, Redmine, ServiceNow, Slack, Zammad,
and Zendesk webhooks provided by Zabbix and to the majority of webhooks that utilize Include event menu entry option.
Using the webhook in several operations is allowed if those operations or escalation steps belong to the same action. It is
also ok to use this webhook in different actions if the actions will not be applied to the same problem event due to different
filter conditions.
• When using a webhook in actions for internal events: in the action operation configuration, check the Custom message
checkbox and define the custom message, otherwise, a notification will not be sent.
Overview
Though Zabbix offers a large number of webhook integrations available out-of-the-box, you may want to create your own webhooks
instead. This section provides examples of custom webhook scripts (used in the Script parameter). See webhook section for
description of other webhook parameters.
This script will create a JIRA issue and return some info on the created issue.
try {
Zabbix.log(4, '[ Jira webhook ] Started with params: ' + value);
var result = {
'tags': {
'endpoint': 'jira'
}
},
params = JSON.parse(value),
req = new HttpRequest(),
fields = {},
resp;
if (params.HTTPProxy) {
req.setProxy(params.HTTPProxy);
}
req.addHeader('Content-Type: application/json');
req.addHeader('Authorization: Basic ' + params.authentication);
fields.summary = params.summary;
fields.description = params.description;
fields.project = {key: params.project_key};
fields.issuetype = {id: params.issue_id};
resp = req.post('https://jira.example.com/rest/api/2/issue/',
JSON.stringify({"fields": fields})
);
if (req.getStatus() != 201) {
throw 'Response code: ' + req.getStatus();
}
resp = JSON.parse(resp);
result.tags.issue_id = resp.id;
result.tags.issue_key = resp.key;
return JSON.stringify(result);
}
catch (error) {
Zabbix.log(4, '[ Jira webhook ] Issue creation failed json : ' + JSON.stringify({"fields": fields}));
Zabbix.log(3, '[ Jira webhook ] issue creation failed : ' + error);
    throw 'Issue creation failed: ' + error;
}

This script will post a notification payload to a Slack incoming webhook URL (passed in the hook_url parameter).
try {
var params = JSON.parse(value),
req = new HttpRequest(),
response;
if (params.HTTPProxy) {
req.setProxy(params.HTTPProxy);
}
req.addHeader('Content-Type: application/x-www-form-urlencoded');
response = req.post(params.hook_url, 'payload=' + encodeURIComponent(value));
Zabbix.log(4, '[ Slack webhook ] Responded with code: ' + req.getStatus() + '. Response: ' + response)
try {
response = JSON.parse(response);
}
catch (error) {
if (req.getStatus() < 200 || req.getStatus() >= 300) {
throw 'Request failed with status code ' + req.getStatus();
}
else {
throw 'Request success, but response parsing failed.';
}
}
return 'OK';
}
catch (error) {
Zabbix.log(3, '[ Slack webhook ] Sending failed. Error: ' + error);
    throw 'Sending failed: ' + error;
}
2 Actions
Overview
If you want some operations to take place as a result of events (for example, notifications being sent), you need to configure actions.
• Trigger actions - for events when trigger status changes from OK to PROBLEM and back
• Service actions - for events when service status changes from OK to PROBLEM and back
• Discovery actions - for events when network discovery takes place
• Autoregistration actions - for events when new active agents auto-register (or host metadata changes for registered ones)
• Internal actions - for events when items become unsupported or triggers go into an unknown state
• User access to service actions depends on access rights to services granted by the user’s role
• Service actions support a different set of conditions
Configuring an action
• Go to Alerts → Actions and select the required action type from the submenu (you can switch to another type later, using
the title dropdown)
• Click on Create action
• Name the action
• Choose conditions upon which operations are carried out
• Choose the operations to carry out
All mandatory input fields are marked with a red asterisk.
Parameter Description
1 Conditions
Overview
It is possible to define that an action is executed only if the event matches a defined set of conditions. Conditions are set when
configuring the action.
Trigger actions
Condition type Supported operators Description
Service actions
Condition type Supported operators Description
Service name contains Specify a string in the service name or a string to exclude.
does not contain contains - event is generated by a service, containing this string in
the name.
does not contain - this string cannot be found in the service name.
Service tag name equals Specify an event tag or an event tag to exclude. Service event tags
does not equal can be defined in the service configuration section Tags.
contains equals - event has this tag.
does not contain does not equal - event does not have this tag.
contains - event has a tag containing this string.
does not contain - event does not have a tag containing this string.
Service tag value equals Specify an event tag and value combination or a tag and value
does not equal combination to exclude. Service event tags can be defined in the
contains service configuration section Tags.
does not contain equals - event has this tag and value.
does not equal - event does not have this tag and value.
contains - event has a tag and value containing these strings.
does not contain - event does not have a tag and value containing
these strings.
Attention:
Make sure to define message templates for Service actions in the Alerts → Media types menu. Otherwise, the notifications
will not be sent.
Discovery actions
Host IP equals Specify an IP address range or a range to exclude for a discovered host.
does not equal equals - host IP is in the range.
does not equal - host IP is not in the range.
It may have the following formats:
Single IP: 192.168.1.33
Range of IP addresses: 192.168.1-10.1-254
IP mask: 192.168.4.0/24
List: 192.168.1.1-254, 192.168.2.1-100, 192.168.2.200,
192.168.4.0/24
Spaces in the list format are supported.
Service type equals Specify a service type of a discovered service or a service type to
does not equal exclude.
equals - matches the discovered service.
does not equal - does not match the discovered service.
Available service types: SSH, LDAP, SMTP, FTP, HTTP, HTTPS, POP,
NNTP, IMAP, TCP, Zabbix agent, SNMPv1 agent, SNMPv2 agent,
SNMPv3 agent, ICMP ping, telnet.
Service port equals Specify a TCP port range of a discovered service or a range to exclude.
does not equal equals - service port is in the range.
does not equal - service port is not in the range.
Discovery rule equals Specify a discovery rule or a discovery rule to exclude.
does not equal equals - using this discovery rule.
does not equal - using any other discovery rule, except this one.
Discovery check equals Specify a discovery check or a discovery check to exclude.
does not equal equals - using this discovery check.
does not equal - using any other discovery check, except this one.
Discovery object equals Specify the discovered object.
equals - equal to discovered object (a device or a service).
Discovery status equals Up - matches ’Host Up’ and ’Service Up’ events.
Down - matches ’Host Down’ and ’Service Down’ events.
Discovered - matches ’Host Discovered’ and ’Service Discovered’
events.
Lost - matches ’Host Lost’ and ’Service Lost’ events.
Uptime/Downtime is greater than or Uptime for ’Host Up’ and ’Service Up’ events. Downtime for ’Host
equals Down’ and ’Service Down’ events.
is less than or equals is greater than or equals - is more or equal to. Parameter is given in
seconds.
is less than or equals - is less or equal to. Parameter is given in
seconds.
Received value equals Specify the value received from an agent (Zabbix, SNMP) check in a
does not equal discovery rule. String comparison. If several Zabbix agent or SNMP
is greater than or checks are configured for a rule, received values for each of them are
equals checked (each check generates a new event which is matched against
is less than or equals all conditions).
contains equals - equal to the value.
does not contain does not equal - not equal to the value.
is greater than or equals - more or equal to the value.
is less than or equals - less or equal to the value.
contains - contains the substring. Parameter is given as a string.
does not contain - does not contain the substring. Parameter is given
as a string.
Proxy equals Specify a proxy or a proxy to exclude.
does not equal equals - using this proxy.
does not equal - using any other proxy except this one.
Note:
Service checks in a discovery rule, which result in discovery events, do not take place simultaneously. Therefore, if multiple
values are configured for Service type, Service port or Received value conditions in the action, they will be
compared to one discovery event at a time, but not to several events simultaneously. As a result, actions with multiple
values for the same check types may not be executed correctly.
Autoregistration actions
The following conditions can be used in actions based on active agent autoregistration:
The following conditions can be set for actions based on internal events:
Condition type Supported operators Description
Event type equals Item in ”not supported” state - matches events where an item
goes from a ’normal’ to ’not supported’ state.
Low-level discovery rule in ”not supported” state - matches
events where a low-level discovery rule goes from a ’normal’ to ’not
supported’ state.
Trigger in ”unknown” state - matches events where a trigger goes
from a ’normal’ to ’unknown’ state.
Host group equals Specify host groups or host groups to exclude.
does not equal equals - event belongs to this host group.
does not equal - event does not belong to this host group.
Tag name equals Specify event tag or event tag to exclude.
does not equal equals - event has this tag.
contains does not equal - event does not have this tag.
does not contain contains - event has a tag containing this string.
does not contain - event does not have a tag containing this string.
Tag value equals Specify event tag and value combination or tag and value combination
does not equal to exclude.
contains equals - event has this tag and value.
does not contain does not equal - event does not have this tag and value.
contains - event has a tag and value containing these strings.
does not contain - event does not have a tag and value containing
these strings.
Template equals Specify templates or templates to exclude.
does not equal equals - event belongs to an item/trigger/low-level discovery rule
inherited from this template.
does not equal - event does not belong to an item/trigger/low-level
discovery rule inherited from this template.
Host equals Specify hosts or hosts to exclude.
does not equal equals - event belongs to this host.
does not equal - event does not belong to this host.
Type of calculation
Note that using ”And” calculation is disallowed between several triggers when they are selected as a Trigger= condition. Actions
can only be executed based on the event of one trigger.
is evaluated as
(Host group equals Oracle servers or Host group equals MySQL servers) and (Event name contains ’Database is down’ or Event
name contains ’Database is unavailable’)
• Custom expression - a user-defined calculation formula for evaluating action conditions. It must include all conditions
(represented as uppercase letters A, B, C, ...) and may include spaces, tabs, brackets ( ), and (case sensitive), or (case
sensitive), not (case sensitive).
While the previous example with And/Or would be represented as (A or B) and (C or D), in a custom expression you may as well
have multiple other ways of calculation:
(A and B) and (C or D)
(A and B) or (C and D)
((A or B) and C) or D
(not (A or B) and C) or not D
etc.
Actions disabled due to deleted objects
If a certain object (host, template, trigger, etc.) used in an action condition/operation is deleted, the condition/operation is removed
and the action is disabled to avoid incorrect execution of the action. The action can be re-enabled by the user.
• host groups (”host group” condition, ”remote command” operation on a specific host group);
• hosts (”host” condition, ”remote command” operation on a specific host);
• templates (”template” condition, ”link template” and ”unlink template” operations);
• triggers (”trigger” condition);
• discovery rules (when using ”discovery rule” and ”discovery check” conditions).
Note:
If a remote command has many target hosts and we delete one of them, only this host will be removed from the target list; the operation itself will remain. But if it is the only host, the operation will be removed, too. The same goes for ”link template” and ”unlink template” operations.
Actions are not disabled when deleting a user or user group used in a ”send message” operation.
2 Operations
Overview
• Send a message
• Execute a remote command
Attention:
Zabbix server does not create alerts if access to the host is explicitly ”denied” for the user defined as action operation
recipient or if the user has no rights defined to the host at all.
• Add host
• Remove host
• Enable host
• Disable host
• Add to host group
• Remove from host group
• Add host tags
• Remove host tags
• Link template
• Unlink template
• Set host inventory mode
Configuring an operation
General operation attributes:
Parameter Description
Default operation step duration Duration of one operation step by default (60 seconds to 1 week).
For example, an hour-long step duration means that if an operation is carried out, an hour will pass before the next step.
Time suffixes are supported, e.g. 60s, 1m, 2h, 1d.
User macros are supported.
Operations Action operations (if any) are displayed, with these details:
Steps - escalation step(s) to which the operation is assigned.
Details - type of operation and its recipient/target.
The operation list also displays the media type (email, SMS or script) used as well as the name and surname (in parentheses after the username) of a notification recipient.
Start in - how long after an event the operation is performed.
Duration (sec) - step duration is displayed. Default is displayed if the step uses default duration, and a time is displayed if custom duration is used.
Action - links for editing and removing an operation are displayed.
Recovery operations Action operations (if any) are displayed, with these details:
Details - type of operation and its recipient/target.
The operation list also displays the media type (email, SMS or script) used as well as the name and surname (in parentheses after the username) of a notification recipient.
Action - links for editing and removing an operation are displayed.
Update operations Action operations (if any) are displayed, with these details:
Details - type of operation and its recipient/target.
The operation list also displays the media type (email, SMS or script) used as well as the name and surname (in parentheses after the username) of a notification recipient.
Action - links for editing and removing an operation are displayed.
Pause operations for symptom problems Mark this checkbox to pause operations (after the first operation) for symptom problems.
Note that this setting affects only problem escalations; recovery and update operations will not be affected.
This option is available for Trigger actions only.
Pause operations for suppressed problems Mark this checkbox to delay the start of operations for the duration of a maintenance period. When operations are started, after the maintenance, all operations are performed including those for the events during the maintenance.
Note that this setting affects only problem escalations; recovery and update operations will not be affected.
If you unmark this checkbox, operations will be executed without delay even during a maintenance period.
This option is not available for Service actions.
Notify about canceled escalations Unmark this checkbox to disable notifications about canceled escalations (when host, item, trigger or action is disabled).
To configure details of a new operation, click on Add in the Operations block. To edit an existing operation, click on Edit next to the operation. A pop-up window will open where you can edit the operation step details.
Operation details
Parameter Description
Operation type: send message
Send to user groups Select user groups to send the message to. The user group must have at least ”read” permissions to the host in order to be notified.
Send to users Select users to send the message to. The user must have at least ”read” permissions to the host in order to be notified.
Send only to Send message to all defined media types or a selected one only.
Custom message If selected, the custom message can be configured. For notifications about internal events via webhooks, custom message is mandatory.
Subject Subject of the custom message. The subject may contain macros. It is limited to 255 characters.
Message The custom message. The message may contain macros. It is limited to a certain amount of characters depending on the type of database (see Sending message for more information).
Operation type: remote command
Target list Select targets to execute the command on:
Current host - command is executed on the host of the trigger that caused the problem event. This option will not work if there are multiple hosts in the trigger.
Host - select host(s) to execute the command on.
Host group - select host group(s) to execute the command on. Specifying a parent host group implicitly selects all nested host groups. Thus the remote command will also be executed on hosts from nested groups.
A command on a host is executed only once, even if the host matches more than once (e.g. from several host groups; individually and from a host group).
The target list is meaningless if a custom script is executed on Zabbix server. Selecting more targets in this case only results in the script being executed on the server more times.
Note that for global scripts, the target selection also depends on the Host group setting in global script configuration.
Target list option is not available for Service actions because in this case remote commands are always executed on Zabbix server.
Conditions Condition for performing the operation:
Event is not acknowledged - only when the event is unacknowledged.
Event is acknowledged - only when the event is acknowledged.
Conditions option is only available for Trigger actions.
When done, click Add to add the operation to the list of Operations.
1 Sending message
Overview
Sending a message is one of the best ways of notifying people about a problem. That is why it is one of the primary actions offered
by Zabbix.
Configuration
To be able to send and receive notifications from Zabbix you have to:
If the operation takes place outside of the When active time period defined for the selected media in the user configuration, the
message will not be sent.
The default trigger severity (’Not classified’) must be checked in user media configuration if you want to receive notifications for
non-trigger events such as discovery, active agent autoregistration or internal events.
• configure an action operation that sends a message to one of the defined media
Attention:
Zabbix sends notifications only to those users that have at least ’read’ permissions to the host that generated the event.
At least one host of a trigger expression must be accessible.
You can configure custom scenarios for sending messages using escalations.
To successfully receive and read emails from Zabbix, email servers/clients must support standard ’SMTP/MIME email’ format since
Zabbix sends UTF-8 data (If the subject contains ASCII characters only, it is not UTF-8 encoded.). The subject and the body of the
message are base64-encoded to follow ’SMTP/MIME email’ format standard.
Message limit after all macros expansion is the same as message limit for Remote commands.
Tracking messages
In the Actions column you can see summarized information about actions taken. There, green numbers represent sent messages and red ones failed messages. In progress indicates that an action has been initiated. Failed informs that no action has executed successfully.
If you click on the event time to view event details, you will be able to see details of messages sent (or not sent) due to the event
in the Actions block.
In Reports → Action log you will see details of all actions taken for those events that have an action configured.
2 Remote commands
Overview
With remote commands you can define that a certain pre-defined command is automatically executed on the monitored host upon
some condition.
Thus remote commands are a powerful mechanism for smart pro-active monitoring.
In the most obvious uses of the feature you can try to:
• Automatically restart some application (web server, middleware, CRM) if it does not respond
• Use IPMI ’reboot’ command to reboot some remote server if it does not answer requests
• Automatically free disk space (removing older files, cleaning /tmp) if running out of disk space
• Migrate a VM from one physical box to another depending on the CPU load
• Add new nodes to a cloud environment upon insufficient CPU (disk, memory, whatever) resources
Configuring an action for remote commands is similar to that for sending a message, the only difference being that Zabbix will
execute a command instead of sending a message.
Remote commands can be executed by Zabbix server, proxy or agent. Remote commands on Zabbix agent can be executed
directly by Zabbix server or through Zabbix proxy. Both on Zabbix agent and Zabbix proxy remote commands are disabled by
default. They can be enabled by:
Remote commands executed by Zabbix server are run as described in Command execution including exit code checking.
Remote command limit after resolving all macros depends on the type of database and character set (non-ASCII characters require
more than one byte to be stored):
Database Limit in characters Limit in bytes
Remote command execution output (return value) is limited to 16MB (including trailing whitespace that is truncated). IPMI remote
command limit is based on the installed IPMI library. Note that database limits apply to all remote commands.
Configuration
Those remote commands that are executed on Zabbix agent (custom scripts) must be first enabled in the agent configuration.
Make sure that the AllowKey=system.run[<command>,*] parameter is added for each allowed command in the agent configuration to allow a specific command with nowait mode. Restart the agent daemon if changing this parameter.
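For illustration only (the command and service name below are assumptions; adapt them to the commands your operations actually run), the agent configuration file could contain:
# Allow a specific remote command in nowait mode
AllowKey=system.run[systemctl restart httpd,nowait]
# Or, far less restrictively (not recommended for production):
# AllowKey=system.run[*]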
1. Define the appropriate conditions, for example, set that the action is activated upon any disaster problems with one of
Apache applications.
2. In the Operations tab, click on Add in the Operations, Recovery operations, or Update operations block.
3. Select one of the predefined scripts from the Operation dropdown list and set the Target list for the script.
Predefined scripts
Scripts that are available for action operations (webhook, script, SSH, Telnet, IPMI) are defined in global scripts.
For example:
Attention:
Note the use of sudo - Zabbix user does not have permissions to restart system services by default. See below for hints
on how to configure sudo.
Note:
Starting with Zabbix agent 7.0, remote commands can also be executed on an agent that is operating in active mode.
The Zabbix agent, whether active or passive, must be running on the remote host; it executes the commands in the background.
Remote commands on Zabbix agent are executed without timeout by the system.run[,nowait] key and are not checked for execution
results. On Zabbix server and Zabbix proxy, remote commands are executed with timeout as set in the TrapperTimeout parameter
of zabbix_server.conf or zabbix_proxy.conf file and are checked for execution results. For additional information, see Script timeout.
Access permissions
Make sure that the ’zabbix’ user has execute permissions for configured commands. One may be interested in using sudo to give
access to privileged commands. To configure access, execute as root:
visudo
Example lines that could be used in sudoers file:
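The exact entries depend on which commands your remote commands need; the lines below are only an illustrative sketch:
# Allow the 'zabbix' user to restart a specific service without a password
zabbix ALL=NOPASSWD: /usr/bin/systemctl restart httpd
# A much broader (and riskier) variant allows all commands without a password:
# zabbix ALL=NOPASSWD: ALL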
Note:
On some systems the sudoers file will prevent non-local users from executing commands. To change this, comment out the requiretty option in /etc/sudoers.
If the target system has multiple interfaces of the selected type (Zabbix agent or IPMI), remote commands will be executed on the
default interface.
It is possible to execute remote commands via SSH and Telnet using another interface than the Zabbix agent one. The available
interface to use is selected in the following order:
<command> [<value>]
where
Examples
Examples of global scripts that may be used as remote commands in action operations.
Example 1
In order to automatically restart Windows upon a problem detected by Zabbix, define the following script:
Example 2
Script parameter Value
Example 3
3 Additional operations
Overview
In this section you may find some details of additional operations for discovery/autoregistration events.
Adding host
Hosts are added during the discovery process, as soon as a host is discovered, rather than at the end of the discovery process.
Note:
As network discovery can take some time due to many unavailable hosts/services, having patience and using reasonable IP ranges is advisable.
When adding a host, its name is decided by the standard gethostbyname function. If the host can be resolved, the resolved name is used. If not, the IP address is used. Besides, if an IPv6 address must be used for a host name, then all ”:” (colons) are replaced by ”_” (underscores), since colons are not allowed in host names.
Attention:
If performing discovery by a proxy, currently hostname lookup still takes place on Zabbix server.
Attention:
If a host already exists in Zabbix configuration with the same name as a newly discovered one, Zabbix will add _N to the hostname, where N is an increasing number, starting with 2.
Overview
In message subjects and message text you can use macros for more efficient problem reporting.
In addition to a number of built-in macros, user macros and expression macros are also supported. A full list of macros supported
by Zabbix is available.
Examples
Example 1
Message subject:
Problem: {TRIGGER.NAME}
When you receive the message, the message subject will be replaced by something like:
Example 2
Message:
Message:
Message:
http://<server_ip_or_name>/zabbix/tr_events.php?triggerid={TRIGGER.ID}&eventid={EVENT.ID}
When you receive the message, it will contain a link to the Event details page, which provides information about the event, its
trigger, and a list of latest events generated by the same trigger.
Example 5
Message:
1. Item value on Myhost: 0.83 (Processor load (1 min average per core))
2. Item value on Myotherhost: 5.125 (Processor load (1 min average per core))
Example 6
Receiving details of both the problem event and recovery event in a recovery message:
Message:
Problem:
Recovery:
Event status: {EVENT.RECOVERY.STATUS}
Event time: {EVENT.RECOVERY.TIME}
Event date: {EVENT.RECOVERY.DATE}
Operational data: {EVENT.OPDATA}
When you receive the message, the macros will be replaced by something like:
Problem:
Recovery:
3 Recovery operations
Overview
Both messages and remote commands are supported in recovery operations. While several operations can be added, escalation
is not supported - all operations are assigned to a single step and therefore will be performed simultaneously.
Use cases
To configure details of a new recovery operation, click on Add in the Recovery operations block. To edit an existing operation, click on Edit next to the operation. A pop-up window will open where you can edit the operation step details.
Parameters for each operation type are described below. All mandatory input fields are marked with a red asterisk. When done,
click on Add to add operation to the list of Recovery operations.
Note:
Note that if the same recipient is defined in several operation types without a specified Custom message, duplicate notifications are not sent.
Parameter Description
Parameter Description
Parameter Description
4 Update operations
Overview
Update operations are available in actions with the following event sources:
• Triggers - when problems are updated by other users, i.e. commented upon, acknowledged, severity has been changed,
closed (manually);
• Services - when the severity of a service has changed but the service is still not recovered.
Both messages and remote commands are supported in update operations. While several operations can be added, escalation is
not supported - all operations are assigned to a single step and therefore will be performed simultaneously.
To configure details of a new update operation, click on Add in the Update operations block. To edit an existing operation, click on Edit next to the operation. A pop-up window will open where you can edit the operation step details.
5 Escalations
Overview
With escalations you can create custom scenarios for sending notifications or executing remote commands.
• Users can be informed about new problems immediately.
• Notifications can be repeated until the problem is resolved.
• Sending a notification can be delayed.
• Notifications can be escalated to another ”higher” user group.
• Remote commands can be executed immediately or when a problem is not resolved for a lengthy period.
Actions are escalated based on the escalation step. Each step has a duration in time.
You can define both the default duration and a custom duration of an individual step. The minimum duration of one escalation step
is 60 seconds.
You can start actions, such as sending notifications or executing commands, from any step. Step one is for immediate actions. If
you want to delay an action, you can assign it to a later step. For each step, several actions can be defined.
Escalations are defined when configuring an operation. Escalations are supported for problem operations only, not recovery.
Let’s consider what happens in different circumstances if an action contains several escalation steps.
Situation: The host in question goes into maintenance after the initial problem notification is sent.
Behavior: Depending on the Pause operations for suppressed problems setting in action configuration, all remaining escalation steps are executed either with a delay caused by the maintenance period or without delay. A maintenance period does not cancel operations.
Situation: The time period defined in the Time period action condition ends after the initial notification is sent.
Behavior: All remaining escalation steps are executed. The Time period condition cannot stop operations; it has effect with regard to when actions are started/not started, not operations.
Situation: A problem starts during maintenance and continues (is not resolved) after maintenance ends.
Behavior: Depending on the Pause operations for suppressed problems setting in action configuration, all escalation steps are executed either from the moment maintenance ends or immediately.
Situation: A problem starts during a no-data maintenance and continues (is not resolved) after maintenance ends.
Behavior: It must wait for the trigger to fire, before all escalation steps are executed.
Situation: Different escalations follow in close succession and overlap.
Behavior: The execution of each new escalation supersedes the previous escalation, but for at least one escalation step that is always executed on the previous escalation. This behavior is relevant in actions upon events that are created with EVERY problem evaluation of the trigger.
Situation: During an escalation in progress (like a message being sent):
- based on any type of event: the action is disabled;
- based on a trigger event: the trigger is disabled, or the host or item is disabled;
- based on an internal event about triggers: the trigger is disabled;
- based on an internal event about items/low-level discovery rules: the item is disabled, or the host is disabled.
Behavior: The message in progress is sent and then one more message on the escalation is sent. The follow-up message will have the cancellation text at the beginning of the message body (NOTE: Escalation canceled) naming the reason (for example, NOTE: Escalation canceled: action ’<Action name>’ disabled). This way the recipient is informed that the escalation is canceled and no more steps will be executed. This message is sent to all who received the notifications before. The reason of cancellation is also logged to the server log file (starting from Debug Level 3=Warning).
Escalation examples
Example 1
Sending a repeated notification once every 30 minutes (5 times in total) to a ”MySQL Administrators” group. To configure:
• In Operations tab, set the Default operation step duration to ”30m” (30 minutes).
• Set the escalation Steps to be from ”1” to ”5”.
• Select the ”MySQL Administrators” group as the recipients of the message.
Notifications will be sent at 0:00, 0:30, 1:00, 1:30, 2:00 hours after the problem starts (unless, of course, the problem is resolved
sooner).
If the problem is resolved and a recovery message is configured, it will be sent to those who received at least one problem message
within this escalation scenario.
Note:
If the trigger that generated an active escalation is disabled, Zabbix sends an informative message about it to all those
that have already received notifications.
Example 2
• In Operations tab, set the Default operation step duration to ”10h” (10 hours).
• Set the escalation Steps to be from ”2” to ”2”.
A notification will only be sent at Step 2 of the escalation scenario, or 10 hours after the problem starts.
You can customize the message text to something like ”The problem is more than 10 hours old”.
Example 3
In the first example above we configured periodical sending of messages to MySQL administrators. In this case, the administrators will get four messages before the problem is escalated to the Database manager. Note that the manager will get a message only if the problem has not been acknowledged yet, presumably because no one is working on it.
Details of Operation 2:
Note the use of {ESC.HISTORY} macro in the customized message. The macro will contain information about all previously executed
steps on this escalation, such as notifications sent and commands executed.
Example 4
A more complex scenario. After multiple messages to MySQL administrators and escalation to the manager, Zabbix will try to
restart the MySQL database. It will happen if the problem exists for 2:30 hours and it hasn’t been acknowledged.
If the problem still exists, after another 30 minutes Zabbix will send a message to all guest users.
If this does not help, after another hour Zabbix will reboot server with the MySQL database (second remote command) using IPMI
commands.
Example 5
An escalation with several operations assigned to one step and custom intervals used. The default operation step duration is 30
minutes.
• To MySQL administrators at 0:00, 0:30, 1:00, 1:30 after the problem starts.
• To Database manager at 2:00 and 2:10 (and not at 3:00; since steps 5 and 6 overlap with the next operation, the shorter custom step duration of 10 minutes in the next operation overrides the longer step duration of 1 hour attempted here).
• To Zabbix administrators at 2:00, 2:10, 2:20 after the problem starts (the custom step duration of 10 minutes working).
• To guest users at 4:00 hours after the problem start (the default step duration of 30 minutes returning between steps 8 and
11).
Overview
Notification on unsupported items is part of the concept of internal events in Zabbix, allowing users to be notified on these occasions. Internal events reflect a change of state:
This section presents a how-to for receiving notification when an item turns unsupported.
Configuration
Overall, the process of setting up the notification should feel familiar to those who have set up alerts in Zabbix before.
Step 1
Configure some media, such as email, SMS, or script to use for the notifications. Refer to the corresponding sections of the manual
to perform this task.
Attention:
For notifying on internal events the default severity (’Not classified’) is used, so leave it checked when configuring user
media if you want to receive notifications for internal events.
Step 2
Click on Create action at the top right corner of the page to open an action configuration form.
Step 3
In the Action tab enter a name for the action. Then click on Add in the Conditions block to add a new condition.
In the New condition pop-up window select ”Event type” as the condition type and then select ”Item in ’not supported’ state” as
the event type.
Don’t forget to click on Add to actually list the condition in the Conditions block.
Step 4
In the Operations tab, click on Add in the Operations block to add a new operation.
Select some recipients of the message (user groups/users) and the media type (or ”All”) to use for delivery. Check the Custom
message checkbox if you wish to enter the custom subject/content of the problem message.
If you wish to receive more than one notification, set the operation step duration (interval between messages sent) and add another
step.
Step 5
The Recovery operations block allows you to configure a recovery notification when an item goes back to the normal state. Click on
Add in the Recovery operations block to add a new recovery operation.
Select the operation type ”Notify all involved”. Select the Custom message checkbox if you wish to enter the custom subject/content of the message.
Click on Add in the Operation details pop-up window to actually list the operation in the Recovery operations block.
Step 6
When finished, click on the Add button at the bottom of the form.
And that’s it, you’re done! Now you can look forward to receiving your first notification from Zabbix if some item turns unsupported.
11 Macros
Overview
Zabbix supports a number of built-in macros which may be used in various situations. These macros are variables, identified by a
specific syntax:
{MACRO}
Macros resolve to a specific value depending on the context.
Effective use of macros allows you to save time and makes Zabbix configuration more transparent.
In one of typical uses, a macro may be used in a template. Thus a trigger on a template may be named ”Processor load is too high
on {HOST.NAME}”. When the template is applied to the host, such as Zabbix server, the name will resolve to ”Processor load is
too high on Zabbix server” when the trigger is displayed in the Monitoring section.
Macros may be used in item key parameters. A macro may be used for only a part of the parameter, for example
item.key[server_{HOST.HOST}_local]. Double-quoting the parameter is not necessary as Zabbix will take care of
any ambiguous special symbols, if present in the resolved macro.
Zabbix supports the following macros:
1 Macro functions
Macro functions offer the ability to customize macro values (for example, shorten or extract specific substrings), making them
easier to work with.
All functions listed here are supported with all types of macros:
• Built-in macros
• User macros
• Low-level discovery macros
• Expression macros
Macro functions can be used in all locations supporting the listed macros. This applies unless explicitly stated that only a macro is
expected (for example, when configuring host macros or low-level discovery rule filters).
The functions are listed without additional information. Click on the function to see the full details.
Function Description
fmtnum Number formatting to control the number of digits printed after the decimal point.
fmttime Time formatting.
iregsub Substring extraction by a regular expression match (case-insensitive).
regsub Substring extraction by a regular expression match (case-sensitive).
Function details
{macro.func(params)}
• macro - the macro to customize, for example {ITEM.VALUE} or {#LLDMACRO};
• func - the function to apply;
• params - a comma-delimited list of function parameters, which must be quoted if they:
– start with a space or double quotes;
– contain a closing parenthesis ”)” or a comma.
fmtnum(digits)
Number formatting to control the number of digits printed after the decimal point.
Parameters:
• digits - the number of digits after decimal point. Valid range: 0-20. No trailing zeros will be produced.
Examples:
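As an illustration (values assumed), {{ITEM.VALUE}.fmtnum(2)} applied to a received value of 0.123456 would resolve to 0.12.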
fmttime(format,<time_shift>)
Time formatting. Note that this function can be used with macros that resolve to a value in one of the following time formats:
• hh:mm:ss
• yyyy-mm-ddThh:mm:ss[tz] (ISO8601 standard)
• unix timestamp
Parameters:
• format - mandatory format string, compatible with strftime function formatting;
• time_shift (optional) - the time shift applied to the time before formatting; should start with -<N><time_unit> or +<N><time_unit>, where:
– N - the number of time units to add or subtract;
– time_unit - h (hour), d (day), w (week), M (month) or y (year).
Comments:
• The time_shift parameter supports multistep time operations and may include /<time_unit> for shifting to the beginning of the time unit (/d - midnight, /w - 1st day of the week (Monday), /M - 1st day of the month, etc.). Examples: -1w - exactly 7 days back; -1w/w - Monday of the previous week; -1w/w+1d - Tuesday of the previous week.
• Time operations are calculated from left to right without priorities. For example, -1M/d+1h/w will be parsed as
((-1M/d)+1h)/w.
Examples:
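As an illustration (values assumed), {{ITEM.VALUE}.fmttime(%B)} applied to a received value of 2024-01-15T09:00:00 would resolve to January, and {{ITEM.VALUE}.fmttime(%B,-1M)} would resolve to December (the previous month).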
iregsub(pattern,output)
Parameters:
• pattern - the regular expression to match;
• output - the output options; \1…\9 placeholders are supported for referencing capture groups.
Comments:
• If the function pattern is an incorrect regular expression, then the macro evaluates to ’UNKNOWN’ (except for low-level
discovery macros, in which case the function will be ignored, and the macro will remain unresolved).
regsub(pattern,output)
Substring extraction by a regular expression match (case-sensitive).
Parameters:
• pattern - the regular expression to match;
• output - the output template; \1 - \9 backreferences to the captured groups may be used (see the examples below).
Comments:
• If the function pattern is an incorrect regular expression, then the macro evaluates to ’UNKNOWN’ (except for low-level
discovery macros, in which case the function will be ignored, and the macro will remain unresolved).
Examples:
Macro function | Received value | Output
{$MACRO:"{{#IFALIAS}.regsub(\"(.*)_([0-9]+)\", \1)}"} | customername_1 | {$MACRO:"customername"}
{$MACRO:"{{#IFALIAS}.regsub(\"(.*)_([0-9]+)\", \2)}"} | customername_1 | {$MACRO:"1"}
{$MACRO:"{{#IFALIAS}.regsub(\"(.*)_([0-9]+\", \1)}"} | customername_1 | {$MACRO:"{{#IFALIAS}.regsub(\"(.*)_([0-9]+\", \1)}"} (invalid regular expression)
"{$MACRO:\"{{#IFALIAS}.regsub(\\"(.*)_([0-9]+)\\", \1)}\"}" | customername_1 | "{$MACRO:\"customername\"}"
"{$MACRO:\"{{#IFALIAS}.regsub(\\"(.*)_([0-9]+)\\", \2)}\"}" | customername_1 | "{$MACRO:\"1\"}"
"{$MACRO:\"{{#IFALIAS}.regsub(\\"(.*)_([0-9]+\\", \1)}\"}" | customername_1 | "{$MACRO:\"{{#IFALIAS}.regsub(\\"(.*)_([0-9]+\\", \1)}\"}" (invalid regular expression)
2 User macros
Overview
User macros are supported in Zabbix for greater flexibility, in addition to the macros supported out-of-the-box.
User macros can be defined on global, template and host level. These macros have a special syntax:
{$MACRO}
Zabbix resolves macros according to the following precedence:
1. host level macros (checked first);
2. macros defined for first-level templates of the host (i.e., templates directly linked to the host), sorted by template ID;
3. macros defined for second-level templates of the host, sorted by template ID, and so on up the template chain;
4. global macros (checked last).
In other words, if a macro does not exist for a host, Zabbix will try to find it in the host templates of increasing depth. If still not found, a global macro will be used, if it exists.
Warning:
If a macro with the same name exists on multiple linked templates of the same level, the macro from the template with
the lowest ID will be used. Thus having macros with the same name in multiple templates is a configuration risk.
Attention:
Macros (including user macros) are left unresolved in the Configuration section (for example, in the trigger list) by design
to make complex configuration more transparent.
User macros can be used in:
• item name
• item key parameter
• item update intervals and flexible intervals
• trigger name and description
• trigger expression parameters and constants (see examples)
• many other locations - see the full list
Common use cases of global and host macros:
• use a global macro in several locations; then change the macro value and apply configuration changes to all locations with one click
• take advantage of templates with host-specific attributes: passwords, port numbers, file names, regular expressions, etc.
Note:
It is advisable to use host macros instead of global macros because adding, updating or deleting global macros forces
incremental configuration update for all hosts. For more information, see Passive and active agent checks.
Configuration
To define user macros, go to the corresponding location in the frontend:
• for global macros, go to Administration → Macros;
• for template or host level macros, open the template or host configuration (Data collection → Templates or Data collection → Hosts) and go to the Macros tab.
Note:
If a user macro is used in items or triggers in a template, it is suggested to add that macro to the template even if it is defined on a global level. That way, if the macro type is text, exporting the template to XML and importing it into another system will still allow it to work as expected. Values of secret macros are not exported.
Parameter Description
Macro Macro name. The name must be wrapped in curly brackets and start with a dollar sign.
Example: {$FRONTEND_URL}. The following characters are allowed in macro names: A-Z (uppercase only), 0-9, _, .
Value Macro value. Three value types are supported:
Text (default) - plain-text value
Secret text - the value is masked with asterisks
Vault secret - the value contains a path/query to a vault secret.
To change the value type, click on the button at the end of the value input field.
Attention:
In trigger expressions user macros will resolve if referencing a parameter or constant. They will NOT resolve if referencing
a host, item key, function, operator or another trigger expression. Secret macros cannot be used in trigger expressions.
Examples
Example 1
net.tcp.service[ssh,,{$SSH_PORT}]
This item can be assigned to multiple hosts, provided that the value of {$SSH_PORT} is defined on those hosts.
Example 2
last(/ca_001/system.cpu.load[,avg1])>{$MAX_CPULOAD}
Such a trigger would be created on the template, not edited in individual hosts.
Note:
If you want to use the amount of values as the function parameter (for example, max(/host/key,#3)), include the hash mark in the macro definition, like this: SOME_PERIOD => #3
Example 3
min(/ca_001/system.cpu.load[,avg1],{$CPULOAD_PERIOD})>{$MAX_CPULOAD}
Note that a macro can be used as a parameter of a trigger function; in this example, the function min().
Example 4
Synchronize the agent unavailability condition with the item update interval:
nodata(/ca_001/agent.ping,{$INTERVAL})=1
Example 5
Example 6
• on a host prototype define user macro {$SNMPVALUE} with {#SNMPVALUE} low-level discovery macro as a value:
3 User macros with context
Overview
An optional context can be used in user macros, allowing you to override the default value with a context-specific one.
The context is appended to the macro name; the syntax depends on whether the context is a static text value:
{$MACRO:"static text"}
or a regular expression:
{$MACRO:regex:"regular expression"}
Note that a macro with regular expression context can only be defined in user macro configuration. If the regex: prefix is used
elsewhere as user macro context, like in a trigger expression, it will be treated as static context.
Use cases
User macros with context can be defined to accomplish more flexible thresholds in trigger expressions (based on the values
retrieved by low-level discovery). For example, you may define the following macros:
• {$LOW_SPACE_LIMIT} = 10
• {$LOW_SPACE_LIMIT:/home} = 20
• {$LOW_SPACE_LIMIT:regex:"^\/[a-z]+$"} = 30
Then a low-level discovery macro may be used as macro context in a trigger prototype for mounted file system discovery:
last(/host/vfs.fs.size[{#FSNAME},pfree])<{$LOW_SPACE_LIMIT:"{#FSNAME}"}
After the discovery, different low-space thresholds will apply in triggers depending on the discovered mount points or file system types. Problem events will be generated if:
• /home has less than 20% of free disk space;
• a mount point matching the regular expression (for example, /etc, /tmp or /var) has less than 30% of free disk space;
• any other mounted file system has less than 10% of free disk space.
Important notes
• If more than one user macro with context exists, Zabbix will try to match the simple context macros first and then context
macros with regular expressions in an undefined order.
Warning:
Do not create different context macros matching the same string to avoid undefined behavior.
• If a macro with its context is not found on host, linked templates or globally, then the macro without context is searched for.
• Only low-level discovery macros are supported in the context. Any other macros are ignored and treated as plain text.
Technically, macro context is specified using rules similar to item key parameters, except macro context is not parsed as several
parameters if there is a , character:
• Macro context must be quoted with " if the context contains a } character or starts with a " character. Quotes inside quoted
context must be escaped with the \ character.
• The \ character itself is not escaped, which means it's impossible to have a quoted context ending with the \ character - the macro {$MACRO:"a:\b\c\"} is invalid.
• The leading spaces in context are ignored, the trailing spaces are not:
– For example {$MACRO:A} is the same as {$MACRO: A}, but not {$MACRO:A }.
• All spaces before leading quotes and after trailing quotes are ignored, but all spaces inside quotes are not:
– Macros {$MACRO:"A"}, {$MACRO: "A"}, {$MACRO:"A" } and {$MACRO: "A" } are the same, but macros {$MACRO:"A"} and {$MACRO:" A "} are not.
The following macros are all equivalent, because they have the same context: {$MACRO:A}, {$MACRO: A} and {$MACRO:"A"}. This is in contrast with item keys, where 'key[a]', 'key[ a]' and 'key["a"]' are the same semantically, but different for uniqueness purposes.
4 Secret user macros
Overview
Zabbix provides two options for protecting sensitive information in user macro values:
• Secret text
• Vault secret
Note that while the value of a secret macro is hidden, the value can be revealed through the use in items. For example, in an
external script an ’echo’ statement referencing a secret macro may be used to reveal the macro value to the frontend because
Zabbix server has access to the real macro value. See also locations where secret macro values are unmasked.
Secret text
To make macro value ’secret’, click on the button at the end of the value field and select the option Secret text.
Once the configuration is saved, it will no longer be possible to view the value.
To enter a new value, hover over the value field and press the Set new value button (appears on hover).
If you change the macro value type or press Set new value, the current value will be erased. To revert to the original value, use the backwards arrow at the right end of the Value field (only available before saving the new configuration). Reverting the value will not expose it.
Note:
URLs that contain a secret macro will not work as the macro in them will be resolved as ”******”.
Vault secret
With Vault secret macros, the actual macro value is stored in an external secret management software (vault).
To configure a Vault secret macro, click on the button at the end of the Value field and select the option Vault secret.
The macro value should point to a vault secret. The input format depends on the vault provider. For provider-specific configuration
examples, see:
• HashiCorp
• CyberArk
Vault secret values are retrieved by Zabbix server on every refresh of configuration data and then stored in the configuration cache.
To manually trigger refresh of secret values from a vault, use the ’secrets_reload’ command-line option.
Zabbix proxy receives values of vault secret macros from Zabbix server on each configuration sync and stores them in its own
configuration cache. The proxy never retrieves macro values from the vault directly. That means a Zabbix proxy cannot start data
collection after a restart until it receives the configuration data update from Zabbix server for the first time.
Encryption must be enabled between Zabbix server and proxy; otherwise a server warning message is logged.
Warning:
If a macro value cannot be retrieved successfully, the corresponding item using the value will turn unsupported.
Unmasked locations
This list provides locations of parameters where secret macro values are unmasked.
Context - Parameters
Items:
Item - Item key parameters
SNMP agent - SNMP community, Context name (SNMPv3), Security name (SNMPv3), Authentication passphrase (SNMPv3), Privacy passphrase (SNMPv3)
HTTP agent - URL, Query fields, Post, Headers, Username, Password, SSL key password
Script - Parameters, Script
Browser - Parameters, Script
Database monitor - SQL query
Telnet - Script, Username, Password
SSH - Script, Username, Password
Simple check - Username, Password
JMX - Username, Password
Item value preprocessing:
JavaScript preprocessing step - Script
Web scenarios:
Web scenario - Variable value, Header value, URL, Query field value, Post field value, Raw post
Web scenario authentication - User, Password, SSL key password
Connectors:
Connector - URL, Username, Password, Token, HTTP proxy, SSL certificate file, SSL key file
5 Low-level discovery macros
Overview
There is a type of macro used within the low-level discovery (LLD) function:
{#MACRO}
It is a macro that is used in an LLD rule and returns real values of the file system name, network interface, SNMP OID, etc.
These macros can be used for creating item, trigger and graph prototypes. Then, when discovering real file systems, network
interfaces etc., these macros are substituted with real values and are the basis for creating real items, triggers and graphs.
These macros are also used in creating host and host group prototypes in virtual machine discovery.
Some low-level discovery macros come "pre-packaged" with the LLD function in Zabbix - {#FSNAME}, {#FSTYPE}, {#IFNAME}, {#SNMPINDEX}, {#SNMPVALUE}. However, adhering to these names is not compulsory when creating a custom low-level discovery rule; in that case, you may use any other LLD macro name and refer to that name.
Supported locations
– HTTP agent request body field
– HTTP agent required status codes field
– HTTP agent headers field key and value
– HTTP agent HTTP authentication username field
– HTTP agent HTTP authentication password field
– HTTP agent HTTP proxy field
– HTTP agent HTTP SSL certificate file field
– HTTP agent HTTP SSL key file field
– HTTP agent HTTP SSL key password field
– tags
• for trigger prototypes in
– name
– operational data
– expression (only in constants and function parameters)
– URL
– description
– tags
• for graph prototypes in
– name
• for host prototypes in
– name
– visible name
– custom interface fields: IP, DNS, port, SNMP v1/v2 community, SNMP v3 context name, SNMP v3 security name, SNMP
v3 authentication passphrase, SNMP v3 privacy passphrase
– host group prototype name
– host tag value
– host macro value
– (see the full list)
In all those locations, except the low-level discovery rule filter, LLD macros can be used inside a static user macro context.
Macro functions are supported with low-level discovery macros (except in the low-level discovery rule filter), allowing you to extract a certain part of the macro value using a regular expression.
For example, you may want to extract the customer name and interface number from the following LLD macro for the purposes of
event tagging:
{#IFALIAS}=customername_1
To do so, the regsub macro function can be used with the macro in the event tag value field of a trigger prototype:
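For instance, a trigger prototype could define two event tags (the tag names customer and interface are illustrative, not prescribed by Zabbix) with the values:
customer: {{#IFALIAS}.regsub("(.*)_([0-9]+)", \1)}
interface: {{#IFALIAS}.regsub("(.*)_([0-9]+)", \2)}
With the received value above, these would resolve to customername and 1, respectively.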
Note that commas are not allowed in unquoted item key parameters, so the parameter containing a macro function has to be
quoted. The backslash (\) character should be used to escape double quotes inside the parameter. Example:
net.if.in["{{#IFALIAS}.regsub(\"(.*)_([0-9]+)\", \1)}",bytes]
For more information on macro function syntax, see: Macro functions
Macro functions are supported in low-level discovery macros since Zabbix 4.0.
Footnotes
1 In the fields marked with 1, a single macro has to fill the whole field. Multiple macros in a field, or macros mixed with text, are not supported.
6 Expression macros
Overview
Expression macros are useful for formula calculations. They are calculated by expanding all macros inside and evaluating the
resulting expression.
{?EXPRESSION}
The syntax in EXPRESSION is the same as in trigger expressions (see usage limitations below).
Usage
Expression macros are supported in:
• graph names
• map element labels
• map shape labels
• map link labels
Only a single function, from the following set: avg, last, max, min, is allowed as an expression macro, e.g.:
{?avg(/{HOST.HOST}/{ITEM.KEY},1h)}
Expressions such as {?last(/host/item1)/last(/host/item2)}, {?count(/host/item1,5m)} and {?last(/host/item1)*10}
are incorrect in these locations.
However, in:
{?trendavg(/host/item1,1M:now/M)/trendavg(/host/item1,1M:now/M-1y)*100}
See also:
12 Users and user groups
Overview
All users in Zabbix access the Zabbix application through the web-based frontend. Each user is assigned a unique login name and
a password.
All user passwords are encrypted and stored in the Zabbix database. Users cannot use their user ID and password to log directly into the UNIX server unless they have also been set up accordingly in UNIX. Communication between the web server and the user browser can be protected using SSL.
With a flexible user permission schema you can restrict and differentiate rights to:
1 Configuring a user
Overview
To configure a user:
• Go to Users → Users.
• Click on Create user (or on a user name to edit an existing user).
• Edit user attributes in the form.
General attributes
Parameter Description
User media
The Media tab contains a listing of all media defined for the user. Media are used for sending notifications.
See the Media types section for details on configuring user media.
Permissions
• User role (mandatory for any newly created user) that can only be changed by a Super admin user.
Attention:
Users cannot be created without a user role (except via the Zabbix User API). Previously created users which do not have a role may still be edited without assigning a role to them. However, once a role is assigned, it can only be changed, not removed. Note that users without a role can log into Zabbix only using LDAP or SAML authentication, provided their LDAP/SAML information matches the user group mappings configured in Zabbix.
• User type (User, Admin, Super admin) that is defined in the user role configuration.
• Host and template groups that the user has access to.
– User and Admin type users, by default, do not have access to any groups, templates, and hosts. To grant such access,
users must be included in user groups configured with permissions to the relevant entities.
• Access rights to sections and elements of Zabbix frontend, modules, and API methods.
– Elements with allowed access are displayed in green color, while those with denied access - in light gray color.
• Rights to perform specific actions.
– Actions that the user is allowed to perform are displayed in green color, while those that are denied - in light gray color.
2 Permissions
Overview
Permissions in Zabbix depend on the user type, customized user roles and access to hosts, which is specified based on the user
group.
User types
• User - has limited access rights to menu sections (see below) and no access to any resources by default. Any permissions
to host or template groups must be explicitly assigned;
• Admin - has incomplete access rights to menu sections (see below). The user has no access to any host groups by default.
Any permissions to host or template groups must be explicitly given;
• Super admin - has access to all menu sections. The user has a read-write access to all host and template groups. Permissions
cannot be revoked by denying access to specific groups.
Menu access
The following table illustrates access to Zabbix menu sections per user type:
Menu section (user types with access)
Dashboards - User, Admin, Super admin
Monitoring - User, Admin, Super admin
   Problems - User, Admin, Super admin
   Hosts - User, Admin, Super admin
   Latest data - User, Admin, Super admin
   Maps - User, Admin, Super admin
   Discovery - Admin, Super admin
Services - User, Admin, Super admin
   Services - User, Admin, Super admin
   SLA - Admin, Super admin
   SLA report - User, Admin, Super admin
Inventory - User, Admin, Super admin
   Overview - User, Admin, Super admin
   Hosts - User, Admin, Super admin
Reports - User, Admin, Super admin
   System information - Super admin
   Scheduled reports - Admin, Super admin
   Availability report - User, Admin, Super admin
   Top 100 triggers - User, Admin, Super admin
   Audit log - Super admin
   Action log - Super admin
   Notifications - Admin, Super admin
Data collection - Admin, Super admin
   Template groups - Admin, Super admin
   Host groups - Admin, Super admin
   Templates - Admin, Super admin
   Hosts - Admin, Super admin
   Maintenance - Admin, Super admin
   Event correlation - Super admin
   Discovery - Admin, Super admin
Alerts - Admin, Super admin
   Trigger actions - Admin, Super admin
   Service actions - Admin, Super admin
   Discovery actions - Admin, Super admin
   Autoregistration actions - Admin, Super admin
   Internal actions - Admin, Super admin
   Media types - Super admin
   Scripts - Super admin
Users - Super admin
   User groups - Super admin
   User roles - Super admin
   Users - Super admin
   API tokens - Super admin
   Authentication - Super admin
Administration - Super admin
   General - Super admin
   Audit log - Super admin
   Housekeeping - Super admin
   Proxy groups - Super admin
   Proxies - Super admin
   Macros - Super admin
   Queue - Super admin
User roles
User roles allow making custom adjustments to the permissions defined by the user type. While no permissions can be added (that
would exceed those of the user type), some permissions can be revoked.
Furthermore, a user role determines access not only to menu sections, but also to services, modules, API methods and various
actions in the frontend.
User roles are configured in the Users → User roles section by Super admin users.
User roles are assigned to users in the user configuration form, Permissions tab, by Super admin users.
Access to hosts
Access to any host and template data in Zabbix is granted to user groups on the host/template group level only.
That means that an individual user cannot be directly granted access to a host (or host group). A user can only be granted access to a host by being part of a user group that is granted access to the host group that contains the host.
Similarly, a user can only be granted access to a template by being part of a user group that is granted access to the template
group that contains the template.
3 User groups
Overview
User groups allow grouping users both for organizational purposes and for assigning permissions to data. Permissions to view and configure data of host groups and template groups are assigned to user groups, not individual users.
It may often make sense to separate what information is available to one group of users and what is available to another. This can be accomplished by grouping users and then assigning varied permissions to host and template groups.
Configuration
Parameter Description
The Template permissions tab allows specifying user group access to template group (and thereby template) data:
The Host permissions tab allows specifying user group access to host group (and thereby host) data:
Choose the template/host groups (be it a parent or a nested group) and assign permissions to them. Start typing the group name (a dropdown of matching groups will appear) or click on Select for a popup window listing all groups.
Then use the option buttons to assign permissions to the chosen groups. The possible permissions are:
Read-write - read-write access to the group;
Read - read-only access to the group;
Deny - access to the group is denied;
None - no permissions are set (default).
If the same template/host group is added in several rows with different permissions set, the strictest permission will be applied.
Note that a Super admin user can enforce nested groups to have the same level of permissions as the parent group; this can be
done in the host/template group configuration form.
Template permissions and Host permissions tabs support the same set of parameters.
Current permissions to groups are displayed in the Permissions block, and those can be modified or removed.
Note:
If a user group has Read-write permissions to a host and Deny or no permissions to a template linked to this host, the
users of such group will not be able to edit templated items on the host, and template name will be displayed as Inaccessible
template.
The Problem tag filter tab allows setting tag-based permissions for user groups to see problems filtered by tag name and value:
To select a host group to apply a tag filter for, click Select to get the complete list of existing host groups, or start typing the name of a host group to get a dropdown of matching groups. Only host groups will be displayed, because the problem tag filter cannot be applied to template groups.
Then it is possible to switch from All tags to Tag list in order to set particular tags and their values for filtering. Tag names without
values can be added, but values without names cannot. Only the first three tags (with values, if any) are displayed in the Permissions
block; if there are more, those can be seen by clicking or hovering over the icon.
Tag filter allows separating the access to host group from the possibility to see problems.
For example, if a database administrator needs to see only ”MySQL” database problems, it is required to create a user group for
database administrators first, then specify ”Service” tag name and ”MySQL” value.
If ”Service” tag name is specified and value field is left blank, the user group will see all problems with tag name ”Service” for the
selected host group. If All tags is selected, the user group will see all problems for the specified host group.
Make sure tag name and tag value are correctly specified, otherwise, the user group will not see any problems.
Let's review an example where a user is a member of several user groups. Filtering in this case will use the OR condition for tags.
Attention:
Adding a filter (for example, all tags in a certain host group ”Linux servers”) results in not being able to see the problems
of other host groups.
A user may belong to any number of user groups. These groups may have different access permissions to hosts or templates.
Therefore, it is important to know what entities an unprivileged user will be able to access as a result. For example, let us consider
how access to host X (in Hostgroup 1) will be affected in various situations for a user who is in user groups A and B.
• If Group A has only Read access to Hostgroup 1, but Group B Read-write access to Hostgroup 1, the user will get Read-write
access to ’X’.
Attention:
“Read-write” permissions have precedence over “Read” permissions.
• In the same scenario as above, if ’X’ is simultaneously also in Hostgroup 2 that is denied to Group A or B, access to ’X’ will
be unavailable, despite a Read-write access to Hostgroup 1.
• If Group A has no permissions defined and Group B has a Read-write access to Hostgroup 1, the user will get Read-write
access to ’X’.
• If Group A has Deny access to Hostgroup 1 and Group B has a Read-write access to Hostgroup 1, the user will get access to
’X’ denied.
Other details
• An Admin level user with Read-write access to a host will not be able to link/unlink templates if they have no access to the template group the templates belong to. With Read access to the template group they will be able to link/unlink templates to the host; however, they will not see any templates in the template list and will not be able to operate with templates in other places.
• An Admin level user with Read access to a host will not see the host in the configuration section host list; however, the host
triggers will be accessible in IT service configuration.
• Any non-Super Admin user (including ’guest’) can see network maps as long as the map is empty or has only images. When
hosts, host groups or triggers are added to the map, permissions are respected.
• Zabbix server will not send notifications to users defined as action operation recipients if access to the concerned host is
explicitly ”denied”.
13 Storage of secrets
Overview Zabbix can be configured to retrieve sensitive information from a secure vault. The following secret management
services are supported: HashiCorp Vault KV Secrets Engine - Version 2, CyberArk Vault CV12.
Zabbix provides read-only access to the secrets in a vault, assuming that secrets are managed by someone else.
For provider-specific configuration, see:
• HashiCorp configuration
• CyberArk configuration
Caching of secret values Vault secret macro values are retrieved by Zabbix server on every refresh of configuration data
and then stored in the configuration cache. Zabbix proxy receives values of vault secret macros from Zabbix server on each
configuration sync and stores them in its own configuration cache.
Attention:
Encryption must be enabled between Zabbix server and proxy; otherwise a server warning message is logged.
To manually trigger refresh of cached secret values from a vault, use the ’secrets_reload’ command-line option.
For Zabbix frontend database credentials caching is disabled by default, but can be enabled by setting the option $DB['VAULT_CACHE']
= true in zabbix.conf.php. The credentials will be stored in a local cache using the filesystem temporary file directory. The web
server must allow writing in a private temporary folder (for example, for Apache the configuration option PrivateTmp=True
must be set). To control how often the data cache is refreshed/invalidated, use the ZBX_DATA_CACHE_TTL constant.
TLS configuration To configure TLS for communication between Zabbix components and the vault, add a certificate signed by a
certificate authority (CA) to the system-wide default CA store. To use another location, specify the directory in the SSLCALocation
Zabbix server/proxy configuration parameter, place the certificate file inside that directory, then run the CLI command:
c_rehash .
1 CyberArk configuration
This section explains how to configure Zabbix to retrieve secrets from CyberArk Vault CV12.
The vault should be installed and configured as described in the official CyberArk documentation.
Database credentials
Access to a secret with database credentials is configured for each Zabbix component separately.
To obtain database credentials from the vault for Zabbix server or proxy, specify the following configuration parameters in the
configuration file:
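For illustration only, a minimal zabbix_server.conf fragment might look like this (the query mirrors the frontend example below; the file paths are assumed example values):
# Assumed example values; adjust the query and paths to your environment.
Vault=CyberArk
VaultURL=https://127.0.0.1:1858
VaultDBPath=AppID=foo&Query=Safe=bar;Object=buzz
VaultTLSCertFile=/etc/zabbix/cyberark-cert.pem
VaultTLSKeyFile=/etc/zabbix/cyberark-key.pem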
Attention:
Zabbix server also uses the Vault, VaultURL, VaultTLSCertFile and VaultTLSKeyFile, and VaultPrefix config-
uration parameters for vault authentication when processing vault secret macros.
Zabbix server and Zabbix proxy read the vault-related configuration parameters from zabbix_server.conf and zabbix_proxy.conf
files upon startup.
Example
curl \
--header "Content-Type: application/json" \
--cert cert.pem \
--key key.pem \
https://127.0.0.1:1858/AIMWebService/api/Accounts?AppID=zabbix_server&Query=Safe=passwordSafe;Object=zabbi
3. The vault response will contain the keys ”Content” and ”UserName”:
{
"Content": <password>,
"UserName": <username>,
"Address": <address>,
"Database": <Database>,
"PasswordChangeInProcess":<PasswordChangeInProcess>
}
4. As a result, Zabbix will use the following credentials for database authentication:
• Username: <username>
• Password: <password>
Frontend
To obtain database credentials from the vault for Zabbix frontend, specify the following parameters during frontend installation.
1. At the Configure DB Connection step, set the Store credentials in parameter to ”CyberArk Vault”.
2. Then, fill in the additional parameters:
Vault API endpoint (mandatory; default: https://localhost:1858) - Specify the URL for connecting to the vault in the format scheme://host:port.
Vault prefix (optional; default: /AIMWebService/api/Accounts?) - Provide a custom prefix for the vault path or query. If not specified, the default is used.
Vault secret query string (mandatory) - A query which specifies from where database credentials should be retrieved. Example: AppID=foo&Query=Safe=bar;Object=buzz
Vault certificates (optional) - After marking the checkbox, additional parameters will appear, allowing to configure client authentication. While this parameter is optional, it is highly recommended to enable it for communication with the CyberArk Vault.
SSL certificate file (optional; default: conf/certs/cyberark-cert.pem) - Path to the SSL certificate file. The file must be in PEM format. If the certificate file also contains the private key, leave the SSL key file parameter empty.
SSL key file (optional; default: conf/certs/cyberark-key.pem) - Name of the SSL private key file used for client authentication. The file must be in PEM format.
To use CyberArk Vault for storing Vault secret user macro values, make sure that:
Note:
Only Zabbix server requires access to Vault secret macro values from the vault. Other Zabbix components (proxy, frontend)
do not need such access.
See Vault secret macros for detailed information on macro value processing by Zabbix.
Query syntax
The colon symbol (”:”) is reserved for separating the query from the key.
If a query itself contains a forward slash or a colon, these symbols should be URL-encoded (”/” is encoded as ”%2F”, ”:” is encoded
as ”%3A”).
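For illustration (the object name buzz:prod is hypothetical), a query containing a colon would be encoded as follows when used as a macro value, with the final unencoded colon separating the query from the key:
AppID=foo&Query=Safe=bar;Object=buzz%3Aprod:Content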
Example
1. In Zabbix, add a user macro {$PASSWORD} of type Vault secret and with the value AppID=zabbix_server&Query=Safe=passwordSa
curl \
--header "Content-Type: application/json" \
--cert cert.pem \
--key key.pem \
https://127.0.0.1:1858/AIMWebService/api/Accounts?AppID=zabbix_server&Query=Safe=passwordSafe;Object=zabbi
{
"Content": <password>,
"UserName": <username>,
"Address": <address>,
"Database" :<Database>,
"PasswordChangeInProcess":<PasswordChangeInProcess>
}
4. As a result, Zabbix will resolve the macro {$PASSWORD} to the value <password>.
Updating existing configuration
1. Update the Zabbix server or proxy configuration file parameters as described in the Database credentials section.
2. Update the DB connection settings by reconfiguring Zabbix frontend and specifying the required parameters as described
in the Frontend section. To reconfigure Zabbix frontend, open the frontend setup URL in the browser:
http://<server_ip_or_name>/zabbix/setup.php
Alternatively, these parameters can be set in the frontend configuration file (zabbix.conf.php):
$DB['VAULT'] = 'CyberArk';
$DB['VAULT_URL'] = 'https://127.0.0.1:1858';
$DB['VAULT_DB_PATH'] = 'AppID=foo&Query=Safe=bar;Object=buzz';
$DB['VAULT_TOKEN'] = '';
$DB['VAULT_CERT_FILE'] = 'conf/certs/cyberark-cert.pem';
$DB['VAULT_KEY_FILE'] = 'conf/certs/cyberark-key.pem';
$DB['VAULT_PREFIX'] = '';
3. Configure user macros as described in the User macro values section, if necessary.
To update an existing configuration for retrieving secrets from a HashiCorp Vault, see HashiCorp configuration.
2 HashiCorp configuration
Overview
This section explains how to configure Zabbix for retrieving secrets from HashiCorp Vault KV Secrets Engine - Version 2.
The vault should be deployed and configured as described in the official HashiCorp documentation.
• Zabbix server/proxy
• Zabbix frontend
Server/proxy
To configure Zabbix server or proxy, specify the following configuration parameters in the configuration file:
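For illustration only, a minimal zabbix_server.conf fragment might look like this (the URL, token and path are placeholder values):
# Assumed example values; adjust the URL, token and secret path to your environment.
Vault=HashiCorp
VaultURL=https://127.0.0.1:8200
VaultToken=<mytoken>
VaultDBPath=secret/zabbix/database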
Attention:
Zabbix server also uses the Vault, VaultToken, VaultURL, and VaultPrefix configuration parameters for vault au-
thentication when processing vault secret macros.
Zabbix server and Zabbix proxy read the vault-related configuration parameters from zabbix_server.conf and zabbix_proxy.conf upon startup. Additionally, Zabbix server and Zabbix proxy will read the VAULT_TOKEN environment variable once during startup and will unset it so that it would not be available through forked scripts; it is an error if both VaultToken and VAULT_TOKEN contain a value.
Example
2. Run the following CLI commands to create the required secret in the vault:
#### Enable the "secret/" mount point if not already enabled; note that "kv-v2" must be used.
vault secrets enable -path=secret/ kv-v2
#### Put new secrets with keys username and password under mount point "secret/" and path "zabbix/database".
vault kv put -mount=secret zabbix/database username=zabbix password=<password>
#### Finally, test with curl; note that "data" needs to be manually added after the mount point and "/v1" before it.
curl --header "X-Vault-Token: <VaultToken>" https://127.0.0.1:8200/v1/secret/data/zabbix/database
3. As a result, Zabbix server will retrieve the following credentials for database authentication:
• Username: zabbix
• Password: <password>
Frontend
Zabbix frontend can be configured to retrieve database credentials from the vault either during frontend installation or by updating
the frontend configuration file (zabbix.conf.php).
Attention:
If vault credentials have been changed since the previous frontend installation, rerun the frontend installation or update
zabbix.conf.php. See also: Updating existing configuration.
During frontend installation the configuration parameters must be specified at the Configure DB Connection step:
Vault API endpoint (mandatory; default: https://localhost:8200) - Specify the URL for connecting to the vault in the format scheme://host:port.
Vault prefix (optional; default: /v1/secret/data/) - Provide a custom prefix for the vault path or query. If not specified, the default is used.
Vault secret path (optional) - A path to the secret from where credentials for the database shall be retrieved by the keys "password" and "username". Example: secret/zabbix/database
Vault authentication token (optional) - Provide an authentication token for read-only access to the secret path. See HashiCorp documentation for information about creating tokens and vault policies.
To use HashiCorp Vault for storing Vault secret user macro values, make sure that:
Note:
Only Zabbix server requires access to Vault secret macro values from the vault. Other Zabbix components (proxy, frontend)
do not need such access.
The macro value should contain a reference path (as path:key, for example, secret/zabbix:password). The authentication token specified during Zabbix server configuration (in the VaultToken parameter) must provide read-only access to this path.
See Vault secret macros for detailed information on macro value processing by Zabbix.
Path syntax
The symbols forward slash (”/”) and colon (”:”) are reserved.
A forward slash can only be used to separate a mount point from a path (e.g., secret/zabbix where the mount point is ”secret” and
the path is ”zabbix”). In the case of Vault macros, a colon can only be used to separate a path/query from a key.
It is possible to URL-encode the forward slash and colon symbols if there is a need to create a mount point with a name that contains a forward slash (e.g., foo/bar/zabbix, where the mount point is "foo/bar" and the path is "zabbix", can be encoded as "foo%2Fbar/zabbix") or if a mount point name or path needs to contain a colon.
Example
1. In Zabbix, add a user macro {$PASSWORD} of type ”Vault secret” and with the value secret/zabbix:password
2. Run the following CLI commands to create the required secret in the vault:
#### Enable the "secret/" mount point if not already enabled; note that "kv-v2" must be used.
vault secrets enable -path=secret/ kv-v2
#### Put a new secret with the key password under mount point "secret/" and path "zabbix".
vault kv put -mount=secret zabbix password=<password>
#### Finally, test with curl; note that "data" needs to be manually added after the mount point and "/v1" before it.
curl --header "X-Vault-Token: <VaultToken>" https://127.0.0.1:8200/v1/secret/data/zabbix
3. As a result, Zabbix will resolve the macro {$PASSWORD} to the value: <password>
Updating existing configuration
1. Update the Zabbix server or proxy configuration file parameters as described in the Database credentials section.
2. Update the DB connection settings by reconfiguring Zabbix frontend and specifying the required parameters as described
in the Frontend section. To reconfigure Zabbix frontend, open the frontend setup URL in the browser:
http://<server_ip_or_name>/zabbix/setup.php
Alternatively, these parameters can be set in the frontend configuration file (zabbix.conf.php):
$DB['VAULT'] = 'HashiCorp';
$DB['VAULT_URL'] = 'https://localhost:8200';
$DB['VAULT_DB_PATH'] = 'secret/zabbix/database';
$DB['VAULT_TOKEN'] = '<mytoken>';
$DB['VAULT_CERT_FILE'] = '';
$DB['VAULT_KEY_FILE'] = '';
$DB['VAULT_PREFIX'] = '';
3. Configure user macros as described in the User macro values section, if necessary.
To update an existing configuration for retrieving secrets from a CyberArk Vault, see CyberArk configuration.
14 Scheduled reports
Overview
With the Scheduled reports feature, you can set up a PDF version of a given dashboard to be sent to specified recipients at recurring
intervals.
Pre-requisites:
• Zabbix web service must be installed and configured correctly to enable scheduled report generation - see Setting up sched-
uled reports for instructions.
• A user must have a user role of type Admin or Super admin with the following permissions:
– Scheduled reports in the Access to UI elements block (to view reports)
– Manage scheduled reports in the Access to actions block (to create/edit reports)
To create a scheduled report, go to Reports → Scheduled reports and click on Create report.
You can also create a report by opening an existing one, clicking the Clone button, and then saving it under a different name.
Configuration
All mandatory input fields are marked with a red asterisk.
Parameter Description
Owner User that creates a report. Super admin level users are allowed to change the owner. For Admin
level users, this field is read-only.
Name Name of the report; must be unique.
Dashboard Dashboard on which the report is based; only one dashboard can be selected at a time. To select
a dashboard, start typing the name - a list of matching dashboards will appear; scroll down to
select. Alternatively, you may click Select next to the field and select a dashboard from the
displayed list.
Period Period for which the report will be prepared. Select the previous day, week, month, or year.
Cycle Report generation frequency. The reports can be sent daily, weekly, monthly, or yearly. ”Weekly”
mode allows to select days of the week when the report will be sent.
Start time Time of day in the format hh:mm when the report will be prepared.
Repeat on Days of the week when the report will be sent. This field is available only if Cycle is set to
”Weekly”.
Start date Date when regular report generation should be started.
Note that users with insufficient permissions (that is, users with a role based on the Admin user
type who are not members of the same user group as the recipient or report owner) will see
”Inaccessible user” or ”Inaccessible user group” instead of the actual names in the fields
Recipient and Generate report by; the fields Status and Action will be displayed as read-only.
Enabled Report status. Clearing this checkbox will disable the report.
Description An optional description of the report. This description is for internal use and will not be sent to
report recipients.
Form buttons
Testing
To test a report, click the Test button at the bottom of the report configuration form.
Note:
The Test button is not available if the report configuration form has been opened from the dashboard action menu.
If the configuration is correct, the test report is sent immediately to the current user. For test reports, subscribers and Generate
report by user settings are ignored.
If the configuration is incorrect, an error message is displayed describing the possible cause.
Updating a report
To update an existing report, click the report name, make the required configuration changes, and then click the Update button.
If an existing report is updated by another user and this user changes the Dashboard, upon clicking the Update button, a warning
message ”Report generated by other users will be changed to the current user” will be displayed.
• Generate report by settings will be updated to display the user who edited the report last (unless Generate report by is set
to the recipient).
• Users that have been displayed as ”Inaccessible user” or ”Inaccessible user group” will be deleted from the list of report
subscribers.
Clicking Cancel will close the configuration form and cancel the report update.
Cloning a report
To quickly clone an existing report, click the Clone button at the bottom of an existing report configuration form. When cloning a
report created by another user, the current user becomes the owner of the new report.
Report settings will be copied to the new report configuration form with respect to user permissions:
• If the user who clones a report has no permissions to a dashboard, the Dashboard field will be cleared.
• If the user who clones a report has no permissions to some users or user groups in the Subscriptions list, inaccessible
recipients will not be cloned.
• Generate report by settings will be updated to display the current user (unless Generate report by is set to the recipient).
Change the required settings and the report name, then click Add.
15 Data export
Overview
Zabbix supports two methods of real-time data export:
• export to files
• streaming to external systems
The following entities can be exported:
• trigger events
• item values
• trends (export to files only)
1 Export to files
Overview
It is possible to configure real-time exporting of trigger events, item values and trends in a newline-delimited JSON format.
Exporting is done into files, where each line of the export file is a JSON object. Value mappings are not applied.
In case of errors (data cannot be written to the export file, or the export file cannot be renamed, or a new one cannot be created after renaming), the data item is dropped and never written to the export file; it is still written to the Zabbix database. Writing data to the export file is resumed when the writing problem is resolved.
For precise details on what information is exported, see the export protocol page.
Note that host/item can have no metadata (host groups, host name, item name) if the host/item was removed after the data was
received, but before server exported data.
Configuration
Real-time export of trigger events, item values and trends is configured by specifying a directory for the export files - see the
ExportDir parameter in server configuration.
• ExportFileSize may be used to set the maximum allowed size of an individual export file. When a process needs to write
to a file it checks the size of the file first. If it exceeds the configured size limit, the file is renamed by appending .old to its
name and a new file with the original name is created.
Attention:
A separate file is created for each process that writes data (i.e. approximately 4-30 files). As the default size per export file is 1G, keeping large export files may drain the disk space fast.
• ExportType allows to specify which entity types (events, history, trends) will be exported.
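For illustration only, the relevant zabbix_server.conf settings might look like this (the directory is a placeholder):
ExportDir=/var/lib/zabbix/export
ExportFileSize=1G
ExportType=events,history,trends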
2 Streaming to external systems
Overview
It is possible to stream item values and events from Zabbix to external systems over HTTP (see protocol details).
The tag filter can be used to stream subsets of item values or events.
Two Zabbix server processes are responsible for data streaming: connector manager and connector worker. The Zabbix internal item zabbix[connector_queue] allows monitoring the count of values enqueued in the connector queue.
Configuration
The following steps are required to configure data streaming to an external system:
1. Have a remote system set up for receiving data from Zabbix. For this purpose, the following tools are available:
• An example of a simple receiver that logs the received information in events.ndjson and history.ndjson files.
• Kafka connector for Zabbix server - a lightweight server written in Go, designed to forward item values and events from a
Zabbix server to a Kafka broker.
2. Set the required number of connector workers in Zabbix by adjusting the StartConnectors parameter in zabbix_server.conf.
The number of connector workers should match (or exceed if concurrent sessions are more than 1) the configured connector count
in Zabbix frontend. Then, restart Zabbix server.
3. Configure a new connector in Zabbix frontend (Administration → General → Connectors) and reload the server cache with the
zabbix_server -R config_cache_reload command.
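As a sketch of step 2 above (the value 2 is an arbitrary example), the change in zabbix_server.conf would be:
StartConnectors=2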
Mandatory fields are marked by an asterisk.
Parameter Description
Tag filter Export only item values or events matching the tag filter. If not set, then export everything.
It is possible to include as well as exclude specific tags and tag values. Several conditions can be
set. Tag name matching is always case-sensitive.
HTTP proxy You can specify an HTTP proxy to use in the following format:
[protocol://][username[:password]@]proxy.example.com[:port]
User macros are supported.
The optional protocol:// prefix may be used to specify alternative proxy protocols (protocol prefix support was added in cURL 7.21.7). With no protocol specified, the proxy will be treated as an HTTP proxy. By default, port 1080 will be used.
If an HTTP proxy is specified, it will overwrite proxy-related environment variables like http_proxy, HTTPS_PROXY. If not specified, the proxy will not overwrite proxy-related environment variables. The entered value is passed on as is, no sanity checking takes place.
You may also enter a SOCKS proxy address. If you specify the wrong protocol, the connection will
fail and the item will become unsupported.
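For illustration (hypothetical host and credentials), a proxy value following the format above might look like:
socks5://zbx_user:[email protected]:1080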
Protocol
Communication between the server and the receiver is done over HTTP, using a REST-style API with newline-delimited JSON (NDJSON) and the "Content-Type: application/x-ndjson" header.
Server request
Content-Type: application/x-ndjson
Receiver response
The response consists of the HTTP response status code and the JSON payload. The HTTP response status code must be ”200”,
”201”, ”202”, ”203”, or ”204” for requests that were handled successfully, other for failed requests.
Example of success:
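A minimal sketch of what a successful response might look like, assuming the receiver acknowledges with a simple JSON body (the body content shown here is an assumption, not mandated beyond being JSON):
HTTP/1.1 200 OK
Content-Type: application/json
{"response": "success"}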
8 Service monitoring
Overview Service monitoring is a business-level monitoring that can be used to get an overview of the entire IT infrastructure
service tree, identify weak places of the infrastructure, calculate SLA of various IT services, and check out other information at a
higher level. Service monitoring focuses on the overall availability of a service instead of low-level details, such as the lack of disk
space, high processor load, etc. Service monitoring also provides functionality to find the root cause of a problem if a service is
not performing as expected.
A very simple service structure may look like this:
Service
|
|-Workstations
| |
| |-Workstation1
| |
| |-Workstation2
|
|-Servers
Each node of the structure has attribute status. The status is calculated and propagated to upper levels according to the selected
algorithm. The status of individual nodes is affected by the status of the mapped problems. Problem mapping is accomplished
with tagging.
Zabbix can send notifications or automatically execute a script on the Zabbix server in case a service status change is detected. It is possible to define flexible rules for whether a parent service should go into a 'Problem state' based on the statuses of child services. Service problem data can then be used to calculate SLAs and send SLA reports based on a flexible set of conditions.
Service monitoring is configured in the Services menu, which consists of the following sections:
• Services
The Services section allows building a hierarchy of your monitored infrastructure by adding parent services, and then child services to the parent services.
In addition to configuring the service tree, this section provides an overview of the whole infrastructure and allows you to quickly identify the problems that led to a service status change.
• SLA
In this section you can define service level agreements and set service level objectives for specific services.
• SLA report
Service actions
See also:
1 Service tree
The service tree is configured in the Services -> Services menu section. In the upper right corner, switch from View to the Edit
mode.
To configure a new service, click on the Create service button in the top right-hand corner.
To quickly add a child service, you can alternatively press a plus icon next to the parent service. This will open the same service
configuration form, but the Parent services parameter will be pre-filled.
All mandatory input fields are marked with a red asterisk.
Parameter Description
Advanced configuration
Parameter Description
If several conditions are specified and the situation matches more than one condition, the
highest severity will be set.
N (W) Set the value of N or W (1-100000), or N% (1-100) in the condition.
Status Select the value of Status in the condition: OK (default), Not classified, Information, Warning,
Average, High or Disaster.
Status propagation rule - Rule for propagating the service status to the parent service:
As is - the status is propagated without change
Increase by - you may increase the propagated status by 1 to 5 severities
Decrease by - you may decrease the propagated status by 1 to 5 severities
Ignore this service - the status is not propagated to the parent service at all
Fixed status - the status is propagated statically, i.e. as always the same
Weight Weight of the service (integer in the range from 0 (default) to 1000000).
Note:
Additional status calculation rules can only be used to increase severity level over the level calculated according to the
main Status calculation rule parameter. If according to additional rules the status should be Warning, but according to the
Status calculation rule the status is Disaster - the service will have status Disaster.
The Tags tab contains service-level tags. Service-level tags are used to identify a service. Tags of this type are not used to map
problems to the service (for that, use Problem tags from the first tab).
The Child services tab allows specifying dependent services. Click on Add to add a service from the list of existing services. If you want to add a new child service, save this service first, then click on the plus icon next to the service that you have just created.
Two types of tags can be specified for a service:
• Service tags
• Problem tags
Service tags
Service tags are used to match services with service actions and SLAs. These tags are specified at the Tags service configuration
tab. For mapping SLAs, OR logic is used: a service will be mapped to an SLA if it has at least one matching tag. In service actions,
mapping rules are configurable and can use either AND, OR, or AND/OR logic.
Problem tags
Problem tags are used to match problems and services. These tags are specified at the primary service configuration tab.
Only child services of the lowest hierarchy level may have problem tags defined and be directly correlated to problems. If problem
tags match, the service status will change to the same status as the problem has. In case of several problems, a service will have
the status of the most severe one. Status of a parent service is then calculated based on child services statuses according to
Status calculation rules.
If several tags are specified, AND logic is used: a problem must have all tags specified in the service configuration to be mapped
to the service.
Note:
A problem in Zabbix inherits tags from the whole chain of templates, hosts, items, web scenarios, and triggers. Any of
these tags can be used for matching problems to services.
Example:
The problem Web camera 3 is down has the tags type:video surveillance, floor:1st and name:webcam 3, and the status Warning.
The service Web camera 3 has only one problem tag specified: name:webcam 3.
The service status will change from OK to Warning when this problem is detected.
If the service Web camera 3 had problem tags name:webcam 3 and floor:2nd, its status would not be changed, when the
problem is detected, because the conditions are only partially met.
Note:
The buttons described below are visible only when Services section is in the Edit mode.
Modifying existing services
To edit an existing service, press the pencil icon next to the service.
To clone an existing service, press the pencil icon to open its configuration and then press Clone button. When a service is cloned,
its parent links are preserved, while the child links are not.
To delete a service, press on the x icon next to it. When you delete a parent service, its child services will not be deleted and will
move one level higher in the service tree (1st level children will get the same level as the deleted parent service).
Two buttons below the list of services offer some mass-editing options:
To use these options, mark the checkboxes before the respective services, then click on the required button.
2 SLA
Overview Once the services are created, you can start monitoring whether their performance is on track with service level
agreement (SLA).
The Services → SLA menu section allows configuring SLAs for various services. An SLA in Zabbix defines the service level objective (SLO), the expected uptime schedule and planned downtimes.
SLAs and services are matched by service tags. The same SLA may be applied to multiple services - performance will be measured
for each matching service separately. A single service may have multiple SLAs assigned - data for each of the SLAs will be displayed
separately.
In SLA reports Zabbix provides Service level indicator (SLI) data, which measures real service availability. Whether a service meets
the SLA targets is determined by comparing SLO (expected availability in %) with SLI (real-life availability in %).
Parameter Description
Service tags Add service tags to identify the services towards which this SLA should be applied.
Name - service tag name, must be exact match, case-sensitive.
Operation - select Equals if the tag value must match exactly (case-sensitive) or Contains if part
of the tag value must match (case-insensitive).
Value - service tag value to search for according to selected operation.
The SLA is applied to a service, if at least one service tag matches.
Description Add a description for the SLA.
Enabled Mark the checkbox to enable the SLA calculation.
The Excluded downtimes tab allows to specify downtimes that are excluded from the SLA calculation.
Click on Add to configure excluded downtimes, then enter the period name, start date and duration.
SLA reports How a service performs compared to an SLA is visible in the SLA report. SLA reports can be viewed:
• in the Services → SLA report menu section;
• in the SLA report dashboard widget.
Once an SLA is configured, the Info tab in the services section will also display some information about service performance.
3 Setup example
Overview This section describes a simple setup for monitoring a Zabbix high availability cluster as a service.
Pre-requisites Prior to configuring service monitoring, you need to have the hosts configured:
• HA node 1 with at least one trigger and a tag (preferably set on a trigger level) component:HA node 1
• HA node 2 with at least one trigger and a tag (preferably set on a trigger level) component:HA node 2
Service tree The next step is to build the service tree. In this example, the infrastructure is very basic and consists of three
services: Zabbix cluster (parent) and two child services Zabbix server node 1 and Zabbix server node 2.
Zabbix cluster
|
|- Zabbix server node 1
|- Zabbix server node 2
At the Services page, turn on Edit mode and press Create service:
In the service configuration window, enter name Zabbix cluster and click on the Advanced configuration label to display advanced
configuration options.
Configure an additional rule:
Zabbix cluster will have two child services - one for each of the HA nodes. If both HA nodes have problems of at least Warning status, the parent service status should be set to Disaster. To achieve this, an additional rule should be configured as: set the status to Disaster if at least 2 child services have the status Warning or above.
Switch to the Tags tab and add a tag Zabbix:server. This tag will be used later for service actions and SLA reports.
Save the new service.
To add a child service, click on the plus icon next to the Zabbix cluster service (the icon is visible only in Edit mode).
In the service configuration window, enter name Zabbix server node 1. Note that the Parent services parameter is already pre-filled
with Zabbix cluster.
Availability of this service is affected by problems on the host HA node 1, marked with the component:HA node 1 problem tag. In
the Problem tags parameter, enter:
• Name: component
• Operation: Equals
• Value: HA node 1
Switch to the Tags tab and add a service tag: Zabbix server:node 1. This tag will be used later for service actions and SLA
reports.
Save the new service.
Create another child service of Zabbix cluster with the name ”Zabbix server node 2”. In the Problem tags parameter, enter:
• Name: component
• Operation: Equals
• Value: HA node 2
Switch to the Tags tab and add a service tag: Zabbix server:node 2.
Save the new service.
SLA In this example, the expected Zabbix cluster performance is 100%, excluding a semi-annual one-hour maintenance period.
Go to the Services->SLA menu section and press Create SLA. Enter name Zabbix cluster performance and set the SLO to 100%.
The service Zabbix cluster has a service tag Zabbix:server. To use this SLA for measuring performance of Zabbix cluster, in the
Service tags parameter, specify:
• Name: Zabbix
• Operation: Equals
• Value: server
In a real-life setup, you can also update the desired reporting period, time zone, and start date, or change the schedule from 24/7
to custom. For this example, the default settings are sufficient.
Switch to the Excluded downtimes tab and add downtimes for scheduled maintenance periods to exclude these periods from the
SLA calculation. In the Excluded downtimes section, press the Add link, then enter the downtime name, planned start time, and duration.
Switch to the SLA reports section to view the SLA report for Zabbix cluster.
9 Web monitoring
Overview With Zabbix you can check several availability aspects of web sites.
Attention:
To perform web monitoring Zabbix server must be initially configured with cURL (libcurl) support.
To activate web monitoring you need to define web scenarios. A web scenario consists of one or several HTTP requests or ”steps”.
The steps are periodically executed by Zabbix server in a pre-defined order. If a host is monitored by proxy, the steps are executed
by the proxy.
Web scenarios are attached to hosts/templates in the same way as items, triggers, etc. That means that web scenarios can also
be created on a template level and then applied to multiple hosts in one move.
The following data is collected for the whole scenario:
• average download speed per second for all steps of the whole scenario
• number of the step that failed
• last error message
Data collected from executing web scenarios is kept in the database. The data is automatically used for graphs, triggers and
notifications.
Zabbix can also check if a retrieved HTML page contains a pre-defined string. It can execute a simulated login and follow a path
of simulated mouse clicks on the page.
Zabbix web monitoring supports both HTTP and HTTPS. When running a web scenario, Zabbix will optionally follow redirects (see
option Follow redirects below). Maximum number of redirects is hard-coded to 10 (using cURL option CURLOPT_MAXREDIRS). All
cookies are preserved during the execution of a single scenario.
To configure a web scenario, enter the parameters of the scenario in the form.
The Scenario tab allows you to configure the general parameters of a web scenario.
Scenario parameters:
Parameter Description
HTTP proxy You can specify an HTTP proxy to use, using the format
[protocol://][username[:password]@]proxy.example.com[:port].
This sets the CURLOPT_PROXY cURL option.
The optional protocol:// prefix may be used to specify alternative proxy protocols (the
protocol prefix support was added in cURL 7.21.7). With no protocol specified, the proxy will be
treated as an HTTP proxy.
By default, port 1080 will be used.
If specified, the proxy will overwrite proxy-related environment variables like http_proxy or
HTTPS_PROXY. If not specified, the proxy will not overwrite proxy-related environment variables.
The entered value is passed on ”as is”; no sanity checking takes place.
You may also enter a SOCKS proxy address. If you specify the wrong protocol, the connection will
fail and the item will become unsupported.
Note that only simple authentication is supported with HTTP proxy.
User macros can be used in this field.
Variables Variables that may be used in scenario steps (URL, post variables).
They have the following format:
{macro1}=value1
{macro2}=value2
{macro3}=regex:<regular expression>
For example:
{username}=Alexei
{password}=kj3h5kJ34bd
{hostid}=regex:hostid is ([0-9]+)
The macros can then be referenced in the steps as {username}, {password} and {hostid}.
Zabbix will automatically replace them with actual values. Note that variables with regex: need
one step to extract the value of the regular expression, so the extracted value can only be applied to
the steps that follow.
If the value part starts with regex: then the part after it is treated as a regular expression that
searches the web page and, if found, stores the match in the variable. At least one subgroup
must be present so that the matched value can be extracted.
User macros and {HOST.*} macros are supported.
Variables are automatically URL-encoded when used in query fields or form data for post
variables, but must be URL-encoded manually when used in raw post or directly in URL.
Headers HTTP Headers are used when performing a request. Default and custom headers can be used.
Headers will be assigned using default settings depending on the Agent type selected from a
drop-down list on a scenario level, and will be applied to all the steps, unless they are custom
defined on a step level.
It should be noted that defining the header on a step level automatically discards all
the previously defined headers, except for a default header that is assigned by
selecting the ’User-Agent’ from a drop-down list on a scenario level.
However, even the ’User-Agent’ default header can be overridden by specifying it on a step level.
To unset a header defined on the scenario level, specify the header name with no value on the
step level.
Headers should be listed using the same syntax as they would appear in the HTTP protocol,
optionally using some additional features supported by the CURLOPT_HTTPHEADER cURL option.
For example:
Accept-Charset=utf-8
Accept-Language=en-US
Content-Type=application/xml; charset=utf-8
User macros and {HOST.*} macros are supported.
Enabled The scenario is active if this checkbox is marked; otherwise, it is disabled.
Note that when editing an existing scenario, two extra buttons are available in the form:
Clone - create another web scenario based on the properties of the existing one.
Clear history and trends - delete history and trend data for the scenario. This will make the server perform the scenario
immediately after deleting the data.
Note:
If the HTTP proxy field is left empty, another way to use an HTTP proxy is to set proxy-related environment variables.
For HTTP checks - set the http_proxy environment variable for the Zabbix server user. For example,
http_proxy=http://proxy_ip:proxy_port.
For HTTPS checks - set the HTTPS_PROXY environment variable. For example,
HTTPS_PROXY=http://proxy_ip:proxy_port. More details are available by running a shell command: # man
curl.
The Steps tab allows you to configure the web scenario steps. To add a web scenario step, click on Add in the Steps block.
Note:
Secret user macros must not be used in URLs as they will resolve to ”******”.
Configuring steps
Step parameters:
Parameter Description
Timeout Zabbix will not spend more than the set amount of time on processing the URL (from one second
to a maximum of 1 hour). In fact, this parameter defines the maximum time for making a
connection to the URL and the maximum time for performing an HTTP request. Therefore, Zabbix will
not spend more than 2 x Timeout seconds on the step.
Time suffixes are supported, e.g. 30s, 1m, 1h. User macros are supported.
Required string Required regular expression pattern.
Unless retrieved content (HTML) matches the required pattern the step will fail. If empty, no
check on required string is performed.
For example:
Homepage of Zabbix
Welcome.*admin
Note: Referencing regular expressions created in the Zabbix frontend is not supported in this
field.
User macros and {HOST.*} macros are supported.
Required status codes List of expected HTTP status codes. If Zabbix gets a code which is not in the list, the step will fail.
If empty, no check on status codes is performed.
For example: 200,201,210-299
User macros are supported.
Note:
Any changes in web scenario steps will only be saved when the whole scenario is saved.
See also a real-life example of how web monitoring steps can be configured.
Configuring authentication The Authentication tab allows you to configure scenario authentication options. A green dot next
to the tab name indicates that some type of HTTP authentication is enabled.
Authentication parameters:
Parameter Description
Attention:
[1] Zabbix supports certificate and private key files in PEM format only. In case you have your certificate and private
key data in PKCS #12 format file (usually with extension *.p12 or *.pfx) you may generate the PEM file from it using the
following commands:
openssl pkcs12 -in ssl-cert.p12 -clcerts -nokeys -out ssl-cert.pem
openssl pkcs12 -in ssl-cert.p12 -nocerts -nodes -out ssl-cert.key
Note:
Zabbix server picks up changes in certificates without a restart.
Note:
If you have the client certificate and private key in a single file, just specify it in the ”SSL certificate file” field and leave the ”SSL key
file” field empty. The certificate and key must still be in PEM format. Combining certificate and key is easy:
cat client.crt client.key > client.pem
Display To view web scenarios configured for a host, go to Monitoring → Hosts, locate the host in the list and click on the Web
hyperlink in the last column. Click on the scenario name to get detailed information.
An overview of web scenarios can also be displayed in Dashboards by the Web monitoring widget.
Recent results of the web scenario execution are available in the Monitoring → Latest data section.
Extended monitoring Sometimes it is necessary to log received HTML page content. This is especially useful if some web
scenario step fails. Debug level 5 (trace) serves that purpose. This level can be set in server and proxy configuration files or
using a runtime control option (-R log_level_increase="http poller,N", where N is the process number). The following
examples demonstrate how extended monitoring can be started provided debug level 4 is already set:
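For example, commands along the following lines increase the log level of the http poller processes at runtime (shown for Zabbix
server; use zabbix_proxy accordingly and adjust the binary path to your installation):
zabbix_server -R log_level_increase="http poller"
zabbix_server -R log_level_increase="http poller,3"
The first command raises the log level of all http poller processes; the second one targets only process number 3.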
Overview
Some new items are automatically added for monitoring when web scenarios are created.
Scenario items
As soon as a scenario is created, Zabbix automatically adds the following items for monitoring.
Item Description
Download speed for scenario <Scenario> This item will collect information about the download speed (bytes per second) of the
whole scenario, i.e. average for all steps.
Item key: web.test.in[Scenario,,bps]
Type: Numeric(float)
Failed step of scenario <Scenario> This item will display the number of the step that failed on the scenario. If all steps are
executed successfully, 0 is returned.
Item key: web.test.fail[Scenario]
Type: Numeric(unsigned)
Last error message of scenario <Scenario> This item returns the last error message text of the scenario. A new value is stored
only if the scenario has a failed step. If all steps are ok, no new value is collected.
Item key: web.test.error[Scenario]
Type: Character
Note:
If the scenario name contains user macros, these macros will be left unresolved in web monitoring item names.
If the scenario name starts with a double quote or contains a comma or a square bracket, it will be properly quoted in item
keys. In other cases no additional quoting will be performed.
Note:
Web monitoring items are added with a 30 day history and a 90 day trend retention period.
These items can be used to create triggers and define notification conditions.
Example 1
To create a ”Web scenario failed” trigger, you can define a trigger expression:
last(/host/web.test.fail[Scenario])<>0
Make sure to replace ’Scenario’ with the real name of your scenario.
Example 2
To create a ”Web scenario failed” trigger with a useful problem description in the trigger name, you can define a trigger with name:
Web scenario "Scenario" failed: {ITEM.VALUE}
and trigger expression:
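One possible expression is sketched below (it reuses the web monitoring items described above and assumes the host and
scenario names from Example 1; adjust them to your setup):
last(/host/web.test.fail[Scenario])<>0 and length(last(/host/web.test.error[Scenario]))>0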
Example 3
To create a ”Web application is slow” trigger, you can define a trigger expression:
last(/host/web.test.in[Scenario,,bps])<10000
Make sure to replace ’Scenario’ with the real name of your scenario.
As soon as a step is created, Zabbix automatically adds the following items for monitoring.
Item Description
Download speed for step <Step> of scenario <Scenario> This item will collect information about the download speed (bytes per
second) of the step.
Item key: web.test.in[Scenario,Step,bps]
Type: Numeric(float)
Response time for step <Step> of scenario <Scenario> This item will collect information about the response time of the step in
seconds. Response time is counted from the beginning of the request until all information has been transferred.
Item key: web.test.time[Scenario,Step,resp]
Type: Numeric(float)
Response code for step <Step> of scenario <Scenario> This item will collect response codes of the step.
Item key: web.test.rspcode[Scenario,Step]
Type: Numeric(unsigned)
Actual scenario and step names will be used instead of ”Scenario” and ”Step” respectively.
Note:
Web monitoring items are added with a 30 day history and a 90 day trend retention period.
Note:
If the scenario name starts with a double quote or contains a comma or a square bracket, it will be properly quoted in item keys.
In other cases no additional quoting will be performed.
These items can be used to create triggers and define notification conditions. For example, to create a ”Zabbix GUI login is too
slow” trigger, you can define a trigger expression:
last(/zabbix/web.test.time[ZABBIX GUI,Login,resp])>3
2 Real-life scenario
Overview
This section presents a step-by-step real-life example of how web monitoring can be used.
Let’s use Zabbix web monitoring to monitor the web interface of Zabbix. We want to know if it is available, provides the right
content and how quickly it works. To do that we also must log in with our user name and password.
Scenario
Step 1
We will add a scenario to monitor the web interface of Zabbix. The scenario will execute a number of steps.
Go to Data collection → Hosts, pick a host and click on Web in the row of that host. Then click on Create web scenario.
All mandatory input fields are marked with a red asterisk.
In the new scenario form we will name the scenario as Zabbix frontend. We will also create two variables: {user} and {password}.
You may also want to add a new Application:Zabbix frontend tag in the Tags tab.
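For example, the Variables field of the scenario could look like this (the credentials are placeholders; use an account that exists
on your frontend):
{user}=Admin
{password}=zabbix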
Step 2
We start by checking that the first page responds correctly, returns with HTTP response code 200 and contains text ”Zabbix SIA”.
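A sketch of how this first step could be configured (the URL is a placeholder for your own frontend address):
Name: First page
URL: http://zabbix.example.com/index.php
Required string: Zabbix SIA
Required status codes: 200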
When done configuring the step, click on Add.
We continue by logging in to the Zabbix frontend, and we do so by reusing the macros (variables) we defined on the scenario level
- {user} and {password}.
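For example, the login step could post the credentials using the scenario-level variables (the form field names below are an
assumption based on the standard Zabbix login form; verify them against your frontend):
URL: http://zabbix.example.com/index.php
Post fields:
name={user}
password={password}
enter=Sign in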
Attention:
Note that Zabbix frontend uses JavaScript redirect when logging in, thus first we must log in, and only in further steps we
may check for logged-in features. Additionally, the login step must use the full URL to the index.php file.
Take note also of how we are getting the content of the {sid} variable (session ID) using a variable syntax with regular expression:
regex:name="csrf-token" content="([0-9a-z]{16})". This variable will be required in step 4.
Being logged in, we now want to verify that fact. To do so, we check for a string that is only visible when logged in - for example,
Administration.
Now that we have verified that the frontend is accessible and we can log in and retrieve logged-in content, we should also log out -
otherwise the Zabbix database will become polluted with lots of open session records.
Web scenario step 5
We can also check that we have logged out by looking for the Username string.
Complete configuration of steps
Step 3
The scenario will be added to a host. To view web scenario information go to Monitoring → Hosts, locate the host in the list and
click on the Web hyperlink in the last column.
10 Virtual machine monitoring
Overview Zabbix can use low-level discovery rules to automatically discover VMware hypervisors and virtual machines, and
create hosts to monitor them based on pre-defined host prototypes.
Zabbix also includes templates for monitoring VMware vCenter or ESXi hypervisors.
Virtual machine monitoring is done in two steps:
1. Zabbix vmware collector processes collect virtual machine data - the processes obtain necessary information from VMware
web services over the SOAP protocol, pre-process it, and store it in Zabbix server shared memory.
2. Zabbix poller processes retrieve data using Zabbix simple check VMware monitoring item keys.
Zabbix divides collected data into VMware configuration data and VMware performance counter data. Both types of data are
collected independently by the vmware collector processes.
The following statistics are available based on the VMware performance counter information:
• Datastore
• Disk device
• CPU
• Power
• Network interface
• Custom performance counter items
For the complete list of items that obtain data from VMware performance counters, see VMware monitoring item keys.
Note that the frequency of VMware event retrieval depends on the polling interval of vmware.eventlog, but cannot be less than 5
seconds.
Configuration If Zabbix server is compiled from sources, it must be compiled with the --with-libcurl --with-libxml2
configuration options to enable virtual machine monitoring. Zabbix packages are compiled with these options already enabled.
The following Zabbix server configuration file parameters can be modified for virtual machine monitoring:
• StartVMwareCollectors
Note:
It is recommended to enable more collectors than the number of monitored VMware services; otherwise, the retrieval of
VMware performance counter statistics might be delayed by the retrieval of VMware configuration data (which takes a
while for large installations).
Generally, the value of StartVMwareCollectors should not dip below 2 and
should not exceed twice the amount of monitored VMware services: Amount of services < StartVMwareCollectors <
(Amount of services * 2). For example, when monitoring one VMware service, set StartVMwareCollectors to 2; when
monitoring three services, set StartVMwareCollectors to 5.
Note that the required number of collectors also depends on the scope of the VMware environment, and the
VMwareFrequency and VMwarePerfFrequency configuration parameters.
• VMwareCacheSize
• VMwareFrequency
• VMwarePerfFrequency
• VMwareTimeout
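A configuration sketch for zabbix_server.conf when monitoring, for example, two VMware services (the values are illustrative and
should be tuned to your environment; defaults are documented in the server configuration file reference):
StartVMwareCollectors=3
VMwareCacheSize=16M
VMwareFrequency=60
VMwarePerfFrequency=60
VMwareTimeout=10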
Attention:
To support datastore capacity metrics, ensure that the value of the VMware vpxd.stats.maxQueryMetrics key is set
to at least 64. For more information, see the VMware Knowledge Base article.
Discovery
Zabbix can use low-level discovery rules (for example, vmware.hv.discovery[{$VMWARE.URL}]) to automatically discover VMware
hypervisors and virtual machines. Moreover, Zabbix can use host prototypes to automatically generate real hosts for the discovered
entities. For more information, see Host prototypes.
Ready-to-use templates
Zabbix includes a range of ready-to-use templates designed for monitoring VMware vCenter or ESXi hypervisors. These templates
contain pre-configured low-level discovery rules, along with various built-in checks for monitoring virtual installations.
The following templates can be used for monitoring VMware vCenter or ESXi hypervisors:
Note:
For the correct functioning of the VMware FQDN template, each monitored virtual machine should have a unique OS name
adhering to FQDN rules. Additionally, VMware Tools/Open Virtual Machine tools must be installed on every machine. If
these prerequisites are met, using the VMware FQDN template is recommended. The VMware FQDN template has been
available since Zabbix 5.2 with the introduction of the ability to create hosts with custom interfaces.
A classic
VMware template is also available and can be used if FQDN requirements are unmet. However, the VMware template has
a known issue. Hosts for discovered virtual machines are created with names that are saved in vCenter (for example,
”VM1”, ”VM2”, etc.). If Zabbix agent is installed on these hosts, and active Zabbix agent autoregistration is enabled, the
autoregistration process will read host names as they were registered during launch (for example, ”vm1.example.com”,
”vm2.example.com”, etc.). This can lead to the creation of new hosts for existing virtual machines (since no name matches
have been found), resulting in duplicate hosts with different names.
The following templates are used for discovered entities and, typically, should not be manually linked to a host:
• VMware Hypervisor
• VMware Guest
To use VMware simple checks, the host must have the following user macros defined:
• {$VMWARE.URL} - VMware service (vCenter or ESXi hypervisor) SDK URL (https://servername/sdk);
• {$VMWARE.USERNAME} - VMware service user name;
• {$VMWARE.PASSWORD} - VMware service user password.
Configuration examples
For a basic example of how to set up Zabbix for monitoring VMware using the VMware FQDN template, see Monitor VMware with
Zabbix.
For a more detailed example of how to create a host, a low-level discovery rule, and a host prototype for monitoring VMware, see
Setup example.
Extended logging The data collected by the vmware collector processes can be logged for detailed debugging using debug
level 5. The debug level can be configured in the server and proxy configuration files or using the runtime control option -R
log_level_increase="vmware collector,N", where ”N” is the process number.
For example, to increase the debug level from 4 to 5 for all vmware collector processes, run the following command:
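Assuming the default binary name, the command would look like this (use zabbix_proxy instead if the VMware data is collected
by a proxy):
zabbix_server -R log_level_increase="vmware collector"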
Troubleshooting
• In case of unavailable metrics, please ensure that they are not made unavailable or turned off by default in recent VMware
vSphere versions, and that no limits are placed on performance-metric database queries. For more information, see
ZBX-12094.
• In case of a ”config.vpxd.stats.maxQueryMetrics is invalid or exceeds the maximum number of characters permitted” error,
add a config.vpxd.stats.maxQueryMetrics parameter to the vCenter Server settings. The value of this parameter should
be the same as the value of maxQuerysize in VMware’s web.xml file. For more information, see the VMware Knowledge Base
article.
Overview This page provides details on the simple checks that can be used to monitor VMware environments. The metrics are
grouped by the monitoring target.
Supported item keys The item keys are listed without parameters and additional information. Click on the item key to see the
full details.
Item key Description Item group
Item key details Parameters without angle brackets are mandatory. Parameters marked with angle brackets < > are optional.
vmware.eventlog[url,<mode>,<severity>]
Parameters:
Comments:
vmware.fullname[url]
Parameters:
vmware.version[url]
Parameters:
vmware.cl.perfcounter[url,id,path,<instance>]
<br> The VMware cluster performance counter metrics.<br> Return value: Integer.
Parameters:
vmware.cluster.alarms.get[url,id]
<br> The VMware cluster alarms data.<br> Return value: JSON object.
Parameters:
vmware.cluster.discovery[url]
Parameters:
vmware.cluster.property[url,id,prop]
Parameters:
vmware.cluster.status[url,name]
<br> The VMware cluster status.<br> Return value: 0 - gray; 1 - green; 2 - yellow; 3 - red.
Parameters:
vmware.cluster.tags.get[url,id]
<br> The VMware cluster tags array.<br> Return value: JSON object.
Parameters:
vmware.datastore.alarms.get[url,uuid]
<br> The VMware datastore alarms data.<br> Return value: JSON object.
Parameters:
vmware.datastore.discovery[url]
Parameters:
vmware.datastore.hv.list[url,datastore]
Parameters:
Output example:
esx7-01-host.zabbix.sandbox
esx7-02-host.zabbix.sandbox
vmware.datastore.perfcounter[url,uuid,path,<instance>]
The VMware datastore performance counter value. Return value: Integer (footnote 2).
Parameters:
vmware.datastore.property[url,uuid,prop]
Parameters:
vmware.datastore.read[url,datastore,<mode>]
The amount of time for a read operation from the datastore (milliseconds). Return value: Integer (footnote 2).
Parameters:
vmware.datastore.size[url,datastore,<mode>]
<br> The VMware datastore space in bytes or in percentage from total.<br> Return value: Integer - for bytes; Float - for percentage.
Parameters:
• datastore - the datastore name;
• mode - possible values: total (default), free, pfree (free percentage), uncommitted.
vmware.datastore.tags.get[url,uuid]
<br> The VMware datastore tags array.<br> Return value: JSON object.
Parameters:
vmware.datastore.write[url,datastore,<mode>]
The amount of time for a write operation to the datastore (milliseconds). Return value: Integer (footnote 2).
Parameters:
vmware.dc.alarms.get[url,id]
<br> The VMware datacenter alarms data.<br> Return value: JSON object.
Parameters:
vmware.dc.discovery[url]
Parameters:
vmware.dc.tags.get[url,id]
<br> The VMware datacenter tags array.<br> Return value: JSON object.
Parameters:
vmware.dvswitch.discovery[url]
<br> The discovery of VMware vSphere Distributed Switches.<br> Return value: JSON object.
Parameters:
vmware.dvswitch.fetchports.get[url,uuid,<filter>,<mode>]
<br> The VMware vSphere Distributed Switch ports data.<br> Return value: JSON object.
Parameters:
The filter parameter supports the criteria available in the VMware data object DistributedVirtualSwitchPortCriteria.
Example:
vmware.dvswitch.fetchports.get[{$VMWARE.URL},{$VMWARE.DVS.UUID},"active:true,connected:false,host:host-18,
vmware.hv.alarms.get[url,uuid]
<br> The VMware hypervisor alarms data.<br> Return value: JSON object.
Parameters:
vmware.hv.cluster.name[url,uuid]
Parameters:
vmware.hv.connectionstate[url,uuid]
<br> The VMware hypervisor connection state.<br> Return value: String: connected, disconnected, or notResponding.
Parameters:
vmware.hv.cpu.usage[url,uuid]
<br> The VMware hypervisor processor usage (Hz).<br> Return value: Integer.
Parameters:
vmware.hv.cpu.usage.perf[url,uuid]
<br> The VMware hypervisor processor usage as a percentage during the interval.<br> Return value: Float.
Parameters:
vmware.hv.cpu.utilization[url,uuid]
<br> The VMware hypervisor processor usage as a percentage during the interval, depends on power management or HT.<br>
Return value: Float.
Parameters:
vmware.hv.datacenter.name[url,uuid]
Parameters:
vmware.hv.datastore.discovery[url,uuid]
<br> The discovery of VMware hypervisor datastores.<br> Return value: JSON object.
Parameters:
vmware.hv.datastore.list[url,uuid]
Parameters:
• uuid - the VMware hypervisor global unique identifier.
Output example:
SSD-RAID1-VAULT1
SSD-RAID1-VAULT2
SSD-RAID10
vmware.hv.datastore.multipath[url,uuid,<datastore>,<partitionid>]
Parameters:
Parameters:
vmware.hv.datastore.size[url,uuid,datastore,<mode>]
<br> The VMware datastore space in bytes or in percentage from total.<br> Return value: Integer - for bytes; Float - for percentage.
Parameters:
vmware.hv.datastore.write[url,uuid,datastore,<mode>]
The average amount of time for a write operation to the datastore (milliseconds). Return value: Integer (footnote 2).
Parameters:
vmware.hv.discovery[url]
Parameters:
vmware.hv.diskinfo.get[url,uuid]
<br> The VMware hypervisor disk data.<br> Return value: JSON object.
Parameters:
vmware.hv.fullname[url,uuid]
Parameters:
vmware.hv.hw.cpu.freq[url,uuid]
<br> The VMware hypervisor processor frequency (Hz).<br> Return value: Integer.
Parameters:
vmware.hv.hw.cpu.model[url,uuid]
Parameters:
vmware.hv.hw.cpu.num[url,uuid]
<br> The number of processor cores on VMware hypervisor.<br> Return value: Integer.
Parameters:
vmware.hv.hw.cpu.threads[url,uuid]
<br> The number of processor threads on VMware hypervisor.<br> Return value: Integer.
Parameters:
vmware.hv.hw.memory[url,uuid]
<br> The VMware hypervisor total memory size (bytes).<br> Return value: Integer.
Parameters:
vmware.hv.hw.model[url,uuid]
Parameters:
vmware.hv.hw.sensors.get[url,uuid]
<br> The VMware hypervisor hardware sensors value.<br> Return value: JSON object.
Parameters:
vmware.hv.hw.serialnumber[url,uuid]
Parameters:
vmware.hv.hw.uuid[url,uuid]
Parameters:
• uuid - the VMware hypervisor global unique identifier.
vmware.hv.hw.vendor[url,uuid]
Parameters:
vmware.hv.maintenance[url,uuid]
<br> The VMware hypervisor maintenance status.<br> Return value: 0 - not in maintenance; 1 - in maintenance.
Parameters:
vmware.hv.memory.size.ballooned[url,uuid]
<br> The VMware hypervisor ballooned memory size (bytes).<br> Return value: Integer.
Parameters:
vmware.hv.memory.used[url,uuid]
<br> The VMware hypervisor used memory size (bytes).<br> Return value: Integer.
Parameters:
vmware.hv.net.if.discovery[url,uuid]
<br> The discovery of VMware hypervisor network interfaces.<br> Return value: JSON object.
Parameters:
vmware.hv.network.in[url,uuid,<mode>]
The VMware hypervisor network input statistics (bytes per second). Return value: Integer (footnote 2).
Parameters:
vmware.hv.network.linkspeed[url,uuid,ifname]
<br> The VMware hypervisor network interface speed.<br> Return value: Integer. Returns 0, if the network interface is down,
otherwise the speed value of the interface.
Parameters:
vmware.hv.network.out[url,uuid,<mode>]
The VMware hypervisor network output statistics (bytes per second). Return value: Integer (footnote 2).
Parameters:
• mode - bps (default), packets, dropped, errors, broadcast.
vmware.hv.perfcounter[url,uuid,path,<instance>]
The VMware hypervisor performance counter value. Return value: Integer (footnote 2).
Parameters:
vmware.hv.property[url,uuid,prop]
Parameters:
vmware.hv.power[url,uuid,<max>]
<br> The VMware hypervisor power usage (W).<br> Return value: Integer.
Parameters:
vmware.hv.sensor.health.state[url,uuid]
<br> The VMware hypervisor health state rollup sensor.<br> Return value: Integer: 0 - gray; 1 - green; 2 - yellow; 3 - red.
Parameters:
Note that the item might not work in VMware vSphere 6.5 and newer, because VMware has deprecated the VMware Rollup Health
State sensor.
vmware.hv.sensors.get[url,uuid]
<br> The VMware hypervisor HW vendor state sensors.<br> Return value: JSON object.
Parameters:
vmware.hv.status[url,uuid]
<br> The VMware hypervisor status.<br> Return value: Integer: 0 - gray; 1 - green; 2 - yellow; 3 - red.
Parameters:
vmware.hv.tags.get[url,uuid]
<br> The VMware hypervisor tags array.<br> Return value: JSON object.
Parameters:
vmware.hv.uptime[url,uuid]
Parameters:
vmware.hv.version[url,uuid]
Parameters:
vmware.hv.vm.num[url,uuid]
<br> The number of virtual machines on the VMware hypervisor.<br> Return value: Integer.
Parameters:
vmware.rp.cpu.usage[url,rpid]
<br> The CPU usage in hertz during the interval on VMware Resource Pool.<br> Return value: Integer.
Parameters:
vmware.rp.memory[url,rpid,<mode>]
<br> The memory metrics of VMware resource pool.<br> Return value: Integer.
Parameters:
vmware.alarms.get[url]
<br> The VMware virtual center alarms data.<br> Return value: JSON object.
Parameters:
vmware.vm.alarms.get[url,uuid]
<br> The VMware virtual machine alarms data.<br> Return value: JSON object.
Parameters:
vmware.vm.attribute[url,uuid,name]
<br> The VMware virtual machine custom attribute value.<br> Return value: String.
Parameters:
vmware.vm.cluster.name[url,uuid]
Parameters:
• url - the VMware service URL;
• uuid - the VMware virtual machine global unique identifier;
• name - the custom attribute name.
vmware.vm.consolidationneeded[url,uuid]
<br> The VMware virtual machine disk requires consolidation.<br> Return value: String: true - consolidation is needed; false -
consolidation is not needed.
Parameters:
vmware.vm.cpu.latency[url,uuid]
<br> The percentage of time the virtual machine is unable to run because it is contending for access to the physical CPU(s).<br>
Return value: Float.
Parameters:
vmware.vm.cpu.num[url,uuid]
<br> The number of processors on VMware virtual machine.<br> Return value: Integer.
Parameters:
vmware.vm.cpu.readiness[url,uuid,<instance>]
<br> The percentage of time that the virtual machine was ready, but could not get scheduled to run on the physical CPU.<br>
Return value: Float.
Parameters:
vmware.vm.cpu.ready[url,uuid]
The time (in milliseconds) that the virtual machine was ready, but could not get scheduled to run on the physical CPU. CPU
ready time is dependent on the number of virtual machines on the host and their CPU loads (%). Return value: Integer (footnote 2).
Parameters:
vmware.vm.cpu.swapwait[url,uuid,<instance>]
<br> The percentage of CPU time spent waiting for swap-in.<br> Return value: Float.
Parameters:
vmware.vm.cpu.usage[url,uuid]
<br> The VMware virtual machine processor usage (Hz).<br> Return value: Integer.
Parameters:
vmware.vm.cpu.usage.perf[url,uuid]
<br> The VMware virtual machine processor usage as a percentage during the interval.<br> Return value: Float.
Parameters:
• url - the VMware service URL;
• uuid - the VMware virtual machine global unique identifier.
vmware.vm.datacenter.name[url,uuid]
<br> The VMware virtual machine datacenter name.<br> Return value: String.
Parameters:
vmware.vm.discovery[url]
<br> The discovery of VMware virtual machines.<br> Return value: JSON object.
Parameters:
vmware.vm.guest.memory.size.swapped[url,uuid]
<br> The amount of guest physical memory that is swapped out to the swap space (KB).<br> Return value: Integer.
Parameters:
vmware.vm.guest.osuptime[url,uuid]
<br> The total time elapsed since the last operating system boot-up (in seconds).<br> Return value: Integer.
Parameters:
vmware.vm.hv.name[url,uuid]
<br> The VMware virtual machine hypervisor name.<br> Return value: String.
Parameters:
vmware.vm.memory.size[url,uuid]
<br> The VMware virtual machine total memory size (bytes).<br> Return value: Integer.
Parameters:
vmware.vm.memory.size.ballooned[url,uuid]
<br> The VMware virtual machine ballooned memory size (bytes).<br> Return value: Integer.
Parameters:
vmware.vm.memory.size.compressed[url,uuid]
<br> The VMware virtual machine compressed memory size (bytes).<br> Return value: Integer.
Parameters:
vmware.vm.memory.size.consumed[url,uuid]
<br> The amount of host physical memory consumed for backing up guest physical memory pages (KB).<br> Return value:
Integer.
Parameters:
• url - the VMware service URL;
• uuid - the VMware virtual machine global unique identifier.
vmware.vm.memory.size.private[url,uuid]
<br> The VMware virtual machine private memory size (bytes).<br> Return value: Integer.
Parameters:
vmware.vm.memory.size.shared[url,uuid]
<br> The VMware virtual machine shared memory size (bytes).<br> Return value: Integer.
Parameters:
vmware.vm.memory.size.swapped[url,uuid]
<br> The VMware virtual machine swapped memory size (bytes).<br> Return value: Integer.
Parameters:
vmware.vm.memory.size.usage.guest[url,uuid]
<br> The VMware virtual machine guest memory usage (bytes).<br> Return value: Integer.
Parameters:
vmware.vm.memory.size.usage.host[url,uuid]
<br> The VMware virtual machine host memory usage (bytes).<br> Return value: Integer.
Parameters:
vmware.vm.memory.usage[url,uuid]
<br> The percentage of host physical memory that has been consumed.<br> Return value: Float.
Parameters:
vmware.vm.net.if.discovery[url,uuid]
<br> The discovery of VMware virtual machine network interfaces.<br> Return value: JSON object.
Parameters:
vmware.vm.net.if.in[url,uuid,instance,<mode>]
The VMware virtual machine network interface input statistics (bytes/packets per second). Return value: Integer (footnote 2).
Parameters:
vmware.vm.net.if.out[url,uuid,instance,<mode>]
The VMware virtual machine network interface output statistics (bytes/packets per second). Return value: Integer (footnote 2).
Parameters:
vmware.vm.net.if.usage[url,uuid,<instance>]
<br> The VMware virtual machine network utilization (combined transmit-rates and receive-rates) during the interval (KBps).<br>
Return value: Integer.
Parameters:
vmware.vm.perfcounter[url,uuid,path,<instance>]
The VMware virtual machine performance counter value. Return value: Integer (footnote 2).
Parameters:
vmware.vm.powerstate[url,uuid]
<br> The VMware virtual machine power state.<br> Return value: 0 - poweredOff; 1 - poweredOn; 2 - suspended.
Parameters:
vmware.vm.property[url,uuid,prop]
Parameters:
Examples:
vmware.vm.property[{$VMWARE.URL},{$VMWARE.VM.UUID},overallStatus]
vmware.vm.property[{$VMWARE.URL},{$VMWARE.VM.UUID},runtime.powerState]
vmware.vm.snapshot.get[url,uuid]
<br> The VMware virtual machine snapshot state.<br> Return value: JSON object.
Parameters:
vmware.vm.state[url,uuid]
<br> The VMware virtual machine state.<br> Return value: String: notRunning, resetting, running, shuttingDown, standby, or
unknown.
Parameters:
vmware.vm.storage.committed[url,uuid]
<br> The VMware virtual machine committed storage space (bytes).<br> Return value: Integer.
Parameters:
vmware.vm.storage.readoio[url,uuid,instance]
<br> The average number of outstanding read requests to the virtual disk during the collection interval.<br> Return value: Integer.
Parameters:
vmware.vm.storage.totalreadlatency[url,uuid,instance]
<br> The average time a read from the virtual disk takes (milliseconds).<br> Return value: Integer.
Parameters:
vmware.vm.storage.totalwritelatency[url,uuid,instance]
<br> The average time a write to the virtual disk takes (milliseconds).<br> Return value: Integer.
Parameters:
vmware.vm.storage.uncommitted[url,uuid]
<br> The VMware virtual machine uncommitted storage space (bytes).<br> Return value: Integer.
Parameters:
vmware.vm.storage.unshared[url,uuid]
<br> The VMware virtual machine unshared storage space (bytes).<br> Return value: Integer.
Parameters:
vmware.vm.storage.writeoio[url,uuid,instance]
<br> The average number of outstanding write requests to the virtual disk during the collection interval.<br> Return value: Integer.
Parameters:
vmware.vm.tags.get[url,uuid]
<br> The VMware virtual machine tags array.<br> Return value: JSON object.
Parameters:
This item works with vSphere 6.5 and newer.
vmware.vm.tools[url,uuid,mode]
<br> The VMware virtual machine guest tools state.<br> Return value: String: guestToolsExecutingScripts - VMware Tools is
starting; guestToolsNotRunning - VMware Tools is not running; guestToolsRunning - VMware Tools is running.
Parameters:
vmware.vm.uptime[url,uuid]
<br> The VMware virtual machine uptime (seconds).<br> Return value: Integer.
Parameters:
vmware.vm.vfs.dev.discovery[url,uuid]
<br> The discovery of VMware virtual machine disk devices.<br> Return value: JSON object.
Parameters:
vmware.vm.vfs.dev.read[url,uuid,instance,<mode>]
The VMware virtual machine disk device read statistics (bytes/operations per second). Return value: Integer (footnote 2).
Parameters:
vmware.vm.vfs.dev.write[url,uuid,instance,<mode>]
The VMware virtual machine disk device write statistics (bytes/operations per second). Return value: Integer (footnote 2).
Parameters:
vmware.vm.vfs.fs.discovery[url,uuid]
<br> The discovery of VMware virtual machine file systems.<br> Return value: JSON object.
Parameters:
VMware Tools must be installed on the guest virtual machine for this item to work.
vmware.vm.vfs.fs.size[url,uuid,fsname,<mode>]
<br> The VMware virtual machine file system statistics (bytes/percentages).<br> Return value: Integer.
Parameters:
VMware Tools must be installed on the guest virtual machine for this item to work.
Footnotes
1. See Creating custom performance counter names for VMware.
2. The value of these items is obtained from VMware performance counters and the VMwarePerfFrequency parameter is used to
refresh their data in Zabbix VMware cache:
• vmware.cl.perfcounter
• vmware.hv.datastore.read
• vmware.hv.datastore.write
• vmware.hv.network.in
• vmware.hv.network.out
• vmware.hv.perfcounter
• vmware.vm.cpu.ready
• vmware.vm.net.if.in
• vmware.vm.net.if.out
• vmware.vm.perfcounter
• vmware.vm.vfs.dev.read
• vmware.vm.vfs.dev.write
More info
See Virtual machine monitoring for detailed information on how to configure Zabbix to monitor VMware environments.
The following table lists fields returned by virtual machine related discovery keys.
Item key
Array structure:
[{
"rpid":"resource group id",
"tags":[{}],
"rpath":"resource group path",
"vm_count":0
}]
Array structure:
[{
"tag":"tag name",
"tag_description":"tag description",
"category":"tag category"
}]
vmware.datastore.discovery
Performs datastore discovery.
{#DATASTORE} Datastore name.
{#DATASTORE.EXTENT} An array containing datastore extent partition ID and instance name.
Array structure:
[{
"partitionid":1,
"instance":"name"
}]
Array structure:
[{
"tag":"tag name",
"tag_description":"tag description",
"category":"tag category"
}]
vmware.dc.discovery
Performs datacenter discovery.
{#DATACENTER} Datacenter name.
{#DATACENTERID} Datacenter identifier.
”tags” An array containing tags with tag name, description and category.
Array structure:
[{
"tag":"tag name",
"tag_description":"tag description",
"category":"tag category"
}]
vmware.dvswitch.discovery
Performs vSphere distributed switches discovery.
{#DVS.NAME} Switch name.
{#DVS.UUID} Switch identifier.
vmware.hv.discovery
Performs hypervisor discovery.
{#HV.UUID} Unique hypervisor identifier.
{#HV.ID} Hypervisor identifier (HostSystem managed object name).
{#HV.NAME} Hypervisor name.
{#HV.NETNAME} Hypervisor network host name.
{#HV.IP} Hypervisor IP address, might be empty.
Array structure:
[{
"rpid":"resource group id",
"tags":[{}],
"rpath":"resource group path",
"vm_count":0
}]
”tags” An array containing tags with tag name, description and category.
Array structure:
[{
"tag":"tag name",
"tag_description":"tag description",
"category":"tag category"
}]
vmware.hv.datastore.discovery
Performs hypervisor datastore discovery. Note that multiple hypervisors can use the same datastore.
{#DATASTORE} Datastore name.
{#DATASTORE.TYPE} Datastore type.
Array structure:
[{
"partitionid":1,
"instance":"name"
}]
”tags” An array containing tags with tag name, description and category.
Array structure:
[{
"tag":"tag name",
"tag_description":"tag description",
"category":"tag category"
}]
vmware.hv.net.if.discovery
Performs hypervisor network interfaces discovery.
{#IFNAME} Interface name.
{#IFDRIVER} Interface driver.
{#IFDUPLEX} Interface duplex settings.
{#IFSPEED} Interface speed.
{#IFMAC} Interface mac address.
vmware.vm.discovery
Performs virtual machine discovery.
{#VM.UUID} Unique virtual machine identifier.
{#VM.ID} Virtual machine identifier (VirtualMachine managed object name).
{#VM.NAME} Virtual machine name.
{#HV.NAME} Hypervisor name.
{#HV.UUID} Unique hypervisor identifier.
{#HV.ID} Hypervisor identifier (HostSystem managed object name).
{#CLUSTER.NAME} Cluster name, might be empty.
{#DATACENTER.NAME} Datacenter name.
{#DATASTORE.NAME} Datastore name.
{#DATASTORE.UUID} Datastore identifier.
{#VM.IP} Virtual machine IP address, might be empty.
{#VM.DNS} Virtual machine DNS name, might be empty.
{#VM.GUESTFAMILY} Guest virtual machine OS family, might be empty.
{#VM.GUESTFULLNAME} Full guest virtual machine OS name, might be empty.
{#VM.FOLDER} The chain of virtual machine parent folders, can be used as value for
nested groups; folder names are combined with ”/”. Might be empty.
Array structure:
[{
"tag":"tag name",
"tag_description":"tag description",
"category":"tag category"
}]
”vm_customattribute” An array of virtual machine custom attributes (if defined).
Array structure:
[{
"name":"custom field name",
"value":"custom field value"
}]
”net_if” An array of virtual machine network interfaces.
Array structure:
[{
"ifname": "interface name",
"ifdesc": "interface description",
"ifmac": "00:00:00:00:00:00",
"ifconnected": true,
"iftype": "interface type",
"ifbackingdevice": "interface backing device",
"ifdvswitch_uuid": "interface switch uuid",
"ifdvswitch_portgroup": "interface switch port
group",
"ifdvswitch_port": "interface switch port",
"ifip": ["interface ip addresses"]
}]
Overview This section provides additional information about JSON objects returned by various VMware items.
{
"alarms": [
{
"name": "Host connection and power state",
"system_name": "alarm.HostConnectionStateAlarm",
"description": "Default alarm to monitor host connection and power state",
"enabled": true,
"key": "alarm-1.host-2013",
"time": "2022-06-27T05:27:38.759976Z",
"overall_status": "red",
"acknowledged": false
},
{
"name": "Host memory usage",
"system_name": "alarm.HostMemoryUsageAlarm",
"description": "Default alarm to monitor host memory usage",
"enabled": true,
"key": "alarm-4.host-1004",
"time": "2022-05-16T13:32:42.47863Z",
"overall_status": "yellow",
"acknowledged": false
},
{
// other alarms
}
]
}
{
"tags": [
{
"name": "Windows",
"description": "tag for cat OS type",
"category": "OS type"
},
{
"name": "SQL Server",
"description": "tag for cat application name",
"category": "application name"
},
{
// other tags
}
]
}
vmware.hv.diskinfo.get The item vmware.hv.diskinfo.get[] returns JSON objects with the following structure (values are
provided as an example):
[
{
"instance": "mpx.vmhba32:C0:T0:L0",
"hv_uuid": "8002299e-d7b9-8728-d224-76004bbb6100",
"datastore_uuid": "",
"operational_state": [
"ok"
],
"lun_type": "disk",
"queue_depth": 1,
"model": "USB DISK",
"vendor": "SMI Corp",
"revision": "1100",
"serial_number": "CCYYMMDDHHmmSS9S62CK",
"vsan": {}
},
{
// other instances
}
]
vmware.dvswitch.fetchports.get The item vmware.dvswitch.fetchports.get[] returns JSON objects with the following
structure (values are provided as an example):
{
"FetchDVPortsResponse":
{
"returnval": [
{
"key": "0",
"dvsUuid": "50 36 6a 24 25 c0 10 9e-05 4a f6 ea 4e 3d 09 88",
"portgroupKey": "dvportgroup-2023",
"proxyHost":
{
"@type": "HostSystem",
"#text": "host-2021"
},
"connectee":
{
"connectedEntity":
{
"@type": "HostSystem",
"#text": "host-2021"
},
"nicKey": "vmnic0",
"type": "pnic"
},
"conflict": "false",
"state":
{
"runtimeInfo":
{
"linkUp": "true",
"blocked": "false",
"vlanIds":
{
"start": "0",
"end": "4094"
},
"trunkingMode": "true",
"linkPeer": "vmnic0",
"macAddress": "00:00:00:00:00:00",
"statusDetail": null,
"vmDirectPathGen2Active": "false",
"vmDirectPathGen2InactiveReasonOther": "portNptIncompatibleConnectee"
},
"stats":
{
"packetsInMulticast": "2385470",
"packetsOutMulticast": "45",
"bytesInMulticast": "309250248",
"bytesOutMulticast": "5890",
"packetsInUnicast": "155601537",
"packetsOutUnicast": "113008658",
"bytesInUnicast": "121609489384",
"bytesOutUnicast": "47240279759",
"packetsInBroadcast": "1040420",
"packetsOutBroadcast": "7051",
"bytesInBroadcast": "77339771",
"bytesOutBroadcast": "430392",
"packetsInDropped": "0",
"packetsOutDropped": "0",
"packetsInException": "0",
"packetsOutException": "0"
}
},
"connectionCookie": "1702765133",
"lastStatusChange": "2022-03-25T14:01:11Z",
"hostLocalPort": "false"
},
{
//other keys
}
]
}
}
vmware.hv.hw.sensors.get The item vmware.hv.hw.sensors.get[] returns JSON objects with the following structure (values
are provided as an example):
{
"val":
{
"@type": "HostHardwareStatusInfo",
"storageStatusInfo": [
{
"name": "Intel Corporation HD Graphics 630 #2",
"status":
{
"label": "Unknown",
"summary": "Cannot report on the current status of the physical element",
"key": "Unknown"
}
},
{
"name": "Intel Corporation 200 Series/Z370 Chipset Family USB 3.0 xHCI Controller #20"
"status":
{
"label": "Unknown",
"summary": "Cannot report on the current status of the physical element",
"key": "Unknown"
}
},
{
// other hv hw sensors
}
]
}
}
vmware.hv.sensors.get The item vmware.hv.sensors.get[] returns JSON objects with the following structure (values are
provided as an example):
{
"val":
{
"@type": "ArrayOfHostNumericSensorInfo", "HostNumericSensorInfo": [
{
"@type": "HostNumericSensorInfo",
"name": "System Board 1 PwrMeter Output --- Normal",
"healthState":
{
"label": "Green",
"summary": "Sensor is operating under normal conditions",
"key": "green"
},
"currentReading": "10500",
"unitModifier": "-2",
"baseUnits": "Watts",
"sensorType": "other"
},
{
"@type": "HostNumericSensorInfo",
"name": "Power Supply 1 PS 1 Output --- Normal",
"healthState":
{
"label": "Green",
"summary": "Sensor is operating under normal conditions",
"key": "green"
},
"currentReading": "10000",
"unitModifier": "-2",
"baseUnits": "Watts",
"sensorType": "power"
},
{
// other hv sensors
}
]
}
}
vmware.vm.snapshot.get If any snapshots exist, the item vmware.vm.snapshot.get[] returns a JSON object with the following
structure (values are provided as an example):
{
"snapshot": [
{
"name": "VM Snapshot 4%2f1%2f2022, 9:16:39 AM",
"description": "Descr 1",
"createtime": "2022-04-01T06:16:51.761Z",
"size": 5755795171,
"uniquesize": 5755795171
},
{
"name": "VM Snapshot 4%2f1%2f2022, 9:18:21 AM",
"description": "Descr 2",
"createtime": "2022-04-01T06:18:29.164999Z",
"size": 118650595,
"uniquesize": 118650595
},
{
"name": "VM Snapshot 4%2f1%2f2022, 9:37:29 AM",
"description": "Descr 3",
"createtime": "2022-04-01T06:37:53.534999Z",
"size": 62935016,
"uniquesize": 62935016
}
],
"count": 3,
"latestdate": "2022-04-01T06:37:53.534999Z",
"latestage": 22729203,
"oldestdate": "2022-04-01T06:16:51.761Z",
"oldestage": 22730465,
"size": 5937380782,
"uniquesize": 5937380782
}
If no snapshot exists, the item vmware.vm.snapshot.get[] returns a JSON object with empty values:
{
"snapshot": [],
"count": 0,
"latestdate": null,
"latestage": 0,
"oldestdate": null,
"oldestage": 0,
"size": 0,
"uniquesize": 0
}
Overview
The following example describes how to set up Zabbix for monitoring VMware virtual machines. This involves:
Prerequisites
Note:
This example does not cover the configuration of VMware. It is assumed that VMware is already configured.
Before proceeding, set the StartVMwareCollectors parameter in the Zabbix server configuration file to 2 or more (the default
value is 0).
Create a host
2. Create a host:
• In the Host name field, enter a host name (for example, ”VMware VMs”).
• In the Host groups field, type or select a host group (for example, ”Virtual machines”).
3. Click the Add button to create the host. This host will represent your VMware environment.
1. Click Discovery for the created host to go to the list of low-level discovery rules for that host.
• In the Name field, enter a low-level discovery rule name (for example, ”Discover VMware VMs”).
• In the Type field, select ”Simple check”.
• In the Key field, enter the built-in item key for discovering VMware virtual machines: vmware.vm.discovery[{$VMWARE.URL}]
• In the User name and Password fields, enter the corresponding macros previously configured on the host.
3. Click the Add button to create the low-level discovery rule. This discovery rule will discover virtual machines in your VMware
environment.
1. In the list of low-level discovery rules, click Host prototypes for the previously created low-level discovery rule.
2. Create a host prototype. Since host prototypes are blueprints for creating hosts through low-level discovery rules, most fields
will contain low-level discovery macros. This ensures that the hosts are created with properties based on the content retrieved by
the previously created low-level discovery rule.
• In the Macros tab, set the {$VMWARE.VM.UUID} macro with the value {#VM.UUID}. This is necessary for the correct func-
tioning of the VMware Guest template that uses this macro as a host-level user macro in item parameters (for example,
vmware.vm.net.if.discovery[{$VMWARE.URL}, {$VMWARE.VM.UUID}]).
3. Click the Add button to create the host prototype. This host prototype will be used to create hosts for virtual machines discovered
by the previously created low-level discovery rule.
After the host prototype has been created, the low-level discovery rule will create hosts for discovered VMware virtual machines,
and Zabbix will start to monitor them. Note that the discovery and creation of hosts can also be executed manually, if necessary.
To view the created hosts, navigate to the Data collection → Hosts menu section.
To view collected metrics, navigate to the Monitoring → Hosts menu section and click Latest data for one of the hosts.
Advanced host interface configuration
The vmware.vm.discovery[{$VMWARE.URL}] item key, configured in the Create a low-level discovery rule section, returns
network interfaces data in the ”net_if” field:
"net_if": [
{
"ifname": "5000",
"ifdesc": "Network adapter 1",
"ifmac": "00:11:22:33:44:55",
"ifconnected": true,
"iftype": "VirtualVmxnet3",
"ifbackingdevice": "VLAN(myLab)",
"ifdvswitch_uuid": "",
"ifdvswitch_portgroup": "",
"ifdvswitch_port": "",
"ifip": [
"127.0.0.1",
"::1"
]
},
{
"ifname": "5001",
"ifdesc": "Network adapter 2",
"ifmac": "00:11:22:33:44:55",
"ifconnected": false,
"iftype": "VirtualVmxnet3",
"ifbackingdevice": "VLAN(myLab2)",
"ifdvswitch_uuid": "",
"ifdvswitch_portgroup": "",
"ifdvswitch_port": "",
"ifip": []
}
]
1. When creating a low-level discovery rule, additionally configure a low-level discovery macro. In the LLD macros tab, create a
custom LLD macro with a JSONPath value. For example:
• {#MYLAB.NET.IF} - $.net_if[?(@.ifbackingdevice=="VLAN(myLab)")].ifip[0].first()
2. When creating a host prototype, add a custom host interface and enter the LLD macro in the DNS name or IP address field.
11 Maintenance
Overview You can define maintenance periods for hosts and host groups in Zabbix.
Furthermore, it is possible to define maintenance only for a single trigger (or subset of triggers) by specifying trigger tags. In this
case maintenance will be activated only for those triggers; all other triggers of the host or host group will not be in maintenance.
There are two maintenance types - with data collection and with no data collection.
During a maintenance ”with data collection” triggers are processed as usual and events are created when required. However,
problem escalations are paused for hosts/triggers in maintenance, if the Pause operations for suppressed problems option is
checked in action configuration. In this case, escalation steps that may include sending notifications or remote commands will be
ignored for as long as the maintenance period lasts. Note that problem recovery and update operations are not suppressed during
maintenance, only escalations.
For example, if escalation steps are scheduled at 0, 30, and 60 minutes after a problem start, and there is a half-hour long
maintenance lasting from 10 minutes to 40 minutes after a real problem arises, steps two and three will be executed a half-hour
later, i.e. at 60 minutes and 90 minutes (provided the problem still exists). Similarly, if a problem arises during the maintenance,
the escalation will start after the maintenance.
To receive problem notifications during the maintenance normally (without delay), you have to uncheck the Pause operations for
suppressed problems option in action configuration.
Note:
If at least one host (used in the trigger expression) is not in maintenance mode, Zabbix will send a problem notification.
Zabbix server must be running during maintenance. Maintenances are recalculated every minute or as soon as the configuration
cache is reloaded if there are changes to the maintenance period.
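If you do not want to wait for the periodic update, the configuration cache reload can also be triggered manually with the runtime
control option (a sketch assuming the default server binary name):
zabbix_server -R config_cache_reload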
Timer processes check if host status must be changed to/from maintenance at 0 seconds of every minute. Additionally, every sec-
ond the timer process checks if any maintenances must be started/stopped based on whether there are changes to the maintenance
periods after the configuration update. Thus the speed of starting/stopping maintenance periods depends on the configuration
update interval (10 seconds by default). Note that maintenance period changes do not include Active since/Active till settings.
Also, if a host/host group is added to an existing active maintenance period, the changes will only be activated by the timer process
at the start of next minute.
Note that when a host enters maintenance, Zabbix server timer processes will read all open problems to check if it is required
to suppress those. This may have a performance impact if there are many open problems. Zabbix server will also read all open
problems upon startup, even if there are no maintenances configured at the time.
Note that the Zabbix server (or proxy) always collects data regardless of the maintenance type (including ”no data” maintenance).
The data is later ignored by the server if ’no data collection’ is set.
When ”no data” maintenance ends, triggers using nodata() function will not fire before the next check during the period they are
checking.
If a log item is added while a host is in maintenance and the maintenance ends, only new logfile entries since the end of the
maintenance will be gathered.
If a timestamped value is sent for a host that is in a "no data" maintenance type (e.g. using Zabbix sender), this value will be dropped. However, it is possible to send a timestamped value for an expired maintenance period and it will be accepted.
If maintenance period, hosts, groups or tags are changed by the user, the changes will only take effect after configuration cache
synchronization.
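Maintenance periods can also be managed through the Zabbix API using the maintenance.create, maintenance.update and maintenance.delete methods. The request below is only a minimal sketch of creating a one-time, one-hour "with data collection" maintenance for one host group; the group ID, timestamps and tag are placeholders, authentication is omitted, and the exact property set should be verified against the maintenance.create API reference:

{
    "jsonrpc": "2.0",
    "method": "maintenance.create",
    "params": {
        "name": "Weekend maintenance",
        "active_since": 1704499200,
        "active_till": 1704585600,
        "groups": [{"groupid": "2"}],
        "maintenance_type": 0,
        "tags": [{"tag": "service", "value": "backup"}],
        "timeperiods": [
            {
                "timeperiod_type": 0,
                "start_date": 1704499200,
                "period": 3600
            }
        ]
    },
    "id": 1
}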
Parameter Description
Tags If maintenance tags are specified, maintenance for the selected hosts will be activated, but only problems with matching tags will be suppressed (that is, no actions will be taken).
Maintenance periods
The maintenance period window is for scheduling time for a recurring or a one-time maintenance. The form is dynamic with
available fields changing based on the Period type selected.
Period type Description
Daily Configure a daily maintenance period:
Every day(s) - maintenance frequency (1 - (default) every day, 2 - every two days, etc.);
At (hour:minute) - time of the day when maintenance starts;
Maintenance period length - for how long the maintenance will be active.
When the Every day(s) parameter is greater than "1", the starting day is the day that the Active since time falls on. Examples:
- if Active since is set to "2021-01-01 12:00", Every day(s) is set to "2", and At (hour:minute) is set to "23:00", then the first maintenance period will start on January 1 at 23:00, while the second maintenance period will start on January 3 at 23:00;
- if Active since is set to "2021-01-01 12:00", Every day(s) is set to "2", and At (hour:minute) is set to "01:00", then the first maintenance period will start on January 3 at 01:00, while the second maintenance period will start on January 5 at 01:00.
Weekly Configure a weekly maintenance period:
Every week(s) - maintenance frequency (1 - (default) every week, 2 - every two weeks, etc.);
Day of week - on which day the maintenance should take place;
At (hour:minute) - time of the day when maintenance starts;
Maintenance period length - for how long the maintenance will be active.
When Every week(s) parameter is greater than ”1”, the starting week is the week that the Active
since time falls on. For examples, see parameter Daily description above.
Monthly Configure a monthly maintenance period:
Month - select all months during which the regular maintenance is carried out;
Date: Day of month - select this option if the maintenance should take place on the same date
each month (for example, every 1st day of the month), and then select the required day in the
field Day of month that appears;
Date: Day of week - select this option if the maintenance should take place only on certain days
(for example, every first Monday of the month), then select (in the drop-down) the required week
of the month (first, second, third, fourth, or last), and then mark the checkboxes for maintenance
day(s);
At (hour:minute) - time of the day when maintenance starts;
Maintenance period length - for how long the maintenance will be active.
Attention:
When creating a maintenance period, the time zone of the user who creates it is used. However, when recurring mainte-
nance periods (Daily, Weekly, Monthly) are scheduled, the time zone of the Zabbix server is used. To ensure predictable
behavior of recurring maintenance periods, it is required to use a common time zone for all parts of Zabbix.
When done, press Add to add the maintenance period to the Periods block.
Note that Daylight Saving Time (DST) changes do not affect how long the maintenance will be. For example, let’s say that we have
a two-hour maintenance configured that usually starts at 01:00 and finishes at 03:00:
• if after one hour of maintenance (at 02:00) a DST change happens and current time changes from 02:00 to 03:00, the
maintenance will continue for one more hour (till 04:00);
• if after two hours of maintenance (at 03:00) a DST change happens and current time changes from 03:00 to 02:00, the
maintenance will stop, because two hours have passed;
• if a maintenance period starts during the hour that is skipped by a DST change, then the maintenance will not start.
If a maintenance period is set to ”1 day” (the actual period of the maintenance is 24 hours, since Zabbix calculates days in hours),
starts at 00:00 and finishes at 00:00 the next day:
• the maintenance will stop at 01:00 the next day if current time changes forward one hour;
• the maintenance will stop at 23:00 that day if current time changes back one hour.
An orange wrench icon next to the host name indicates that this host is in maintenance in:
• Dashboards
• Monitoring → Problems
• Inventory → Hosts → Host inventory details
• Data collection → Hosts (See ’Status’ column)
Maintenance details are displayed when the mouse pointer is positioned over the icon.
Normally problems for hosts in maintenance are suppressed, i.e. not displayed in the frontend. However, it is also possible to
configure that suppressed problems are shown, by selecting the Show suppressed problems option in these locations:
• Dashboards (in Problem hosts, Problems, Problems by severity, Trigger overview widget configuration)
• Monitoring → Problems (in the filter)
• Monitoring → Maps (in map configuration)
• Global notifications (in user profile configuration)
When suppressed problems are displayed, they are marked with an indicating icon; rolling the mouse over the icon displays more details.
12 Regular expressions
Overview Perl Compatible Regular Expressions (PCRE, PCRE2) are supported in Zabbix.
Regular expressions You may manually enter a regular expression in supported places. Note that the expression may not start
with @ because that symbol is used in Zabbix for referencing global regular expressions.
Warning:
It’s possible to run out of stack when using regular expressions. See the pcrestack man page for more information.
Note that in multiline matching, the ^ and $ anchors match at the beginning/end of each line respectively, instead of the begin-
ning/end of the entire string.
Global regular expressions There is an advanced editor for creating and testing complex regular expressions in Zabbix fron-
tend.
Once a regular expression has been created this way, it can be used in several places in the frontend by referring to its name,
prefixed with @, for example, @mycustomregexp.
To create a global regular expression:
The Expressions tab allows setting the regular expression name and adding subexpressions.
Parameter Description
Name Set the regular expression name. Any Unicode characters are allowed.
Expressions Click on Add in the Expressions block to add a new subexpression.
Expression type Select expression type:
Character string included - match the substring
Any character string included - match any substring from a delimited list. The delimiter can be a comma (,), a dot (.) or a forward slash (/).
Character string not included - match any string except the substring
Result is TRUE - match the regular expression
Result is FALSE - do not match the regular expression
Expression Enter substring/regular expression.
Delimiter A comma (,), a dot (.) or a forward slash (/) to separate text strings in a regular expression.
This parameter is active only when ”Any character string included” expression type is
selected.
Case sensitive A checkbox to specify whether a regular expression is sensitive to capitalization of letters.
A forward slash (/) in the expression is treated literally, rather than a delimiter. This way it is possible to save expressions containing
a slash, without errors.
Attention:
A custom regular expression name in Zabbix may contain commas, spaces, etc. In those cases where that may lead to
misinterpretation when referencing (for example, a comma in the parameter of an item key) the whole reference may be
put in quotes like this: ”@My custom regexp for purpose1, purpose2”.
Regular expression names must not be quoted in other locations (for example, in LLD rule properties).
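For example, referencing global regular expressions from a log monitoring item key could look as follows (the log file path is arbitrary and only used for illustration):

log[/var/log/app.log,@mycustomregexp]
log[/var/log/app.log,"@My custom regexp for purpose1, purpose2"]

In the second key the reference is quoted because the regular expression name contains commas, which would otherwise be interpreted as item key parameter separators.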
In the Test tab the regular expression and its subexpressions can be tested by providing a test string.
Results show the status of each subexpression and total custom expression status.
Total custom expression status is defined as Combined result. If several subexpressions are defined, Zabbix uses the AND logical operator to calculate the Combined result. This means that if at least one Result is False, the Combined result also has a False status.
Default global regular expressions Zabbix comes with several global regular expressions in its default dataset.
Name Expression Matches
Examples Example 1
Use of the following expression in low-level discovery to discover databases except a database with a specific name:
^TESTDATABASE$
Chosen Expression type: "Result is FALSE". Does not match a name containing the string "TESTDATABASE".
Example 2
Use of the following regular expression including an inline modifier (?i) to match the characters "error":
(?i)error
Example 3
Use of the following regular expression including multiple inline modifiers to match the characters after a specific line:
(?<=match (?i)everything(?-i) after this line\n)(?sx).*# we add s modifier to allow . match newline character
Chosen Expression type: ”Result is TRUE”. Characters after a specific line are matched.
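As an illustration, assuming the following hypothetical test string:

match EVERYTHING after this line
error: disk is full
warning: retry scheduled

the expression above matches the last two lines as a single match: the (?i)/(?-i) modifiers make the word "everything" match case-insensitively inside the lookbehind, and the s modifier allows . to match the newline between the remaining lines.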
Attention:
The g modifier cannot be specified inline. The list of available modifiers can be found in the pcresyntax man page. For more information about PCRE syntax, please refer to the PCRE HTML documentation.
Regular expression support by location (columns: Location, Regular expression, Global regular expression, Multiline matching, Comments):
Agent items
eventlog[] Yes Yes Yes regexp, severity, source, eventid parameters
eventlog.count[] regexp, severity, source, eventid parameters
log[] regexp parameter
log.count[]
logrt[] Yes/No regexp parameter supports both, file_regexp parameter supports non-global expressions only
logrt.count[]
proc.cpu.util[] No No cmdline parameter
proc.get[]
proc.mem[]
proc.num[]
sensor[] device and sensor parameters on Linux 2.4
system.hw.macaddr[] interface parameter
system.sw.packages[] regexp parameter
system.sw.packages.get[] regexp parameter
vfs.dir.count[] regex_incl, regex_excl, regex_excl_dir parameters
vfs.dir.get[] regex_incl, regex_excl, regex_excl_dir parameters
vfs.dir.size[] regex_incl, regex_excl, regex_excl_dir parameters
vfs.file.regexp[] Yes regexp parameter
vfs.file.regmatch[]
web.page.regexp[]
SNMP traps
snmptrap[] Yes Yes No regexp parameter
Item value preprocessing Yes No No pattern parameter
Functions for triggers/calculated items
count() Yes Yes Yes pattern parameter if operator parameter is regexp or iregexp
countunique() Yes Yes
find() Yes Yes
logeventid() Yes Yes No pattern parameter
logsource()
Low-level discovery
Filters Yes Yes No Regular expression field
Overrides Yes No In matches, does not match options for Operation conditions
Action conditions Yes No No In matches, does not match options for Host name and Host metadata autoregistration conditions
Scripts Yes Yes No Input validation rule field
Web monitoring Yes No Yes Variables with a regex: prefix; Required string field
User macro context Yes No No In macro context with a regex: prefix
Macro functions
regsub() Yes No No pattern parameter
iregsub()
Icon mapping Yes Yes No Expression field
Value mapping Yes No No Value field if mapping type is regexp
13 Problem acknowledgment
If a user gets notified about a problem event, they can go to Zabbix frontend, open the problem update popup window of that
problem using one of the ways listed below and acknowledge the problem. When acknowledging, they can add a comment, for example, stating that they are working on the problem.
This way, if another system user spots the same problem, they can immediately see whether it has been acknowledged and read the comments made so far, which allows several users to resolve problems in a coordinated way.
Acknowledgment status is also used when defining action operations. You can define, for example, that a notification is sent to a
higher level manager only if an event is not acknowledged for some time.
To acknowledge events and comment on them, a user must have at least read permissions to the corresponding triggers. To change
problem severity or close problem, a user must have read-write permissions to the corresponding triggers.
There are several ways to access the problem update popup window, which allows acknowledging a problem.
• You may select problems in Monitoring → Problems and then click on Mass update below the list
• You can click on Update in the Update column of a problem in:
– Dashboards (Problems and Problems by severity widgets)
– Monitoring → Problems
– Monitoring → Problems → Event details
• You can click on an unresolved problem cell in:
– Dashboards (Trigger overview widget)
The popup menu contains an Update option that will take you to the problem update window.
All mandatory input fields are marked with a red asterisk.
Parameter Description
Display
Based on acknowledgment information, it is possible to configure how the problem count is displayed in the dashboard or maps. To do that, you have to make selections in the Problem display option, available in both map configuration and the Problems by severity dashboard widget. It is possible to display the total problem count, the unacknowledged problem count as separated from the total, or the unacknowledged problem count only.
Based on problem update information (acknowledgment, etc.), it is possible to configure update operations - send a message or
execute remote commands.
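Problems can also be acknowledged programmatically via the Zabbix API event.acknowledge method. A minimal sketch (the event ID and message are placeholders, authentication is omitted; here the action value 6 combines the "acknowledge" (2) and "add message" (4) flags as this author understands the bitmask):

{
    "jsonrpc": "2.0",
    "method": "event.acknowledge",
    "params": {
        "eventids": "20427",
        "action": 6,
        "message": "Investigating, disk replacement scheduled."
    },
    "id": 1
}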
1 Problem suppression
Overview
Problem suppression offers a way of temporarily hiding a problem that can be dealt with later. This is useful for cleaning up the
problem list in order to give the highest priority to the most urgent issues. For example, sometimes an issue may arise on the
weekend that is not urgent enough to be dealt with immediately, so it can be ”snoozed” until Monday morning.
Problem suppression allows hiding a single problem, in contrast to problem suppression through host maintenance, where all problems of the host in maintenance are hidden.
Operations for trigger actions will be paused for suppressed problems the same way as it is done with host maintenance.
Configuration
A problem can be suppressed through the problem update window, where suppression is one of the problem update options
along with commenting, changing severity, acknowledging, etc.
A problem may also be unsuppressed through the same problem update window.
Display
Once suppressed, the problem is marked by a blinking suppression icon in the Info column, before being hidden.
The suppression icon is blinking while the suppression task is in the waiting list. Once the task manager has suppressed the
problem, the icon will stop blinking. If the suppression icon keeps blinking for a long time, this may indicate a server problem, for
example, if the server is down and the task manager cannot complete the task. The same logic applies to unsuppression. In the
short period after the task is submitted and the server has not completed it, the unsuppression icon is blinking.
A suppressed problem may be either hidden or shown, depending on the problem filter/widget settings.
When shown in the problem list, a suppressed problem is marked by the suppression icon and suppression details are shown on
mouseover:
Suppression details are also displayed in a popup when positioning the mouse on the suppression icon in the Actions column.
14 Configuration export/import
Overview Zabbix export/import functionality makes it possible to exchange various configuration entities between one Zabbix
system and another.
• share templates or network maps - Zabbix users may share their configuration parameters
• upload a template to Zabbix Community templates. Then others can download the template and import the file into Zabbix.
• integrate with third-party tools - universal YAML, XML and JSON formats make integration and data import/export possible
with third-party tools and applications
Export format
Data can be exported using the Zabbix web frontend or Zabbix API. Supported export formats are YAML, XML and JSON.
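For example, exporting a template via the API could look like this (a minimal sketch; the template ID is a placeholder and authentication is omitted):

{
    "jsonrpc": "2.0",
    "method": "configuration.export",
    "params": {
        "options": {
            "templates": ["10001"]
        },
        "format": "yaml"
    },
    "id": 1
}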
• When importing hosts/templates using the "Delete missing" option, host/template macros not present in the import file will be deleted from the host/template after the import.
• Empty tags for items, triggers, graphs, discoveryRules, itemPrototypes, triggerPrototypes, graphPrototypes are meaningless, i.e., the same as if they were missing.
• If entities of the imported host/template have their own timeouts configured, they will be applied; otherwise, proxy/global
timeouts will be applied.
• Import supports YAML, XML and JSON; the import file must have a correct file extension: .yaml and .yml for YAML, .xml for XML and .json for JSON. See compatibility information about supported XML versions.
• Import supports configuration files only in UTF-8 encoding (with or without BOM); other encodings (UTF16LE, UTF16BE,
UTF32LE, UTF32BE, etc.) will result in an import conversion error.
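Importing can likewise be done via the API with the configuration.import method; a minimal sketch follows (the source value is shortened and the rule set is reduced for brevity; authentication is omitted):

{
    "jsonrpc": "2.0",
    "method": "configuration.import",
    "params": {
        "format": "yaml",
        "rules": {
            "templates": {
                "createMissing": true,
                "updateExisting": true
            },
            "items": {
                "createMissing": true,
                "updateExisting": true,
                "deleteMissing": false
            }
        },
        "source": "zabbix_export:\n  version: '7.0'\n  ..."
    },
    "id": 1
}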
YAML base format The YAML export format contains the following nodes:
XML format The XML export format contains the following tags:
JSON format The JSON export format contains the following objects:
{
"zabbix_export": {
"version": "7.0"
}
}
1 Template groups
Overview
In the frontend, template groups can be exported only with template export. When a template is exported, all groups it belongs
to are exported with it automatically.
Export format
template_groups:
- uuid: 36bff6c29af64692839d077febfc7079
name: 'Network devices'
Exported elements
Element Type Description
2 Host groups
Overview
In the frontend, host groups can be exported only with host export. When a host is exported, all groups it belongs to are exported
with it automatically.
Export format
host_groups:
- uuid: 6f6799aa69e844b4b3918f779f2abf08
name: 'Zabbix servers'
Exported elements
3 Templates
Overview
Templates are exported with many related objects and object relations.
Exporting
Depending on the selected format, templates are exported to a local file with a default name:
If you mark the Advanced options checkbox, a detailed list of all importable elements will be displayed - mark or unmark each
import rule as required.
If you click the checkbox in the All row, all elements below it will be marked/unmarked.
Import rules:
Rule Description
Update existing Existing elements will be updated using data from the import file. Otherwise, they will
not be updated.
Create new New elements will be created using data from the import file. Otherwise, they will not be
created.
Delete missing Existing elements not present in the import file will be removed. Otherwise, they will not
be removed.
If Delete missing is marked for Template linkage, current template linkage not present in
the import file will be unlinked. Entities (items, triggers, graphs, etc.) inherited from the
unlinked templates will not be removed (unless the Delete missing option is selected for
each entity as well).
On the next screen, you will be able to view the content of a template being imported. If this is a new template, all elements will be
listed in green. If updating an existing template, new template elements will be highlighted in green; removed template elements
will be highlighted in red; elements that have not changed will be listed on a gray background.
The menu on the left can be used to navigate through the list of changes. The Updated section highlights all changes made to
existing template elements. The Added section lists new template elements. The elements in each section are grouped by element
type; click the gray arrow to expand or collapse the group of elements.
Review template changes and then click Import to perform the template import. A success or failure message of the import will
be displayed in the frontend.
Export format
Template tooling version used: 0.41
groups:
- name: Templates/Applications
items:
- uuid: 5ce209f4d94f460488a74a92a52d92b1
name: 'VMware: Event log'
type: SIMPLE
key: 'vmware.eventlog[{$VMWARE.URL},skip]'
history: 7d
trends: '0'
value_type: LOG
username: '{$VMWARE.USERNAME}'
password: '{$VMWARE.PASSWORD}'
description: 'Collect VMware event log.'
tags:
- tag: component
value: log
- uuid: ee2edadb8ce943ef81d25dbbba8667a4
name: 'VMware: Full name'
type: SIMPLE
key: 'vmware.fullname[{$VMWARE.URL}]'
delay: 1h
history: 7d
trends: '0'
value_type: CHAR
username: '{$VMWARE.USERNAME}'
password: '{$VMWARE.PASSWORD}'
description: 'VMware service full name.'
preprocessing:
- type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1d
tags:
- tag: component
value: system
- uuid: a0ec9145f2234fbea79a28c57ebdb44d
name: 'VMware: Version'
type: SIMPLE
key: 'vmware.version[{$VMWARE.URL}]'
delay: 1h
history: 7d
trends: '0'
value_type: CHAR
username: '{$VMWARE.USERNAME}'
password: '{$VMWARE.PASSWORD}'
description: 'VMware service version.'
preprocessing:
- type: DISCARD_UNCHANGED_HEARTBEAT
parameters:
- 1d
tags:
- tag: component
value: system
discovery_rules:
- uuid: 16ffc933cce74cf28a6edf306aa99782
name: 'Discover VMware clusters'
type: SIMPLE
key: 'vmware.cluster.discovery[{$VMWARE.URL}]'
delay: 1h
username: '{$VMWARE.USERNAME}'
password: '{$VMWARE.PASSWORD}'
description: 'Discovery of clusters'
item_prototypes:
- uuid: 46111f91dd564a459dbc1d396e2e6c76
name: 'VMware: Status of "{#CLUSTER.NAME}" cluster'
type: SIMPLE
key: 'vmware.cluster.status[{$VMWARE.URL},{#CLUSTER.NAME}]'
history: 7d
username: '{$VMWARE.USERNAME}'
password: '{$VMWARE.PASSWORD}'
description: 'VMware cluster status.'
valuemap:
name: 'VMware status'
tags:
- tag: cluster
value: '{#CLUSTER.NAME}'
- tag: component
value: cluster
- uuid: 8fb6a45cbe074b0cb6df53758e2c6623
name: 'Discover VMware datastores'
type: SIMPLE
key: 'vmware.datastore.discovery[{$VMWARE.URL}]'
delay: 1h
username: '{$VMWARE.USERNAME}'
password: '{$VMWARE.PASSWORD}'
item_prototypes:
- uuid: 4b61838ba4c34e709b25081ae5b059b5
name: 'VMware: Average read latency of the datastore {#DATASTORE}'
type: SIMPLE
key: 'vmware.datastore.read[{$VMWARE.URL},{#DATASTORE},latency]'
history: 7d
username: '{$VMWARE.USERNAME}'
password: '{$VMWARE.PASSWORD}'
description: 'Amount of time for a read operation from the datastore (milliseconds).'
tags:
- tag: component
value: datastore
- tag: datastore
value: '{#DATASTORE}'
- uuid: 5355c401dc244bc588ccd18767577c93
name: 'VMware: Free space on datastore {#DATASTORE} (percentage)'
type: SIMPLE
key: 'vmware.datastore.size[{$VMWARE.URL},{#DATASTORE},pfree]'
delay: 5m
history: 7d
value_type: FLOAT
units: '%'
username: '{$VMWARE.USERNAME}'
password: '{$VMWARE.PASSWORD}'
description: 'VMware datastore space in percentage from total.'
tags:
- tag: component
value: datastore
- tag: datastore
value: '{#DATASTORE}'
- uuid: 84f13c4fde2d4a17baaf0c8c1eb4f2c0
name: 'VMware: Total size of datastore {#DATASTORE}'
type: SIMPLE
key: 'vmware.datastore.size[{$VMWARE.URL},{#DATASTORE}]'
delay: 5m
history: 7d
units: B
username: '{$VMWARE.USERNAME}'
password: '{$VMWARE.PASSWORD}'
description: 'VMware datastore space in bytes.'
tags:
- tag: component
value: datastore
- tag: datastore
value: '{#DATASTORE}'
- uuid: 540cd0fbc56c4b8ea19f2ff5839ce00d
name: 'VMware: Average write latency of the datastore {#DATASTORE}'
type: SIMPLE
key: 'vmware.datastore.write[{$VMWARE.URL},{#DATASTORE},latency]'
history: 7d
username: '{$VMWARE.USERNAME}'
password: '{$VMWARE.PASSWORD}'
description: 'Amount of time for a write operation to the datastore (milliseconds).'
tags:
- tag: component
value: datastore
- tag: datastore
value: '{#DATASTORE}'
- uuid: a5bc075e89f248e7b411d8f960897a08
name: 'Discover VMware hypervisors'
type: SIMPLE
key: 'vmware.hv.discovery[{$VMWARE.URL}]'
delay: 1h
username: '{$VMWARE.USERNAME}'
password: '{$VMWARE.PASSWORD}'
description: 'Discovery of hypervisors.'
host_prototypes:
- uuid: 051a1469d4d045cbbf818fcc843a352e
host: '{#HV.UUID}'
name: '{#HV.NAME}'
group_links:
- group:
name: Applications
group_prototypes:
- name: '{#CLUSTER.NAME}'
- name: '{#DATACENTER.NAME}'
templates:
- name: 'VMware Hypervisor'
macros:
- macro: '{$VMWARE.HV.UUID}'
value: '{#HV.UUID}'
description: 'UUID of hypervisor.'
custom_interfaces: 'YES'
interfaces:
- ip: '{#HV.IP}'
- uuid: 9fd559f4e88c4677a1b874634dd686f5
name: 'Discover VMware VMs'
type: SIMPLE
key: 'vmware.vm.discovery[{$VMWARE.URL}]'
delay: 1h
username: '{$VMWARE.USERNAME}'
password: '{$VMWARE.PASSWORD}'
description: 'Discovery of guest virtual machines.'
host_prototypes:
- uuid: 23b9ae9d6f33414880db1cb107115810
host: '{#VM.UUID}'
name: '{#VM.NAME}'
group_links:
- group:
name: Applications
group_prototypes:
- name: '{#CLUSTER.NAME} (vm)'
- name: '{#DATACENTER.NAME}/{#VM.FOLDER} (vm)'
- name: '{#HV.NAME}'
templates:
- name: 'VMware Guest'
macros:
- macro: '{$VMWARE.VM.UUID}'
value: '{#VM.UUID}'
description: 'UUID of guest virtual machine.'
custom_interfaces: 'YES'
interfaces:
- ip: '{#VM.IP}'
tags:
- tag: class
value: software
- tag: target
value: vmware
macros:
- macro: '{$VMWARE.PASSWORD}'
description: 'VMware service {$USERNAME} user password'
- macro: '{$VMWARE.URL}'
description: 'VMware service (vCenter or ESX hypervisor) SDK URL (https://servername/sdk)'
- macro: '{$VMWARE.USERNAME}'
description: 'VMware service user name'
valuemaps:
- uuid: 3c59c22905054d42ac4ee8b72fe5f270
name: 'VMware status'
mappings:
- value: '0'
newvalue: gray
- value: '1'
newvalue: green
- value: '2'
newvalue: yellow
- value: '3'
newvalue: red
Templates
Element Type Description
Template items
Element Type Description
Note:
See also: Item object (refer to the relevant property with a matching name).
Note:
See also: Item preprocessing object (refer to the relevant property with a matching name).
Element Type Description
Note:
See also: Trigger object (refer to the relevant property with a matching name).
Attention:
Most template low-level discovery rule elements are the same as for template items. The table below describes those
elements that differ from template items.
Element Type Description
trigger_prototypes Root element for template trigger prototype elements, which are the same as for
template item triggers.
graph_prototypes Root element for template graph prototype elements, which are the same as for host
graphs.
host_prototypes Root element for template host prototype elements, which are the same as for hosts.
master_item string (required for DEPENDENT rules) Root element for the dependent rule’s master item.
lld_macro_paths Root element for low-level discovery rule macro paths.
lld_macro string (required) Low-level discovery rule macro name.
path string (required) Selector for value, which will be assigned to the corresponding macro.
preprocessing Root element for low-level discovery rule value preprocessing.
step Root element for low-level discovery rule value preprocessing step elements, which are
the same as for template item value preprocessing steps, except with fewer possible
values. See also: LLD rule preprocessing object.
overrides Root element for low-level discovery rule override rules.
name string (required) Unique override name.
step string (required) Unique order number of the override.
stop string Stop processing next overrides if matches.
filter Root element for template low-level discovery rule override rule filter elements, which
are the same as for template low-level discovery rule filters.
operations Root element for template low-level discovery rule override operations.
Note:
See also: LLD rule object (refer to the relevant property with a matching name).
Note:
See also: LLD rule filter object (refer to the relevant property with a matching name).
Element Type Description
trends string Trend storage period set for the item prototype upon the override operation.
severity string Trigger prototype severity set upon the override operation.
tags Root element for the tags set for the object upon the override operation.
tag string (required) Tag name.
value string Tag value.
templates Root element for the templates linked to the host prototype upon the override operation.
name string (required) Template name.
inventory_mode string Host prototype inventory mode set upon the override operation.
Note:
See also: LLD rule override operation object (refer to the relevant property with a matching name).
Note:
See also: Web scenario object (refer to the relevant property with a matching name).
Element Type Description
Note:
See also: Web scenario step object (refer to the relevant property with a matching name).
Template dashboards
Note:
See also: Template dashboard object (refer to the relevant property with a matching name).
Element Type Description
x integer Horizontal position from the left side of the template dashboard.
Possible values (1): 0-71.
y integer Vertical position from the top of the template dashboard.
Possible values (1): 0-63.
width integer Widget width.
Possible values (1): 1-72.
height integer Widget height.
Possible values (1): 1-64.
hide_header string Hide widget header.
Possible values (1): NO (0, default), YES (1).
fields Root element for the template dashboard widget fields.
type string (required) Widget field type.
Possible values (1): INTEGER (0), STRING (1), ITEM (4), ITEM_PROTOTYPE (5), GRAPH (6), GRAPH_PROTOTYPE (7), MAP (8), SERVICE (9), SLA (10), USER (11), ACTION (12), MEDIA_TYPE (13).
name string (required) Widget field name.
value mixed (required) Widget field value, depending on the field type.
Note:
See also: Template dashboard widget object (refer to the relevant property with a matching name).
Note:
See also: Value map object (refer to the relevant property with a matching name).
Footnotes
(1) API integer values in brackets, for example, ENABLED (0), are mentioned only for reference. For more information, see the linked API object page in the table entry or at the end of each section.
4 Hosts
Overview
Hosts are exported with many related objects and object relations.
• Host macros
• Host inventory data
• Value maps
• Linked graphs
Exporting
Depending on the selected format, hosts are exported to a local file with a default name:
If you mark the Advanced options checkbox, a detailed list of all importable elements will be displayed - mark or unmark each
import rule as required.
If you click the checkbox in the All row, all elements below it will be marked/unmarked.
Import rules:
Rule Description
Update existing Existing elements will be updated using data from the import file. Otherwise, they will
not be updated.
Create new New elements will be created using data from the import file. Otherwise, they will not be
created.
Delete missing Existing elements not present in the import file will be removed. Otherwise, they will not
be removed.
If Delete missing is marked for Template linkage, current template linkage not present in
the import file will be unlinked. Entities (items, triggers, graphs, etc.) inherited from the
unlinked templates will not be removed (unless the Delete missing option is selected for
each entity as well).
Export format
- name: 'Linux by Zabbix agent'
- name: 'Zabbix server health'
groups:
- name: 'Discovered hosts'
- name: 'Zabbix servers'
interfaces:
- ip: 192.168.1.1
interface_ref: if1
items:
- name: 'Zabbix trap'
type: TRAP
key: trap
delay: '0'
history: 1w
preprocessing:
- type: MULTIPLIER
parameters:
- '8'
tags:
- tag: Application
value: 'Zabbix server'
triggers:
- expression: 'last(/Zabbix server 1/trap)=0'
name: 'Last value is zero'
priority: WARNING
tags:
- tag: Process
value: 'Internal test'
tags:
- tag: Process
value: Zabbix
macros:
- macro: '{$HOST.MACRO}'
value: '123'
- macro: '{$PASSWORD1}'
type: SECRET_TEXT
inventory:
type: 'Zabbix server'
name: yyyyyy-HP-Pro-3010-Small-Form-Factor-PC
os: 'Linux yyyyyy-HP-Pro-3010-Small-Form-Factor-PC 4.4.0-165-generic #193-Ubuntu SMP Tue Sep 17 17
inventory_mode: AUTOMATIC
graphs:
- name: 'CPU utilization server'
show_work_period: 'NO'
show_triggers: 'NO'
graph_items:
- drawtype: FILLED_REGION
color: FF5555
item:
host: 'Zabbix server 1'
key: 'system.cpu.util[,steal]'
- sortorder: '1'
drawtype: FILLED_REGION
color: 55FF55
item:
host: 'Zabbix server 1'
key: 'system.cpu.util[,softirq]'
- sortorder: '2'
drawtype: FILLED_REGION
color: '009999'
item:
host: 'Zabbix server 1'
key: 'system.cpu.util[,interrupt]'
- sortorder: '3'
drawtype: FILLED_REGION
color: '990099'
item:
host: 'Zabbix server 1'
key: 'system.cpu.util[,nice]'
- sortorder: '4'
drawtype: FILLED_REGION
color: '999900'
item:
host: 'Zabbix server 1'
key: 'system.cpu.util[,iowait]'
- sortorder: '5'
drawtype: FILLED_REGION
color: '990000'
item:
host: 'Zabbix server 1'
key: 'system.cpu.util[,system]'
- sortorder: '6'
drawtype: FILLED_REGION
color: '000099'
calc_fnc: MIN
item:
host: 'Zabbix server 1'
key: 'system.cpu.util[,user]'
- sortorder: '7'
drawtype: FILLED_REGION
color: '009900'
item:
host: 'Zabbix server 1'
key: 'system.cpu.util[,idle]'
Hosts
Element Type Description
Note:
See also: Host object (refer to the relevant property with a matching name).
Host interfaces
default string Whether this is the primary host interface. Note that there can be only one primary interface of one type on a host.
Possible values (1): NO (0), YES (1, default).
type string Interface type.
Possible values (1): ZABBIX (1, default), SNMP (2), IPMI (3), JMX (4).
useip string Whether to use IP as the interface for connecting to the host (otherwise, DNS will be used).
Possible values (1): NO (0), YES (1, default).
ip string (required for IP connections) IP address (IPv4 or IPv6).
dns string (required for DNS connections) DNS name.
port string Port number.
details Root element for interface details.
version string Use this SNMP version.
Possible values (1): SNMPV1 (1), SNMP_V2C (2, default), SNMP_V3 (3).
community string (required for SNMPv1 and SNMPv2 items) SNMP community.
max_repetitions string Max repetition value for native SNMP bulk requests (GetBulkRequest-PDUs).
Supported for SNMPv2 and SNMPv3 items (discovery[] and walk[] items).
Default: 10.
contextname string SNMPv3 context name.
Supported for SNMPv3 items.
securityname string SNMPv3 security name.
Supported for SNMPv3 items.
securitylevel string SNMPv3 security level.
Supported for SNMPv3 items.
Possible values (1): NOAUTHNOPRIV (0, default), AUTHNOPRIV (1), AUTHPRIV (2).
Element Type Description
Note:
See also: Host interface object (refer to the relevant property with a matching name).
Host items
Element Type Description
password string (required for SSH and TELNET items) Password for authentication.
Supported for SIMPLE, ODBC, JMX and HTTP_AGENT items.
When used for JMX items, username (see above) should also be specified or both
elements should be left blank.
publickey string (required for SSH items) Name of the public key file.
privatekey string (required for SSH items) Name of the private key file.
description text Item description.
inventory_link string Host inventory field that is populated by the item.
Possible values (1): NONE (0), ALIAS (4), etc. (see Host inventory for supported fields).
valuemap Root element for item value maps.
name string (required) Name of the value map to use for the item.
logtimefmt string Format of the time in log entries.
Supported for items of LOG value type.
preprocessing Root element for item value preprocessing.
step Root element for host item value preprocessing steps.
interface_ref string Reference to the host interface (format: if<N>).
jmx_endpoint string JMX endpoint.
Supported for JMX items.
master_item (required for DEPENDENT items) Root element for dependent item’s master item.
key string (required) Dependent item’s master item key.
timeout string Item data polling request timeout.
Supported for the Timeouts list of item types.
url string (required for HTTP_AGENT items) URL string.
query_fields Root element for query parameters.
Supported for HTTP_AGENT items.
name string (required for HTTP_AGENT items) Query parameter name.
value string Query parameter value.
Supported for HTTP_AGENT items.
parameters Root element for user-defined parameters.
Supported for ITEM_TYPE_SCRIPT and ITEM_TYPE_BROWSER items.
name string (required for ITEM_TYPE_SCRIPT and ITEM_TYPE_BROWSER items) User-defined
parameter name.
value string User-defined parameter value.
Supported for ITEM_TYPE_SCRIPT and ITEM_TYPE_BROWSER items.
posts string HTTP(S) request body data.
Supported for HTTP_AGENT items.
status_codes string Ranges of required HTTP status codes, separated by commas.
Supported for HTTP_AGENT items.
follow_redirects string Follow response redirects while polling data.
Supported for HTTP_AGENT items.
Possible values (1): NO (0), YES (1, default).
post_type string Type of post data body.
Supported for HTTP_AGENT items.
Possible values (1): RAW (0, default), JSON (2), XML (3).
http_proxy string HTTP(S) proxy connection string.
Supported for HTTP_AGENT items.
headers Root element for HTTP(S) request headers.
Supported for HTTP_AGENT items.
name string (required for HTTP_AGENT items) Header name.
value string (required for HTTP_AGENT items) Header value.
retrieve_mode string What part of response should be stored.
Supported for HTTP_AGENT items.
Possible values (1): BODY (0, default), HEADERS (1), BOTH (2).
request_method string Request method type.
Supported for HTTP_AGENT items.
Possible values (1): GET (0, default), POST (1), PUT (2), HEAD (3).
output_format string How to process response.
Supported for HTTP_AGENT items.
Possible values (1): RAW (0, default), JSON (1).
allow_traps string Allow to populate value similarly to the trapper item.
Supported for HTTP_AGENT items.
Possible values (1): NO (0, default), YES (1).
Element Type Description
Note:
See also: Item object (refer to the relevant property with a matching name).
Note:
See also: Item preprocessing object (refer to the relevant property with a matching name).
Element Type Description
Note:
See also: Trigger object (refer to the relevant property with a matching name).
Attention:
Most host low-level discovery rule elements are the same as for host items. The table below describes those elements
that differ from host items.
Element Type Description
step Root element for low-level discovery rule value preprocessing step elements, which are
the same as for host item value preprocessing steps, except with fewer possible values.
See also: LLD rule preprocessing object.
overrides Root element for low-level discovery rule override rules.
name string (required) Unique override name.
step string (required) Unique order number of the override.
stop string Stop processing next overrides if matches.
filter Root element for low-level discovery rule override rule filter elements, which are the
same as for host low-level discovery rule filters.
operations Root element for host low-level discovery rule override operations.
Note:
See also: LLD rule object (refer to the relevant property with a matching name).
Note:
See also: LLD rule filter object (refer to the relevant property with a matching name).
Note:
See also: LLD rule override operation object (refer to the relevant property with a matching name).
Note:
See also: Web scenario object (refer to the relevant property with a matching name).
Element Type Description
variables Root element of step-level variables (macros) that should be applied after this step.
If the variable value has a ’regex:’ prefix, then its value is extracted from the data
returned by this step according to the regular expression pattern following the ’regex:’
prefix
name string (required) Variable name.
value text (required) Variable value.
headers Root element for HTTP headers to be sent when performing a request.
name string (required) Header name.
value text (required) Header value.
follow_redirects string Follow HTTP redirects.
Possible values (1): NO (0), YES (1, default).
retrieve_mode string HTTP response retrieve mode.
Possible values (1): BODY (0, default), HEADERS (1), BOTH (2).
timeout string Timeout (using seconds, time suffix, or user macro) of step execution.
Default: 15s.
required string Text that must be present in the response (ignored if empty).
status_codes string A comma-delimited list of accepted HTTP status codes (e.g., 200-201,210-299;
ignored if empty).
Note:
See also: Web scenario step object (refer to the relevant property with a matching name).
Host graphs
Element Type Description
Note:
See also: Graph object (refer to the relevant property with a matching name).
sortorder integer Draw order. The smaller value is drawn first. Can be used to draw lines or regions
behind (or in front of) another.
drawtype string Draw style of the graph item.
Supported for NORMAL graphs.
Possible values (1): SINGLE_LINE (0, default), FILLED_REGION (1), BOLD_LINE (2), DOTTED_LINE (3), DASHED_LINE (4), GRADIENT_LINE (5).
color string Element color (6 symbols, hex).
yaxisside string Side of the graph where the graph item’s Y scale will be drawn.
Supported for NORMAL and STACKED graphs.
calc_fnc string Data to draw if more than one value exists for an item.
Possible values (1): MIN (1), AVG (2, default), MAX (4), ALL (7; minimum, average, and maximum; supported for simple graphs), LAST (9, supported for pie/exploded graphs).
type string Graph item type.
Possible values (1): SIMPLE (0, default), GRAPH_SUM (2; value of the item represents the whole pie; supported for pie/exploded graphs).
item (required) Individual item.
host string (required) Item host.
key string (required) Item key.
Note:
See also: Graph item object (refer to the relevant property with a matching name).
Note:
See also: Value map object (refer to the relevant property with a matching name).
Footnotes
(1) API integer values in brackets, for example, ENABLED (0), are mentioned only for reference. For more information, see the linked API object page in the table entry or at the end of each section.
5 Network maps
Overview
Warning:
Any host groups, hosts, triggers, other maps or other elements that may be related to the exported map are not exported.
Thus, if at least one of the elements the map refers to is missing, importing it will fail.
Exporting
1. Go to Monitoring → Maps.
2. Mark the checkboxes of the network maps to export.
3. Click Export below the list.
Depending on the selected format, maps are exported to a local file with a default name:
1. Go to Monitoring → Maps.
2. Click Import in the top right corner.
3. Select the import file.
4. Mark the required options in import rules.
5. Click Import in the bottom right corner of the configuration form.
Import rules:
Rule Description
Update existing Existing maps will be updated using data from the import file. Otherwise, they will not be
updated.
Create new New maps will be created using data from the import file. Otherwise, they will not be
created.
If you uncheck both map options and check the respective options for images, images only will be imported. Image importing is
only available to Super admin users.
Warning:
If replacing an existing image, it will affect all maps that are using this image.
Export format
Export to YAML:
zabbix_export:
version: '7.0'
images:
- name: Zabbix_server_3D_(128)
imagetype: '1'
encodedImage: iVBOR...5CYII=
maps:
- name: 'Local network'
width: '680'
height: '200'
label_type: '0'
label_location: '0'
highlight: '1'
expandproblem: '1'
markelements: '1'
show_unack: '0'
severity_min: '0'
show_suppressed: '0'
grid_size: '50'
grid_show: '1'
grid_align: '1'
label_format: '0'
label_type_host: '2'
label_type_hostgroup: '2'
label_type_trigger: '2'
label_type_map: '2'
label_type_image: '2'
label_string_host: ''
label_string_hostgroup: ''
label_string_trigger: ''
label_string_map: ''
label_string_image: ''
expand_macros: '1'
background: { }
iconmap: { }
urls: { }
selements:
- elementtype: '0'
elements:
- host: 'Zabbix server'
label: |
{HOST.NAME}
{HOST.CONN}
label_location: '0'
x: '111'
'y': '61'
elementsubtype: '0'
areatype: '0'
width: '200'
height: '200'
viewtype: '0'
use_iconmap: '0'
selementid: '1'
icon_off:
name: Zabbix_server_3D_(128)
icon_on: { }
icon_disabled: { }
icon_maintenance: { }
urls: { }
evaltype: '0'
shapes:
- type: '0'
x: '0'
'y': '0'
width: '680'
height: '15'
text: '{MAP.NAME}'
font: '9'
font_size: '11'
font_color: '000000'
text_halign: '0'
text_valign: '0'
border_type: '0'
border_width: '0'
border_color: '000000'
background_color: ''
zindex: '0'
lines: { }
links: { }
Maps
Element Type Description
Note:
See also: Map object (refer to the relevant property with a matching name).
Map selements
Element Type Description
Note:
See also: Map element object (refer to the relevant property with a matching name).
Note:
See also: Map link trigger object (refer to the relevant property with a matching name).
6 Media types
Overview
Media types are exported with all related objects and object relations.
Exporting
Depending on the selected format, media types are exported to a local file with a default name:
Import rules:
Rule Description
Update existing Existing elements will be updated using data from the import file. Otherwise, they will
not be updated.
Create new New elements will be created using data from the import file. Otherwise, they will not be
created.
Export format
Export to YAML:
zabbix_export:
version: '7.0'
media_types:
- name: Pushover
type: WEBHOOK
parameters:
- name: endpoint
value: 'https://api.pushover.net/1/messages.json'
- name: eventid
value: '{EVENT.ID}'
- name: event_nseverity
value: '{EVENT.NSEVERITY}'
- name: event_source
value: '{EVENT.SOURCE}'
- name: event_value
value: '{EVENT.VALUE}'
- name: expire
value: '1200'
- name: message
value: '{ALERT.MESSAGE}'
- name: priority_average
value: '0'
- name: priority_default
value: '0'
- name: priority_disaster
value: '0'
- name: priority_high
value: '0'
- name: priority_information
value: '0'
- name: priority_not_classified
value: '0'
- name: priority_warning
value: '0'
- name: retry
value: '60'
- name: title
value: '{ALERT.SUBJECT}'
- name: token
value: '<PUSHOVER TOKEN HERE>'
- name: triggerid
value: '{TRIGGER.ID}'
- name: url
value: '{$ZABBIX.URL}'
- name: url_title
value: Zabbix
- name: user
value: '{ALERT.SENDTO}'
status: DISABLED
max_sessions: '0'
script: |
try {
var params = JSON.parse(value),
request = new HttpRequest(),
data,
response,
severities = [
{name: 'not_classified', color: '#97AAB3'},
{name: 'information', color: '#7499FF'},
{name: 'warning', color: '#FFC859'},
{name: 'average', color: '#FFA059'},
{name: 'high', color: '#E97659'},
{name: 'disaster', color: '#E45959'},
{name: 'resolved', color: '#009900'},
{name: 'default', color: '#000000'}
],
priority;
if (isNaN(params.eventid)) {
throw 'field "eventid" is not a number';
}
data = {
token: params.token,
user: params.user,
title: params.title,
message: params.message,
url: (params.event_source === '0')
? params.url + '/tr_events.php?triggerid=' + params.triggerid + '&eventid=' + params.e
: params.url,
url_title: params.url_title,
priority: priority
};
if (priority == 2) {
if (isNaN(params.retry) || params.retry < 30) {
throw 'field "retry" should be a number with value of at least 30 if "priority" is set
}
data.retry = params.retry;
data.expire = params.expire;
}
data = JSON.stringify(data);
Zabbix.log(4, '[ Pushover Webhook ] Sending request: ' + params.endpoint + '\n' + data);
request.addHeader('Content-Type: application/json');
response = request.post(params.endpoint, data);
Zabbix.log(4, '[ Pushover Webhook ] Received response with status code ' + request.getStatus()
if (request.getStatus() != 200 || response === null || typeof response !== 'object' || respons
if (response !== null && typeof response === 'object' && typeof response.errors === 'objec
&& typeof response.errors[0] === 'string') {
throw response.errors[0];
}
else {
throw 'Unknown error. Check debug log for more information.';
}
}
return 'OK';
}
catch (error) {
Zabbix.log(4, '[ Pushover Webhook ] Pushover notification failed: ' + error);
throw 'Pushover notification failed: ' + error;
}
description: |
Please refer to setup guide here: https://git.zabbix.com/projects/ZBX/repos/zabbix/browse/template
subject: 'Problem: {EVENT.NAME}'
message: |
Problem started at {EVENT.TIME} on {EVENT.DATE}
Problem name: {EVENT.NAME}
Host: {HOST.NAME}
Severity: {EVENT.SEVERITY}
Operational data: {EVENT.OPDATA}
Original problem ID: {EVENT.ID}
{TRIGGER.URL}
- event_source: TRIGGERS
operation_mode: RECOVERY
subject: 'Resolved in {EVENT.DURATION}: {EVENT.NAME}'
message: |
Problem has been resolved at {EVENT.RECOVERY.TIME} on {EVENT.RECOVERY.DATE}
Problem name: {EVENT.NAME}
Problem duration: {EVENT.DURATION}
Host: {HOST.NAME}
Severity: {EVENT.SEVERITY}
Original problem ID: {EVENT.ID}
{TRIGGER.URL}
- event_source: TRIGGERS
operation_mode: UPDATE
subject: 'Updated problem in {EVENT.AGE}: {EVENT.NAME}'
message: |
{USER.FULLNAME} {EVENT.UPDATE.ACTION} problem at {EVENT.UPDATE.DATE} {EVENT.UPDATE.TIME}.
{EVENT.UPDATE.MESSAGE}
Exported elements
Element Type Description
max_sessions integer The maximum number of alerts that can be processed in parallel.
Possible values for SMS (1): 1 (default).
Possible values for other media types (1): 0-100 (where 0 - unlimited).
attempts integer The maximum number of attempts to send an alert.
Possible values (1): 1-10 (default: 3).
attempt_interval string The interval (using seconds or time suffix) between retry attempts.
Possible values (1): 0-60s (default: 10s).
description string Media type description.
message_templates Root element for media type message templates.
event_source string (required) Event source.
Possible values (1): TRIGGERS (0), DISCOVERY (1), AUTOREGISTRATION (2), INTERNAL (3), SERVICE (4).
operation_mode string Operation mode.
Possible values (1): PROBLEM (0), RECOVERY (1), UPDATE (2).
subject string Message subject.
message string Message body.
Note:
See also: Media type object (refer to the relevant property with a matching name).
The following additional elements are exported only for the Email media type.
Note:
See also: Media type object (refer to the relevant property with a matching name).
SMS
The following additional elements are exported only for the SMS media type.
Note:
See also: Media type object (refer to the relevant property with a matching name).
Script
The following additional elements are exported only for the Script media type.
Note:
See also: Media type object (refer to the relevant property with a matching name).
Webhook
The following additional elements are exported only for the Webhook media type.
Note:
See also: Media type object (refer to the relevant property with a matching name).
Footnotes
(1) API integer values in brackets, for example, ENABLED (0), are mentioned only for reference. For more information, see the linked API object page in the table entry or at the end of each section.
15 Discovery
1 Network discovery
Overview
Zabbix offers automatic network discovery functionality that is effective and very flexible.
Zabbix network discovery is based on the following information:
• IP ranges
• Availability of external services (FTP, SSH, WEB, POP3, IMAP, TCP, etc)
• Information received from Zabbix agent (only unencrypted mode is supported)
• Information received from SNMP agent
Discovery
Zabbix periodically scans the IP ranges defined in network discovery rules. The frequency of the check is configurable for each
rule individually.
Each rule has a set of service checks defined to be performed for the IP range.
Discovery rules are processed by the discovery manager. The discovery manager creates a job per each rule with a list of tasks
(network checks). Network checks are performed in parallel by the available discovery workers (the number is configurable in the
frontend per each rule). Only checks with the same IP and port are scheduled sequentially because some devices will not accept
parallel connections on the same port.
The queue size of network checks is limited to 2000000 or 4 GB of memory approximately. If the queue becomes full then the
discovery rule will be skipped and a warning message will be printed in the log. You may use the zabbix[discovery_queue]
internal item to monitor the number of discovery checks in the queue.
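For example, a trigger like the following could alert when the queue grows unusually large (the host name "Zabbix server" and the threshold are arbitrary examples):

last(/Zabbix server/zabbix[discovery_queue])>100000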
Discovery checks are processed independently from the other checks. If any checks do not find a service (or fail), other checks
will still be processed.
Note:
If a discovery rule is changed during execution, then the current discovery execution will be aborted.
Every check of a service and a host (IP) performed by the network discovery module generates a discovery event.
Service Discovered The service is ’up’ after it was ’down’ or when discovered for the first time.
Service Up The service is ’up’, after it was already ’up’.
Service Lost The service is ’down’ after it was ’up’.
Service Down The service is ’down’, after it was already ’down’.
Host Discovered At least one service of a host is 'up' after all services of that host were 'down', or a service is discovered that belongs to an unregistered host.
Host Up At least one service of a host is ’up’, after at least one service was already ’up’.
Host Lost All services of a host are ’down’ after at least one was ’up’.
Host Down All services of a host are ’down’, after they were already ’down’.
Actions
Discovery events can be the basis of relevant actions, such as:
• Sending notifications
• Adding/removing hosts
• Enabling/disabling hosts
• Adding hosts to a group
• Removing hosts from a group
• Adding tags to a host
• Removing tags from a host
• Linking a template to hosts/unlinking a template from hosts
• Executing remote scripts
These actions can be configured with respect to the device type, IP, status, uptime/downtime, etc. For full details on configuring
actions for network-discovery based events, see action operation and conditions pages.
Since network discovery actions are event-based, they will be triggered both when a discovered host is online and when it is offline.
It is highly recommended to add an action condition Discovery status: up to avoid such actions as Add host being triggered upon
Service Lost/Service Down events. Otherwise, if a discovered host is manually removed, it will still generate Service Lost/Service
Down events and will be recreated during the next discovery cycle.
Note:
Linking templates to a discovered host will fail collectively if any of the linkable templates has a unique entity (e.g. item
key) that is the same as a unique entity (e.g. item key) already existing on the host or on another of the linkable templates.
Host creation
A host is added if the Add host operation is selected. A host is also added, even if the Add host operation is missing, if you select
operations resulting in actions on a host. Such operations are:
• enable host
• disable host
• add host to a host group
• link template to a host
Created hosts are added to the Discovered hosts group (by default, configurable in Administration → General → Other). If you wish
hosts to be added to another group, add a Remove from host groups operation (specifying ”Discovered hosts”) and also add an
Add to host groups operation (specifying another host group), because a host must belong to a host group.
Host naming
When adding hosts, a host name is the result of reverse DNS lookup or IP address if reverse lookup fails. Lookup is performed from
the Zabbix server or Zabbix proxy, depending on which is doing the discovery. If lookup fails on the proxy, it is not retried on the
server. If the host with such a name already exists, the next host would get _2 appended to the name, then _3 and so on.
It is also possible to override DNS/IP lookup and instead use an item value for host name, for example:
• You may discover multiple servers with Zabbix agent running using a Zabbix agent item for discovery and assign proper
names to them automatically, based on the string value returned by this item
• You may discover multiple SNMP network devices using an SNMP agent item for discovery and assign proper names to them
automatically, based on the string value returned by this item
If the host name has been set using an item value, it is not updated during the following discovery checks. If it is not possible to
set host name using an item value, default value (DNS name) is used.
If a host already exists with the discovered IP address, a new host is not created. However, if the discovery action contains
operations (link template, add to host group, etc), they are performed on the existing host.
Host removal
Hosts discovered by a network discovery rule are removed automatically from Monitoring → Discovery if a discovered entity is not
in the rule’s IP range any more. Hosts are removed immediately.
When hosts are added as a result of network discovery, they get interfaces created according to these rules:
• the services detected - for example, if an SNMP check succeeded, an SNMP interface will be created
• if a host responded both to Zabbix agent and SNMP requests, both types of interfaces will be created
• if uniqueness criteria are Zabbix agent or SNMP-returned data, the first interface found for a host will be created as the
default one. Other IP addresses will be added as additional interfaces. Action’s conditions (such as Host IP) do not impact
adding interfaces. Note that this will work if all interfaces are discovered by the same discovery rule. If a different discovery
rule discovers a different interface of the same host, an additional host will be added.
• if a host responded to agent checks only, it will be created with an agent interface only. If it would start responding to SNMP
later, additional SNMP interfaces would be added.
• if 3 separate hosts were initially created, having been discovered by the ”IP” uniqueness criteria, and then the discovery rule
is modified so that hosts A, B and C have identical uniqueness criteria result, B and C are created as additional interfaces
for A, the first host. The individual hosts B and C remain. In Monitoring → Discovery the added interfaces will be displayed
in the ”Discovered device” column, in black font and indented, but the ”Monitored host” column will only display A, the first
created host. ”Uptime/Downtime” is not measured for IPs that are considered to be additional interfaces.
The hosts discovered by different proxies are always treated as different hosts. While this allows to perform discovery on matching
IP ranges used by different subnets, changing the proxy for an already monitored subnet is complicated because the proxy changes
must also be applied to all discovered hosts.
If the proxy must be changed nevertheless, these steps should be taken:
1. disable discovery rule
2. sync proxy configuration
3. replace the proxy in the discovery rule
4. replace the proxy for all hosts discovered by this rule
5. enable discovery rule
Overview
To configure a network discovery rule used by Zabbix to discover hosts and services:
• Go to Data collection → Discovery
• Click on Create discovery rule (or on the rule name to edit an existing rule)
• Edit the discovery rule attributes
Rule attributes
Parameter Description
Checks Zabbix will use this list of checks for discovery. Click on to configure a new check in a
popup window.
Supported checks: SSH, LDAP, SMTP, FTP, HTTP, HTTPS, POP, NNTP, IMAP, TCP, Telnet, Zabbix
agent, SNMPv1 agent, SNMPv2 agent, SNMPv3 agent, ICMP ping.
A protocol-based discovery uses the net.tcp.service[] functionality to test each host, except for
SNMP which queries an SNMP OID. Zabbix agent is tested by querying an item in unencrypted
mode. Please see agent items for more details.
The ’Ports’ parameter may be one of following:
Single port: 22
Range of ports: 22-45
List: 22-45,55,60-70
Since Zabbix 7.0, all service checks are performed asynchronously, except for LDAP checks.
Since Zabbix 7.0, HTTP/HTTPS checking is done via libcurl. If Zabbix server/proxy is compiled
without libcurl, then HTTP checks will work like in previous versions (i.e. as TCP checks), but
HTTPS checks will not work.
Device uniqueness criteria Uniqueness criteria may be:
IP address - no processing of multiple single-IP devices. If a device with the same IP already
exists it will be considered already discovered and a new host will not be added.
<discovery check> - either Zabbix agent or SNMP agent check.
Host name Set the technical host name of a created host using:
DNS name - DNS name (default)
IP address - IP address
<discovery check> - received string value of the discovery check (e.g. Zabbix agent, SNMP
agent check)
See also: Host naming.
Visible name Set the visible host name of a created host using:
Host name - technical host name (default)
DNS name - DNS name
IP address - IP address
<discovery check> - received string value of the discovery check (e.g. Zabbix agent, SNMP
agent check)
See also: Host naming.
Enabled With the check-box marked the rule is active and will be executed by Zabbix server.
If unmarked, the rule is not active. It won’t be executed.
In case of a large number of concurrent checks it is possible to exhaust the file descriptor limit for the discovery manager.
The number of file descriptors required for discovery equals the number of discovery workers * 1000. By default there are 5
discovery workers, while the typical soft limit of the system is ~1024.
In this case Zabbix will reduce the default number of concurrent checks per type for each worker and write a warning to the log
file. However, if the user has set a higher value for Maximum concurrent checks per type than the value calculated by Zabbix, Zabbix
will use the user-defined value for one worker.
In this example, we would like to set up network discovery for the local network having an IP range of 192.168.1.1-192.168.1.254.
Step 1
Zabbix will try to discover hosts in the IP range of 192.168.1.1-192.168.1.254 by connecting to Zabbix agents and getting the value
from the system.uname key. The value received from the agent can be used to name the hosts and also to apply different actions
for different operating systems. For example, link Windows servers to the template Windows, Linux servers to the template Linux.
When this rule is added, Zabbix will automatically start the discovery and generation of the discovery-based events for further
processing.
Step 2
Defining a discovery action for adding the discovered Linux servers to the respective group/template.
The action will be activated if:
• the ”Zabbix agent” service is ’up’
• the value of system.uname (the Zabbix agent key used in the rule definition) contains ”Linux”
It will execute the following operations:
• add the discovered host to the ”Linux servers” group (and also add host if it wasn’t added previously)
• link host to the Linux template. Zabbix will automatically start monitoring the host using items and triggers from the ”Linux”
template.
Step 3
Defining a discovery action for adding the discovered Windows servers to the respective group/template.
Step 4
A server will be removed if ”Zabbix agent” service is ’down’ for more than 24 hours (86400 seconds).
Overview
It is possible to allow active Zabbix agent autoregistration, after which the server can start monitoring them. This way new hosts
can be added for monitoring without configuring them manually on the server.
Autoregistration can happen when a previously unknown active agent asks for checks.
The feature might be very handy for automatic monitoring of new Cloud nodes. As soon as you have a new node in the Cloud,
Zabbix will automatically start the collection of performance and availability data of the host.
Active agent autoregistration also supports the monitoring of added hosts with passive checks. When the active agent asks for
checks, providing it has the ’ListenIP’ or ’ListenPort’ configuration parameters defined in the configuration file, these are sent along
to the server. (If multiple IP addresses are specified, the first one is sent to the server.)
Server, when adding the new autoregistered host, uses the received IP address and port to configure the agent. If no IP address
value is received, the one used for the incoming connection is used. If no port value is received, 10050 is used.
It is possible to specify that the host should be autoregistered with a DNS name as the default agent interface.
Autoregistration is rerun periodically. The active agent autoregistration heartbeat for Zabbix server and Zabbix proxy is 120 seconds;
so, in case a discovered host is deleted, autoregistration will be rerun in 120 seconds.
Configuration
Specify server
Make sure you have the Zabbix server identified in the agent configuration file - zabbix_agentd.conf
ServerActive=10.0.0.1
Unless you specifically define a Hostname in zabbix_agentd.conf, the system hostname of agent location will be used by server
for naming the host. The system hostname in Linux can be obtained by running the ’hostname’ command.
If Hostname is defined in Zabbix agent configuration as a comma-delimited list of hosts, hosts will be created for all listed host-
names.
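For illustration (the hostnames are placeholders), the following line would result in two hosts, ”web01” and ”web02”, being created at autoregistration:
Hostname=web01,web02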
Restart the agent after making any changes to the configuration file.
When the server receives an autoregistration request from an agent, it calls an action. An action with the event source ”Autoregistration”
must be configured for agent autoregistration.
Note:
Setting up network discovery is not required to have active agents autoregister.
In the Zabbix frontend, go to Alerts → Actions, select Autoregistration as the event source and click on Create action:
Note:
If the hosts that will be autoregistering are likely to be supported for active monitoring only (such as hosts that are firewalled
from your Zabbix server) then you might want to create a specific template like Template_Linux-active to link to.
Created hosts are added to the Discovered hosts group (by default, configurable in Administration → General → Other). If you wish
hosts to be added to another group, add a Remove from host group operation (specifying ”Discovered hosts”) and also add an Add
to host group operation (specifying another host group), because a host must belong to a host group.
Secure autoregistration
A secure way of autoregistration is possible by configuring PSK-based authentication with encrypted connections.
The level of encryption is configured globally in Administration → General → Autoregistration. It is possible to select no encryption,
TLS encryption with PSK authentication, or both (so that some hosts may register without encryption while others register with
encryption).
Authentication by PSK is verified by Zabbix server before adding a host. If successful, the host is added and Connections from/to
host are set to ’PSK’ only with identity/pre-shared key the same as in the global autoregistration setting.
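A minimal agent-side sketch of such a setup (the identity and key file path are placeholders and must correspond to the pre-shared key configured in the global autoregistration settings):
TLSConnect=psk
TLSAccept=psk
TLSPSKIdentity=autoreg-psk01
TLSPSKFile=/etc/zabbix/autoreg.psk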
Attention:
To ensure security of autoregistration on installations using proxies, encryption between Zabbix server and proxy should
be enabled.
HostInterface and HostInterfaceItem configuration parameters allow to specify a custom value for the host interface during au-
toregistration.
More specifically, they are useful if the host should be autoregistered with a DNS name as the default agent interface rather than
its IP address. In that case the DNS name should be specified or returned as the value of either HostInterface or HostInterfaceItem
parameters. Note that if the value of one of the two parameters changes, the autoregistered host interface is updated. So it is
possible to update the default interface to another DNS name or update it to an IP address. For the changes to take effect though,
the agent has to be restarted.
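For example (the DNS name is a placeholder), adding the following line to the agent configuration file would make the DNS name the default agent interface of the autoregistered host:
HostInterface=agent42.example.com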
Note:
If HostInterface or HostInterfaceItem parameters are not configured, the listen_dns parameter is resolved from the IP
address. If such resolving is configured incorrectly, it may break autoregistration because of invalid hostname.
When agent is sending an autoregistration request to the server it sends its hostname. In some cases (for example, Amazon cloud
nodes) a hostname is not enough for Zabbix server to differentiate discovered hosts. Host metadata can be optionally used to
send other information from an agent to the server.
Host metadata is configured in the agent configuration file - zabbix_agentd.conf. There are 2 ways of specifying host metadata in
the configuration file:
HostMetadata
HostMetadataItem
See the description of the options in the link above.
The HostMetadataItem parameter may return up to 65535 UTF-8 code points. A longer value will be truncated.
Note that on MySQL, the effective maximum length in characters will be less if the returned value contains multibyte characters.
For example, a value containing 3-byte characters only will be limited to 21844 characters in total, while a value containing 4-byte
characters only will be limited to 16383 symbols.
Attention:
An autoregistration attempt happens every time an active agent sends a request to refresh active checks to the server.
The delay between requests is specified in the RefreshActiveChecks parameter of the agent. The first request is sent
immediately after the agent is restarted.
Example 1
Say you would like the hosts to be autoregistered by the Zabbix server. You have active Zabbix agents (see ”Configuration”
section above) on your network. There are Windows hosts and Linux hosts on your network and you have ”Linux by Zabbix
agent” and ”Windows by Zabbix agent” templates available in your Zabbix frontend. So at host registration, you would like the
appropriate Linux/Windows template to be applied to the host being registered. By default, only the hostname is sent to the server
at autoregistration, which might not be enough. In order to make sure the proper template is applied to the host you should use
host metadata.
Frontend configuration
The first thing to do is to configure the frontend. Create 2 actions. The first action:
Note:
You can skip an ”Add host” operation in this case. Linking a template to a host requires adding the host first so the server
will do that automatically.
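A minimal sketch of the two actions, assuming the templates named in this example:
• First action - condition: Host metadata contains Linux; operation: Link template ”Linux by Zabbix agent”.
• Second action - condition: Host metadata contains Windows; operation: Link template ”Windows by Zabbix agent”.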
Agent configuration
Now you need to configure the agents. Add the next line to the agent configuration files:
HostMetadataItem=system.uname
This way you make sure host metadata will contain ”Linux” or ”Windows” depending on the host an agent is running on. An
example of host metadata in this case:
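For illustration only (the exact string depends on the operating system), the received host metadata could look similar to:
Linux: Linux web01 5.15.0-86-generic #96 SMP x86_64 GNU/Linux
Windows: Windows web02 6.3.9600 Microsoft Windows Server 2012 R2 Standard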
Example 2
Step 1
Using host metadata to allow some basic protection against unwanted hosts registering.
Frontend configuration
Create an action in the frontend, using some hard-to-guess secret code to disallow unwanted hosts:
Please note that this method alone does not provide strong protection because data is transmitted in plain text. Configuration
cache reload is required for changes to have an immediate effect.
Agent configuration
HostMetadata=Linux 21df83bf21bf0be663090bb8d4128558ab9b95fba66a6dbf834f8b91ae5e08ae
where ”Linux” is a platform, and the rest of the string is the hard-to-guess secret text.
Do not forget to restart the agent after making any changes to the configuration file.
Step 2
Frontend configuration
Agent configuration
3 Low-level discovery
Overview Low-level discovery provides a way to automatically create items, triggers, and graphs for different entities on a
computer. For instance, Zabbix can automatically start monitoring file systems or network interfaces on your machine, without the
need to create items for each file system or network interface manually. Additionally, it is possible to configure Zabbix to remove
unneeded entities automatically based on the actual results of periodically performed discovery.
A user can define their own types of discovery, provided they follow a particular JSON protocol.
First, a user creates a discovery rule in Data collection → Templates, in the Discovery column. A discovery rule consists of (1) an
item that discovers the necessary entities (for instance, file systems or network interfaces) and (2) prototypes of items, triggers,
and graphs that should be created based on the value of that item.
An item that discovers the necessary entities is like a regular item seen elsewhere: the server asks a Zabbix agent (or whatever
the type of the item is set to) for a value of that item, the agent responds with a textual value. The difference is that the value
the agent responds with should contain a list of discovered entities in a JSON format. While the details of this format are only
important for implementers of custom discovery checks, it is necessary to know that the returned value contains a list of macro →
value pairs. For instance, item ”net.if.discovery” might return two pairs: ”{#IFNAME}” → ”lo” and ”{#IFNAME}” → ”eth0”.
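As a JSON value, such a return from the agent could look like the following minimal illustration:
[
    {"{#IFNAME}":"lo"},
    {"{#IFNAME}":"eth0"}
]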
These macros are used in names, keys and other prototype fields where they are then substituted with the received values for
creating real items, triggers, graphs or even hosts for each discovered entity. See the full list of options for using LLD macros.
When the server receives a value for a discovery item, it looks at the macro → value pairs and for each pair generates real items,
triggers, and graphs, based on their prototypes. In the example with ”net.if.discovery” above, the server would generate one set
of items, triggers, and graphs for the loopback interface ”lo”, and another set for interface ”eth0”.
Note that since Zabbix 4.2, the format of the JSON returned by low-level discovery rules has been changed. It is no longer
expected that the JSON will contain the ”data” object. Low-level discovery will now accept a normal JSON containing an array, in
order to support new features such as the item value preprocessing and custom paths to low-level discovery macro values in a
JSON document.
Built-in discovery keys have been updated to return an array of LLD rows at the root of JSON document. Zabbix will automatically
extract a macro and value if an array field uses the {#MACRO} syntax as a key. Any new native discovery checks will use the new
syntax without the ”data” elements. When processing a low-level discovery value, first the root is located (an array at $. or $.data).
While the ”data” element has been removed from all native items related to discovery, for backward compatibility Zabbix will still
accept the JSON notation with a ”data” element, though its use is discouraged. If the JSON contains an object with only one ”data”
array element, then it will automatically extract the content of the element using JSONPath $.data. Low-level discovery now
accepts optional user-defined LLD macros with a custom path specified in JSONPath syntax.
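For illustration, the same two interfaces expressed in the legacy notation (with a ”data” object) and in the current notation:
{"data":[{"{#IFNAME}":"lo"},{"{#IFNAME}":"eth0"}]}
[{"{#IFNAME}":"lo"},{"{#IFNAME}":"eth0"}]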
Warning:
As a result of the changes above, newer agents will no longer be able to work with an older Zabbix server.
Configuring low-level discovery We will illustrate low-level discovery based on an example of file system discovery.
To configure the discovery, do the following:
• Go to Data collection → Templates (or Hosts) and click on Discovery in the row of the appropriate template/host
• Click on Create discovery rule in the upper right corner of the screen
• Fill in the discovery rule form with the required details
Discovery rule
The discovery rule form contains five tabs, representing, from left to right, the data flow during discovery:
• Discovery rule - specifies, most importantly, the built-in item or custom script to retrieve discovery data
• Preprocessing - applies some preprocessing to the discovered data
• LLD macros - allows to extract some macro values to use in discovered items, triggers, etc
• Filters - allows to filter the discovered values
• Overrides - allows to modify items, triggers, graphs or host prototypes when applying to specific discovered objects
The Discovery rule tab contains the item key to use for discovery (as well as some general discovery rule attributes):
Parameter Description
Custom intervals You can create custom rules for checking the item:
Flexible - create an exception to the Update interval (interval with different frequency)
Scheduling - create a custom polling schedule.
For detailed information see Custom intervals.
Timeout Set the discovery check timeout. Select the timeout option:
Global - proxy/global timeout is used (displayed in the grayed out Timeout field);
Override - custom timeout is used (set in the Timeout field; allowed range: 1 - 600s). Time
suffixes, e.g. 30s, 1m, and user macros are supported.
Clicking the Timeouts link allows you to configure proxy timeouts or global timeouts (if a proxy is
not used). Note that the Timeouts link is visible only to users of Super admin type with
permissions to Administration → General or Administration → Proxies frontend sections.
Delete lost resources Specify how soon the discovered entity will be deleted once its discovery status becomes ”Not
discovered anymore”:
Never - it will not be deleted;
Immediately - it will be deleted immediately;
After - it will be deleted after the specified time period. The value must be greater than Disable
lost resources value.
Time suffixes are supported, e.g. 2h, 1d.
User macros are supported.
Note: Using ”Immediately” is not recommended, since incorrectly editing the filter may result in
the entity being deleted along with all of its historical data.
Note that manually disabled resources will not be deleted by low-level discovery.
Disable lost resources Specify how soon the discovered entity will be disabled once its discovery status becomes ”Not
discovered anymore”:
Never - it will not be disabled;
Immediately - it will be disabled immediately;
After - it will be disabled after the specified time period. The value should be greater than the
discovery rule update interval.
Note that automatically disabled resources will become enabled again, if re-discovered by
low-level discovery. Manually disabled resources will not become enabled again if re-discovered.
This field is not displayed if Delete lost resources is set to ”Immediately”.
Time suffixes are supported, e.g. 2h, 1d.
User macros are supported.
Description Enter a description.
Enabled If checked, the rule will be processed.
Note:
Discovery rule history is not preserved.
Preprocessing
The Preprocessing tab allows to define transformation rules to apply to the result of discovery. One or several transformations
are possible in this step. Transformations are executed in the order in which they are defined. All preprocessing is done by Zabbix
server.
See also:
• Preprocessing details
• Preprocessing testing
Type
Transformation Description
Text
Regular expression Match the received value to the <pattern> regular expression and replace value with the
extracted <output>. The regular expression supports extraction of maximum 10 captured
groups with the \N sequence.
Parameters:
pattern - regular expression
output - output formatting template. An \N (where N=1…9) escape sequence is replaced
with the Nth matched group. A \0 escape sequence is replaced with the matched text.
If you mark the Custom on fail checkbox, it is possible to specify custom error-handling
options: either to discard the value, set a specified value or set a specified error message.
Replace Find the search string and replace it with another (or nothing). All occurrences of the search
string will be replaced.
Parameters:
search string - the string to find and replace, case-sensitive (required)
replacement - the string to replace the search string with. The replacement string may also
be empty effectively allowing to delete the search string when found.
It is possible to use escape sequences to search for or replace line breaks, carriage return,
tabs and spaces ”\n \r \t \s”; backslash can be escaped as ”\\” and escape sequences can be
escaped as ”\\n”. Escaping of line breaks, carriage return, tabs is automatically done during
low-level discovery.
Structured
data
JSONPath Extract value or fragment from JSON data using JSONPath functionality.
If you mark the Custom on fail checkbox, it is possible to specify custom error-handling
options: either to discard the value, set a specified value or set a specified error message.
XML XPath Extract value or fragment from XML data using XPath functionality.
For this option to work, Zabbix server must be compiled with libxml support.
Examples:
number(/document/item/value) will extract 10 from
<document><item><value>10</value></item></document>
number(/document/item/@attribute) will extract 10 from <document><item
attribute="10"></item></document>
/document/item will extract <item><value>10</value></item> from
<document><item><value>10</value></item></document>
Note that namespaces are not supported.
If you mark the Custom on fail checkbox, it is possible to specify custom error-handling
options: either to discard the value, set a specified value or set a specified error message.
CSV to JSON Convert CSV file data into JSON format.
For more information, see: CSV to JSON preprocessing.
XML to JSON Convert data in XML format to JSON.
For more information, see: Serialization rules.
If you mark the Custom on fail checkbox, it is possible to specify custom error-handling
options: either to discard the value, set a specified value or set a specified error message.
SNMP
SNMP walk value Extract value by the specified OID/MIB name and apply formatting options:
Unchanged - return Hex-STRING as unescaped hex string;
UTF-8 from Hex-STRING - convert Hex-STRING to UTF-8 string;
MAC from Hex-STRING - convert Hex-STRING to MAC address string (which will have ' '
replaced by ':');
Integer from BITS - convert the first 8 bytes of a bit string expressed as a sequence of hex
characters (e.g. ”1A 2B 3C 4D”) into a 64-bit unsigned integer. In bit strings longer than 8
bytes, consequent bytes will be ignored.
If you mark the Custom on fail checkbox, it is possible to specify custom error-handling
options: either to discard the value, set a specified value or set a specified error message.
SNMP walk to JSON Convert SNMP values to JSON. Specify a field name in the JSON and the corresponding SNMP
OID path. Field values will be populated by values in the specified SNMP OID path.
You may use this preprocessing step for SNMP OID discovery.
Similar value formatting options as in the SNMP walk value step are available.
If you mark the Custom on fail checkbox, it is possible to specify custom error-handling
options: either to discard the value, set a specified value or set a specified error message.
SNMP get value Apply formatting options to the SNMP get value:
UTF-8 from Hex-STRING - convert Hex-STRING to UTF-8 string;
MAC from Hex-STRING - convert Hex-STRING to MAC address string (which will have ' '
replaced by ':');
Integer from BITS - convert the first 8 bytes of a bit string expressed as a sequence of hex
characters (e.g. ”1A 2B 3C 4D”) into a 64-bit unsigned integer. In bit strings longer than 8
bytes, consequent bytes will be ignored.
If you mark the Custom on fail checkbox, it is possible to specify custom error-handling
options: either to discard the value, set a specified value or set a specified error message.
Custom
scripts
JavaScript Enter JavaScript code in the block that appears when clicking in the parameter field or on the
pencil icon.
Note that available JavaScript length depends on the database used.
For more information, see: Javascript preprocessing
Validation
Does not match Specify a regular expression that a value must not match.
regular expression E.g. Error:(.*?)\.
If you mark the Custom on fail checkbox, it is possible to specify custom error-handling
options: either to discard the value, set a specified value or set a specified error message.
Check for error in JSON Check for an application-level error message located at JSONPath. Stop processing if
succeeded and message is not empty; otherwise continue processing with the value that was
before this preprocessing step. Note that these external service errors are reported to user
as is, without adding preprocessing step information.
E.g. $.errors. If a JSON like {"errors":"e1"} is received, the next preprocessing step
will not be executed.
If you mark the Custom on fail checkbox, it is possible to specify custom error-handling
options: either to discard the value, set a specified value or set a specified error message.
Check for error in XML Check for an application-level error message located at Xpath. Stop processing if succeeded
and message is not empty; otherwise continue processing with the value that was before this
preprocessing step. Note that these external service errors are reported to user as is, without
adding preprocessing step information.
No error will be reported in case of failing to parse invalid XML.
If you mark the Custom on fail checkbox, it is possible to specify custom error-handling
options: either to discard the value, set a specified value or set a specified error message.
Matches regular Specify a regular expression that a value must match.
expression If you mark the Custom on fail checkbox, it is possible to specify custom error-handling
options: either to discard the value, set a specified value or set a specified error message.
Throttling
Discard unchanged Discard a value if it has not changed within the defined time period (in seconds).
with heartbeat Positive integer values are supported to specify the seconds (minimum - 1 second). Time
suffixes can be used in this field (e.g. 30s, 1m, 2h, 1d). User macros and low-level discovery
macros can be used in this field.
Only one throttling option can be specified for a discovery item.
E.g. 1m. If identical text is passed into this rule twice within 60 seconds, it will be discarded.
Note: Changing item prototypes does not reset throttling. Throttling is reset only when
preprocessing steps are changed.
Prometheus
Prometheus to JSON Convert required Prometheus metrics to JSON.
See Prometheus checks for more details.
Note that if the discovery rule has been applied to the host via template then the content of this tab is read-only.
Custom macros
The LLD macros tab allows to specify custom low-level discovery macros.
Custom macros are useful in cases when the returned JSON does not have the required macros already defined. So, for example:
• The native vfs.fs.discovery key for filesystem discovery returns a JSON with some pre-defined LLD macros such as
{#FSNAME}, {#FSTYPE}. These macros can be used in item, trigger prototypes (see subsequent sections of the page)
directly; defining custom macros is not needed;
• The vfs.fs.get agent item also returns a JSON with filesystem data, but without any pre-defined LLD macros. In this case
you may define the macros yourself, and map them to the values in the JSON using JSONPath:
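A minimal illustration of such a mapping, based on the vfs.fs.get JSON shown in the file system discovery example further in this manual:
{#FSNAME} → $.fsname
{#FSTYPE} → $.fstype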
The extracted values can be used in discovered items, triggers, etc. Note that values will be extracted from the result of discovery
and any preprocessing steps so far.
Parameter Description
LLD macro Name of the low-level discovery macro, using the following syntax: {#MACRO}.
JSONPath Path that is used to extract LLD macro value from a LLD row, using JSONPath syntax.
For example, $.foo will extract ”bar” and ”baz” from this JSON: [{"foo":"bar"},
{"foo":"baz"}]
The values extracted from the returned JSON are used to replace the LLD macros in item, trigger,
etc. prototype fields.
JSONPath can be specified using the dot notation or the bracket notation. Bracket notation
should be used in case of any special characters and Unicode, like $['unicode + special
chars #1']['unicode + special chars #2'].
Filter
A filter can be used to generate real items, triggers, and graphs only for entities that match the criteria. The Filters tab contains
discovery rule filter definitions allowing to filter discovery values:
Parameter Description
Type of calculation The following options for calculating filters are available:
And - all filters must be passed;
Or - enough if one filter is passed;
And/Or - uses And with different macro names and Or with the same macro name;
Custom expression - offers the possibility to define a custom calculation of filters. The formula
must include all filters in the list. Limited to 255 symbols.
Filters The following filter condition operators are available: matches, does not match, exists, does not
exist.
Matches and does not match operators expect a Perl Compatible Regular Expression (PCRE). For
instance, if you are only interested in C:, D:, and E: file systems, you could put {#FSNAME} into
”Macro” and ”^C|^D|^E” regular expression into ”Regular expression” text fields. Filtering is
also possible by file system types using {#FSTYPE} macro (e.g. ”^ext|^reiserfs”) and by drive
types (supported only by Windows agent) using {#FSDRIVETYPE} macro (e.g., ”fixed”).
You can enter a regular expression or reference a global regular expression in ”Regular
expression” field.
In order to test a regular expression you can use ”grep -E”, for example:
for f in ext2 nfs reiserfs smbfs; do echo $f | grep -E '^ext|^reiserfs' || echo "SKIP: $f"; done
Exists and does not exist operators allow to filter entities based on the presence or absence of
the specified LLD macro in the response.
Note that if a macro from the filter is missing in the response, the found entity will be ignored,
unless a ”does not exist” condition is specified for this macro.
A warning will be displayed, if the absence of a macro affects the expression result. For example,
if {#B} is missing in:
{#A} matches 1 and {#B} matches 2 - will give a warning
{#A} matches 1 or {#B} matches 2 - no warning
Warning:
A mistake or a typo in the regular expression used in the LLD rule (for example, an incorrect ”File systems for discovery”
regular expression) may cause deletion of thousands of configuration elements, historical values, and events for many
hosts.
Attention:
Zabbix database in MySQL must be created as case-sensitive if file system names that differ only by case are to be
discovered correctly.
Override
The Override tab allows setting rules to modify the list of item, trigger, graph and host prototypes or their attributes for discovered
objects that meet given criteria.
Overrides (if any) are displayed in a reorderable drag-and-drop list and executed in the order in which they are defined. To configure
details of a new override, click on in the Overrides block. To edit an existing override, click on the override name. A popup
window will open allowing to edit the override rule details.
Parameter Description
Configuring an operation
To configure details of a new operation, click on in the Operations block. To edit an existing operation, click on next to
the operation. A popup window where you can edit the operation details will open.
Parameter Description
History When the checkbox is marked, the buttons will appear, allowing different history
storage period to be set for the item:
Do not store - if selected, the history will not be stored.
Store up to - if selected, an input field for specifying storage period will appear to the
right. User macros and LLD macros are supported.
Trends When the checkbox is marked, the buttons will appear, allowing different trend storage
period to be set for the item:
Do not store - if selected, the trends will not be stored.
Store up to - if selected, an input field for specifying storage period will appear to the
right. User macros and LLD macros are supported.
Tags When the checkbox is marked, a new block will appear, allowing to specify tag-value
pairs.
These tags will be appended to the tags specified in the item prototype, even if the tag
names match.
Object: Trigger prototype
Create enabled When the checkbox is marked, the buttons will appear, allowing to override original
trigger prototype settings:
Yes - the trigger will be added in an enabled state.
No - the trigger will be added to a discovered entity, but in a disabled state.
Discover When the checkbox is marked, the buttons will appear, allowing to override original
trigger prototype settings:
Yes - the trigger will be added.
No - the trigger will not be added.
Severity When the checkbox is marked, trigger severity buttons will appear, allowing to modify
trigger severity.
Tags When the checkbox is marked, a new block will appear, allowing to specify tag-value
pairs.
These tags will be appended to the tags specified in the trigger prototype, even if the
tag names match.
Object: Graph prototype
Discover When the checkbox is marked, the buttons will appear, allowing to override original
graph prototype settings:
Yes - the graph will be added.
No - the graph will not be added.
Object: Host prototype
Create enabled When the checkbox is marked, the buttons will appear, allowing to override original host
prototype settings:
Yes - the host will be created in an enabled state.
No - the host will be created in a disabled state.
Discover When the checkbox is marked, the buttons will appear, allowing to override original host
prototype settings:
Yes - the host will be discovered.
No - the host will not be discovered.
Link templates When the checkbox is marked, an input field for specifying templates will appear. Start
typing the template name or click on Select next to the field and select templates from
the list in a popup window.
All templates linked to a host prototype will be replaced by templates from this override.
Tags When the checkbox is marked, a new block will appear, allowing to specify tag-value
pairs.
These tags will be appended to the tags specified in the host prototype, even if the tag
names match.
Host inventory When the checkbox is marked, the buttons will appear, allowing to select different
inventory mode for the host prototype:
Disabled - do not populate host inventory
Manual - provide details manually
Automated - auto-fill host inventory data based on collected metrics.
Form buttons
Add - add a discovery rule. This button is only available for new discovery rules.
Update - update the properties of a discovery rule. This button is only available for existing discovery rules.
Clone - create another discovery rule based on the properties of the current discovery rule.
Execute now - perform discovery based on the discovery rule immediately. The discovery rule must already exist. See more details.
Note that when performing discovery immediately, configuration cache is not updated, thus the
result will not reflect very recent changes to discovery rule configuration.
Discovered entities The screenshots below illustrate how discovered items, triggers, and graphs look in the host's configuration.
Discovered entities are prefixed with an orange link to the discovery rule they come from.
Note that discovered entities will not be created in case there are already existing entities with the same uniqueness criteria, for
example, an item with the same key or graph with the same name. An error message is displayed in this case in the frontend that
the low-level discovery rule could not create certain entities. The discovery rule itself, however, will not become unsupported just
because some entity could not be created and had to be skipped. The discovery rule will go on creating/updating other entities.
If a discovered entity (host, file system, interface, etc) stops being discovered (or does not pass the filter anymore) the entities
that were created based on it may be automatically disabled and eventually deleted.
Lost resources may be automatically disabled based on the value of the Disable lost resources parameter. This affects lost hosts,
items and triggers.
Lost resources may be automatically deleted based on the value of the Delete lost resources parameter. This affects lost hosts,
host groups, items, triggers, and graphs.
When discovered entities become ’Not discovered anymore’, a lifetime indicator is displayed in the entity list. Move your mouse
pointer over it and a message will be displayed indicating its status details.
If entities were marked for deletion, but were not deleted at the expected time (disabled discovery rule or item host), they will be
deleted the next time the discovery rule is processed.
Entities containing other entities, which are marked for deletion, will not update if changed on the discovery rule level. For example,
LLD-based triggers will not update if they contain items that are marked for deletion.
Other types of discovery More details and how-tos on other types of out-of-the-box discovery are available in the following
sections:
For more detail on the JSON format for discovery items and an example of how to implement your own file system discoverer as a
Perl script, see creating custom LLD rules.
1 Item prototypes
Once a rule is created, go to the items for that rule and press ”Create item prototype” to create an item prototype.
Note how the {#FSNAME} macro is used where a file system name is required. The use of a low-level discovery macro is mandatory
in the item key to make sure that the discovery is processed correctly. When the discovery rule is processed, this macro will be
substituted with the discovered file system.
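For example, an item prototype for free disk space might use the standard Zabbix agent key with the macro as the first parameter:
vfs.fs.size[{#FSNAME},free]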
Low-level discovery macros and user macros are supported in item prototype configuration and item value preprocessing param-
eters. Note that when used in update intervals, a single macro has to fill the whole field. Multiple macros in one field or macros
mixed with text are not supported.
Note:
Context-specific escaping of low-level discovery macros is performed for safe use in regular expression and XPath prepro-
cessing parameters.
Parameter Description
We can create several item prototypes for each file system metric we are interested in:
Click on the three-dot icon to open the menu for the specific item prototype with these options:
• Create trigger prototype - create a trigger prototype based on this item prototype
• Trigger prototypes - click to see a list with links to already-configured trigger prototypes of this item prototype
• Create dependent item - create a dependent item for this item prototype
Mass update option is available if you want to update properties of several item prototypes at once.
2 Trigger prototypes
Attributes that are specific for trigger prototypes:
Parameter Description
When real triggers are created from the prototypes, there may be a need to be flexible as to what constant (’20’ in our example)
is used for comparison in the expression. See how user macros with context can be useful to accomplish such flexibility.
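A hedged sketch of what such a trigger prototype expression might look like (the template name and item key are illustrative):
last(/Template OS Linux/vfs.fs.size[{#FSNAME},pfree])<20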
You can define dependencies between trigger prototypes. To do that, go to the Dependencies tab. A trigger prototype may depend
on another trigger prototype from the same low-level discovery (LLD) rule or on a regular trigger. A trigger prototype may not
depend on a trigger prototype from a different LLD rule or on a trigger created from trigger prototype. Host trigger prototype
cannot depend on a trigger from a template.
3 Graph prototypes
Attributes that are specific for graph prototypes:
Parameter Description
Finally, we have created a discovery rule that looks as shown below. It has five item prototypes, two trigger prototypes, and one
graph prototype.
4 Host prototypes
Host prototypes are blueprints for creating hosts through low-level discovery rules. Before being discovered as hosts, these proto-
types cannot have items and triggers, except those linked from templates.
Configuration
To configure a host prototype:
1. Go to Data collection → Hosts.
2. Click Discovery for the required host to navigate to the list of low-level discovery rules configured for that host.
3. Click Host prototypes for the required discovery rule to navigate to the list of host prototypes.
4. Click the Create host prototype button in the upper right corner.
Host prototypes have the same parameters as regular hosts; however, the following parameters support different or additional
configuration:
Parameter Description
Host name This parameter must contain at least one low-level discovery macro to ensure unique host
names for created hosts.
Visible name Low-level discovery macros are supported.
Group prototypes Allows specifying host group prototypes by using low-level discovery macros.
Based on the specified group prototypes, host groups will be discovered, created and linked to
the created hosts; discovered groups that have already been created by other low-level
discovery rules will also be linked to the created hosts. However, discovered host groups that
match manually created host groups will not be linked to the created hosts.
Interfaces Set whether discovered hosts inherit the IP from the host that the discovery rule belongs to
(default), or get custom interfaces.
Low-level discovery macros and user macros are supported.
Create enabled Set the status of discovered hosts; if unchecked, hosts will be created as disabled.
Discover Set whether hosts will be created from the host prototype; if unchecked, hosts will not be created
from the host prototype (unless this setting is overridden in the low-level discovery rule).
Note:
Low-level discovery macros are also supported for tag values and host prototype user macro values.
Value maps are not supported for host prototypes.
For an example of how to configure a host prototype, see Virtual machine monitoring.
Host interfaces
To add custom interfaces, switch the Interface selector from ”Inherit” to ”Custom”. Click and select the interface type -
Zabbix agent, SNMP, JMX, IPMI.
Note:
If Custom is selected, but no interfaces have been set, the hosts will be created without interfaces.
If Inherit is selected and the host prototype belongs to a template, all discovered hosts will inherit the host interface from the host
to which the template is linked.
If multiple custom interfaces are specified, the primary interface can be set in the Default column.
For an example of how to configure a custom host interface, see VMware monitoring setup example.
Warning:
A host will only be created if a host interface contains correct data.
Discovered hosts
In the host list, discovered hosts are prefixed with the name of the discovery rule that created them.
Discovered hosts inherit most parameters from host prototypes as read-only. Only the following discovered host parameters can
be configured:
• Templates - link additional templates or unlink manually added ones. Templates inherited from a host prototype cannot be
unlinked.
• Status - manually enable/disable a host.
• Tags - manually add tags alongside tags inherited from the host prototype. Manual or inherited tags cannot have duplicates
(tags with the same name and value). If an inherited tag has the same name and value as a manual tag, it will replace the
manual tag during discovery.
• Macros - manually add host macros alongside macros inherited from the host prototype; change macro values and types on
the host level.
• Description.
Discovered hosts can be deleted manually. Note, however, that they will be discovered again if discovery is enabled for them.
Hosts that are no longer discovered will be:
• automatically disabled (based on the Disable lost resources value of the discovery rule)
• automatically deleted (based on the Delete lost resources value of the discovery rule).
Note:
Zabbix does not support nested host prototypes, that is, host prototypes on hosts discovered by low-level discovery rules.
5 Notes on low-level discovery
LLD macros may be used inside user macro context, for example, in trigger prototypes.
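For example (a hypothetical macro name), a trigger prototype expression could compare a value against {$LOW_SPACE_LIMIT:"{#FSNAME}"}, so that after discovery the threshold can be overridden per file system by defining the user macro with the corresponding context.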
It is possible to define several low-level discovery rules with the same discovery item.
To do that you need to define the Alias agent parameter, allowing to use altered discovery item keys in different discovery rules,
for example vfs.fs.discovery[foo], vfs.fs.discovery[bar], etc.
Data limits for return values
There is no limit for low-level discovery rule JSON data if it is received directly by Zabbix server. This is because the return values
are processed without being stored in a database.
There is also no limit for custom low-level discovery rules. However, if custom low-level discovery rule data is retrieved using a
user parameter, the user parameter return value limit applies.
If data has to go through Zabbix proxy, it has to store this data in the database. In such a case, database limits apply.
6 Discovery rules
Please use the sidebar to see discovery rule configuration examples for various cases.
1 Discovery of file systems
Overview
It is possible to discover mounted file systems and their properties, such as:
• mountpoint name
• filesystem type
• filesystem size
• inode statistics
• mount options
Configuration
Master item
vfs.fs.get
Set the type of information to ”Text” for possibly big JSON data.
The data returned by this item will contain something like the following for a mounted filesystem:
[
{
"fsname": "/",
"fstype": "ext4",
"bytes": {
"total": 249405239296,
"free": 24069537792,
"used": 212595294208,
"pfree": 10.170306,
"pused": 89.829694
},
"inodes": {
"total": 15532032,
"free": 12656665,
"used": 2875367,
"pfree": 81.487503,
"pused": 18.512497
},
"options": "rw,noatime,errors=remount-ro"
}
]
Dependent LLD rule
Create a low-level discovery rule of the ”Dependent item” type. As master item, select the vfs.fs.get item we created.
In the ”LLD macros” tab define custom macros with the corresponding JSONPath:
In the ”Filters” tab you may add a regular expression that filters only read-write filesystems:
Create an item prototype with ”Dependent item” type in this LLD rule. As master item for this prototype select the vfs.fs.get
item we created.
Note the use of custom macros in the item prototype name and key:
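A hedged illustration of what the prototype could look like (the key name is arbitrary and only has to be unique, since this is a dependent item):
Name: Free space on {#FSNAME}
Key: fs.free.bytes[{#FSNAME}]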
In the item prototype ”Preprocessing” tab select JSONPath and use the following JSONPath expression as parameter:
$.[?(@.fsname=='{#FSNAME}')].bytes.free.first()
When discovery starts, one item per each mountpoint will be created. This item will return the number of free bytes for the given
mountpoint.
2 Discovery of network interfaces
In a similar way as file systems are discovered, it is also possible to discover network interfaces.
Item key
net.if.discovery
Supported macros
You may use the {#IFNAME} macro in the discovery rule filter and prototypes of items, triggers and graphs.
Examples of item prototypes that you might wish to create based on ”net.if.discovery”:
• ”net.if.in[{#IFNAME},bytes]”,
• ”net.if.out[{#IFNAME},bytes]”.
3 Discovery of CPUs and CPU cores
In a similar way as file systems are discovered, it is possible to also discover CPUs and CPU cores.
Item key
system.cpu.discovery
Supported macros
This discovery key returns two macros - {#CPU.NUMBER} and {#CPU.STATUS} identifying the CPU order number and status respec-
tively. Note that a clear distinction cannot be made between actual, physical processors, cores and hyperthreads. {#CPU.STATUS}
on Linux, UNIX and BSD systems returns the status of the processor, which can be either ”online” or ”offline”. On Windows systems,
this same macro may represent a third value - ”unknown” - which indicates that a processor has been detected, but no information
has been collected for it yet.
CPU discovery relies on the agent's collector process to remain consistent with the data provided by the collector and to save resources
on obtaining the data. As a result, this item key does not work with the test (-t) command-line flag of the agent binary; it
will return a NOT_SUPPORTED status and an accompanying message indicating that the collector process has not been started.
Item prototypes that can be created based on CPU discovery include, for example:
• system.cpu.util[{#CPU.NUMBER},<type>,<mode>]
• system.hw.cpu[{#CPU.NUMBER},<info>]
For detailed item key description, see Zabbix agent item keys.
Overview
This discovery method of SNMP OIDs has been supported since Zabbix server/proxy 6.4.
Item key
Create an SNMP agent item, specifying the following in the SNMP OID field:
walk[1.3.6.1.2.1.2.2.1.2,1.3.6.1.2.1.2.2.1.3]
This item will perform an snmpwalk for the OIDs specified in the parameters (1.3.6.1.2.1.2.2.1.2, 1.3.6.1.2.1.2.2.1.3), returning a
concatenated list of values, e.g.:
.1.3.6.1.2.1.2.2.1.2.1 = STRING: lo
.1.3.6.1.2.1.2.2.1.2.2 = STRING: ens33
.1.3.6.1.2.1.2.2.1.2.3 = STRING: ens37
.1.3.6.1.2.1.2.2.1.3.1 = INTEGER: 24
.1.3.6.1.2.1.2.2.1.3.2 = INTEGER: 6
.1.3.6.1.2.1.2.2.1.3.3 = INTEGER: 6
Dependent discovery rule
Go to the discovery rules of your template/host. Click on Create discovery rule in the upper right corner of the screen.
In the Preprocessing tab, select the SNMP walk to JSON preprocessing step.
In the field name specify a valid LLD macro name. Select the corresponding OID path to discover values from.
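For example (field names chosen to match the JSON below), the preprocessing parameters could map:
{#IFDESCR} → 1.3.6.1.2.1.2.2.1.2
{#IFTYPE} → 1.3.6.1.2.1.2.2.1.3
The step will then return LLD rows similar to the following ({#SNMPINDEX} is added automatically):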
[
{
"{#SNMPINDEX}": "1",
"{#IFDESCR}": "lo",
"{#IFTYPE}": "24"
},
{
"{#SNMPINDEX}": "2",
"{#IFDESCR}": "ens33",
"{#IFTYPE}": "6"
},
{
"{#SNMPINDEX}": "3",
"{#IFDESCR}": "ens37",
"{#IFTYPE}": "6"
}
]
If an entity does not have the specified OID, then the corresponding macro will be omitted for this entity.
Item prototypes must be created as dependent item prototypes, using macros from the discovery rule.
Dependent items will obtain their values from the walk[] master item. Thus it will not be necessary for each discovered item to
query the SNMP device independently.
Trigger and graph prototypes may also be created by using macros from the discovery rule.
Discovered entities
When server runs, it will create real dependent items, triggers and graphs based on the values the SNMP discovery rule returns.
Overview
Item key
Unlike with file system and network interface discovery, the item does not necessarily have to have an ”snmp.discovery” key - an
item type of SNMP agent is sufficient.
• Click on Create discovery rule in the upper right corner of the screen
• Fill in the discovery rule form with the required details as in the screenshot below
The OIDs to discover are defined in the SNMP OID field in the following format: discovery[{#MACRO1}, oid1, {#MACRO2}, oid2, …]
where {#MACRO1}, {#MACRO2} … are valid LLD macro names and oid1, oid2... are OIDs capable of generating meaningful values
for these macros. A built-in macro {#SNMPINDEX} containing index of the discovered OID is applied to discovered entities. The
discovered entities are grouped by {#SNMPINDEX} macro value.
For example, performing SNMP discovery with the key discovery[{#IFDESCR}, ifDescr, {#IFPHYSADDRESS}, ifPhysAddress] on a device that returns the following values:
IF-MIB::ifDescr.1 = STRING: WAN
IF-MIB::ifDescr.2 = STRING: LAN1
IF-MIB::ifDescr.3 = STRING: LAN2
IF-MIB::ifPhysAddress.1 = STRING: 8:0:27:90:7a:75
IF-MIB::ifPhysAddress.2 = STRING: 8:0:27:90:7a:76
IF-MIB::ifPhysAddress.3 = STRING: 8:0:27:2b:af:9e
will produce the following JSON:
[
{
"{#SNMPINDEX}": "1",
"{#IFDESCR}": "WAN",
"{#IFPHYSADDRESS}": "8:0:27:90:7a:75"
},
{
"{#SNMPINDEX}": "2",
"{#IFDESCR}": "LAN1",
"{#IFPHYSADDRESS}": "8:0:27:90:7a:76"
},
{
"{#SNMPINDEX}": "3",
"{#IFDESCR}": "LAN2",
"{#IFPHYSADDRESS}": "8:0:27:2b:af:9e"
}
]
If an entity does not have the specified OID, then the corresponding macro will be omitted for this entity. For example if we have
the following data:
ifDescr.1 "Interface #1"
ifDescr.2 "Interface #2"
ifDescr.4 "Interface #4"
ifAlias.1 "eth0"
ifAlias.2 "eth1"
ifAlias.3 "eth2"
ifAlias.5 "eth4"
Then in this case SNMP discovery discovery[{#IFDESCR}, ifDescr, {#IFALIAS}, ifAlias] will return the following
structure:
[
{
"{#SNMPINDEX}": 1,
"{#IFDESCR}": "Interface #1",
"{#IFALIAS}": "eth0"
},
{
"{#SNMPINDEX}": 2,
"{#IFDESCR}": "Interface #2",
"{#IFALIAS}": "eth1"
},
{
"{#SNMPINDEX}": 3,
"{#IFALIAS}": "eth2"
},
{
"{#SNMPINDEX}": 4,
"{#IFDESCR}": "Interface #4"
},
{
"{#SNMPINDEX}": 5,
"{#IFALIAS}": "eth4"
}
]
Item prototypes
The following screenshot illustrates how we can use these macros in item prototypes:
Trigger prototypes
The following screenshot illustrates how we can use these macros in trigger prototypes:
Graph prototypes
The following screenshot illustrates how we can use these macros in graph prototypes:
A summary of our discovery rule:
Discovered entities
When server runs, it will create real items, triggers and graphs based on the values the SNMP discovery rule returns. In the host
configuration they are prefixed with an orange link to a discovery rule they come from.
6 Discovery of JMX objects
Overview
It is possible to discover all JMX MBeans or MBean attributes or to specify a pattern for the discovery of these objects.
It is mandatory to understand the difference between an MBean and MBean attributes for discovery rule configuration. An MBean
is an object which can represent a device, an application, or any resource that needs to be managed.
For example, there is an MBean which represents a web server. Its attributes are connection count, thread count, request timeout,
HTTP file cache, memory usage, etc. Expressing this thought in more comprehensible terms, we can define a coffee machine as
an MBean which has the following attributes to be monitored: water amount per cup, average consumption of water for a certain
period of time, number of coffee beans required per cup, coffee beans and water refill time, etc.
Item key
Two item keys are supported for JMX object discovery - jmx.discovery[] and jmx.get[]:
Item key: jmx.get[<discovery mode>,<object name>,<unique short description>]
Description: This item returns a JSON array with MBean objects or their attributes. Compared to jmx.discovery[] it does not define LLD macros.
Parameters:
discovery mode - one of the following: attributes (retrieve JMX MBean attributes, default) or beans (retrieve JMX MBeans);
object name - object name pattern (see documentation) identifying the MBean names to be retrieved (empty by default, retrieving all registered beans);
unique short description - a unique description that allows multiple JMX items with the same discovery mode and object name on the host (optional).
Comments: When using this item, it is needed to define custom low-level discovery macros, pointing to values extracted from the returned JSON using JSONPath.
Attention:
If no parameters are passed, all MBean attributes from JMX are requested. Not specifying parameters for JMX discovery or
trying to receive all attributes for a wide range like *:type=*,name=* may lead to potential performance problems.
Using jmx.discovery
This item returns a JSON object with low-level discovery macros describing MBean objects or attributes. For example, in the
discovery of MBean attributes (reformatted for clarity):
[
{
"{#JMXVALUE}":"0",
"{#JMXTYPE}":"java.lang.Long",
"{#JMXOBJ}":"java.lang:type=GarbageCollector,name=PS Scavenge",
"{#JMXDESC}":"java.lang:type=GarbageCollector,name=PS Scavenge,CollectionCount",
"{#JMXATTR}":"CollectionCount"
},
{
"{#JMXVALUE}":"0",
"{#JMXTYPE}":"java.lang.Long",
"{#JMXOBJ}":"java.lang:type=GarbageCollector,name=PS Scavenge",
"{#JMXDESC}":"java.lang:type=GarbageCollector,name=PS Scavenge,CollectionTime",
"{#JMXATTR}":"CollectionTime"
},
{
"{#JMXVALUE}":"true",
"{#JMXTYPE}":"java.lang.Boolean",
"{#JMXOBJ}":"java.lang:type=GarbageCollector,name=PS Scavenge",
"{#JMXDESC}":"java.lang:type=GarbageCollector,name=PS Scavenge,Valid",
"{#JMXATTR}":"Valid"
},
{
"{#JMXVALUE}":"PS Scavenge",
"{#JMXTYPE}":"java.lang.String",
"{#JMXOBJ}":"java.lang:type=GarbageCollector,name=PS Scavenge",
"{#JMXDESC}":"java.lang:type=GarbageCollector,name=PS Scavenge,Name",
"{#JMXATTR}":"Name"
},
{
"{#JMXVALUE}":"java.lang:type=GarbageCollector,name=PS Scavenge",
"{#JMXTYPE}":"javax.management.ObjectName",
"{#JMXOBJ}":"java.lang:type=GarbageCollector,name=PS Scavenge",
"{#JMXDESC}":"java.lang:type=GarbageCollector,name=PS Scavenge,ObjectName",
"{#JMXATTR}":"ObjectName"
}
]
In the discovery of MBeans (beans mode, reformatted for clarity):
[
{
"{#JMXDOMAIN}":"java.lang",
"{#JMXTYPE}":"GarbageCollector",
"{#JMXOBJ}":"java.lang:type=GarbageCollector,name=PS Scavenge",
"{#JMXNAME}":"PS Scavenge"
}
]
Supported macros
The following macros are supported for use in the discovery rule filter and prototypes of items, triggers and graphs:
Macro Description
Limitations
There are some limitations associated with the algorithm of creating LLD macro names from MBean property names:
Please consider this jmx.discovery (with ”beans” mode) example. MBean has the following properties defined (some of which will
be ignored; see below):
name=test
���=Type
attributes []=1,2,3
Name=NameOfTheTest
domAin=some
As a result of JMX discovery, the following LLD macros will be generated:
Examples
Let's review two more practical examples of LLD rule creation using MBeans. To better understand the difference between an
LLD rule collecting MBeans and an LLD rule collecting MBean attributes, take a look at the following table:
MBean1             MBean2             MBean3
MBean1Attribute1   MBean2Attribute1   MBean3Attribute1
MBean1Attribute2   MBean2Attribute2   MBean3Attribute2
MBean1Attribute3   MBean2Attribute3   MBean3Attribute3
Example 1: Discovering MBeans
This rule will return 3 objects - the top row of each column: MBean1, MBean2, MBean3.
For more information about objects please refer to supported macros table, Discovery of MBeans section.
Discovery rule configuration collecting MBeans (without the attributes) looks like the following:
jmx.discovery[beans,"*:type=GarbageCollector,name=*"]
All the garbage collectors without attributes will be discovered. As garbage collectors have the same attribute set, we can use the
desired attributes in item prototypes in the following way:
jmx[{#JMXOBJ},CollectionCount]
jmx[{#JMXOBJ},CollectionTime]
jmx[{#JMXOBJ},Valid]
The LLD discovery rule will result in something close to this (items are discovered for two garbage collectors):
Example 2: Discovering MBean attributes
This rule will return 9 objects with the following fields: MBean1Attribute1, MBean2Attribute1, MBean3Attribute1, MBean1Attribute2,
MBean2Attribute2, MBean3Attribute2, MBean1Attribute3, MBean2Attribute3, MBean3Attribute3.
For more information about objects please refer to supported macros table, Discovery of MBean attributes section.
Discovery rule configuration collecting MBean attributes looks like the following:
jmx.discovery[attributes,"*:type=GarbageCollector,name=*"]
All the garbage collector attributes will be discovered, using a single item prototype.
In this particular case an item will be created from the prototype for every MBean attribute. The main drawback of this configuration
is that trigger creation from trigger prototypes is impossible, as there is only one item prototype for all attributes. So this setup can
be used for data collection, but is not recommended for automatic monitoring.
Using jmx.get
jmx.get[] is similar to the jmx.discovery[] item, but it does not turn Java object properties into low-level discovery macro
names and therefore can return values without limitations that are associated with LLD macro name generation such as hyphens
or non-ASCII characters.
When using jmx.get[] for discovery, low-level discovery macros can be defined separately in the custom LLD macro tab of the
discovery rule configuration, using JSONPath to point to the required values.
Discovering MBeans
Discovery item: jmx.get[beans,"com.example:type=*,*"]
Response:
[
{
"object": "com.example:type=Hello,data-src=data-base,����=��������",
"domain": "com.example",
"properties": {
"data-src": "data-base",
"����": "��������",
"type": "Hello"
}
},
{
"object": "com.example:type=Atomic",
"domain": "com.example",
"properties": {
"type": "Atomic"
}
}
]
[
{
"object": "com.example:type=*",
"domain": "com.example",
"properties": {
"type": "Simple"
}
},
{
"object": "com.zabbix:type=yes,domain=zabbix.com,data-source=/dev/rand,����=��������,obj=true",
"domain": "com.zabbix",
"properties": {
"type": "Hello",
"domain": "com.example",
"data-source": "/dev/rand",
"����": "��������",
"obj": true
}
}
]
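For example, based on the JSON keys shown above, custom LLD macros could be defined in the LLD macros tab of the discovery rule using JSONPath mappings such as the following (the macro names here are illustrative):
{#JMXOBJ} → $.object
{#JMXDOMAIN} → $.domain
{#JMXTYPE} → $.properties.type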
Overview
Configuration
Master item
Create an IPMI item using the following item key:
ipmi.get
Set the type of information to ”Text” for possibly big JSON data.
Create an item prototype with ”Dependent item” type in this LLD rule. As master item for this prototype select the ipmi.get item
we created.
Note the use of the {#SENSOR_ID} macro in the item prototype name and key:
In the item prototype ”Preprocessing” tab select JSONPath and use the following JSONPath expression as parameter:
$.[?(@.id=='{#SENSOR_ID}')].value.first()
When discovery starts, one item per each IPMI sensor will be created. This item will return the integer value of the given sensor.
Overview
Item key
systemd.unit.discovery
Attention:
This item key is only supported in Zabbix agent 2.
This item returns a JSON with information about systemd units, for example:
[{
"{#UNIT.NAME}": "mysqld.service",
"{#UNIT.DESCRIPTION}": "MySQL Server",
"{#UNIT.LOADSTATE}": "loaded",
"{#UNIT.ACTIVESTATE}": "active",
"{#UNIT.SUBSTATE}": "running",
"{#UNIT.FOLLOWED}": "",
"{#UNIT.PATH}": "/org/freedesktop/systemd1/unit/mysqld_2eservice",
"{#UNIT.JOBID}": 0,
"{#UNIT.JOBTYPE}": "",
"{#UNIT.JOBPATH}": "/",
"{#UNIT.UNITFILESTATE}": "enabled"
}, {
"{#UNIT.NAME}": "systemd-journald.socket",
"{#UNIT.DESCRIPTION}": "Journal Socket",
"{#UNIT.LOADSTATE}": "loaded",
"{#UNIT.ACTIVESTATE}": "active",
"{#UNIT.SUBSTATE}": "running",
"{#UNIT.FOLLOWED}": "",
"{#UNIT.PATH}": "/org/freedesktop/systemd1/unit/systemd_2djournald_2esocket",
"{#UNIT.JOBID}": 0,
"{#UNIT.JOBTYPE}": "",
"{#UNIT.JOBPATH}": "/",
"{#UNIT.UNITFILESTATE}": "enabled"
}]
Discovery of disabled systemd units
It is also possible to discover disabled systemd units. In this case three macros are returned in the resulting JSON:
• {#UNIT.PATH}
• {#UNIT.ACTIVESTATE}
• {#UNIT.UNITFILESTATE}.
Attention:
To have items and triggers created from prototypes for disabled systemd units, make sure to adjust (or remove) prohibiting
LLD filters for {#UNIT.ACTIVESTATE} and {#UNIT.UNITFILESTATE}.
Supported macros
The following macros are supported for use in the discovery rule filter and prototypes of items, triggers and graphs:
Macro Description
Item prototypes
Item prototypes that can be created based on systemd service discovery include, for example:
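As a sketch, such prototypes would typically be based on the systemd.unit.info key of Zabbix agent 2; the property name in the second example is an assumption for illustration:
systemd.unit.info["{#UNIT.NAME}"]
systemd.unit.info["{#UNIT.NAME}",LoadState]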
Overview
In a similar way as file systems are discovered, it is possible to also discover Windows services.
Item key
service.discovery
Supported macros
The following macros are supported for use in the discovery rule filter and prototypes of items, triggers and graphs:
Macro Description
Based on Windows service discovery you may create an item prototype like
service.info[{#SERVICE.NAME},<param>]
where param accepts the following values: state, displayname, path, user, startup or description.
For example, to acquire the display name of a service you may use a ”service.info[{#SERVICE.NAME},displayname]” item. If
param value is not specified (”service.info[{#SERVICE.NAME}]”), the default state parameter is used.
Overview
It is possible to discover object instances of Windows performance counters. This is useful for multi-instance performance counters.
Item key
perf_instance.discovery[object]
or, to be able to provide the object name in English only, independently of OS localization:
perf_instance_en.discovery[object]
For example:
perf_instance.discovery[Processador]
perf_instance_en.discovery[Processor]
Supported macros
The discovery will return all instances of the specified object in the {#INSTANCE} macro, which may be used in the prototypes of
perf_counter and perf_counter_en items.
[
{"{#INSTANCE}":"0"},
{"{#INSTANCE}":"1"},
{"{#INSTANCE}":"_Total"}
]
For example, if the item key used in the discovery rule is:
perf_instance.discovery[Processor]
you may create an item prototype:
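For instance, a prototype could reference the discovered instance through the {#INSTANCE} macro (the counter path below is an illustrative example):
perf_counter["\Processor({#INSTANCE})\% Processor Time"]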
• If the specified object is not found or does not support variable instances then the discovery item will become NOTSUPPORTED.
• If the specified object supports variable instances, but currently does not have any instances, then an empty JSON array will
be returned.
• In case of duplicate instances they will be skipped.
Overview
WMI is a powerful interface in Windows that can be used for retrieving various information about Windows components, services,
state and software installed.
It can be used for physical disk discovery and their performance data collection, network interface discovery, Hyper-V guest
discovery, monitoring Windows services and many other things in Windows OS.
This type of low-level discovery is done using WQL queries whose results get automatically transformed into a JSON object suitable
for low-level discovery.
Item key
wmi.getall[<namespace>,<query>]
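For instance, a physical disk query similar to the one that produced the sample output below might look like this (the namespace and WQL text are illustrative):
wmi.getall[root\cimv2,select * from Win32_DiskDrive]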
This item transforms the query result into a JSON array. For example:
[
{
"DeviceID" : "\\.\PHYSICALDRIVE0",
"BytesPerSector" : 512,
"Capabilities" : [
3,
4
],
"CapabilityDescriptions" : [
"Random Access",
"Supports Writing"
],
"Caption" : "VBOX HARDDISK ATA Device",
"ConfigManagerErrorCode" : "0",
"ConfigManagerUserConfig" : "false",
"CreationClassName" : "Win32_DiskDrive",
"Description" : "Disk drive",
"FirmwareRevision" : "1.0",
"Index" : 0,
"InterfaceType" : "IDE"
},
{
"DeviceID" : "\\.\PHYSICALDRIVE1",
"BytesPerSector" : 512,
"Capabilities" : [
3,
4
],
"CapabilityDescriptions" : [
"Random Access",
"Supports Writing"
],
"Caption" : "VBOX HARDDISK ATA Device",
"ConfigManagerErrorCode" : "0",
"ConfigManagerUserConfig" : "false",
"CreationClassName" : "Win32_DiskDrive",
"Description" : "Disk drive",
"FirmwareRevision" : "1.0",
"Index" : 1,
"InterfaceType" : "IDE"
}
]
Even though no low-level discovery macros are created in the returned JSON, these macros can be defined by the user as an
additional step, using the custom LLD macro functionality with JSONPath pointing to the discovered values in the returned JSON.
The macros then can be used to create item, trigger, etc prototypes.
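For example, with the JSON above, mappings like the following could be added as custom LLD macros (the macro names are illustrative):
{#DEVICEID} → $.DeviceID
{#INDEX} → $.Index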
Overview
This type of low-level discovery is done using SQL queries, whose results get automatically transformed into a JSON object suitable
for low-level discovery.
Item key
SQL queries are performed using a ”Database monitor” item type. Therefore, most of the instructions on ODBC monitoring page
apply in order to get a working ”Database monitor” discovery rule.
• db.odbc.discovery[<unique short description>,<dsn>,<connection string>] - this item transforms the SQL query result
into a JSON array, turning the column names from the query result into low-level discovery macro names paired with the dis-
covered field values. These macros can be used in creating item, trigger, etc prototypes. See also: Using db.odbc.discovery.
• db.odbc.get[<unique short description>,<dsn>,<connection string>] - this item transforms the SQL query result into a
JSON array, keeping the original column names from the query result as a field name in JSON paired with the discovered
values. Compared to db.odbc.discovery[], this item does not create low-level discovery macros in the returned JSON,
therefore there is no need to check if the column names can be valid macro names. The low-level discovery macros can be
defined as an additional step as required, using the custom LLD macro functionality with JSONPath pointing to the discovered
values in the returned JSON. See also: Using db.odbc.get.
Using db.odbc.discovery
As a practical example to illustrate how the SQL query is transformed into JSON, let us consider low-level discovery of Zabbix proxies
by performing an ODBC query on Zabbix database. This is useful for automatic creation of ”zabbix[proxy,<name>,lastaccess]”
internal items to monitor which proxies are alive.
All mandatory input fields are marked with a red asterisk.
Here, the following direct query on Zabbix database is used to select all Zabbix proxies, together with the number of hosts they
are monitoring. The number of hosts can be used, for instance, to filter out empty proxies:
mysql> SELECT h1.host, COUNT(h2.host) AS count FROM hosts h1 LEFT JOIN hosts h2 ON h1.hostid = h2.proxy_ho
+---------+-------+
| host | count |
+---------+-------+
| Japan 1 | 5 |
| Japan 2 | 12 |
| Latvia | 3 |
+---------+-------+
3 rows in set (0.01 sec)
By the internal workings of ”db.odbc.discovery[,{$DSN}]” item, the result of this query gets automatically transformed into the
following JSON:
[
{
"{#HOST}": "Japan 1",
"{#COUNT}": "5"
},
{
"{#HOST}": "Japan 2",
"{#COUNT}": "12"
},
{
"{#HOST}": "Latvia",
"{#COUNT}": "3"
}
]
It can be seen that column names become macro names and selected rows become the values of these macros.
Note:
If it is not obvious how a column name would be transformed into a macro name, it is suggested to use column aliases like
”COUNT(h2.host) AS count” in the example above.
In case a column name cannot be converted into a valid macro name, the discovery rule becomes not supported, with
the error message detailing the offending column number. If additional help is desired, the obtained column names are
provided under DebugLevel=4 in Zabbix server log file:
$ grep db.odbc.discovery /tmp/zabbix_server.log
...
23876:20150114:153410.856 In db_odbc_discovery() query:'SELECT h1.host, COUNT(h2.host) FROM hosts h1 L
23876:20150114:153410.860 db_odbc_discovery() column[1]:'host'
23876:20150114:153410.860 db_odbc_discovery() column[2]:'COUNT(h2.host)'
23876:20150114:153410.860 End of db_odbc_discovery():NOTSUPPORTED
23876:20150114:153410.860 Item [Zabbix server:db.odbc.discovery[proxies,{$DSN}]] error: Cannot convert
Now that we understand how a SQL query is transformed into a JSON object, we can use {#HOST} macro in item prototypes:
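For instance, an item prototype for the internal proxy check mentioned above could use the macro in its key:
zabbix[proxy,{#HOST},lastaccess]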
Using db.odbc.get
Using the same SQL query as above, the db.odbc.get item keeps the original column names as JSON field names and returns:
[
{
"host": "Japan 1",
"count": "5"
},
{
"host": "Japan 2",
"count": "12"
},
{
"host": "Latvia",
"count": "3"
}
]
As you can see, there are no low-level discovery macros there. However, custom low-level discovery macros can be created in the
LLD macros tab of a discovery rule using JSONPath, for example:
{#HOST} → $.host
Now this {#HOST} macro may be used in item prototypes:
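For example, the same internal check as before:
zabbix[proxy,{#HOST},lastaccess]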
Overview
Data provided in Prometheus line format can be used for low-level discovery.
See Prometheus checks for details how Prometheus data querying is implemented in Zabbix.
Configuration
The low-level discovery rule should be created as a dependent item to the HTTP master item that collects Prometheus data.
Prometheus to JSON
In the discovery rule, go to the Preprocessing tab and select the Prometheus to JSON preprocessing option. Data in JSON format
are needed for discovery and the Prometheus to JSON preprocessing option will return exactly that, with the following attributes:
• metric name
• metric value
• help (if present)
• type (if present)
• labels (if present)
• raw line
For example, the following JSON is returned from Prometheus lines like those shown in the line_raw attributes:
[
{
"name": "wmi_logical_disk_free_bytes",
"help": "Free space in bytes (LogicalDisk.PercentFreeSpace)",
"type": "gauge",
"labels": {
"volume": "C:"
},
"value": "3.5180249088e+11",
"line_raw": "wmi_logical_disk_free_bytes{volume=\"C:\"} 3.5180249088e+11"
},
{
"name": "wmi_logical_disk_free_bytes",
"help": "Free space in bytes (LogicalDisk.PercentFreeSpace)",
"type": "gauge",
"labels": {
"volume": "D:"
},
"value": "2.627731456e+09",
"line_raw": "wmi_logical_disk_free_bytes{volume=\"D:\"} 2.627731456e+09"
},
{
"name": "wmi_logical_disk_free_bytes",
"help": "Free space in bytes (LogicalDisk.PercentFreeSpace)",
"type": "gauge",
"labels": {
"volume": "HarddiskVolume4"
},
"value": "4.59276288e+08",
"line_raw": "wmi_logical_disk_free_bytes{volume=\"HarddiskVolume4\"} 4.59276288e+08"
}
]
Next you have to go to the LLD macros tab and make the following mappings:
{#VOLUME}=$.labels['volume']
{#METRIC}=$['name']
{#HELP}=$['help']
Item prototype
Create an item prototype that uses the discovered macros, with preprocessing options to extract the value of the discovered metric:
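As a sketch (the item key is hypothetical), the prototype could be a dependent item with a Prometheus pattern preprocessing step that selects the metric for the discovered volume:
Item prototype key: disk.free.bytes[{#VOLUME}]
Preprocessing step: Prometheus pattern - wmi_logical_disk_free_bytes{volume="{#VOLUME}"}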
In a similar way as file systems are discovered, it is possible to also discover block devices and their type.
Item key
vfs.dev.discovery
This item is supported on Linux platforms only.
You may create discovery rules using this discovery item and:
• filter: {#DEVNAME} matches sd[\D]$ - to discover devices named "sda", "sdb", "sdc", ...
• filter: {#DEVTYPE} matches disk AND {#DEVNAME} does not match ^loop.* - to discover disk type devices whose
name does not start with ”loop”
Supported macros
This discovery key returns two macros - {#DEVNAME} and {#DEVTYPE} identifying the block device name and type respectively,
e.g.:
[
{
"{#DEVNAME}":"loop1",
"{#DEVTYPE}":"disk"
},
{
"{#DEVNAME}":"dm-0",
"{#DEVTYPE}":"disk"
},
{
"{#DEVNAME}":"sda",
"{#DEVTYPE}":"disk"
},
{
"{#DEVNAME}":"sda1",
"{#DEVTYPE}":"partition"
}
]
Block device discovery allows using vfs.dev.read[] and vfs.dev.write[] items to create item prototypes with the {#DEVNAME} macro, for example:
• ”vfs.dev.read[{#DEVNAME},sps]”
• ”vfs.dev.write[{#DEVNAME},sps]”
Overview
Item key
The item to use in the discovery rule is the
zabbix[host,discovery,interfaces]
internal item. For example:
[{"{#IF.CONN}":"192.168.3.1","{#IF.IP}":"192.168.3.1","{#IF.DNS}":"","{#IF.PORT}":"10050","{#IF.TYPE}":"AG
With multiple interfaces their records in JSON are ordered by:
• Interface type,
• Default - the default interface is put before non-default interfaces,
• Interface ID (in ascending order).
Supported macros
The following macros are supported for use in the discovery rule filter and prototypes of items, triggers and graphs:
Macro Description
7 Custom LLD rules
Overview
It is also possible to create a completely custom LLD rule, discovering any type of entities - for example, databases on a database
server.
To do so, a custom item should be created that returns JSON, specifying found objects and optionally - some properties of them.
The amount of macros per entity is not limited - while the built-in discovery rules return either one or two macros (for example,
two for filesystem discovery), it is possible to return more.
Example
The required JSON format is best illustrated with an example. Suppose we are running an old Zabbix 1.8 agent (one that does not
support ”vfs.fs.discovery”), but we still need to discover file systems. Here is a simple Perl script for Linux that discovers mounted
file systems and outputs JSON, which includes both file system name and type. One way to use it would be as a UserParameter
with key ”vfs.fs.discovery_perl”:
#!/usr/bin/perl
$first = 1;
print "[\n";
# one way to enumerate mounted file systems is /proc/mounts;
# fields 2 and 3 of each line are the mount point and the file system type
for (`cat /proc/mounts`)
{
    ($fsname, $fstype) = m/\S+ (\S+) (\S+)/;
    print "\t,\n" if not $first;
    $first = 0;
    print "\t{\n";
    print "\t\t\"{#FSNAME}\":\"$fsname\",\n";
    print "\t\t\"{#FSTYPE}\":\"$fstype\"\n";
    print "\t}\n";
}
print "]\n";
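The script could then be exposed through the agent configuration, for example (the file path is illustrative):
UserParameter=vfs.fs.discovery_perl,/usr/local/bin/vfs_fs_discovery.pl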
Attention:
Allowed symbols for LLD macro names are 0-9, A-Z, _ and . (dot). Lowercase letters are not supported in the names.
An example of its output (reformatted for clarity) is shown below. JSON for custom discovery checks has to follow the same format.
[
{ "{#FSNAME}":"/", "{#FSTYPE}":"rootfs" },
{ "{#FSNAME}":"/sys", "{#FSTYPE}":"sysfs" },
{ "{#FSNAME}":"/proc", "{#FSTYPE}":"proc" },
{ "{#FSNAME}":"/dev", "{#FSTYPE}":"devtmpfs" },
{ "{#FSNAME}":"/dev/pts", "{#FSTYPE}":"devpts" },
{ "{#FSNAME}":"/lib/init/rw", "{#FSTYPE}":"tmpfs" },
{ "{#FSNAME}":"/dev/shm", "{#FSTYPE}":"tmpfs" },
{ "{#FSNAME}":"/home", "{#FSTYPE}":"ext3" },
{ "{#FSNAME}":"/tmp", "{#FSTYPE}":"ext3" },
{ "{#FSNAME}":"/usr", "{#FSTYPE}":"ext3" },
{ "{#FSNAME}":"/var", "{#FSTYPE}":"ext3" },
{ "{#FSNAME}":"/sys/fs/fuse/connections", "{#FSTYPE}":"fusectl" }
]
In the previous example it is required that the keys match the LLD macro names used in prototypes. An alternative is to extract
LLD macro values using JSONPath {#FSNAME} → $.fsname and {#FSTYPE} → $.fstype, thus making such a script possible:
#!/usr/bin/perl
$first = 1;
print "[\n";
# same enumeration of mounted file systems as in the previous script
for (`cat /proc/mounts`)
{
    ($fsname, $fstype) = m/\S+ (\S+) (\S+)/;
    print "\t,\n" if not $first;
    $first = 0;
    print "\t{\n";
    print "\t\t\"fsname\":\"$fsname\",\n";
    print "\t\t\"fstype\":\"$fstype\"\n";
    print "\t}\n";
}
print "]\n";
An example of its output (reformatted for clarity) is shown below. JSON for custom discovery checks has to follow the same format.
[
{ "fsname":"/", "fstype":"rootfs" },
{ "fsname":"/sys", "fstype":"sysfs" },
{ "fsname":"/proc", "fstype":"proc" },
{ "fsname":"/dev", "fstype":"devtmpfs" },
{ "fsname":"/dev/pts", "fstype":"devpts" },
{ "fsname":"/lib/init/rw", "fstype":"tmpfs" },
{ "fsname":"/dev/shm", "fstype":"tmpfs" },
{ "fsname":"/home", "fstype":"ext3" },
{ "fsname":"/tmp", "fstype":"ext3" },
{ "fsname":"/usr", "fstype":"ext3" },
{ "fsname":"/var", "fstype":"ext3" },
{ "fsname":"/sys/fs/fuse/connections", "fstype":"fusectl" }
]
Then, in the discovery rule’s ”Filter” field, we could specify ”{#FSTYPE}” as a macro and ”rootfs|ext3” as a regular expression.
Note:
You don't have to use the macro names FSNAME/FSTYPE with custom LLD rules; you are free to use whatever names you like.
If JSONPath is used, then an LLD row will be an array element, which can be an object, but also another array or a value.
Note that, if using a user parameter, the return value is limited to 16MB. For more details, see data limits for LLD return values.
16 Distributed monitoring
Overview Zabbix provides an effective and reliable way of monitoring a distributed IT infrastructure using Zabbix proxies.
Proxies can be used to collect data locally on behalf of a centralized Zabbix server and then report the data to the server.
Proxy features
When making a choice of using/not using a proxy, several considerations must be taken into account.
                               Proxy
Lightweight                    Yes
GUI                            No
Works independently            Yes
Easy maintenance               Yes
Automatic DB creation [1]      Yes
Local administration           No
Ready for embedded hardware    Yes
[1] Automatic DB creation feature works only with SQLite. Other supported databases require manual setup.
1 Proxies
Overview A Zabbix proxy can collect performance and availability data on behalf of the Zabbix server. This way, a proxy can
take over some of the load of collecting data, offloading the Zabbix server.
Also, using a proxy is the easiest way of implementing centralized and distributed monitoring, when all agents and proxies report
to one Zabbix server and all data is collected centrally.
The proxy requires only one TCP connection to the Zabbix server. This way it is easier to get around a firewall as you only need to
configure one firewall rule.
Attention:
Zabbix proxy must use a separate database. Pointing it to the Zabbix server database will break the configuration.
All data collected by the proxy is stored locally before transmitting it over to the server. This way no data is lost due to any temporary
communication problems with the server. The ProxyLocalBuffer and ProxyOfflineBuffer parameters in the proxy configuration file
control for how long the data are kept locally.
Attention:
It may happen that a proxy, which receives the latest configuration changes directly from Zabbix server database, has
a more up-to-date configuration than Zabbix server whose configuration may not be updated as fast due to the value of
CacheUpdateFrequency. As a result, the proxy may start gathering data and send them to the Zabbix server, which will ignore
these data.
Zabbix proxy is a data collector. It does not calculate triggers, process events or send alerts. For an overview of what proxy
functionality is, review the following table:
Function                          Supported by proxy
Items
Zabbix agent checks               Yes
Zabbix agent checks (active)      Yes [1]
Simple checks                     Yes
Note:
[1] To make sure that an agent asks the proxy (and not the server) for active checks, the proxy must be listed in the
ServerActive parameter in the agent configuration file.
If Zabbix server was down for some time, and proxies have collected a lot of data, and then the server starts, it may get overloaded
(history cache usage stays at 95-100% for some time). This overload could result in a performance hit, where checks are processed
more slowly than they should be. Protection from this scenario was implemented to avoid the problems that arise from overloading
the history cache.
When the Zabbix server history cache is full, history cache write access is throttled, stalling server data gathering processes.
The most common history cache overload case is after server downtime, when proxies are uploading gathered data. To avoid this,
proxy throttling was added (currently it cannot be disabled).
Zabbix server will stop accepting data from proxies when history cache usage reaches 80%. Instead, those proxies will be put on
a throttling list. This will continue until the cache usage falls down to 60%. Then the server will start accepting data from proxies one
by one, as defined by the throttling list. This means the first proxy that attempted to upload data during the throttling period will be
served first, and until it is done the server will not accept data from other proxies.
This throttling mode will continue until either cache usage hits 80% again, or falls down to 20%, or the throttling list is empty. In the
first case the server will stop accepting proxy data again. In the other two cases the server will start working normally, accepting
data from all proxies.
History write cache usage    Zabbix server mode    Zabbix server action
Reaches 80%                  Wait                  Stops accepting proxy data, but maintains a throttling list (prioritized list of proxies to be contacted later).
Drops to 60%                 Throttled             Starts processing the throttling list, but still not accepting proxy data.
Drops to 20%                 Normal                Drops the throttling list and starts accepting proxy data normally.
You may use the zabbix[wcache,history,pused] internal item to correlate this behavior of Zabbix server with a metric.
Configuration Once you have installed and configured a proxy, it is time to configure it in the Zabbix frontend.
Adding proxies
To configure a proxy in Zabbix frontend:
Parameter - Description
Proxy name - Enter the proxy name. It must be the same name as in the Hostname parameter in the proxy configuration file.
Proxy group - Select the proxy group for proxy load balancing/high availability. Only one group can be selected.
Address for active agents - Enter the address the monitored active agents or senders must connect to. Supported only for Zabbix 7.0 agents or later. This address is used to connect to both active and passive proxies. This field is only available if a proxy group is selected in the Proxy group field.
    Address - IP address/DNS name to connect to.
    Port - TCP port number (10051 by default) to connect to. User macros are supported.
Proxy mode - Select the proxy mode.
    Active - the proxy will connect to the Zabbix server and request configuration data;
    Passive - Zabbix server connects to the proxy.
    Note that without encrypted communications (sensitive) proxy configuration data may become available to parties having access to the Zabbix server trapper port when using an active proxy. This is possible because anyone may pretend to be an active proxy and request configuration data if authentication does not take place or proxy addresses are not limited in the Proxy address field.
Proxy address - If specified, then active proxy requests are only accepted from this list of comma-delimited IP addresses, optionally in CIDR notation, or DNS names of the active Zabbix proxy. This field is only available if an active proxy is selected in the Proxy mode field. Macros are not supported.
Interface - Enter interface details for a passive proxy. This field is only available if a passive proxy is selected in the Proxy mode field.
    Address - IP address/DNS name of the passive proxy.
    Port - TCP port number of the passive proxy (10051 by default). User macros are supported.
Description - Enter the proxy description.
The Encryption tab allows you to require encrypted connections with the proxy.
Parameter - Description
Connections to proxy - How the server connects to the passive proxy: no encryption (default), using PSK (pre-shared key) or certificate.
Connections from proxy - Select what type of connections are allowed from the active proxy. Several connection types can be selected at the same time (useful for testing and switching to another connection type). Default is "No encryption".
Issuer - Allowed issuer of certificate. The certificate is first validated with the CA (certificate authority). If it is valid, signed by the CA, then the Issuer field can be used to further restrict the allowed CA. This field is optional, intended for use if your Zabbix installation uses certificates from multiple CAs.
Subject - Allowed subject of certificate. The certificate is first validated with the CA. If it is valid, signed by the CA, then the Subject field can be used to allow only one value of the Subject string. If this field is empty then any valid certificate signed by the configured CA is accepted.
PSK identity - Pre-shared key identity string. Do not put sensitive information in the PSK identity; it is transmitted unencrypted over the network to inform a receiver which PSK to use.
PSK - Pre-shared key (hex-string). Maximum length: 512 hex-digits (256-byte PSK) if Zabbix uses GnuTLS or OpenSSL library, 64 hex-digits (32-byte PSK) if Zabbix uses mbed TLS (PolarSSL) library. Example: 1f87b595725ac58dd977beef14b97461a7c1045b9a1c963065002c5473194952
The Timeouts tab allows you to override global timeouts for item types that support it.
Parameter Description
Clicking the Global timeouts link allows you to configure global timeouts. Note that the Global
timeouts link is visible only to users of Super admin type with permissions to the Administration
→ General frontend section.
Note that the timeouts set under Override will prevail over the global ones but will be
overridden by individual item timeouts if those are set under item configuration.
Note:
If proxy major version does not match server major version, icon will be displayed next to Timeouts for item types, with
the hover message ”Timeouts disabled because the proxy and server versions do not match”. In such cases, the proxy will
use the Timeout parameter from the proxy configuration file.
The editing form of an existing proxy has the following additional buttons:
Host configuration
You can specify that an individual host should be monitored by a proxy or proxy group in the host configuration form, using the
Monitored by field.
Host mass update is another way of specifying that hosts should be monitored by a proxy or proxy group.
Overview
This page provides details on the monitoring configuration update for the proxy, i.e. how changes made to the monitoring config-
uration on the server are synchronized to the proxy.
Incremental update
The proxy configuration update is incremental. During a configuration sync only the modified entities are updated (thus, if no
entities have been modified, nothing will be sent). This approach allows saving resources and setting a smaller interval (almost
instant) for the proxy configuration update.
Proxy configuration changes are tracked using revision numbers. Only entities with revisions larger than the proxy configuration
revision are included in configuration data sent to the proxy.
The entities for a configuration sync are as follows:
Entity Details
An exception is host macros, which are also sent if anything on the host has been changed.
The -R config_cache_reload command on the proxy will also initiate an incremental update.
Note that a full configuration sync will take place on a proxy start/restart, HA failover, if the session token has changed, or if the
configuration update failed on the proxy, for example, if the connection was broken while receiving configuration data.
Configuration parameters
The ProxyConfigFrequency parameter determines how often the proxy configuration is synced with the server (10 seconds by
default).
On active proxies ProxyConfigFrequency is a new parameter since Zabbix 6.4 and must be used instead of the now-deprecated
ConfigFrequency.
Attention:
If both ProxyConfigFrequency and ConfigFrequency are used, the proxy will log an error and terminate.
Overview
Proxy load balancing allows monitoring hosts by a proxy group with automated distribution of hosts between proxies and high
proxy availability.
If one proxy from the proxy group goes offline, its hosts will be immediately distributed among other proxies having the least
assigned hosts in the group. Or, if a proxy has too many/too few hosts compared to the group average, group re-balancing by
distributing hosts evenly will be triggered.
Host redistribution happens only in online proxy groups. A proxy group is ”online” if the configured minimum number of its proxies
are online (not offline or unknown).
Note:
The minimum number of online proxies should be less than the proxy total in the group. In a group of 10 proxies, setting
the minimum online proxy count to 10 creates a situation where the whole group will go offline if only one proxy fails. It is
better to require, for example, 6 online proxies; this tolerates 4 unhealthy proxies.
A proxy in the group is considered:
• online - if there was communication with it during the failover delay period (a passive proxy responded to server requests and
an active proxy sent a request to the server);
• offline - if there was no communication with it during the failover delay period;
• unknown - after proxy creation or server start.
You can monitor the proxy group state with the zabbix[proxy group,<name>,state] internal item.
Proxy load balancing and high availability is managed by the proxy group manager process. The proxy group manager always
knows which other proxies are healthy or unhealthy.
Version compatibility
• Only Zabbix agents 7.0 and later are supported for working with proxy groups in active mode;
• Zabbix pre-7.0 version proxies and the hosts monitored by these proxies are excluded from re-balancing operations until
they are upgraded.
Host reassignment
Zabbix server checks the balance between host assignments to the proxies. The group is considered ”out of balance” if there is:
• host excess - a proxy has many more hosts than the group average;
• host deficit - a proxy has far fewer hosts than the group average.
The group is considered ”out of balance” if the number of hosts assigned to the proxy is above/below the group average by more
than 10 and a factor of 2. In this case the group is marked by the server for host reassignment after the grace period (10 x failover
delay), if the balance is not restored.
The following table illustrates with example numbers when host reassignment is (or is not) triggered:
Number of hosts on a proxy    Average number of hosts per proxy in the group    Host reassignment triggered
>100                          50                                                Yes
60                            50                                                No
40                            50                                                No
<25                           50                                                Yes
>15                           5                                                 Yes
10                            5                                                 No
The proxy group manager will re-distribute hosts in proxy groups in the following way:
For passive checks, all proxies of the group must be listed in the Server parameter of agents.
Adding all proxies of the group to the ServerActive agent parameter (separated by a semicolon) of monitored hosts is beneficial,
but not mandatory. An active agent can have a single proxy in the ServerActive field and proxy load balancing will still work. When the
agent service starts, the agent will receive a full list of the IP addresses of all Zabbix proxies in the group and keep it in memory. Active
checks (and Zabbix sender data requests) will be redirected to the correct online proxy for the host, based on the current proxy-host
assignment.
Warning:
Having only a single proxy in ServerActive field may lead to lost monitoring data if the agent is started/rebooted while that
particular proxy is offline.
3. Configure the hosts to be monitored by the proxy group (not by individual proxies). You may use host mass update to move hosts
from a proxy to the proxy group.
Attention:
Hosts that are monitored by a single proxy (even if the proxy is part of proxy group) are not involved in load balancing/high
availability at all.
4. Wait a few seconds for configuration update and for host distribution among proxies in the proxy group. Observe the change
by refreshing the host list in Monitoring -> Hosts.
When a host is created based on auto registration/network discovery data from a proxy belonging to a proxy group, this host
is set to be monitored by that proxy group.
Limitations
Agents must always be allowed to reach all proxies at the firewall level. Consider the following scenarios:
• With Zabbix agent active checks, on agent startup the first proxy responds and redirects the agent to another proxy. That other
proxy is not reachable because of a firewall problem, and the communication stops while the agent waits for it to respond.
The root cause is that the first proxy knows the other proxy is healthy and therefore redirects to it. This is not a problem
if the first proxy fails; in that case the agent will try the different addresses configured in the ServerActive parameter.
• The HA setup has been stable for multiple months. Host rebalancing never happens because it is not needed, so the agent
never needs to validate the "backup" channel to any other proxies. In a failover scenario it might then fail because a firewall was
modified half a year ago.
Parameter Description
17 Encryption
Overview Zabbix supports encrypted communications between Zabbix components using Transport Layer Security (TLS) protocol
v.1.2 and 1.3 (depending on the crypto library). Certificate-based and pre-shared key-based encryption is supported.
• Between Zabbix server, Zabbix proxy, Zabbix agent, zabbix_sender and zabbix_get utilities
• To Zabbix database from Zabbix frontend and server/proxy
• Some proxies and agents can be configured to use certificate-based encryption with the server, while others can use pre-
shared key-based encryption, and yet others continue with unencrypted communications (as before)
• Server (proxy) can use different encryption configurations for different hosts
Zabbix daemon programs use one listening port for encrypted and unencrypted incoming connections. Adding encryption does
not require opening new ports on firewalls.
Limitations
• Private keys are stored in plain text in files readable by Zabbix components during startup
• Pre-shared keys are entered in Zabbix frontend and stored in Zabbix database in plain text
• Built-in encryption does not protect communications:
– Between the web server running Zabbix frontend and user web browser
– Between Zabbix frontend and Zabbix server
• Currently each encrypted connection opens with a full TLS handshake; no session caching or tickets are implemented
• Adding encryption increases the time for item checks and actions, depending on network latency:
– For example, if the packet delay is 100 ms, then opening a TCP connection and sending an unencrypted request takes around
200 ms. With encryption, about 1000 ms are added for establishing the TLS connection;
– Timeouts may need to be increased, otherwise some items and actions running remote scripts on agents may work
with unencrypted connections, but fail with a timeout with encrypted ones.
• Encryption is not supported by network discovery. Zabbix agent checks performed by network discovery will be unencrypted
and if Zabbix agent is configured to reject unencrypted connections such checks will not succeed.
Compiling Zabbix with encryption support To support encryption Zabbix must be compiled and linked with one of the sup-
ported crypto libraries:
Note:
You can find out more about setting up SSL for Zabbix frontend by referring to these best practices.
• --with-gnutls[=DIR]
• --with-openssl[=DIR] (also used for LibreSSL)
For example, to configure the sources for server and agent with OpenSSL you may use something like:
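For instance, a representative invocation could be the following (the database and other feature flags depend on your environment):
./configure --enable-server --enable-agent --with-mysql --with-net-snmp --with-libcurl --with-libxml2 --with-openssl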
Attention:
If you plan to use pre-shared keys (PSK), consider using GnuTLS or OpenSSL 1.1.0 (or newer) libraries in Zabbix components
using PSKs. GnuTLS and OpenSSL 1.1.0 libraries support PSK ciphersuites with Perfect Forward Secrecy. Older versions
of the OpenSSL library (1.0.1, 1.0.2c) also support PSKs, but available PSK ciphersuites do not provide Perfect Forward
Secrecy.
Connections in Zabbix can use:
• no encryption (default)
• RSA certificate-based encryption
• PSK-based encryption
There are two important parameters used to specify encryption between Zabbix components:
• TLSConnect - specifies what encryption to use for outgoing connections (unencrypted, PSK or certificate)
• TLSAccept - specifies what types of connections are allowed for incoming connections (unencrypted, PSK or certificate). One
or more values can be specified.
TLSConnect is used in the configuration files for Zabbix proxy (in active mode, specifies only connections to server) and Zabbix
agent (for active checks). In Zabbix frontend the TLSConnect equivalent is the Connections to host field in Data collection → Hosts
→ <some host> → Encryption tab and the Connections to proxy field in Administration → Proxies → <some proxy> → Encryption
tab. If the configured encryption type for connection fails, no other encryption types will be tried.
TLSAccept is used in the configuration files for Zabbix proxy (in passive mode, specifies only connections from server) and Zabbix
agent (for passive checks). In Zabbix frontend the TLSAccept equivalent is the Connections from host field in Data collection →
Hosts → <some host> → Encryption tab and the Connections from proxy field in Administration → Proxies → <some proxy> →
Encryption tab.
Normally you configure only one type of encryption for incoming connections. But you may want to switch the encryption type,
e.g. from unencrypted to certificate-based, with minimum downtime and rollback possibility. To achieve this:
• Set TLSAccept=unencrypted,cert in the agent configuration file and restart Zabbix agent
• Test connection with zabbix_get to the agent using certificate. If it works, you can reconfigure encryption for that agent
in Zabbix frontend in the Data collection → Hosts → <some host> → Encryption tab by setting Connections to host to
”Certificate”.
• When server configuration cache gets updated (and proxy configuration is updated if the host is monitored by proxy) then
connections to that agent will be encrypted
• If everything works as expected you can set TLSAccept=cert in the agent configuration file and restart Zabbix agent. Now
the agent will be accepting only encrypted certificate-based connections. Unencrypted and PSK-based connections will be
rejected.
In a similar way it works on server and proxy. If in Zabbix frontend in host configuration Connections from host is set to ”Certificate”
then only certificate-based encrypted connections will be accepted from the agent (active checks) and zabbix_sender (trapper
items).
Most likely you will configure incoming and outgoing connections to use the same encryption type or no encryption at all. But
technically it is possible to configure it asymmetrically, e.g. certificate-based encryption for incoming and PSK-based for outgoing
connections.
Encryption configuration for each host is displayed in the Zabbix frontend, in Data collection → Hosts in the Agent encryption
column. For example:
Example Connections to host Allowed connections from host Rejected connections from host
Attention:
Connections are unencrypted by default. Encryption must be configured for each host and proxy individually.
zabbix_get and zabbix_sender with encryption See zabbix_get and zabbix_sender manpages for using them with encryption.
Also user-configured ciphersuites are supported for GnuTLS and OpenSSL. Users may configure ciphersuites according to their
security policies. Using this feature is optional (built-in default ciphersuites still work).
For crypto libraries compiled with default settings Zabbix built-in rules typically result in the following ciphersuites (in order from
higher to lower priority):
Library Certificate ciphersuites PSK ciphersuites
User-configured ciphersuites The built-in ciphersuite selection criteria can be overridden with user-configured ciphersuites.
Attention:
User-configured ciphersuites is a feature intended for advanced users who understand TLS ciphersuites, their security and
consequences of mistakes, and who are comfortable with TLS troubleshooting.
The built-in ciphersuite selection criteria can be overridden using the following parameters:
Override scope: Ciphersuite selection for certificates
• TLSCipherCert13 - Valid OpenSSL 1.1.1 cipher strings for TLS 1.3 protocol (their values are passed to the OpenSSL function SSL_CTX_set_ciphersuites()). Only OpenSSL 1.1.1 or newer. Certificate-based ciphersuite selection criteria for TLS 1.3.
• TLSCipherCert - Valid OpenSSL cipher strings for TLS 1.2 or valid GnuTLS priority strings. Their values are passed to the SSL_CTX_set_cipher_list() or gnutls_priority_init() functions, respectively. Certificate-based ciphersuite selection criteria for TLS 1.2/1.3 (GnuTLS), TLS 1.2 (OpenSSL).
Override scope: Ciphersuite selection for PSK
• TLSCipherPSK13 - Valid OpenSSL 1.1.1 cipher strings for TLS 1.3 protocol (their values are passed to the OpenSSL function SSL_CTX_set_ciphersuites()). Only OpenSSL 1.1.1 or newer. PSK-based ciphersuite selection criteria for TLS 1.3.
• TLSCipherPSK - Valid OpenSSL cipher strings for TLS 1.2 or valid GnuTLS priority strings. Their values are passed to the SSL_CTX_set_cipher_list() or gnutls_priority_init() functions, respectively. PSK-based ciphersuite selection criteria for TLS 1.2/1.3 (GnuTLS), TLS 1.2 (OpenSSL).
Override scope: Combined ciphersuite list for certificate and PSK
• TLSCipherAll13 - Valid OpenSSL 1.1.1 cipher strings for TLS 1.3 protocol (their values are passed to the OpenSSL function SSL_CTX_set_ciphersuites()). Only OpenSSL 1.1.1 or newer. Ciphersuite selection criteria for TLS 1.3.
• TLSCipherAll - Valid OpenSSL cipher strings for TLS 1.2 or valid GnuTLS priority strings. Their values are passed to the SSL_CTX_set_cipher_list() or gnutls_priority_init() functions, respectively. Ciphersuite selection criteria for TLS 1.2/1.3 (GnuTLS), TLS 1.2 (OpenSSL).
To override the ciphersuite selection in zabbix_get and zabbix_sender utilities - use the command-line parameters:
• --tls-cipher13
• --tls-cipher
The new parameters are optional. If a parameter is not specified, the internal default value is used. If a parameter is defined it
cannot be empty.
If the setting of a TLSCipher* value in the crypto library fails then the server, proxy or agent will not start and an error is logged.
Outgoing connections
It is a bit more complicated with incoming connections because rules are specific for components and configuration.
• TLSCipherAll and TLSCipherAll13 can be specified only if a combined list of certificate- and PSK-based ciphersuites is used.
There are two cases when it takes place: server (proxy) with a configured certificate (PSK ciphersuites are always config-
ured on server, proxy if crypto library supports PSK), agent configured to accept both certificate- and PSK-based incoming
connections
• in other cases TLSCipherCert* and/or TLSCipherPSK* are sufficient
The following tables show the TLSCipher* built-in default values. They could be a good starting point for your own custom values.
Parameter - GnuTLS (priority strings)
TLSCipherCert: NONE:+VERS-TLS1.2:+ECDHE-RSA:+RSA:+AES-128-GCM:+AES-128-CBC:+AEAD:+SHA256:+SHA1:+CURVE-ALL:+COMP-NULL:+SIGN-ALL:+CTYPE-X.509
TLSCipherPSK: NONE:+VERS-TLS1.2:+ECDHE-PSK:+PSK:+AES-128-GCM:+AES-128-CBC:+AEAD:+SHA256:+SHA1:+CURVE-ALL:+COMP-NULL:+SIGN-ALL
TLSCipherAll: NONE:+VERS-TLS1.2:+ECDHE-RSA:+RSA:+ECDHE-PSK:+PSK:+AES-128-GCM:+AES-128-CBC:+AEAD:+SHA256:+SHA1:+CURVE-ALL:+COMP-NULL:+SIGN-ALL:+CTYPE-X.509

Parameter - OpenSSL 1.1.1d [1]
TLSCipherCert13: (empty by default)
TLSCipherCert: EECDH+aRSA+AES128:RSA+aRSA+AES128
TLSCipherPSK13: TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256
TLSCipherPSK: kECDHEPSK+AES128:kPSK+AES128
TLSCipherAll13: (empty by default)
TLSCipherAll: EECDH+aRSA+AES128:RSA+aRSA+AES128:kECDHEPSK+AES128:kPSK+AES128

[1] Default values are different for older OpenSSL versions (1.0.1, 1.0.2, 1.1.0), for LibreSSL and if OpenSSL is compiled without PSK support.
To see which ciphersuites have been selected you need to set ’DebugLevel=4’ in the configuration file, or use the -vv option for
zabbix_sender.
Some experimenting with TLSCipher* parameters might be necessary before you get the desired ciphersuites. It is inconvenient
to restart Zabbix server, proxy or agent multiple times just to tweak TLSCipher* parameters. More convenient options are using
zabbix_sender or the openssl command. Let’s show both.
1. Using zabbix_sender.
Let’s make a test configuration file, for example /home/zabbix/test.conf, with the syntax of a zabbix_agentd.conf file:
Hostname=nonexisting
ServerActive=nonexisting
TLSConnect=cert
TLSCAFile=/home/zabbix/ca.crt
TLSCertFile=/home/zabbix/agent.crt
TLSKeyFile=/home/zabbix/agent.key
TLSPSKIdentity=nonexisting
TLSPSKFile=/home/zabbix/agent.psk
You need valid CA and agent certificates and PSK for this example. Adjust certificate and PSK file paths and names for your
environment.
If you are not using certificates, but only PSK, you can make a simpler test file:
Hostname=nonexisting
ServerActive=nonexisting
TLSConnect=psk
TLSPSKIdentity=nonexisting
TLSPSKFile=/home/zabbix/agentd.psk
The selected ciphersuites can be seen by running zabbix_sender (example compiled with OpenSSL 1.1.1d):
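For instance, one way to do this is to run zabbix_sender with the test configuration and increased verbosity, filtering the debug output for the ciphersuite lines (an illustrative invocation; the item key and value are dummies):
$ zabbix_sender -vv -c /home/zabbix/test.conf -k nonexisting.item -o 1 2>&1 | grep ciphersuites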
With newer systems you can choose to tighten security by allowing only a few ciphersuites, e.g. only ciphersuites with PFS (Perfect
Forward Secrecy). Let’s try to allow only ciphersuites with PFS using TLSCipher* parameters.
Attention:
The result will not be interoperable with systems using OpenSSL 1.0.1 and 1.0.2, if PSK is used. Certificate-based encryption
should work.
2. TLSCipherAll and TLSCipherAll13 cannot be tested with zabbix_sender; they do not affect ”certificate and PSK ciphersuites”
value shown in the example above. To tweak TLSCipherAll and TLSCipherAll13 you need to experiment with the agent, proxy or
server.
So, to allow only PFS ciphersuites you may need to add up to three parameters:
TLSCipherCert=EECDH+aRSA+AES128
TLSCipherPSK=kECDHEPSK+AES128
TLSCipherAll=EECDH+aRSA+AES128:kECDHEPSK+AES128
to zabbix_agentd.conf, zabbix_proxy.conf and zabbix_server.conf, if each of them has a configured certificate and the agent also
has a PSK configured.
If your Zabbix environment uses only PSK-based encryption and no certificates, then only one parameter is needed:
TLSCipherPSK=kECDHEPSK+AES128
Now that you understand how it works you can test the ciphersuite selection even outside of Zabbix, with the openssl command.
Let’s test all three TLSCipher* parameter values:
$ openssl ciphers EECDH+aRSA+AES128 | sed 's/:/ /g'
TLS_AES_256_GCM_SHA384 TLS_CHACHA20_POLY1305_SHA256 TLS_AES_128_GCM_SHA256 ECDHE-RSA-AES128-GCM-SHA256 E
$ openssl ciphers kECDHEPSK+AES128 | sed 's/:/ /g'
TLS_AES_256_GCM_SHA384 TLS_CHACHA20_POLY1305_SHA256 TLS_AES_128_GCM_SHA256 ECDHE-PSK-AES128-CBC-SHA256 E
$ openssl ciphers EECDH+aRSA+AES128:kECDHEPSK+AES128 | sed 's/:/ /g'
TLS_AES_256_GCM_SHA384 TLS_CHACHA20_POLY1305_SHA256 TLS_AES_128_GCM_SHA256 ECDHE-RSA-AES128-GCM-SHA256 E
You may prefer openssl ciphers with option -V for a more verbose output:
$ openssl ciphers -V EECDH+aRSA+AES128:kECDHEPSK+AES128
0x13,0x02 - TLS_AES_256_GCM_SHA384 TLSv1.3 Kx=any Au=any Enc=AESGCM(256) Mac=AEAD
0x13,0x03 - TLS_CHACHA20_POLY1305_SHA256 TLSv1.3 Kx=any Au=any Enc=CHACHA20/POLY1305(256
0x13,0x01 - TLS_AES_128_GCM_SHA256 TLSv1.3 Kx=any Au=any Enc=AESGCM(128) Mac=AEAD
0xC0,0x2F - ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(128) Mac=AEAD
0xC0,0x27 - ECDHE-RSA-AES128-SHA256 TLSv1.2 Kx=ECDH Au=RSA Enc=AES(128) Mac=SHA256
0xC0,0x13 - ECDHE-RSA-AES128-SHA TLSv1 Kx=ECDH Au=RSA Enc=AES(128) Mac=SHA1
0xC0,0x37 - ECDHE-PSK-AES128-CBC-SHA256 TLSv1 Kx=ECDHEPSK Au=PSK Enc=AES(128) Mac=SHA256
0xC0,0x35 - ECDHE-PSK-AES128-CBC-SHA TLSv1 Kx=ECDHEPSK Au=PSK Enc=AES(128) Mac=SHA1
Similarly, you can test the priority strings for GnuTLS:
$ gnutls-cli -l --priority=NONE:+VERS-TLS1.2:+ECDHE-RSA:+AES-128-GCM:+AES-128-CBC:+AEAD:+SHA256:+CURVE-A
Cipher suites for NONE:+VERS-TLS1.2:+ECDHE-RSA:+AES-128-GCM:+AES-128-CBC:+AEAD:+SHA256:+CURVE-ALL:+COMP-
TLS_ECDHE_RSA_AES_128_GCM_SHA256 0xc0, 0x2f TLS1.2
TLS_ECDHE_RSA_AES_128_CBC_SHA256 0xc0, 0x27 TLS1.2
Protocols: VERS-TLS1.2
Ciphers: AES-128-GCM, AES-128-CBC
MACs: AEAD, SHA256
Key Exchange Algorithms: ECDHE-RSA
Groups: GROUP-SECP256R1, GROUP-SECP384R1, GROUP-SECP521R1, GROUP-X25519, GROUP-X448, GROUP-FFDHE2048, GR
PK-signatures: SIGN-RSA-SHA256, SIGN-RSA-PSS-SHA256, SIGN-RSA-PSS-RSAE-SHA256, SIGN-ECDSA-SHA256, SIGN-E
Switching from AES128 to AES256
Zabbix uses AES128 as the built-in default for data. Let’s assume you are using certificates and want to switch to AES256, on
OpenSSL 1.1.1.
Attention:
Although only certificate-related ciphersuites will be used, TLSCipherPSK* parameters are defined as well to avoid their
default values which include less secure ciphers for wider interoperability. PSK ciphersuites cannot be completely disabled
on server/proxy.
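On the server side, the relevant zabbix_server.conf parameters might then look like this (a sketch; the file paths are placeholders, and the PSK lines only override the permissive defaults as explained in the note above):
TLSCAFile=/home/zabbix/ca.crt
TLSCertFile=/home/zabbix/zabbix_server.crt
TLSKeyFile=/home/zabbix/zabbix_server.key
TLSCipherCert13=TLS_AES_256_GCM_SHA384
TLSCipherCert=EECDH+aRSA+AES256:-SHA1:-SHA384
TLSCipherPSK13=TLS_CHACHA20_POLY1305_SHA256
TLSCipherPSK=kECDHEPSK+AES256:-SHA1:-SHA384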
And in zabbix_agentd.conf:
TLSConnect=cert
TLSAccept=cert
TLSCAFile=/home/zabbix/ca.crt
TLSCertFile=/home/zabbix/agent.crt
TLSKeyFile=/home/zabbix/agent.key
TLSCipherCert13=TLS_AES_256_GCM_SHA384
TLSCipherCert=EECDH+aRSA+AES256:-SHA1:-SHA384
1 Using certificates
Overview
Zabbix can use RSA certificates in PEM format, signed by a public or in-house certificate authority (CA). Certificate verification is
done against a pre-configured CA certificate. Optionally certificate revocation lists (CRL) can be used. Each Zabbix component
can have only one certificate configured.
For more information on how to set up and operate an internal CA, how to generate certificate requests and sign them, and how to
revoke certificates, you can find numerous online how-tos, for example, OpenSSL PKI Tutorial v1.1.
Carefully consider and test your certificate extensions - see Limitations on using X.509 v3 certificate extensions.
Parameter (mandatory) - Description
TLSCAFile (yes) - Full pathname of a file containing the top-level CA(s) certificates for peer certificate verification. In case of a certificate chain with several members they must be ordered: lower level CA certificates first, followed by certificates of higher level CA(s). Certificates from multiple CA(s) can be included in a single file.
TLSCRLFile (no) - Full pathname of a file containing Certificate Revocation Lists. See notes in Certificate Revocation Lists (CRL).
TLSCertFile (yes) - Full pathname of a file containing the certificate (certificate chain). In case of a certificate chain with several members they must be ordered: server, proxy, or agent certificate first, followed by lower level CA certificates, then certificates of higher level CA(s).
TLSKeyFile (yes) - Full pathname of a file containing the private key. Set access rights to this file - it must be readable only by the Zabbix user.
TLSServerCertIssuer (no) - Allowed server certificate issuer.
TLSServerCertSubject (no) - Allowed server certificate subject.
1. In order to verify peer certificates, Zabbix server must have access to a file with their top-level self-signed root CA cer-
tificates. For example, if we expect certificates from two independent root CAs, we can put their certificates into a file
/home/zabbix/zabbix_ca_file like this:
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 1 (0x1)
Signature Algorithm: sha1WithRSAEncryption
Issuer: DC=com, DC=zabbix, O=Zabbix SIA, OU=Development group, CN=Root1 CA
...
Subject: DC=com, DC=zabbix, O=Zabbix SIA, OU=Development group, CN=Root1 CA
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
...
X509v3 extensions:
X509v3 Key Usage: critical
Certificate Sign, CRL Sign
X509v3 Basic Constraints: critical
CA:TRUE
...
-----BEGIN CERTIFICATE-----
MIID2jCCAsKgAwIBAgIBATANBgkqhkiG9w0BAQUFADB+MRMwEQYKCZImiZPyLGQB
....
9wEzdN8uTrqoyU78gi12npLj08LegRKjb5hFTVmO
-----END CERTIFICATE-----
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 1 (0x1)
Signature Algorithm: sha1WithRSAEncryption
Issuer: DC=com, DC=zabbix, O=Zabbix SIA, OU=Development group, CN=Root2 CA
...
Subject: DC=com, DC=zabbix, O=Zabbix SIA, OU=Development group, CN=Root2 CA
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
....
X509v3 extensions:
X509v3 Key Usage: critical
Certificate Sign, CRL Sign
X509v3 Basic Constraints: critical
CA:TRUE
....
-----BEGIN CERTIFICATE-----
MIID3DCCAsSgAwIBAgIBATANBgkqhkiG9w0BAQUFADB/MRMwEQYKCZImiZPyLGQB
...
vdGNYoSfvu41GQAR5Vj5FnRJRzv5XQOZ3B6894GY1zY=
-----END CERTIFICATE-----
2. Put Zabbix server certificate chain into file, for example, /home/zabbix/zabbix_server.crt:
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 1 (0x1)
Signature Algorithm: sha1WithRSAEncryption
Issuer: DC=com, DC=zabbix, O=Zabbix SIA, OU=Development group, CN=Signing CA
...
Subject: DC=com, DC=zabbix, O=Zabbix SIA, OU=Development group, CN=Zabbix server
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
...
X509v3 extensions:
X509v3 Key Usage: critical
Digital Signature, Key Encipherment
X509v3 Basic Constraints:
CA:FALSE
...
-----BEGIN CERTIFICATE-----
MIIECDCCAvCgAwIBAgIBATANBgkqhkiG9w0BAQUFADCBgTETMBEGCgmSJomT8ixk
...
h02u1GHiy46GI+xfR3LsPwFKlkTaaLaL/6aaoQ==
-----END CERTIFICATE-----
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 2 (0x2)
Signature Algorithm: sha1WithRSAEncryption
Issuer: DC=com, DC=zabbix, O=Zabbix SIA, OU=Development group, CN=Root1 CA
...
Subject: DC=com, DC=zabbix, O=Zabbix SIA, OU=Development group, CN=Signing CA
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
...
X509v3 extensions:
X509v3 Key Usage: critical
Certificate Sign, CRL Sign
X509v3 Basic Constraints: critical
CA:TRUE, pathlen:0
...
-----BEGIN CERTIFICATE-----
MIID4TCCAsmgAwIBAgIBAjANBgkqhkiG9w0BAQUFADB+MRMwEQYKCZImiZPyLGQB
...
dyCeWnvL7u5sd6ffo8iRny0QzbHKmQt/wUtcVIvWXdMIFJM0Hw==
-----END CERTIFICATE-----
Here the first certificate is the Zabbix server certificate, followed by the intermediate CA certificate.
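A minimal sketch for assembling such a chain file, assuming the server (leaf) certificate and the intermediate CA certificate are available as zabbix_server_leaf.crt and signing_ca.crt (hypothetical file names):
# server certificate first, then the intermediate (signing) CA certificate
openssl x509 -in zabbix_server_leaf.crt -text > /home/zabbix/zabbix_server.crt
openssl x509 -in signing_ca.crt -text >> /home/zabbix/zabbix_server.crt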
Note:
Use of any attributes except the ones mentioned above is discouraged for both client and server certificates, because
it may affect the certificate verification process. For example, OpenSSL might fail to establish an encrypted connection if X509v3
Extended Key Usage or Netscape Cert Type are set. See also: Limitations on using X.509 v3 certificate extensions.
3. Put Zabbix server private key into file, for example, /home/zabbix/zabbix_server.key:
-----BEGIN PRIVATE KEY-----
MIIEwAIBADANBgkqhkiG9w0BAQEFAASCBKowggSmAgEAAoIBAQC9tIXIJoVnNXDl
...
IJLkhbybBYEf47MLhffWa7XvZTY=
-----END PRIVATE KEY-----
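Since the key must be readable only by the Zabbix user (see the TLSKeyFile description above), it is worth tightening the file permissions right away; a sketch, assuming the system user is named "zabbix":
chown zabbix:zabbix /home/zabbix/zabbix_server.key
chmod 400 /home/zabbix/zabbix_server.key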
4. Edit TLS parameters in Zabbix server configuration file like this:
TLSCAFile=/home/zabbix/zabbix_ca_file
TLSCertFile=/home/zabbix/zabbix_server.crt
TLSKeyFile=/home/zabbix/zabbix_server.key
Configuring certificate-based encryption for Zabbix proxy
1. Prepare files with top-level CA certificates, proxy certificate (chain) and private key as described in Configuring certificate on
Zabbix server. Edit parameters TLSCAFile, TLSCertFile, TLSKeyFile in proxy configuration accordingly.
2. For active proxy edit TLSConnect parameter:
TLSConnect=cert
For passive proxy edit TLSAccept parameter:
TLSAccept=cert
3. Now you have a minimal certificate-based proxy configuration. You may prefer to improve proxy security by setting
TLSServerCertIssuer and TLSServerCertSubject parameters (see Restricting allowed certificate Issuer and Subject).
4. In final proxy configuration file TLS parameters may look like:
TLSConnect=cert
TLSAccept=cert
TLSCAFile=/home/zabbix/zabbix_ca_file
TLSServerCertIssuer=CN=Signing CA,OU=Development group,O=Zabbix SIA,DC=zabbix,DC=com
TLSServerCertSubject=CN=Zabbix server,OU=Development group,O=Zabbix SIA,DC=zabbix,DC=com
TLSCertFile=/home/zabbix/zabbix_proxy.crt
TLSKeyFile=/home/zabbix/zabbix_proxy.key
5. Configure encryption for this proxy in Zabbix frontend:
In examples below Issuer and Subject fields are filled in - see Restricting allowed certificate Issuer and Subject why and how to use
these fields.
Configuring certificate-based encryption for Zabbix agent
1. Prepare files with top-level CA certificates, agent certificate (chain) and private key as described in Configuring certificate on
Zabbix server. Edit parameters TLSCAFile, TLSCertFile, TLSKeyFile in agent configuration accordingly.
2. For active checks edit TLSConnect parameter:
TLSConnect=cert
For passive checks edit TLSAccept parameter:
TLSAccept=cert
3. Now you have a minimal certificate-based agent configuration. You may prefer to improve agent security by setting the
TLSServerCertIssuer and TLSServerCertSubject parameters (see Restricting allowed certificate Issuer and Subject). For example, the final agent configuration file may contain:
TLSConnect=cert
TLSAccept=cert
TLSCAFile=/home/zabbix/zabbix_ca_file
TLSServerCertIssuer=CN=Signing CA,OU=Development group,O=Zabbix SIA,DC=zabbix,DC=com
TLSServerCertSubject=CN=Zabbix proxy,OU=Development group,O=Zabbix SIA,DC=zabbix,DC=com
TLSCertFile=/home/zabbix/zabbix_agentd.crt
TLSKeyFile=/home/zabbix/zabbix_agentd.key
(Example assumes that host is monitored via proxy, hence proxy certificate Subject.)
In example below Issuer and Subject fields are filled in - see Restricting allowed certificate Issuer and Subject why and how to use
these fields.
Restricting allowed certificate Issuer and Subject
When two Zabbix components (e.g. server and agent) establish a TLS connection they both check each other's certificates. If a
peer certificate is signed by a trusted CA (with a pre-configured top-level certificate in TLSCAFile), is valid, has not expired and
passes some other checks, then communication can proceed. The certificate issuer and subject are not checked in this simplest case.
There is a risk here: anybody with a valid certificate can impersonate anybody else (e.g. a host certificate can be used to impersonate
the server). This may be acceptable in small environments where certificates are signed by a dedicated in-house CA and the risk of
impersonation is low.
If your top-level CA is used for issuing other certificates which should not be accepted by Zabbix, or if you want to reduce the risk of
impersonation, you can restrict the allowed certificates by specifying their Issuer and Subject strings.
1. Issuer and Subject strings are checked independently. Both are optional.
2. UTF-8 characters are allowed.
3. Unspecified string means any string is accepted.
4. Strings are compared ”as-is”, they must be exactly the same to match.
5. Wildcards and regexps are not supported in matching.
6. Only some requirements from RFC 4514 Lightweight Directory Access Protocol (LDAP): String Representation of Distinguished
Names are implemented:
1. escape characters ’”’ (U+0022), ’+’ U+002B, ’,’ U+002C, ’;’ U+003B, ’<’ U+003C, ’>’ U+003E, ’\’ U+005C anywhere
in string.
2. escape characters space (’ ’ U+0020) or number sign (’#’ U+0023) at the beginning of string.
3. escape character space (’ ’ U+0020) at the end of string.
7. Match fails if a null character (U+0000) is encountered (RFC 4514 allows it).
8. Requirements of RFC 4517 Lightweight Directory Access Protocol (LDAP): Syntaxes and Matching Rules and RFC 4518
Lightweight Directory Access Protocol (LDAP): Internationalized String Preparation are not supported due to amount of work
required.
Order of fields in Issuer and Subject strings and formatting are important! Zabbix follows RFC 4514 recommendation and uses
”reverse” order of fields.
OpenSSL by default shows certificate Issuer and Subject fields in "normal" order (the exact formatting depends on additional options used), for example:
issuer= /DC=com/DC=zabbix/O=Zabbix SIA/OU=Development group/CN=Signing CA
subject= /DC=com/DC=zabbix/O=Zabbix SIA/OU=Development group/CN=Zabbix proxy
Attention:
To get proper Issuer and Subject strings usable in Zabbix invoke OpenSSL with special options
(-nameopt esc_2253,esc_ctrl,utf8,dump_nostr,dump_unknown,dump_der,sep_comma_plus,dn_rev,sname):
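For example, the Issuer and Subject of the proxy certificate used in this section could be printed in the Zabbix-compatible format like this (certificate path assumed):
# the backslashes only wrap the long command line
openssl x509 -noout -issuer -subject \
    -nameopt esc_2253,esc_ctrl,utf8,dump_nostr,dump_unknown,dump_der,sep_comma_plus,dn_rev,sname \
    -in /home/zabbix/zabbix_proxy.crt
issuer=CN=Signing CA,OU=Development group,O=Zabbix SIA,DC=zabbix,DC=com
subject=CN=Zabbix proxy,OU=Development group,O=Zabbix SIA,DC=zabbix,DC=com
The printed strings can be used verbatim in the TLSServerCertIssuer and TLSServerCertSubject parameters or in the frontend Issuer and Subject fields.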
Certificate Revocation Lists (CRL)
If a certificate is compromised, the CA can revoke it by including it in the CRL. CRLs can be configured in the server, proxy, and agent configuration
files using the TLSCRLFile parameter. For example:
TLSCRLFile=/home/zabbix/zabbix_crl_file
where zabbix_crl_file may contain CRLs from several CAs and look like:
-----BEGIN X509 CRL-----
MIIB/DCB5QIBATANBgkqhkiG9w0BAQUFADCBgTETMBEGCgmSJomT8ixkARkWA2Nv
...
treZeUPjb7LSmZ3K2hpbZN7SoOZcAoHQ3GWd9npuctg=
-----END X509 CRL-----
-----BEGIN X509 CRL-----
MIIB+TCB4gIBATANBgkqhkiG9w0BAQUFADB/MRMwEQYKCZImiZPyLGQBGRYDY29t
...
CAEebS2CND3ShBedZ8YSil59O6JvaDP61lR5lNs=
-----END X509 CRL-----
The CRL file is loaded only on Zabbix start; a CRL update requires a restart.
Attention:
If a Zabbix component is compiled with OpenSSL and CRLs are used, then each top- and intermediate-level CA in the certificate
chains must have a corresponding CRL (it can be empty) in TLSCRLFile.
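A sketch for assembling and checking such a file, assuming each CA publishes its CRL as root1_ca.crl and root2_ca.crl (hypothetical file names):
# combined CRL file for Zabbix - one PEM block per CA
cat root1_ca.crl root2_ca.crl > /home/zabbix/zabbix_crl_file
# inspect a CRL (issuer, next update, revoked serial numbers)
openssl crl -in root1_ca.crl -noout -text
Concatenation order does not matter - each CRL is matched to its CA by issuer.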
2 Using pre-shared keys
Overview
A pre-shared key (PSK) in Zabbix is a pair of a non-secret PSK identity string and a secret PSK value.
The PSK identity string is a non-empty UTF-8 string, for example, "PSK ID 001 Zabbix agentd". It is a unique name by which this
specific PSK is referred to by Zabbix components. Do not put sensitive information in the PSK identity string - it is transmitted over the
network unencrypted.
The PSK value is a hard-to-guess string of hexadecimal digits, for example, "e560cb0d918d26d31b4f642181f5f570ad89a390931102e5391d08327ba434e9".
Size limits
There are size limits for the PSK identity and value in Zabbix; in some cases a crypto library may have a lower limit:
Component | PSK identity max size | PSK value min size | PSK value max size
Zabbix | 128 UTF-8 characters | 128-bit (16-byte PSK, entered as 32 hexadecimal digits) | 2048-bit (256-byte PSK, entered as 512 hexadecimal digits)
GnuTLS | 128 bytes (may include UTF-8 characters) | - | 2048-bit (256-byte PSK, entered as 512 hexadecimal digits)
OpenSSL 1.0.x, 1.1.0 | 127 bytes (may include UTF-8 characters) | - | 2048-bit (256-byte PSK, entered as 512 hexadecimal digits)
OpenSSL 1.1.1 | 127 bytes (may include UTF-8 characters) | - | 512-bit (64-byte PSK, entered as 128 hexadecimal digits)
OpenSSL 1.1.1a and later | 127 bytes (may include UTF-8 characters) | - | 2048-bit (256-byte PSK, entered as 512 hexadecimal digits)
Attention:
Zabbix frontend allows configuring a PSK identity string up to 128 characters long and a PSK up to 2048 bits long regardless of the crypto
libraries used.
If some Zabbix components support lower limits, it is the user's responsibility to configure the PSK identity and value with an
allowed length for these components.
Exceeding the length limits results in communication failures between Zabbix components.
Before Zabbix server connects to an agent using PSK, the server looks up the PSK identity and PSK value configured for that agent
in the database (actually in the configuration cache). Upon receiving a connection, the agent uses the PSK identity and PSK value from its
configuration file. If both parties have the same PSK identity string and PSK value, the connection may succeed.
Attention:
Each PSK identity must be paired with only one value. It is the user’s responsibility to ensure that there are no two PSKs
with the same identity string but different values. Failing to do so may lead to unpredictable errors or disruptions of
communication between Zabbix components using PSKs with this PSK identity string.
Generating PSK
For example, a 256-bit (32-byte) PSK can be generated using the following commands:
• with OpenSSL:
$ openssl rand -hex 32
• with GnuTLS:
$ psktool -u psk_identity -p database.psk -s 32
Key stored to database.psk
$ cat database.psk
psk_identity:9b8eafedfaae00cece62e85d5f4792c7d9c9bcc851b23216a1d300311cc4f7cb
Note that ”psktool” above generates a database file with a PSK identity and its associated PSK. Zabbix expects just a PSK in the
PSK file, so the identity string and colon (’:’) should be removed from the file.
On the agent host, write the PSK value into a file, for example, /home/zabbix/zabbix_agentd.psk. The file must contain PSK
in the first text string, for example:
1f87b595725ac58dd977beef14b97461a7c1045b9a1c963065002c5473194952
Set access rights to PSK file - it must be readable only by Zabbix user.
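A sketch of the file creation and permission tightening, assuming the system user is named "zabbix" and reusing the example value above:
# the file must contain a single line of hexadecimal digits
echo "1f87b595725ac58dd977beef14b97461a7c1045b9a1c963065002c5473194952" > /home/zabbix/zabbix_agentd.psk
chown zabbix:zabbix /home/zabbix/zabbix_agentd.psk
chmod 400 /home/zabbix/zabbix_agentd.psk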
Edit TLS parameters in agent configuration file zabbix_agentd.conf, for example, set:
TLSConnect=psk
TLSAccept=psk
TLSPSKFile=/home/zabbix/zabbix_agentd.psk
TLSPSKIdentity=PSK 001
The agent will connect to the server (for active checks) and accept only PSK-based connections from the server and zabbix_get. The PSK
identity will be "PSK 001".
Restart the agent. Now you can test the connection using zabbix_get, for example:
zabbix_get -s 127.0.0.1 -k "system.cpu.load[all,avg1]" --tls-connect=psk --tls-psk-identity="PSK 001" --tls-psk-file=/home/zabbix/zabbix_agentd.psk
(To minimize downtime see how to change connection type in Connection encryption management).
Example: once the PSK is also configured for this agent in Zabbix frontend (host Encryption tab) and the configuration cache is synchronized with the database, the new connections will use PSK. Check the server and agent logfiles for error messages.
On the proxy, write the PSK value into a file, for example, /home/zabbix/zabbix_proxy.psk. The file must contain PSK in the
first text string, for example:
e560cb0d918d26d31b4f642181f5f570ad89a390931102e5391d08327ba434e9
Set access rights to PSK file - it must be readable only by Zabbix user.
Edit TLS parameters in proxy configuration file zabbix_proxy.conf, for example, set:
TLSConnect=psk
TLSPSKFile=/home/zabbix/zabbix_proxy.psk
TLSPSKIdentity=PSK 002
The proxy will connect to server using PSK. PSK identity will be ”PSK 002”.
(To minimize downtime see how to change connection type in Connection encryption management).
Configure the PSK for this proxy in Zabbix frontend. Go to Administration → Proxies, select the proxy and go to the "Encryption" tab. In "Connections
from proxy" mark PSK. Paste "PSK 002" into the "PSK identity" field and "e560cb0d918d26d31b4f642181f5f570ad89a390931102e5391d08327ba434e9"
into the "PSK" field. Click "Update".
Restart the proxy. It will start using PSK-based encrypted connections to the server. Check the server and proxy logfiles for error messages.
For a passive proxy the procedure is very similar. The only difference is to set TLSAccept=psk in the proxy configuration file and set
"Connections to proxy" in Zabbix frontend to PSK.
3 Troubleshooting
General recommendations
• Start with understanding which component acts as a TLS client and which one acts as a TLS server in the problem case.
Zabbix server, proxies and agents, depending on the interaction between them, can all work as TLS servers and clients.
For example, Zabbix server connecting to an agent for a passive check acts as a TLS client, and the agent is in the role of TLS server.
Zabbix agent requesting a list of active checks from a proxy acts as a TLS client, and the proxy is in the role of TLS server.
The zabbix_get and zabbix_sender utilities always act as TLS clients.
• Zabbix uses mutual authentication.
Each side verifies its peer and may refuse the connection.
For example, Zabbix server connecting to an agent can close the connection immediately if the agent's certificate is invalid. And vice
versa - Zabbix agent accepting a connection from the server can close the connection if the server is not trusted by the agent.
• Examine logfiles on both sides - in the TLS client and the TLS server (a sketch for raising log verbosity follows this list).
The side which refuses the connection may log a precise reason why it was refused. The other side often reports a rather general error
(e.g. "Connection closed by peer", "connection was non-properly terminated").
• Sometimes misconfigured encryption results in confusing error messages that in no way point to the real cause.
In the subsections below we try to provide a (far from exhaustive) collection of messages and possible causes which could help
in troubleshooting.
Note that different crypto toolkits (OpenSSL, GnuTLS) often produce different error messages in the same problem situations.
Sometimes error messages depend even on the particular combination of crypto toolkits on both sides.
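When the logged reason is not detailed enough, raising the debug level while reproducing the problem usually helps; a sketch using the runtime control options (the same options exist for zabbix_proxy and zabbix_agentd):
# temporarily raise log verbosity on the refusing side, reproduce the problem, then lower it again
zabbix_server -R log_level_increase
zabbix_server -R log_level_decrease
Alternatively, DebugLevel=4 can be set in the configuration file, which requires a restart.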
Server is configured to connect with PSK to agent but agent accepts only unencrypted connections
Get value from agent failed: TCP connection successful, cannot establish TLS to [[127.0.0.1]:10050]: \
Connection closed by peer. Check allowed connection types and access rights
One side connects with certificate but other side accepts only PSK or vice versa
failed to accept an incoming connection: from 127.0.0.1: TLS handshake returned error code 1:\
file .\ssl\s3_srvr.c line 1411: error:1408A0C1:SSL routines:ssl3_get_client_hello:no shared cipher:\
TLS write fatal alert "handshake failure"
Attempting to use Zabbix sender compiled with TLS support to send data to Zabbix server/proxy compiled without TLS
In connecting-side log:
Linux:
...In zbx_tls_init_child()
...OpenSSL library (version OpenSSL 1.1.1 11 Sep 2018) initialized
...
...In zbx_tls_connect(): psk_identity:"PSK test sender"
...End of zbx_tls_connect():FAIL error:'connection closed by peer'
...send value error: TCP successful, cannot establish TLS to [[localhost]:10051]: connection closed by peer
Windows:
...failed to accept an incoming connection: from 127.0.0.1: support for TLS was not compiled in
One side connects with PSK but other side uses LibreSSL or has been compiled without encryption support
In connecting-side log:
...TCP successful, cannot establish TLS to [[192.168.1.2]:10050]: SSL_connect() I/O error: [0] Success
In accepting-side log:
...failed to accept an incoming connection: from 192.168.1.2: support for PSK was not compiled in
In Zabbix frontend:
Get value from agent failed: TCP successful, cannot establish TLS to [[192.168.1.2]:10050]: SSL_connect()
One side connects with PSK but other side uses OpenSSL with PSK support disabled
In connecting-side log:
...TCP successful, cannot establish TLS to [[192.168.1.2]:10050]: SSL_connect() set result code to SSL_ERR
In accepting-side log:
...failed to accept an incoming connection: from 192.168.1.2: TLS handshake set result code to 1: file ssl
2 Certificate problems
OpenSSL used with CRLs and for some CA in the certificate chain its CRL is not included in TLSCRLFile
In TLS server log in case of OpenSSL peer:
failed to accept an incoming connection: from 127.0.0.1: TLS handshake with 127.0.0.1 returned error code
file s3_srvr.c line 3251: error:14089086: SSL routines:ssl3_get_client_certificate:certificate verify
TLS write fatal alert "unknown CA"
In TLS server log in case of GnuTLS peer:
failed to accept an incoming connection: from 127.0.0.1: TLS handshake with 127.0.0.1 returned error code
file rsa_pk1.c line 103: error:0407006A: rsa routines:RSA_padding_check_PKCS1_type_1:\
block type is not 01 file rsa_eay.c line 705: error:04067072: rsa routines:RSA_EAY_PUBLIC_DECRYPT:padd
CRL expired or expires during server operation
• before expiration:
cannot connect to proxy "proxy-openssl-1.0.1e": TCP successful, cannot establish TLS to [[127.0.0.1]:20004
SSL_connect() returned SSL_ERROR_SSL: file s3_clnt.c line 1253: error:14090086:\
SSL routines:ssl3_get_server_certificate:certificate verify failed:\
TLS write fatal alert "certificate revoked"
• after expiration:
cannot connect to proxy "proxy-openssl-1.0.1e": TCP successful, cannot establish TLS to [[127.0.0.1]:20004
SSL_connect() returned SSL_ERROR_SSL: file s3_clnt.c line 1253: error:14090086:\
SSL routines:ssl3_get_server_certificate:certificate verify failed:\
TLS write fatal alert "certificate expired"
The point here is that with a valid CRL a revoked certificate is reported as "certificate revoked". When the CRL expires, the error message
changes to "certificate expired", which is quite misleading.
cannot connect to proxy "proxy-openssl-1.0.1e": TCP successful, cannot establish TLS to [[127.0.0.1]:20004
invalid peer certificate: The certificate is NOT trusted. The certificate chain is revoked.
Self-signed certificate, unknown CA
OpenSSL, in log:
error:'self signed certificate: SSL_connect() set result code to SSL_ERROR_SSL: file ../ssl/statem/statem_
line 1924: error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed:\
TLS write fatal alert "unknown CA"'
This was observed when the server certificate by mistake had the same Issuer and Subject string, although it was signed by a CA. The Issuer
and Subject are equal in the top-level CA certificate, but they cannot be equal in the server certificate. (The same applies to proxy and
agent certificates.)
To check whether a certificate contains the same Issuer and Subject entries, run:
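For example, assuming the certificate is in /home/zabbix/zabbix_server.crt:
openssl x509 -noout -issuer -subject -in /home/zabbix/zabbix_server.crt
If the two printed lines are identical for a non-CA certificate, the certificate should be reissued with a distinct Subject.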
3 PSK problems
Proxy or agent does not start, message in the proxy or agent log:
In connecting-side log:
18 Web interface
Overview For easy access to Zabbix from anywhere and from any platform, a web-based interface is provided.
Note:
If using more than one frontend instance make sure that the locales and libraries (LDAP, SAML etc.) are installed and
configured identically for all frontends.
Frontend help A help link is provided in Zabbix frontend forms with direct links to the corresponding parts of the documentation.
1 Menu
Overview
• To collapse the menu, click on the collapse icon next to the Zabbix logo. In the collapsed menu only the icons are visible.
• To hide the menu, click on the hide icon next to the Zabbix logo. In the hidden menu everything is hidden.
Collapsed menu
When the menu is collapsed to icons only, a full menu reappears as soon as the mouse cursor is placed upon it. Note that it
reappears over page content; to move page content to the right you have to click on the expand button. If the mouse cursor is again
placed outside the full menu, the menu will collapse again after two seconds.
You can also make a collapsed menu reappear fully by hitting the Tab key. Hitting the Tab key repeatedly will move the focus to the
next menu element.
Hidden menu
Even when the menu is hidden completely, a full menu is just one mouse click away - click on the burger icon. Note that
it reappears over page content; to move page content to the right you have to unhide the menu by clicking on the show sidebar
button.
Menu levels
Context menus
In addition to the main menu, Zabbix provides context menus for host, item, and event for quick access to frequently used entities,
such as the latest values, simple graph, configuration form, related scripts or external links.
The context menus are accessible by clicking on the host, item or problem/trigger name in supported locations.
1 Event menu
Overview
The event menu contains shortcuts to actions or frontend sections that are frequently required for an event.
Content
The event context menu has six sections: View, Actions, Configuration, Problem, Links, and Scripts. For the entities that are not
configured, links are disabled and displayed in gray. The sections Scripts and Links are displayed if their entities are configured.
The Actions section is available in Trigger overview widgets only. It contains a link to:
• Items used in the trigger expression.
Note:
Note that the configuration section is available only for Admin and Super admin users.
The Scripts section contains links to execute a global script (with scope Manual event action). This feature may be handy for
running scripts used for managing problem tickets in external systems.
Supported locations
The event context menu is accessible by clicking on a problem or event name in various frontend sections, for example:
2 Host menu
Overview
The host menu contains shortcuts to actions or frontend sections that are frequently required for a host.
Content
The host context menu has four sections: View, Configuration, Links, and Scripts. For the entities that are not configured, links are
disabled and displayed in gray color. The sections Scripts and Links are displayed if their entities are configured.
View section contains links to:
Note:
Note that the configuration section is available only for Admin and Super admin users.
The Scripts section allows executing global scripts configured for the current host. These scripts need to have their scope defined as
Manual host action to be available in the host menu.
Supported locations
The host menu is accessible by clicking on a host in various frontend sections, for example:
3 Item menu
Overview
The item menu contains shortcuts to actions or frontend sections that are frequently required for an item.
Content
The item context menu has three sections: View, Configuration, and Actions. The View section contains links to:
• Latest data - opens the Latest data section filtered by the current host and item.
• Graph - opens simple graph of the current item.
• Values - opens the list of all values received for the current item within the past 60 minutes.
• 500 latest values - opens the list of 500 latest values for the current item.
Links to Create dependent item and Create dependent discovery rule are enabled only when the item context menu is accessed
from Data collection section.
Note:
Note that the configuration section is available only for Admin and Super admin users.
The Actions section contains a link to Execute now - the option to execute a check for a new item value immediately.
Supported locations
The item context menu is accessible by clicking on an item name in various frontend sections, for example:
2 Frontend sections
Menu structure The Zabbix frontend menu has the following structure:
• Dashboards
• Monitoring
– Problems
– Hosts
– Latest data
– Maps
– Discovery
• Services
– Services
– SLA
– SLA report
• Inventory
– Overview
– Hosts
• Reports
– System information
– Scheduled reports
– Availability report
– Top 100 triggers
– Audit log
– Action log
– Notifications
• Data collection
– Template groups
– Host groups
– Templates
– Hosts
– Maintenance
– Event correlation
– Discovery
• Alerts
– Actions
∗ Trigger actions
∗ Service actions
∗ Discovery actions
∗ Autoregistration actions
∗ Internal actions
– Media types
– Scripts
• Users
– User groups
– User roles
– Users
– API tokens
– Authentication
• Administration
– General
∗ GUI
∗ Autoregistration
∗ Images
∗ Icon mapping
∗ Regular expressions
∗ Trigger displaying options
∗ Geographical maps
∗ Modules
∗ Other
– Audit log
– Housekeeping
– Proxy groups
– Proxies
– Macros
– Queue
∗ Queue overview
∗ Queue overview by proxy
∗ Queue details
1 Dashboards
Overview
The Dashboards section is designed to display summaries of all the important information in a dashboard.
While only one dashboard can be displayed at a time, it is possible to configure several dashboards. Each dashboard may contain
one or several pages that can be rotated in a slideshow.
A dashboard page consists of widgets and each widget is designed to display information of a certain kind and source, which can
be a summary, a map, a graph, the clock, etc.
Pages and widgets are added to the dashboard and edited in the dashboard editing mode. Pages can be viewed and rotated in the
dashboard viewing mode.
The time period that is displayed in graph widgets is controlled by the Time period selector located above the widgets. The Time
period selector label, located to the right, displays the currently selected time period. Clicking the tab label allows expanding and
collapsing the Time period selector.
Note that when the dashboard is displayed in kiosk mode and widgets only are displayed, it is possible to zoom out the graph
period by double-clicking in the graph.
Dashboard size
The minimum width of a dashboard is 1200 pixels. The dashboard will not shrink below this width; instead a horizontal scrollbar is
displayed if the browser window is smaller than that.
The maximum width of a dashboard is the browser window width. Dashboard widgets stretch horizontally to fit the window. At the
same time, a dashboard widget cannot be stretched horizontally beyond the window limits.
Horizontally, the dashboard is made up of 72 columns of always equal width that stretch/shrink dynamically (but not to less than
1200 pixels total).
Vertically, the dashboard may contain a maximum of 64 rows; each row has a fixed height of 70 pixels.
Viewing dashboards
To view all configured dashboards, click on All dashboards just below the section title.
Dashboards are displayed with a sharing tag:
The filter located to the right above the list allows filtering dashboards by name and by those created by the current user.
To delete one or several dashboards, mark the checkboxes of the respective dashboards and click on Delete below the list.
Viewing a dashboard
Display only page content (kiosk mode).
Kiosk mode can also be accessed with the following URL parameters:
/zabbix.php?action=dashboard.view&kiosk=1
To exit to normal mode:
/zabbix.php?action=dashboard.view&kiosk=0
Sharing
Public dashboards are visible to all users. Private dashboards are visible only to their owner. Private dashboards can be shared by
the owner with other users and user groups.
The sharing status of a dashboard is displayed in the list of all dashboards. To edit the sharing status of a dashboard, click on the
Sharing option in the action menu when viewing a single dashboard:
Editing a dashboard
Creating a dashboard
Click on the Save changes button to save the dashboard. If you click on Cancel, the dashboard will not be created.
Adding widgets
• Click on the Add button, or on the Add widget option in the action menu that can be opened by clicking on the
arrow next to it. Fill in the widget configuration form. The widget will be created in its default size and placed after the existing widgets
(if any);
Or
• Move your mouse to the desired empty spot for the new widget. Notice how a placeholder appears, on mouseover, on any
empty slot on the dashboard. Then click to open the widget configuration form. After filling the form the widget will be
created in its default size or, if its default size is bigger than is available, take up the available space. Alternatively, you may
click and drag the placeholder to the desired widget size, then release, and then fill the widget configuration form. (Note
that when there is a widget copied onto the clipboard, you will be first prompted to select between Add widget and Paste
widget options to create a widget.)
In the widget configuration form:
Widgets
A wide variety of widgets (e.g. Clock, Host availability or Trigger overview) can be added to a dashboard: these can be resized
and moved around the dashboard in dashboard editing mode by clicking on the widget title bar and dragging it to a new location.
Also, you can click on the following buttons in the top-right corner of the widget to:
• the edit a widget button - edit a widget;
• the three dots button - access the widget menu.
Click on Save changes for the dashboard to make any changes to the widgets permanent.
Copying/pasting widgets
Dashboard widgets can be copied and pasted, allowing you to create a new widget with the properties of an existing one. They can be
copy-pasted within the same dashboard, or between dashboards opened in different tabs.
A widget can be copied using the widget menu. To paste the widget:
• click on the arrow next to the Add button and select the Paste widget option, when editing the dashboard;
• use the Paste widget option when adding a new widget by selecting some area in the dashboard (a widget must be copied
first for the paste option to become available).
A copied widget can be used to paste over an existing widget using the Paste option in the widget menu.
Creating a slideshow
A slideshow will run automatically if the dashboard contains two or more pages (see Adding pages) and if one of the following is
true:
Slideshow-related controls are also available in kiosk mode (where only the page content is shown):
• the stop slideshow button;
• the start slideshow button.
Adding pages
• Fill in the general page parameters and click on Apply. If you leave the name empty, the page will be added with the name "Page N",
where 'N' is the incremental number of the page. The page display period allows customizing how long a page is
displayed in a slideshow.
A new page will be added, indicated by a new tab (Page 2).
The pages can be reordered by dragging-and-dropping the page tabs. Reordering maintains the original page naming. It is always
possible to go to each page by clicking on its tab.
When a new page is added, it is empty. You can add widgets to it as described above.
Copying/pasting pages
Dashboard pages can be copied and pasted, allowing to create a new page with the properties of an existing one. They can be
pasted from the same dashboard or a different dashboard.
To paste an existing page to the dashboard, first copy it, using the page menu:
Page menu
The page menu can be opened by clicking on the three dots next to the page name:
• Properties - customize the page parameters (the name and the page display period in a slideshow)
Widget menu
The widget menu contains different options based on whether the dashboard is in the edit or view mode:
Permissions to dashboards
Permissions to dashboards for regular users and users of ’Admin’ type are limited in the following way:
• They can see and clone a dashboard if they have at least READ rights to it;
• They can edit and delete dashboard only if they have READ/WRITE rights to it;
• They cannot change the dashboard owner.
1 Dashboard widgets
Overview
This section provides the details of parameters that are common for dashboard widgets.
Common parameters
Name Enter a widget name.
Refresh interval Configure the default refresh interval.
Default refresh intervals for widgets range from No refresh to 15 minutes depending on the type
of the widget. For example:
- No refresh for URL widget;
- 1 minute for Action log widget;
- 15 minutes for Clock widget.
Refresh intervals can be set to a default value for all users. Switch the dashboard to editing
mode, click the edit a widget button and select the desired refresh interval from the dropdown
list.
Each user can also set their own widget refresh interval. In dashboard viewing mode, click the
three dots button on a widget and select the desired refresh interval from the dropdown
list. Note that a user’s unique refresh interval takes priority over the widget setting and is
preserved even when the widget’s setting is modified.
Show header Mark the checkbox to permanently display the widget header.
When unmarked, the header is hidden to save space and becomes visible only on widget
mouseover (both in view and edit modes). The header is also semi-visible when dragging a
widget to a new place.
Specific parameters
• Action log
• Clock
• Data overview (deprecated; will be removed in the upcoming major release)
• Discovery status
• Favorite graphs
• Favorite maps
• Gauge
• Geomap
• Graph
• Graph (classic)
• Graph prototype
• Honeycomb
• Host availability
• Host navigator
• Item history
• Item navigator
• Item value
• Map
• Map navigation tree
• Pie chart
• Problem hosts
• Problems
• Problems by severity
• SLA report
• System information
• Top hosts
• Top triggers
• Trigger overview
• URL
• Web monitoring
Dynamic parameters
Multiple widgets have parameters that enable them to share configuration data with other widgets or the dashboard.
The Host groups, Hosts, Item, and Item list (since Zabbix 7.0.1) parameters allow selecting either the respective entities or a data
source containing either host groups, hosts or items for which the widget can display data.
For the Host groups, Item, and Item list parameters, the data source can be a compatible widget from the same dashboard.
For the Hosts parameter, the data source can be a compatible widget from the same dashboard or the dashboard itself.
Note:
The Map widget can also broadcast host group and host data to compatible widgets. For more information, see Widget
behavior.
Override host
The Override host parameter allows selecting a data source containing a host for which the widget can display data. The data
source can be a compatible widget from the same dashboard or the dashboard itself.
• To specify a compatible widget, enter its name and select it. Alternatively, click the Select button (or the dropdown button,
then ”Widget”) to open a pop-up of available widgets.
• To specify a dashboard, click the dropdown button, then ”Dashboard”. After saving the dashboard, the Host selector will
appear at the top of the dashboard.
Time period
The Time period parameter allows selecting a data source containing a time period for which the widget can display data. The data
source can be a compatible widget from the same dashboard, the dashboard itself, or the time period configured on the widget
itself.
• To specify a compatible widget, set Time period to ”Widget”, enter its name and select it. Alternatively, click the Select
button to open a pop-up of available widgets.
• To specify a dashboard, set Time period to ”Dashboard”. After saving the dashboard, the Time period selector will appear at
the top of the dashboard.
• To configure the time period on the widget itself, set Time period to ”Custom” and enter or select the start and end of the
time period.
Note:
Regardless of the widget’s Time period configuration, compatible widgets can use it as a data source for the time period.
Widget behavior
Widgets vary in their behavior when broadcasting data. Some widgets immediately broadcast data, while others require an entity
to be selected.
For instance, the Graph widget always broadcasts time period data to listening widgets.
In contrast, the Item navigator widget broadcasts item data only when a specific item is selected within it. On mouseover, the item
is highlighted in light blue, while on selection it is highlighted in yellow and broadcast to the listening widgets.
Similarly, the Map widget broadcasts host data when a host is selected. However, by default, it immediately broadcasts the first
available host in it. On mouseover, the entity is highlighted in light blue, while on selection it is highlighted in dark blue.
For the broadcasting and listening capabilities of each widget, see Widget compatibility.
• If the data source (widget) is not broadcasting data, the listening widget enters the Awaiting data state.
• If the data source (widget) has been deleted, replaced with an incompatible widget, or moved to another dashboard page,
the listening widget enters the Referred widget is unavailable state.
• If the specified host in the data source (widget or dashboard) lacks the entity configured in the listening widget (item, graph,
map, etc.) or if the user lacks permissions to access the host, the listening widget displays the following message: ”No
permissions to referred object or it does not exist!”
Widget compatibility
Some widgets can broadcast configuration data to other widgets, some can listen for data, and some can do both. For example:
• The Action log widget can only retrieve time period data from Graph, Graph (classic), and Graph prototype widgets.
• The Geomap widget can broadcast host data to widgets that listen for it (Data overview, Honeycomb, etc.) and can also
listen for host group and host data from widgets that broadcast it (Honeycomb, Problem hosts, etc.).
• The Clock widget cannot broadcast or listen for any data.
The following table outlines the broadcasting and listening capabilities of each widget.
1 Action log
Overview
In the action log widget, you can display details of action operations (notifications, remote commands). It replicates information
from Reports → Action log.
Configuration
In addition to the parameters that are common for all widgets, you may set the following specific options:
Recipients Filter entries by recipients. This field is auto-complete, so starting to type the name of a recipient
will offer a dropdown of matching recipients. If no recipients are selected, details of action
operations for all recipients will be displayed.
Actions Filter entries by actions. This field is auto-complete, so starting to type the name of an action will
offer a dropdown of matching actions. If no actions are selected, details of action operations for
all actions will be displayed.
Media types Filter entries by media types. This field is auto-complete, so starting to type the name of a media
type will offer a dropdown of matching media types. If no media types are selected, details of
action operations for all media types will be displayed.
Status Mark the checkbox to filter entries by the respective status:
In progress - action operations that are in progress are displayed;
Sent/Executed - action operations that have sent a notification or have been executed are
displayed;
Failed - action operations that have failed are displayed.
Search string Filter entries by the content of the message/remote command. If you enter a string here, only
those action operations whose message/remote command contains the entered string will be
displayed. Macros are not resolved.
Time period Filter entries by time period. Select the data source for the time period:
Dashboard - set the Time period selector as the data source;
Widget - set a compatible widget specified in the Widget parameter as the data source;
Custom - set the time period specified in the From and To parameters as the data source; if set,
a clock icon will be displayed in the top right corner of the widget, indicating the set time on
mouseover.
Widget Enter or select a compatible widget as the data source for the time period.
This parameter is available if Time period is set to ”Widget”.
From Enter or select the start of the time period.
Relative time syntax (now, now/d, now/w-1w, etc.) is supported.
This parameter is available if Time period is set to ”Custom”.
To Enter or select the end of the time period.
Relative time syntax (now, now/d, now/w-1w, etc.) is supported.
This parameter is available if Time period is set to ”Custom”.
Sort entries by Sort entries by:
Time (descending or ascending);
Type (descending or ascending);
Status (descending or ascending);
Recipient (descending or ascending).
Show lines Set how many action log lines will be displayed in the widget.
2 Clock
Overview
In the clock widget, you may display local, server, or specified host time.
Configuration
In addition to the parameters that are common for all widgets, you may set the following specific options:
Advanced configuration
Advanced configuration options are available in the collapsible Advanced configuration section, and only for those elements that
are selected in the Show field (see above).
Additionally, advanced configuration allows to change the background color for the whole widget.
Background color Select the background color from the color picker.
D stands for default color (depends on the frontend theme). To return to the default value, click
the Use default button in the color picker.
Date
Size Enter font size height for the date (in percent relative to total widget height).
Bold Mark the checkbox to display date in bold type.
Color Select the date color from the color picker.
D stands for default color (depends on the frontend theme). To return to the default value, click
the Use default button in the color picker.
Time
Size Enter font size height for the time (in percent relative to total widget height).
Bold Mark the checkbox to display time in bold type.
Color Select the time color from the color picker.
D stands for default color (depends on the frontend theme). To return to the default value, click
the Use default button in the color picker.
Seconds Mark the checkbox to display seconds. Otherwise only hours and minutes will be displayed.
Format Select to display a 24-hour or 12-hour time.
Time zone
Size Enter font size height for the time zone (in percent relative to total widget height).
Bold Mark the checkbox to display time zone in bold type.
Color Select the time zone color from the color picker.
D stands for default color (depends on the frontend theme). To return to the default value, click
the Use default button in the color picker.
Time zone Select the time zone.
Format Select to display the time zone in short format (e.g. New York) or full format (e.g. (UTC-04:00)
America/New York).
3 Data overview
Attention:
This widget is deprecated and will be removed in the upcoming major release. Consider using the Top hosts widget instead.
Overview
In the data overview widget, you can display the latest data for a group of hosts.
The color of problem items is based on the problem severity color, which can be adjusted in the problem update screen.
By default, only values that fall within the last 24 hours are displayed. This limit has been introduced with the aim of improving
initial loading times for large pages of latest data. This limit is configurable in Administration → General → GUI, using the Max
history display period option.
Clicking on a piece of data offers links to some predefined graphs or latest values.
Note that 50 records are displayed by default (configurable in Administration → General → GUI, using the Max number of columns
and rows in overview tables option). If more records exist than are configured to display, a message is displayed at the bottom of
the table, asking to provide more specific filtering criteria. There is no pagination. Note that this limit is applied first, before any
further filtering of data, for example, by tags.
Configuration
In addition to the parameters that are common for all widgets, you may set the following specific options:
Item tags Specify tags to limit the number of item data displayed in the widget.
It is possible to include as well as exclude specific tags and tag values. Several conditions can be
set. Tag name matching is always case-sensitive.
4 Discovery status
Overview
This widget displays a status summary of the active network discovery rules.
5 Favorite graphs
Overview
This widget contains shortcuts to the most needed graphs, sorted alphabetically.
The list of shortcuts is populated when you view a graph in Monitoring -> Latest data -> Graphs, and then click on its Add
to favorites button.
All configuration parameters are common for all widgets.
6 Favorite maps
Overview
This widget contains shortcuts to the most needed maps, sorted alphabetically.
The list of shortcuts is populated when you view a map and then click on its Add to favorites button.
7 Gauge
Overview
The widget can be visually fine-tuned using the advanced configuration options to create a wide variety of visual styles:
The gauge widget can display only numeric values. Displaying binary values is not supported.
Configuration
In addition to the parameters that are common for all widgets, you may set the following specific options:
Item Select the item.
Alternatively, select a compatible widget as the data source for items.
This field is auto-complete, so starting to type the name of an item will offer a dropdown of
matching items.
Note that you can select only items that return numeric data.
Min Enter the minimum value of the gauge.
Suffixes (for example, ”1d”, ”2w”, ”4K”, ”8G”) are supported. Value mappings are supported.
Max Enter the maximum value of the gauge.
Suffixes (for example, ”1d”, ”2w”, ”4K”, ”8G”) are supported. Value mappings are supported.
Colors Select the color from the color picker:
Value arc - select the gauge value arc color;
Arc background - select the gauge value arc and gauge thresholds arc background color;
Background - select the widget background color.
”D” stands for the default color, which depends on the frontend theme. If Thresholds are set, the
default color for Value arc depends on the threshold color. To return to the default color, click
the Use default button in the color picker.
Show Mark the checkbox to display the respective gauge element - description, value, value arc,
needle, scale (the minimum and maximum value of the gauge at the beginning and end of the
gauge arc). Unmark to hide. At least one element must be selected.
Note that the gauge needle and scale can be displayed if the gauge value arc or gauge
thresholds arc (see advanced configuration options) is displayed. Also note that if the gauge
needle is displayed, the value is placed under the needle; if the needle is hidden, the value is
aligned with the bottom of the gauge arc.
Override host Select a compatible widget or the dashboard as the data source for hosts.
This parameter is not available when configuring the widget on a template dashboard.
Advanced configuration Click the Advanced configuration label to display advanced configuration options. This is also
where you’ll be able to adjust the gauge elements selected in the Show field.
Advanced configuration
Advanced configuration options are available in the collapsible Advanced configuration section:
Angle Select the gauge angle (180° or 270°).
Description
Description Enter the item description. This description may override the default item name.
Multiline descriptions are supported. A combination of text and supported macros is possible.
{HOST.*}, {ITEM.*}, {INVENTORY.*} and user macros are supported.
Size Enter the font size height for the item description (in percent, relative to the total widget height).
Vertical position Select the vertical position of the item description (top or bottom, relative to the gauge arc).
Bold Mark the checkbox to display the item description in bold.
Color Select the item description color from the color picker.
”D” stands for the default color, which depends on the frontend theme. To return to the default
color, click the Use default button in the color picker.
Value
Decimal places Enter the number of decimal places to display with the value.
This option affects only items that return numeric (float) data.
Size Enter the font size height for the value (in percent, relative to the gauge arc height).
Bold Mark the checkbox to display the value in bold.
Color Select the value color from the color picker.
”D” stands for the default color, which depends on the frontend theme. To return to the default
color, click the Use default button in the color picker.
Units
Units Mark the checkbox to display units with the item value.
If you enter a unit name, it will override the units set in the item configuration.
Size Enter the font size height for the item units (in percent, relative to the gauge arc height).
Bold Mark the checkbox to display item units in bold.
Position Select the position of the item units (above, below, before or after, relative to the item value).
This option is ignored for the following time-related units: unixtime, uptime, s.
Color Select the item units color from the color picker.
”D” stands for the default color, which depends on the frontend theme. To return to the default
color, click the Use default button in the color picker.
Value arc
Arc size Enter the size height of the gauge value arc (in percent, relative to the gauge arc radius).
Needle
Color Select the gauge needle color from the color picker.
”D” stands for the default color, which depends on the frontend theme. If Thresholds are set, the
default color for the needle depends on the threshold color. To return to the default color, click
the Use default button in the color picker.
Scale
Show units Mark the checkbox to display units with the minimum and maximum value of the gauge.
Size Enter the font size height for the minimum and maximum value of the gauge (in percent, relative
to the gauge arc height).
Decimal places Enter the number of decimal places to display with the minimum and maximum value of the
gauge.
This option affects only items that return numeric (float) data.
Thresholds
Thresholds Click Add to add a threshold, select a threshold color from the color picker, and specify a numeric
value.
The thresholds list will be sorted in ascending order when saved.
Note that the colors configured as thresholds will be displayed correctly only for numeric items.
Suffixes (for example, ”1d”, ”2w”, ”4K”, ”8G”) are supported. Value mappings are supported.
Show labels Mark the checkbox to display threshold values as labels on the gauge scale.
Show arc Mark the checkbox to display the gauge thresholds arc.
Arc size Enter the size height of the gauge thresholds arc (in percent, relative to the gauge arc radius).
The information displayed by the gauge widget can be downloaded as a .png image using the widget menu:
8 Geomap
Overview
The Geomap widget displays hosts as markers on a geographical map using Leaflet, an open-source JavaScript library for interactive maps.
Note:
Zabbix offers multiple predefined map tile service providers and an option to add a custom tile service provider or even
host tiles themselves (configurable in the Administration → General → Geographical maps menu section).
By default, the widget displays all enabled hosts with valid geographical coordinates defined in the host configuration. It is possible
to configure host filtering in the widget parameters.
Configuration
In addition to the parameters that are common for all widgets, you may set the following specific options:
Tags Specify tags to limit the number of hosts displayed in the widget.
It is possible to include as well as exclude specific tags and tag values. Several conditions can be
set. Tag name matching is always case-sensitive.
This parameter is not available when configuring the widget on a template dashboard.
Initial view Comma-separated center coordinates and an optional zoom level to display when the widget is
initially loaded in the format <latitude>,<longitude>,<zoom>
If initial zoom is specified, the Geomap widget is loaded at the given zoom level. Otherwise,
initial zoom is calculated as half of the max zoom for the particular tile provider.
The initial view is ignored if the default view is set (see below).
Examples:
40.6892494,-74.0466891,14
40.6892494,-122.0466891
Host markers displayed on the map have the color of the host’s most serious problem and green color if a host has no problems.
Clicking on a host marker allows viewing the host’s visible name and the number of unresolved problems grouped by severity.
Clicking on the visible name will open host menu.
Hosts displayed on the map can be filtered by problem severity. Press on the filter icon in the widget’s upper right corner and mark
the required severities.
It is possible to zoom in and out the map by using the plus and minus buttons in the widget’s upper left corner or by using the
mouse scroll wheel or touchpad. To set the current view as default, right-click anywhere on the map and select Set this view as
default. This setting will override Initial view widget parameter for the current user. To undo this action, right-click anywhere on
the map again and select Reset to initial view.
When Initial view or Default view is set, you can return to this view at any time by pressing on the home icon on the left.
9 Graph
Overview
The graph widget provides a modern and versatile way of visualizing data collected by Zabbix using a vector image drawing
technique. This graph widget has been supported since Zabbix 4.0. Note that the graph widget supported before Zabbix 4.0 can still be
used as Graph (classic). See also the Adding widgets section on the Dashboards page for more details.
Configuration
Data set
The Data set tab allows selecting data for the graph by adding data sets. Two types of data sets can be added:
• Item patterns - data from matching items is displayed. The graph is drawn using different shades of single color for each
item.
• Item list - data from selected items is displayed. The graph is drawn using different colors for each item.
When configuring the widget on a template dashboard, the parameter for specifying host
patterns is not available, and the parameter for specifying an item list allows to select only
the items configured on the template.
Draw Choose the draw type of the metric.
Possible draw types: Line (set by default), Points, Staircase, and Bar.
Note that if there is only one data point in the line/staircase graph, it is drawn as a point
regardless of the draw type. The point size is calculated from the line width, but it cannot be
smaller than 3 pixels, even if the line width is less.
Stacked Mark the checkbox to display data as stacked (filled areas displayed).
This option is disabled when Points draw type is selected.
Width Set the line width.
This option is available when Line or Staircase draw type is selected.
Point size Set the point size.
This option is available when Points draw type is selected.
Transparency Set the transparency level.
Fill Set the fill level.
This option is available when Line or Staircase draw type is selected.
Missing data Select the option for displaying missing data:
None - the gap is left empty;
Connected - two border values are connected;
Treat as 0 - the missing data is displayed as 0 values;
Last known - the missing data is displayed with the same value as the last known value; not
applicable for the Points and Bar draw type.
Y-axis Select the side of the graph where the Y-axis will be displayed.
Time shift Specify time shift if required.
You may use time suffixes in this field. Negative values are allowed.
Aggregation function Specify which aggregation function to use:
min - display the smallest value;
max - display the largest value;
avg - display the average value;
sum - display the sum of values;
count - display the count of values;
first - display the first value;
last - display the last value;
none - display all values (no aggregation).
Aggregation allows to display an aggregated value for the chosen interval (5 minutes, an
hour, a day), instead of all values. See also: Aggregation in graphs.
Aggregation interval Specify the interval for aggregating values.
You may use time suffixes in this field. A numeric value without a suffix will be regarded as
seconds.
Aggregate Specify whether to aggregate:
Each item - each item in the dataset will be aggregated and displayed separately;
Data set - all dataset items will be aggregated and displayed as one value.
Approximation Specify what value to display when more than one value exists per vertical graph pixel:
all - display the smallest, the largest and the average values;
min - display the smallest value;
max - display the largest value;
avg - display the average value.
This setting is useful when displaying a graph for a large time period with frequent update
interval (such as one year of values collected every 10 minutes).
Data set label Specify the data set label that is displayed in graph Data set configuration and in graph
Legend (for aggregated data sets).
All data sets are numbered, including those with a specified Data set label. If no label is
specified, the data set will be labeled automatically according to its number (e.g. "Data set
#2", "Data set #3", etc.). Data set numbering is recalculated after reordering/dragging data
sets.
Data set labels that are too long will be shortened to fit where displayed (e.g. ”Number of
proc...”).
• Click on the move icon and drag a data set to a new place in the list.
• Click on the expand icon to expand data set details. When expanded, this icon turns into a collapse icon.
• Click on the color icon to change the color, either from the color picker or manually. For Item patterns data set, the
color is used to calculate different color shades for each item. For Item list data set, the color is used for the specified item.
• Click on the Add new data set button to add an empty data set allowing to select host and item patterns. If you click on the
downward pointing icon next to the Add new data set button, a drop-down menu appears, allowing you to add a new Item
patterns or Item list data set or allowing you to Clone the currently open data set. If all data sets are collapsed, the Clone
option is not available.
The Item patterns data set contains Host patterns and Item patterns fields that both recognize full names or patterns containing
a wildcard symbol (*). This functionality allows you to select all the host names and item names containing the selected pattern.
While typing the item name or item pattern in the Item patterns field, only items belonging to the selected host name(s) are
displayed in the dropdown list.
For example, having typed a pattern z* in the Host patterns field, the dropdown list displays all host names containing this pattern:
z*, Zabbix server, Zabbix proxy. After pressing Enter, this pattern is accepted and is displayed as z*. Similarly, having typed
the pattern a* in the Item patterns field, the dropdown list displays all item names containing this pattern: a*, Available memory,
Available memory in %.
After pressing Enter, this pattern is accepted, is displayed as a*, and all items belonging to the selected host name(s) are displayed
in the graph.
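The wildcard behavior described above can be illustrated with a short Python sketch (illustrative only, not frontend code; the host names are hypothetical, and case-insensitive matching is an assumption based on the z* example above):

from fnmatch import fnmatchcase

def matches(pattern: str, name: str) -> bool:
    # * matches zero or more characters; comparison is case-insensitive here
    return fnmatchcase(name.lower(), pattern.lower())

hosts = ["Zabbix server", "Zabbix proxy", "db-node-01"]
print([h for h in hosts if matches("z*", h)])       # ['Zabbix server', 'Zabbix proxy']
print([h for h in hosts if matches("*node*", h)])   # ['db-node-01']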
The Item list data set contains the Add item button that allows you to add items to be displayed on the graph. Since Zabbix 7.0.1,
you can also add compatible widgets as the data source for items by clicking the Add widget button.
For example, clicking the Add item button opens a pop-up window containing a Host parameter. Having selected a host, all its
items that are available for selection are displayed in a list. After selecting one or more items, they will be displayed in the data
set item list and in the graph.
Displaying options
Simple triggers Mark the checkbox to show the trigger thresholds for simple triggers. The thresholds will be
drawn as dashed lines using the trigger severity color.
A simple trigger is a trigger with one function (only last, max, min, avg) for one item in the
expression.
A maximum of three triggers can be drawn. Note that the trigger has to be within the drawn
range to be visible.
Working time Mark the checkbox to show working time on the graph. Working time (working days) is displayed
in graphs as a white background, while non-working time is displayed in gray (with the Original
blue default frontend theme).
Percentile line (left) Mark the checkbox and enter the percentile value to show the specified percentile as a line on
the left Y-axis of the graph.
If, for example, a 95% percentile is set, then the percentile line will be at the level where 95
percent of the values fall under.
Percentile line (right) Mark the checkbox and enter the percentile value to show the specified percentile as a line on
the right Y-axis of the graph.
If, for example, a 95% percentile is set, then the percentile line will be at the level where 95
percent of the values fall under.
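As a rough illustration of the percentile line (not the frontend's exact calculation; the nearest-rank method below is an assumption), the 95th percentile is the level below which 95 percent of the collected values fall:

def percentile(values, p):
    # nearest-rank method: the smallest value such that p percent of all
    # values are less than or equal to it
    ordered = sorted(values)
    rank = max(1, round(p / 100 * len(ordered)))
    return ordered[rank - 1]

print(percentile(range(1, 101), 95))   # 95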
Time period
The Time period tab allows to set a time period for which to display data in the graph:
Time period Select the data source for the time period:
Dashboard - set the Time period selector as the data source;
Widget - set a compatible widget specified in the Widget parameter as the data source;
Custom - set the time period specified in the From and To parameters as the data source; if set,
a clock icon will be displayed in the top right corner of the widget, indicating the set time on
mouseover.
Widget Enter or select a compatible widget as the data source for the time period.
This parameter is available if Time period is set to ”Widget”.
From Enter or select the start of the time period.
Relative time syntax (now, now/d, now/w-1w, etc.) is supported.
This parameter is available if Time period is set to ”Custom”.
To Enter or select the end of the time period.
Relative time syntax (now, now/d, now/w-1w, etc.) is supported.
This parameter is available if Time period is set to ”Custom”.
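The relative time expressions above consist of now, optionally rounded down to a unit with /unit and shifted with +/- offsets. A rough Python sketch of a few examples (illustrative only, not the frontend parser; a Monday week start is assumed):

from datetime import datetime, timedelta

now = datetime(2024, 7, 21, 14, 30)                                     # "now"
start_of_day = now.replace(hour=0, minute=0, second=0, microsecond=0)   # "now/d"
start_of_week = start_of_day - timedelta(days=start_of_day.weekday())   # "now/w"
start_of_prev_week = start_of_week - timedelta(weeks=1)                 # "now/w-1w"

print(start_of_day)         # 2024-07-21 00:00:00
print(start_of_week)        # 2024-07-15 00:00:00
print(start_of_prev_week)   # 2024-07-08 00:00:00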
Axes
Left Y Mark this checkbox to make left Y-axis visible.
The checkbox may be disabled if unselected either in Data set or in Overrides tab.
Right Y Mark this checkbox to make right Y-axis visible.
The checkbox may be disabled if unselected either in Data set or in Overrides tab.
X-Axis Unmark this checkbox to hide X-axis (marked by default).
Min Set the minimum value of the corresponding axis.
This defines the visible range minimum of the Y-axis.
Max Set the maximum value of the corresponding axis.
This defines the visible range maximum of the Y-axis.
Units Choose the unit for the graph axis values from the dropdown.
If the Auto option is chosen, axis values are displayed using units of the first item of the
corresponding axis.
The Static option allows you to assign a custom name to the corresponding axis. If the Static
option is chosen and the value input field is left blank, the corresponding axis name will
consist only of a numeric value.
Legend
Show legend Unmark this checkbox to hide the legend on the graph (marked by default).
Display min/avg/max Mark this checkbox to display the minimum, average, and maximum values of the item in the
legend.
Show aggregation function Mark this checkbox to show the aggregation function in the legend.
Rows Select the display mode for legend rows:
Fixed - the number of rows displayed is determined by the Number of rows parameter value;
Variable - the number of rows displayed is determined by the amount of configured items while
not exceeding the Maximum number of rows parameter value.
Number of rows/Maximum number of rows If Rows is set to ”Fixed”, set the number of legend rows to be displayed (1-10).
If Rows is set to ”Variable”, set the maximum number of legend rows to be displayed (1-10).
Number of columns Set the number of legend columns to be displayed (1-4).
This parameter is available if Display min/avg/max is unmarked.
Problems
Show problems Mark this checkbox to enable problem displaying on the graph (unmarked, i.e., disabled by
default).
Selected items only Mark this checkbox to display on the graph only the problems of the selected items.
Problem hosts Select the problem hosts to be displayed on the graph.
Wildcard patterns may be used (for example, * will return results that match zero or more
characters).
To specify a wildcard pattern, just enter the string manually and press Enter.
While you are typing, note how all matching hosts are displayed in the dropdown.
This parameter is not available when configuring the widget on a template dashboard.
Severity Mark problem severities to filter problems to be displayed on the graph.
If no severities are marked, all problems will be displayed.
Problem Specify the problem’s name to be displayed on the graph.
Problem tags Specify problem tags to limit the number of problems displayed in the widget.
It is possible to include as well as exclude specific tags and tag values. Several conditions can be
set. Tag name matching is always case-sensitive.
Overrides
The Overrides tab allows to add custom overrides for data sets:
Overrides are useful when several items are selected for a data set using the * wildcard and you want to change how the items
are displayed by default (e.g. default base color or any other property).
Existing overrides (if any) are displayed in a list. To add a new override:
• Click on the add icon to select override parameters. At least one override parameter should be selected. For parameter descriptions, see the Data set tab above.
Information displayed by the graph widget can be downloaded as a .png image using the widget menu:
10 Graph (classic)
Overview
In the classic graph widget, you can display a single custom graph or simple graph.
Configuration
In addition to the parameters that are common for all widgets, you may set the following specific options:
Information displayed by the classic graph widget can be downloaded as .png image using the widget menu:
A screenshot of the widget will be saved to the Downloads folder.
11 Graph prototype
Overview
In the graph prototype widget, you can display a grid of graphs created from either a graph prototype or an item prototype by
low-level discovery.
Configuration
In addition to the parameters that are common for all widgets, you may set the following specific options:
Source Select the source of the graphs: Graph prototype or Simple graph prototype.
Graph prototype Select a graph prototype to display graphs discovered by the graph prototype.
This parameter is available if Source is set to ”Graph prototype”.
Item prototype Select an item prototype to display simple graphs for items discovered by the item prototype.
This parameter is available if Source is set to ”Simple graph prototype”.
Time period Set a time period for which to display data in the graphs. Select the data source for the time
period:
Dashboard - set the Time period selector as the data source;
Widget - set a compatible widget specified in the Widget parameter as the data source;
Custom - set the time period specified in the From and To parameters as the data source; if set,
a clock icon will be displayed in the top right corner of the widget, indicating the set time on
mouseover.
Widget Enter or select a compatible widget as the data source for the time period.
This parameter is available if Time period is set to ”Widget”.
From Enter or select the start of the time period.
Relative time syntax (now, now/d, now/w-1w, etc.) is supported.
This parameter is available if Time period is set to ”Custom”.
To Enter or select the end of the time period.
Relative time syntax (now, now/d, now/w-1w, etc.) is supported.
This parameter is available if Time period is set to ”Custom”.
Show legend Unmark this checkbox to hide the legend on the graphs (marked by default).
Override host Select a compatible widget or the dashboard as the data source for hosts.
This parameter is not available when configuring the widget on a template dashboard.
Columns Enter the number of columns of graphs to display within a graph prototype widget.
Rows Enter the number of rows of graphs to display within a graph prototype widget.
While the Columns and Rows parameters allow fitting more than one graph in the widget, there still may be more discovered graphs
than there are columns/rows in the widget. In this case, paging becomes available in the widget, and a slide-up header allows to
switch between pages using the left and right arrows:
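The number of pages needed can be estimated with simple arithmetic (illustrative only; the values below are hypothetical):

import math

def pages(discovered_graphs: int, columns: int, rows: int) -> int:
    # one page holds columns x rows graphs; the last page may be partially filled
    return math.ceil(discovered_graphs / (columns * rows))

print(pages(discovered_graphs=7, columns=2, rows=2))   # 2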
12 Honeycomb
Overview
The honeycomb widget offers a dynamic overview of the monitored network infrastructure and resources, where host groups, such
as virtual machines and network devices, along with their respective items, are visually represented as interactive hexagonal cells.
The widget can be visually fine-tuned using the advanced configuration options to create a wide variety of visual styles.
On mouseover, the focused honeycomb cell enlarges for improved visibility. Clicking a cell highlights its border until another cell
is selected.
The number of displayed honeycomb cells is constrained by the widget’s size and the minimum cell size (32 pixels). If all cells
cannot fit within the widget, an ellipsis is shown as the final cell.
The widget can be resized to fit more cells. On resize, the honeycomb cell size and positioning are dynamically adjusted. Each row
in the honeycomb will maintain an equal cell count, except for the last row if the total cell count is not divisible by the row’s cell
count.
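The row layout rule above can be illustrated with simple arithmetic (this is not the frontend's layout algorithm; the cell counts are hypothetical):

def honeycomb_rows(total_cells: int, cells_per_row: int):
    # full rows of equal length plus, possibly, a shorter last row
    full_rows, remainder = divmod(total_cells, cells_per_row)
    return [cells_per_row] * full_rows + ([remainder] if remainder else [])

print(honeycomb_rows(10, 4))   # [4, 4, 2] - the last row holds the remainder
print(honeycomb_rows(8, 4))    # [4, 4]    - all rows equal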
Configuration
In addition to the parameters that are common for all widgets, you may set the following specific options:
Host tags Specify tags to limit the number of hosts displayed in the widget.
It is possible to include as well as exclude specific tags and tag values. Several conditions can be
set.
Tag name matching is always case-sensitive.
This parameter is not available when configuring the widget on a template dashboard.
Item patterns Enter item patterns or select existing items as item patterns. Data of items that match the
entered or selected patterns will be displayed on the honeycomb. The Item patterns parameter
is mandatory.
Wildcard patterns may be used for selection (for example, * will return items that match zero or
more characters; Zabbix* will return items that start with ”Zabbix”).
To specify a wildcard pattern, enter the string manually and press Enter. When you start typing,
a dropdown list will show matching items limited to those belonging to selected Hosts or hosts
within selected Host groups, if any. The wildcard symbol is always interpreted, therefore, it is not
possible to add, for example, an item named item* individually, if there are other matching items
(e.g., item2, item3).
When configuring the widget on a template dashboard, this parameter allows selecting only
items configured on the template.
Item tags Specify tags to limit the number of items displayed in the widget. For more information, see Host
tags above.
Show hosts in maintenance Mark this checkbox to display hosts in maintenance (in this case, a maintenance icon will be shown next to the host name).
This parameter is labeled Show data in maintenance when configuring the widget on a template dashboard.
Show Mark this checkbox to display the respective honeycomb cell element - primary label, secondary
label.
At least one element must be selected.
Advanced configuration Click the Advanced configuration label to display advanced configuration options.
Advanced configuration
Advanced configuration options are available in the collapsible Advanced configuration section, but only for the elements selected in
the Show field (see above), as well as for the background color and thresholds of honeycomb cells.
Primary/Secondary label
Type Select the label type:
Text - the label will display the text specified in the Text parameter;
Value - the label will display the item value with decimal places as specified in the Decimal
places parameter.
Text Enter the label text. This text may override the default item name.
Multiline text is supported. A combination of text and supported macros is possible.
{HOST.*}, {ITEM.*}, {INVENTORY.*}, and user macros are supported.
Honeycomb cells are ordered alphabetically by host name, and, within each host, by item name.
This parameter is available if Type is set to ”Text”.
Decimal places Enter the number of decimal places to display with the value.
This parameter is available if Type is set to ”Value”, and affects only items that return numeric
(float) data.
Size Select the label size:
Auto - use automatically adjusted label size;
Custom - enter a custom label size (in percent, relative to the honeycomb cell size).
Note that labels that do not fit the honeycomb cell size are truncated.
Bold Mark the checkbox to display the label in bold.
Color Select the label color from the color picker.
”D” stands for the default color, which depends on the frontend theme. To return to the default
color, click the Use default button in the color picker.
Units
Units Mark the checkbox to display units with the item value.
If you enter a unit name, it will override the units set in the item configuration.
This parameter is available if Type is set to ”Value”.
Position Select the position of the item units (before or after the item value).
This parameter is ignored for the following time-related units: unixtime, uptime, s.
This parameter is available if Type is set to ”Value”.
Background color
Background color Select the honeycomb cells background color from the color picker.
”D” stands for the default color, which depends on the frontend theme. To return to the default
color, click the Use default button in the color picker.
Thresholds
Color interpolation Mark the checkbox to enable smooth transitioning between threshold colors for honeycomb cells.
This parameter is available if two or more thresholds are set.
Threshold Click Add to add a threshold, select a threshold color from the color picker, and specify a numeric
value.
The thresholds list will be sorted in ascending order when saved.
Note that the colors configured as thresholds will be displayed correctly only for numeric items.
Suffixes (for example, ”1d”, ”2w”, ”4K”, ”8G”) are supported. Value mappings are supported.
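A minimal sketch of how such thresholds could be applied to a numeric value (illustrative only, not frontend code; the suffix multipliers and the rule that a cell takes the color of the highest threshold its value has reached are assumptions based on the threshold descriptions in this section, and Color interpolation is ignored):

SUFFIX = {"s": 1, "m": 60, "h": 3600, "d": 86400, "w": 604800,
          "K": 1024, "M": 1024**2, "G": 1024**3, "T": 1024**4}

def to_number(value: str) -> float:
    # expand suffixes such as "1d", "2w", "4K", "8G" into plain numbers
    if value[-1] in SUFFIX:
        return float(value[:-1]) * SUFFIX[value[-1]]
    return float(value)

# thresholds are sorted in ascending order, as when saved in the widget
thresholds = sorted([("4K", "#00ff00"), ("1d", "#ffff00"), ("2w", "#ff0000")],
                    key=lambda t: to_number(t[0]))

def color_for(value: float):
    chosen = None
    for limit, color in thresholds:
        if value >= to_number(limit):   # the value has reached this threshold
            chosen = color
    return chosen

print(color_for(100000))   # '#ffff00' (past 4K and 1d, but below 2w)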
The information displayed by the honeycomb widget can be downloaded as a .png image using the widget menu:
13 Host availability
Overview
In the host availability widget, high-level statistics about host availability are displayed in colored columns/lines, depending on the
chosen layout.
Host availability in each column/line is counted as follows:
Note:
For Zabbix agent (active checks), the Mixed cell will always be empty since this type of items cannot have multiple interfaces.
Configuration
In addition to the parameters that are common for all widgets, you may set the following specific options:
Host groups Select host groups.
Alternatively, select a compatible widget as the data source for host groups.
This field is auto-complete, so starting to type the name of a group will offer a dropdown of
matching groups.
This parameter is not available when configuring the widget on a template dashboard.
Interface type Select which host interfaces you want to see availability data for.
Availability of all interfaces is displayed by default if nothing is selected.
Layout Select horizontal display (columns) or vertical display (lines).
Include hosts in maintenance Include hosts that are in maintenance in the statistics.
This parameter is labeled Show data in maintenance when configuring the widget on a template dashboard.
Show only totals If checked, the total of hosts, without breakdown by interfaces, is displayed. This option is
disabled if only one interface is selected.
14 Host navigator
Overview
The host navigator widget displays hosts based on various filtering and grouping options.
The widget also allows to control the information displayed in other widgets based on the selected host.
Groups by which hosts are organized can be expanded or collapsed.
For groups, problems, and hosts in maintenance, additional details are accessible by mouseover hints.
Configuration
In addition to the parameters that are common for all widgets, you may set the following specific options:
Host groups Select host groups. Hosts that belong to the selected host groups will be displayed in the widget.
This parameter is not available when configuring the widget on a template dashboard.
Host patterns Enter host patterns or select existing hosts as host patterns. Hosts that match the specified
patterns will be displayed on the host navigator.
This field is auto-complete, so starting to type the name of a host will offer a dropdown of
matching hosts.
If no hosts are selected, the widget will display all hosts.
Wildcard patterns may be used for selection (for example, * will return hosts that match zero or
more characters; Zabbix* will return hosts that start with ”Zabbix”).
To specify a wildcard pattern, enter the string manually and press Enter. When you start typing,
a dropdown list will show matching hosts limited to those belonging to hosts within selected Host
groups, if any.
This parameter is not available when configuring the widget on a template dashboard.
Host status Filter which hosts to display based on their status (any, enabled, disabled).
This parameter is not available when configuring the widget on a template dashboard.
Host tags Specify tags to filter the hosts displayed in the widget.
It is possible to include as well as exclude specific tags and tag values. Several conditions can be
set.
Tag name matching is always case-sensitive.
This parameter is not available when configuring the widget on a template dashboard.
Severity Mark problem severities to filter hosts with problems to be displayed in the widget.
If no severities are marked, all hosts with all problems will be displayed.
Show hosts in maintenance Mark this checkbox to display hosts in maintenance (in this case, a maintenance icon will be shown next to the host name).
This parameter is labeled Show data in maintenance when configuring the widget on a template dashboard.
Show problems Filter which problems to display with the hosts in the widget based on their status (all,
unsuppressed, none).
Group by Add a grouping attribute by which to group the selected hosts:
Host group - group hosts by their host group;
Tag value - enter a tag name to group hosts by the values of this tag (for example, enter ”City”
to group hosts by values ”Riga”, ”Tokyo”, etc.);
Severity - group hosts by their problem severities.
Grouping attributes can be reordered by dragging up or down by the handle before the group
name. Note that grouping attribute order determines group nesting order. For example,
specifying multiple tag names (1: Color, 2: City) will result in hosts being grouped by color (Red,
Blue, etc.) and then by city (Riga, Tokyo, etc.).
A host may be displayed in multiple groups depending on the configured grouping attributes (for
example, when grouping by host group and the host belongs to multiple host groups). Clicking
such hosts selects and highlights them in all groups.
Hosts that do not match the configured grouping attributes are displayed in the Uncategorized
group.
Host limit Enter the maximum number of hosts to be displayed. Possible values range from 1-9999.
When more hosts are available for displaying than the set limit, a corresponding message is
shown below the displayed hosts (for example, ”100 of 100+ hosts are shown”).
Note that the configured host limit also affects the display of configured groups; for example, if
host limit is set to 100 and hosts are grouped by tag values (more than 200), only the first 100
tag values with the corresponding hosts will be displayed in the widget.
This parameter is not affected by the Limit for search and filter results parameter in
Administration → General → GUI.
This parameter is not available when configuring the widget on a template dashboard.
15 Item history
Overview
The item history widget displays the latest data for various item types (numeric, character, log, text, and binary) in a table format.
It can also show progress bars, images for binary data types (useful for browser items), and highlight values (useful for log file
monitoring).
Configuration
In addition to the parameters that are common for all widgets, you may set the following specific options:
Override host Select a compatible widget or the dashboard as the data source for hosts.
This parameter is not available when configuring the widget on a template dashboard.
Advanced configuration Click the Advanced configuration label to display advanced configuration options.
Column configuration
Thresholds Click Add to add a threshold, select a threshold color from the color picker, and specify a numeric
value.
The thresholds list will be sorted in ascending order when saved.
Suffixes (for example, ”1d”, ”2w”, ”4K”, ”8G”) are supported. Value mappings are supported.
History data Select whether to take data from history or trends:
Auto - automatic selection;
History - take history data;
Trends - take trend data.
Highlights Click Add to add a highlight, select a highlight color from the color picker, and specify a regular
expression.
The selected color will be used as the background color for item values where the specified
regular expression matches the text (see the sketch after this table).
Display Select how the item value should be displayed:
As is - as regular text;
HTML - as HTML text;
Single line - as a single line, truncated to a specified length (1-500 characters); clicking the
truncated value will open a hintbox with the full value.
Use monospace font Mark this checkbox to display the item value in monospace font (unmarked by default).
Display local time Mark this checkbox to display local time instead of timestamp in the timestamp column.
Note that the Show timestamp checkbox in advanced configuration must also be marked.
This parameter is available only for log type items.
Show thumbnail Mark this checkbox to display a thumbnail for image binaries or a Show link for non-image
binaries.
Unmark this checkbox to display a Show link for all binary item values.
The Show link, when clicked, opens a pop-up window with the item value (either image or
Base64 string).
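As referenced in the Highlights row above, a minimal sketch of the highlighting rule (illustrative only, not frontend code; the patterns and colors are hypothetical):

import re

# each highlight pairs a regular expression with a background color
highlights = [(re.compile(r"ERROR|FATAL"), "#ff0000"),
              (re.compile(r"WARN"), "#ffa500")]

def highlight_color(value: str):
    for pattern, color in highlights:
        if pattern.search(value):
            return color
    return None   # no highlight

print(highlight_color("2024-07-21 12:00:01 WARN low disk space"))   # '#ffa500'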
Advanced configuration
Advanced configuration options are available in the collapsible Advanced configuration section:
Time period Select the data source for the time period:
Dashboard - set the Time period selector as the data source;
Widget - set a compatible widget specified in the Widget parameter as the data source;
Custom - set the time period specified in the From and To parameters as the data source; if set,
a clock icon will be displayed in the top right corner of the widget, indicating the set time on
mouseover.
Widget Enter or select a compatible widget (Graph, Graph (classic), Graph prototype) as the data source
for the time period.
This parameter is available if Time period is set to ”Widget”.
From Enter or select the start of the time period.
Relative time syntax (now, now/d, now/w-1w, etc.) is supported.
This parameter is available if Time period is set to ”Custom”.
To Enter or select the end of the time period.
Relative time syntax (now, now/d, now/w-1w, etc.) is supported.
This parameter is available if Time period is set to ”Custom”.
16 Item navigator
Overview
The item navigator widget displays items based on various filtering and grouping options.
The widget also allows to control the information displayed in other widgets based on the selected item.
Groups by which items are organized can be expanded or collapsed.
For groups and problems, additional details are accessible by mouseover hints.
Configuration
In addition to the parameters that are common for all widgets, you may set the following specific options:
Host groups Select host groups. Items of hosts that belong to the selected host groups will be displayed in the widget.
This parameter is not available when configuring the widget on a template dashboard.
Hosts Select hosts.
Alternatively, select a compatible widget or the dashboard as the data source for hosts.
This field is auto-complete, so starting to type the name of a host will offer a dropdown of
matching hosts.
If no hosts are selected, the widget will display items belonging to all hosts.
This parameter is not available when configuring the widget on a template dashboard.
Host tags Specify tags to filter the items displayed in the widget.
It is possible to include as well as exclude specific tags and tag values. Several conditions can be
set.
Tag name matching is always case-sensitive.
This parameter is not available when configuring the widget on a template dashboard.
Item patterns Enter item patterns or select existing items as item patterns. Items that match the specified
patterns will be displayed on the item navigator.
Wildcard patterns may be used for selection (for example, * will return items that match zero or
more characters; Zabbix* will return items that start with ”Zabbix”).
To specify a wildcard pattern, enter the string manually and press Enter. When you start typing,
a dropdown list will show matching items limited to those belonging to selected Hosts or hosts
within selected Host groups, if any. The wildcard symbol is always interpreted, therefore, it is not
possible to add, for example, an item named item* individually, if there are other matching items
(e.g., item2, item3).
When configuring the widget on a template dashboard, this parameter allows selecting only
items configured on the template.
Item tags Specify tags to filter the items displayed in the widget.
It is possible to include as well as exclude specific tags and tag values. Several conditions can be
set.
Tag name matching is always case-sensitive.
Group by Add a grouping attribute by which to group items:
Host group - group items by host groups of their hosts;
Host name - group items by their hosts;
Host tag value - enter a tag name to group items by the values of this host tag (for example,
enter ”City” to group items by values ”Riga”, ”Tokyo”, etc.);
Item tag value - enter a tag name to group items by the values of this item tag (for example,
enter ”Component” to group items by values ”CPU”, ”Memory”, etc.).
Grouping attributes can be reordered by dragging up or down by the handle before the group
name. Note that grouping attribute order determines group nesting order. For example,
specifying multiple host tag names (1: Color, 2: City) will result in items being grouped by color
(Red, Blue, etc.) and then by city (Riga, Tokyo, etc.).
An item may be displayed in multiple groups depending on the configured grouping attributes
(for example, when grouping by host group and the item’s host belongs to multiple host groups).
Clicking such items selects and highlights them in all groups.
Items that do not match the configured grouping attributes are displayed in the Uncategorized
group.
Item limit Enter the maximum number of items to be displayed.
When more items are available for displaying than the set limit, a corresponding message is
shown below the displayed items (for example, ”100 of 100+ items are shown”).
Note that the configured item limit also affects the display of configured groups; for example, if
item limit is set to 100 and items are grouped by their hosts (each containing 200 items), only
the first host with its 100 items will be displayed in the widget.
This parameter is not affected by the Limit for search and filter results parameter in
Administration → General → GUI.
17 Item value
Overview
This widget is useful for displaying the value of a single item prominently. This can be the latest value as well as an aggregated
value for some period in the past.
Besides the value itself, additional elements can be displayed, if desired: description, time, units, and the change indicator (see Advanced configuration below).
The widget can display numeric and string values. Displaying binary values is not supported. String values are displayed on a
single line and truncated, if needed. ”No data” is displayed, if there is no value for the item.
The change indicator always compares with the same period in the past. So, for example, the latest value will be compared with
the previous value, while the latest month will be compared with the month before. Note that the previous period for aggregations
is calculated as time period of the same length as the original one with ending time directly before the starting time of the original
period.
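A minimal sketch of how the comparison period is derived, following the rule above (illustrative only, not Zabbix code; the dates are examples):

from datetime import datetime

def previous_period(start: datetime, end: datetime):
    # a period of the same length, ending directly before the original start
    length = end - start
    return start - length, start

start, end = datetime(2024, 7, 1), datetime(2024, 8, 1)   # aggregation over July
print(previous_period(start, end))                        # June 1 .. July 1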
Clicking on the value leads to an ad-hoc graph for numeric items or latest data for string items.
The widget and all elements in it can be visually fine-tuned using advanced configuration options, allowing to create a wide variety
of visual styles:
Configuration
To configure, select Item value as the widget type:
In addition to the parameters that are common for all widgets, you may set the following specific options:
Advanced configuration
Advanced configuration options are available in the collapsible Advanced configuration section, and only for those elements that
are selected in the Show field (see above).
Additionally, advanced configuration allows to change the background color (static or dynamic) for the whole widget.
Description
Description Enter the item description. This description may override the default item name. Multiline
descriptions are supported. A combination of text and supported macros is possible.
{HOST.*}, {ITEM.*}, {INVENTORY.*} and user macros are supported.
Horizontal position Select horizontal position of the item description - left, right or center.
Vertical position Select vertical position of the item description - top, bottom or middle.
Size Enter font size height for the item description (in percent relative to total widget height).
Bold Mark the checkbox to display item description in bold type.
Color Select the item description color from the color picker.
D stands for default color (depends on the frontend theme). To return to the default value, click
the Use default button in the color picker.
Value
Decimal places Select how many decimal places will be displayed with the value. This value will affect only float
items.
Size Enter font size height for the decimal places (in percent relative to total widget height).
Horizontal position Select horizontal position of the item value - left, right or center.
Vertical position Select vertical position of the item value - top, bottom or middle.
Size Enter font size height for the item value (in percent relative to total widget height).
Note that the size of item value is prioritized; other elements have to concede space for the
value. With the change indicator though, if the value is too large, it will be truncated to show the
change indicator.
Bold Mark the checkbox to display item value in bold type.
Color Select the item value color from the color picker.
D stands for default color (depends on the frontend theme). To return to the default value, click
the Use default button in the color picker.
Units
Units Mark the checkbox to display units with the item value. If you enter a unit name, it will override
the unit from item configuration.
Position Select the item unit position - above, below, before or after the value.
Size Enter font size height for the item unit (in percent relative to total widget height).
Bold Mark the checkbox to display item unit in bold type.
Color Select the item unit color from the color picker.
D stands for default color (depends on the frontend theme). To return to the default value, click
the Use default button in the color picker.
Time (clock value from item history)
Horizontal position Select horizontal position of the time - left, right or center.
Vertical position Select vertical position of the time - top, bottom or middle.
Size Enter font size height for the time (in percent relative to total widget height).
Bold Mark the checkbox to display time in bold type.
Color Select the time color from the color picker.
D stands for default color (depends on the frontend theme). To return to the default value, click
the Use default button in the color picker.
Change indicator Select the color of change indicators from the color picker. The change indicators are as follows:
↑ - item value is up (for numeric items)
↓ - item value is down (for numeric items)
↕ - item value has changed (for string items and items with value mapping)
D stands for default color (depends on the frontend theme). To return to the default value, click
the Use default button in the color picker.
Vertical size of the change indicator is equal to the size of the value (integer part of the value for
numeric items).
Note that up and down indicators are not shown with just one value.
Background color Select the background color for the whole widget from the color picker.
D stands for default color (depends on the frontend theme). To return to the default value, click
the Use default button in the color picker.
Thresholds Configure the dynamic background color for the whole widget. Click Add to add a threshold,
select the background color from the color picker, and specify a numeric value. Once the item
value equals or is greater than the threshold value, the background color will change.
The list will be sorted in ascending order when saved.
Note that the dynamic background color will be displayed correctly only for numeric items.
Aggregation function Specify which aggregation function to use:
min - display the smallest value;
max - display the largest value;
avg - display the average value;
count - display the count of values;
sum - display the sum of values;
first - display the first value;
last - display the last value;
not used - display the most recent value (no aggregation).
Aggregation allows to display an aggregated value for the chosen interval (5 minutes, an hour, a
day), instead of the most recent value.
Only numeric data can be displayed for min, max, avg and sum. For count, non-numeric data will
be changed to numeric.
Time period Specify the time period to use for aggregating values:
Dashboard - use time period of the dashboard;
Widget - use time period of the specified widget;
Custom - use a custom time period.
This parameter will not be displayed if Aggregation function is set to ”not used”.
Widget Select the widget.
This parameter will only be displayed if Time period is set to ”Widget”.
From Select the time period from (default value now-1h). See relative time syntax.
This parameter will only be displayed if Time period is set to ”Custom”.
To Select the time period to (default value now). See relative time syntax.
This parameter will only be displayed if Time period is set to ”Custom”.
History data Take data from history or trends:
Auto - automatic selection;
History - take history data;
Trends - take trend data.
This setting applies only to numeric data. Non-numeric data will always be taken from history.
Note that multiple elements cannot occupy the same space; if they are placed in the same space, an error message will be
displayed.
18 Map
Overview
In the map widget, you can display a single configured network map or a map provided by a compatible widget (for example, a map selected in the Map navigation tree widget).
Configuration
In addition to the parameters that are common for all widgets, you may set the following specific options:
Map Set a map to display.
Alternatively, select a compatible widget as the data source for the map to display.
This field is auto-complete, so starting to type the name of the map or widget will offer a
dropdown of matching maps or widgets.
19 Map navigation tree
Overview
This widget allows building a hierarchy of existing maps while also displaying problem statistics with each included map and map
group.
It becomes even more powerful if you link the Map widget to the navigation tree. In this case, clicking on a map name in the
navigation tree displays the map in full in the Map widget.
Statistics with the top-level map in the hierarchy display a sum of problems of all submaps and their own problems.
Configuration
In addition to the parameters that are common for all widgets, you may set the following specific options:
Show unavailable maps Mark this checkbox to display maps that the user does not have read permission to.
Unavailable maps in the navigation tree will be displayed with a grayed-out icon.
Note that if this checkbox is marked, available submaps are displayed even if the parent level
map is unavailable. If unmarked, available submaps to an unavailable parent map will not be
displayed at all.
Problem count is calculated based on available maps and available map elements.
When editing the navigation tree widget, you can:
• drag an element (including its child elements) to a new place in the list;
• expand or collapse an element to display or hide its child elements;
• add a child element (with or without a linked map) to an element;
• add multiple child elements (with linked maps) to an element;
• edit an element;
• remove an element (including its child elements).
Element configuration
To configure a navigation tree element, either add a new element or edit an existing element.
Name Enter the navigation tree element name.
Linked map Select the map to link to the navigation tree element.
This field is auto-complete, so starting to type the name of a map will offer a dropdown of
matching maps.
Add submaps Mark this checkbox to add the submaps of the linked map as child elements to the navigation
tree element.
20 Pie chart
Overview
The pie chart widget allows to display values of selected items as a pie or doughnut chart.
Pie chart.
Doughnut chart.
On mouseover, the focused sector enlarges outwards and the legend for this sector is displayed. Clicking on the focused sector
makes the pop-out effect permanent, until closed with ”x”.
Configuration
Data set
The Data set tab allows selecting data for the pie chart by adding data sets. Two types of data sets can be added:
• Item patterns - data from matching items is displayed. The chart is drawn using different shades of a single color for each item.
• Item list - data from selected items is displayed. The chart is drawn using different colors for each item.
Data set For Item patterns data set:
Select or enter host and item patterns; data of items that match the entered patterns will be
displayed on the pie chart; up to 50 items may be displayed.
Wildcard patterns may be used for selection (for example, * will return results that match
zero or more characters).
To specify a wildcard pattern, enter the string manually and press Enter.
The wildcard symbol is always interpreted, so it is not possible to add, for example, an item
named item* individually if there are other matching items (for example, item2, item3).
Specifying host and item patterns is mandatory for ”Item patterns” data sets.
See also: Data set configuration details.
When configuring the widget on a template dashboard, the parameter for specifying host
patterns is not available, and the parameter for specifying an item list allows to select only
the items configured on the template.
Aggregation function Specify which aggregation function to use for each item in the data set:
min - display the smallest value;
max - display the largest value;
avg - display the average value;
sum - display the sum of values;
count - display the count of values;
first - display the first value;
last - display the last value (default).
Aggregation allows to display an aggregated value for the interval (5 minutes, an hour, a
day) selected in the Time period tab or used for the whole dashboard.
Data set aggregation Specify which aggregation function to use for the whole data set:
not used - no aggregation, items are displayed separately (default);
min - display the smallest value;
max - display the largest value;
avg - display the average value;
sum - display the sum of values;
count - display the count of values.
Aggregation allows to display an aggregated value for the interval (5 minutes, an hour, a
day) selected in the Time period tab or used for the whole dashboard.
Data set label Specify a custom label for the data set.
The label is displayed in the data set configuration and the pie chart legend (for aggregated
data sets).
All data sets are numbered including those with a specified Data set label. If no label is
specified, the data set will be labeled automatically according to its number (e.g. ”Data set
#2”, ”Data set #3”, etc.). Data set numbering is recalculated after reordering/dragging data
sets.
Data set labels that are too long will be shortened to fit where displayed (e.g. ”Number of
proc...”).
Existing data sets are displayed in a list. You can rearrange, expand/collapse, change colors, and clone these data sets.
For more information, see data set configuration details in the Graph widget. These details also apply to the Pie chart widget.
Displaying options
The Displaying options tab allows to define history data selection and visualization options for the pie chart:
Time period
The Time period tab allows to set a custom time period for the aggregation settings of the pie chart:
Time period Select the data source for the time period:
Dashboard - set the Time period selector as the data source;
Widget - set a compatible widget specified in the Widget parameter as the data source;
Custom - set the time period specified in the From and To parameters as the data source; if set,
a clock icon will be displayed in the top right corner of the widget, indicating the set time on
mouseover.
Widget Enter or select a compatible widget (Graph, Graph (classic), Graph prototype) as the data source
for the time period.
This parameter is available if Time period is set to ”Widget”.
From Enter or select the start of the time period.
Relative time syntax (now, now/d, now/w-1w, etc.) is supported.
This parameter is available if Time period is set to ”Custom”.
To Enter or select the end of the time period.
Relative time syntax (now, now/d, now/w-1w, etc.) is supported.
This parameter is available if Time period is set to ”Custom”.
Legend
Show legend Unmark this checkbox to hide the legend on the pie chart (marked by default).
Show value Mark this checkbox to show the value of the item in the legend.
Show aggregation function Mark this checkbox to show the aggregation function in the legend.
Rows Select the display mode for legend rows:
Fixed - the number of rows displayed is determined by the Number of rows parameter value;
Variable - the number of rows displayed is determined by the amount of configured items while
not exceeding the Maximum number of rows parameter value.
Number of rows/Maximum number of rows If Rows is set to ”Fixed”, set the number of legend rows to be displayed (1-10).
If Rows is set to ”Variable”, set the maximum number of legend rows to be displayed (1-10).
Number of columns Set the number of legend columns to be displayed (1-4).
This parameter is available if Show value is unmarked.
The information displayed by the pie chart widget can be downloaded as a .png image using the widget menu.
21 Problem hosts
Overview
In the problem hosts widget, you can display the problem count by host group and the highest problem severity within a group.
Configuration
In addition to the parameters that are common for all widgets, you may set the following specific options:
Hosts Select hosts to display in the widget.
Alternatively, select a compatible widget or the dashboard as the data source for hosts.
This field is auto-complete, so starting to type the name of a host will offer a dropdown of
matching hosts.
If no hosts are entered, all hosts will be displayed.
This parameter is not available when configuring the widget on a template dashboard.
Problem You can limit the number of problem hosts displayed by the problem name.
If you enter a string here, only those hosts with problems whose name contains the entered
string will be displayed.
Macros are not expanded.
Severity Mark problem severities to filter problems to be displayed in the widget.
If no severities are marked, all problems will be displayed.
Problem tags Specify problem tags to limit the number of problems displayed in the widget.
It is possible to include as well as exclude specific tags and tag values. Several conditions can be
set. Tag name matching is always case-sensitive.
22 Problems
Overview
In this widget you can display current problems. The information in this widget is similar to Monitoring → Problems.
Configuration
You can limit how many problems are displayed in the widget in various ways - by problem status, problem name, severity, host
group, host, event tag, acknowledgment status, etc.
In addition to the parameters that are common for all widgets, you may set the following specific options:
Exclude host groups Select host groups whose problems should be hidden from the widget.
This field is auto-complete, so starting to type the name of a group will offer a dropdown of
matching groups.
Specifying a parent host group implicitly selects all nested host groups.
Problems from these host groups will not be displayed in the widget. For example, hosts 001,
002, 003 may be in Group A and hosts 002, 003 in Group B as well. If we select to show Group A
and exclude Group B at the same time, only problems from host 001 will be displayed in the
widget.
This parameter is not available when configuring the widget on a template dashboard.
Hosts Select hosts to display problems of in the widget.
Alternatively, select a compatible widget or the dashboard as the data source for hosts.
This field is auto-complete, so starting to type the name of a host will offer a dropdown of
matching hosts.
If no hosts are entered, problems of all hosts will be displayed.
This parameter is not available when configuring the widget on a template dashboard.
Problem You can limit the number of problems displayed by their name.
If you enter a string here, only those problems whose name contains the entered string will be
displayed.
Macros are not expanded.
Severity Mark problem severities to filter problems to be displayed in the widget.
If no severities are marked, all problems will be displayed.
Problem tags Specify problem tags to limit the number of problems displayed in the widget.
It is possible to include as well as exclude specific tags and tag values. Several conditions can be
set. Tag name matching is always case-sensitive.
When filtered, the tags specified here will be displayed first with the problem, unless overridden
by the Tag display priority (see below) list.
Show tags Select the number of displayed tags:
None - no Tags column;
1 - Tags column contains one tag;
2 - Tags column contains two tags;
3 - Tags column contains three tags.
To see all tags for the problem roll your mouse over the three dots icon.
Tag name Select tag name display mode:
Full - tag names and values are displayed in full;
Shortened - tag names are shortened to 3 symbols, but tag values are displayed in full;
None - only tag values are displayed; no names.
Tag display priority Enter tag display priority for a problem, as a comma-separated list of tags.
Only tag names should be used, no values.
Example: Services,Applications,Application
The tags of this list will always be displayed first, overriding the natural ordering by alphabet.
Show operational data Select the mode for displaying operational data:
None - no operational data is displayed;
Separately - operational data is displayed in a separate column;
With problem name - append operational data to the problem name, using parentheses for the
operational data.
Show symptoms Mark the checkbox to display problems classified as symptoms, each in its own line.
Show suppressed problems Mark the checkbox to display problems that would otherwise be suppressed (not shown) because of host maintenance or single problem suppression.
Acknowledgement status Filter to display all problems, unacknowledged problems only, or acknowledged problems only.
Mark the additional checkbox to filter out those problems ever acknowledged by you.
Sort entries by Sort entries by:
Time (descending or ascending);
Severity (descending or ascending);
Problem name (descending or ascending);
Host (descending or ascending).
Sorting entries by Host (descending or ascending) is not available when configuring the widget
on a template dashboard.
Show timeline Mark the checkbox to display a visual timeline.
Show lines Specify the number of problem lines to display.
The problem event popup includes the list of problem events for this trigger and, if defined, the trigger description and a clickable
URL.
• Roll a mouse over the problem duration in the Duration column of the Problems widget. The popup disappears once you
remove the mouse from the duration.
• Click on the duration in the Duration column of the Problems widget. The popup disappears only if you click on the duration
again.
23 Problems by severity
Overview
In this widget, you can display the problem count by severity. You can limit what hosts and triggers are displayed in the widget
and define how the problem count is displayed.
The problem count is displayed only for cause problems.
Configuration
In addition to the parameters that are common for all widgets, you may set the following specific options:
Hosts Select hosts to display in the widget.
Alternatively, select a compatible widget or the dashboard as the data source for hosts.
This field is auto-complete, so starting to type the name of a host will offer a dropdown of
matching hosts.
If no hosts are entered, all hosts will be displayed.
This parameter is not available when configuring the widget on a template dashboard.
Problem You can limit the number of problem hosts displayed by the problem name.
If you enter a string here, only those hosts with problems whose name contains the entered
string will be displayed.
Macros are not expanded.
Severity Mark problem severities to filter problems to be displayed in the widget.
If no severities are marked, all problems will be displayed.
Problem tags Specify problem tags to limit the number of problems displayed in the widget.
It is possible to include as well as exclude specific tags and tag values. Several conditions can be
set. Tag name matching is always case-sensitive.
24 SLA report
Overview
This widget is useful for displaying SLA reports. Functionally it is similar to the Services → SLA report section.
Configuration
To configure, select SLA report as type:
In addition to the parameters that are common for all widgets, you may set the following specific options:
25 System information
Overview
This widget displays the same information as in Reports → System information, however, a single dashboard widget can only display
either the system stats or the high availability nodes at a time (not both).
Configuration
In addition to the parameters that are common for all widgets, you may set the following specific options:
26 Top hosts
Overview
This widget provides a way to create custom tables for displaying the data situation, allowing to display Top N-like reports and
progress-bar reports useful for capacity planning.
Configuration
In addition to the parameters that are common for all widgets, you may set the following specific options:
This parameter is not available when configuring the widget on a template dashboard.
Show hosts in maintenance Mark this checkbox for hosts in maintenance to be displayed as well (in this case, a maintenance icon will be shown next to the host name). Unmarked by default.
Columns Add data columns to display.
The column order determines their display from left to right.
Columns can be reordered by dragging up and down by the handle before the column name.
Order by Specify the column from the defined Columns list to use for Top N or Bottom N ordering.
Order Specify the ordering of rows:
Top N - in descending order according to the Order by aggregated value;
Bottom N - in ascending order according to the Order by aggregated value.
Host limit Number of host rows to be shown (1-100).
This parameter is not available when configuring the widget on a template dashboard.
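To illustrate the Order by and Order parameters above, a minimal sketch (illustrative only, not frontend code; the host names and aggregated values are hypothetical) where rows are sorted by the aggregated value of the chosen column:

rows = [
    {"host": "web-01", "cpu_avg": 71.0},
    {"host": "web-02", "cpu_avg": 12.5},
    {"host": "db-01", "cpu_avg": 55.3},
]

host_limit = 2
top_n = sorted(rows, key=lambda r: r["cpu_avg"], reverse=True)[:host_limit]   # Top N
bottom_n = sorted(rows, key=lambda r: r["cpu_avg"])[:host_limit]              # Bottom N

print([r["host"] for r in top_n])      # ['web-01', 'db-01']
print([r["host"] for r in bottom_n])   # ['web-02', 'db-01']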
Column configuration
Specific parameters for item value columns:
Display Select how the item value should be displayed: as is, as a bar, or as indicators.
Note that only numeric items can be displayed in this column if this setting is not ”as is”.
Min Minimum value for bar/indicators.
Max Maximum value for bar/indicators.
Thresholds Specify threshold values when the background/fill color should change.
The list will be sorted in ascending order when saved.
Note that only numeric items can be displayed in this column if thresholds are used.
Decimal places Specify how many decimal places will be displayed with the value.
This setting applies only to numeric data.
Aggregation function Specify which aggregation function to use:
min - display the smallest value;
max - display the largest value;
avg - display the average value;
count - display the count of values;
sum - display the sum of values;
first - display the first value;
last - display the last value;
not used - display the most recent value (no aggregation).
Aggregation allows to display an aggregated value for the chosen interval (5 minutes, an hour, a
day), instead of the most recent value.
Only numeric data can be displayed for min, max, avg and sum. For count, non-numeric data will
be changed to numeric.
Time period Specify the time period to use for aggregating values:
Dashboard - use time period of the dashboard;
Widget - use time period of the specified widget;
Custom - use a custom time period.
This parameter will not be displayed if Aggregation function is set to ”not used”.
Widget Select the widget.
This parameter will only be displayed if Time period is set to ”Widget”.
From Select the time period from (default value now-1h). See relative time syntax.
This parameter will only be displayed if Time period is set to ”Custom”.
To Select the time period to (default value now). See relative time syntax.
This parameter will only be displayed if Time period is set to ”Custom”.
History data Take data from history or trends:
Auto - automatic selection;
History - take history data;
Trends - take trend data.
This setting applies only to numeric data. Non-numeric data will always be taken from history.
27 Top triggers
Overview
In the Top triggers widget, you can see the triggers with the highest number of problems.
The maximum number of triggers that can be shown is 100. When viewing the widget on a dashboard, it is possible to select the
time period for displaying the data.
The information on top triggers is also available in the Reports → Top 100 triggers menu section.
Configuration
In addition to the parameters that are common for all widgets, you may set the following specific options:
Problem You can view the triggers for particular problems only. For this, enter the string to be matched in
the problem name.
Macros are not expanded.
Severity Mark trigger severities to filter triggers to be displayed in the widget.
If no severities are marked, all triggers will be displayed.
Problem tags Specify the tags of the problems to be displayed in the widget.
It is possible to include as well as exclude specific tags and tag values. Several conditions can be
set. Tag name matching is always case-sensitive.
28 Trigger overview
Overview
In the trigger overview widget, you can display the trigger states for a group of hosts.
• The trigger states are displayed as colored blocks (the color of the blocks for PROBLEM triggers depends on the problem
severity color, which can be adjusted in the problem update screen). Note that recent trigger state changes (within the last
2 minutes) will be displayed as blinking blocks.
• Gray up and down arrows indicate triggers that have dependencies. On mouseover, dependency details are revealed.
• A checkbox icon indicates acknowledged problems. All problems or resolved problems of the trigger must be acknowledged
for this icon to be displayed.
Clicking on a trigger block provides context-dependent links to problem events of the trigger, the problem acknowledgment screen,
trigger configuration, trigger URL or a simple graph/latest values list.
Note that 50 records are displayed by default (configurable in Administration → General → GUI, using the Max number of columns
and rows in overview tables option). If more records exist than are configured to display, a message is displayed at the bottom of
the table, asking to provide more specific filtering criteria. There is no pagination. Note that this limit is applied first, before any
further filtering of data, for example, by tags.
Configuration
To configure, select Trigger overview as type:
In addition to the parameters that are common for all widgets, you may set the following specific options:
Problem tags Specify tags to filter the triggers displayed in the widget.
It is possible to include as well as exclude specific tags and tag values. Several conditions can be
set. Tag name matching is always case-sensitive.
Note: If the parameter Show is set to ’Any’, all triggers will be displayed even if tags are
specified. However, while recent trigger state changes (displayed as blinking blocks) will update
for all triggers, the trigger state details (problem severity color and whether the problem is
acknowledged) will only update for triggers that match the specified tags.
29 URL
Overview
This widget displays the content retrieved from the specified URL.
Configuration
In addition to the parameters that are common for all widgets, you may set the following specific options:
Override host Select a compatible widget or the dashboard as the data source for hosts.
This parameter is not available when configuring the widget on a template dashboard.
Attention:
Browsers might not load an HTTP page configured in the widget if Zabbix frontend is accessed over HTTPS.
30 Web monitoring
Overview
This widget displays a status summary of the active web monitoring scenarios. See the Web monitoring widget section for detailed
information.
Configuration
Note:
In cases when a user does not have permission to access certain widget elements, that element’s name will appear as
Inaccessible during the widget’s configuration. This results in Inaccessible Item, Inaccessible Host, Inaccessible Group,
Inaccessible Map, and Inaccessible Graph appearing instead of the ”real” name of the element.
In addition to the parameters that are common for all widgets, you may set the following specific options:
Exclude host groups Select host groups to hide from the widget.
This field is auto-complete, so starting to type the name of a group will offer a dropdown of
matching groups.
Specifying a parent host group implicitly selects all nested host groups.
Host data from these host groups will not be displayed in the widget. For example, hosts 001,
002, 003 may be in Group A and hosts 002, 003 in Group B as well. If we select to show Group A
and exclude Group B at the same time, only data from host 001 will be displayed in the
dashboard.
This parameter is not available when configuring the widget on a template dashboard.
Hosts Select hosts to display in the widget.
Alternatively, select a compatible widget or the dashboard as the data source for hosts.
This field is auto-complete, so starting to type the name of a host will offer a dropdown of
matching hosts.
If no hosts are entered, all hosts will be displayed.
This parameter is not available when configuring the widget on a template dashboard.
Scenario tags Specify tags to limit the number of web scenarios displayed in the widget.
It is possible to include as well as exclude specific tags and tag values. Several conditions can be
set. Tag name matching is always case-sensitive.
Once you have completed the configuration, you might like to see the widget with the data it displays. To do so, go to Dashboards and click on the name of the dashboard where you created the widget.
In this example, you can see the widget named ”Zabbix frontend” displaying the status of the web monitoring for three host groups:
”Internal network,” ”Linux servers,” and ”Web servers.”
• Ok - displays the number of web scenarios (in green) for which two conditions are met:
– Zabbix has collected the latest data for the web scenario(s);
– all steps configured in the web scenario are in ”Ok” status.
• Failed - displays the number of web scenarios (in red) that have some failed steps:
– clicking on the host name opens a new window, where the Status column provides detailed information (in red) on the step at which Zabbix failed to collect the data;
– it also gives a hint about the parameter that has to be corrected in the configuration form.
• Unknown - displays the number of web scenarios (in grey) for which Zabbix has neither collected data nor any information about failed steps.
Clickable links in the widget allow you to easily navigate and quickly acquire full information on each web scenario.
After identifying the failed step, you can return to the web scenario configuration form and correct your settings.
To view the details in the case of Unknown status, you can repeat the same steps as explained for Failed.
Attention:
When monitoring starts, a web scenario is always displayed in the Unknown state, which switches to Failed or Ok right after the first check. If a host is monitored by a proxy, the status change occurs in accordance with the data collection frequency configured on the proxy.
2 Monitoring
Overview
The Monitoring menu is all about displaying data. Whatever information Zabbix is configured to gather, visualize and act upon, it
will be displayed in the various sections of the Monitoring menu.
The following buttons located in the top right corner are common for every section:
Display page in kiosk mode. In this mode only page content is displayed.
To exit kiosk mode, move the mouse cursor until the exit button appears and click on it.
You will be taken back to normal mode.
1 Problems
Overview
In Monitoring → Problems you can see what problems you currently have. Problems are those triggers that are in the ”Problem”
state.
By default all new problems are classified as cause problems. It is possible to manually reclassify certain problems as symptom problems of a cause problem. For more details, see cause and symptom events.
Column Description
The following icon is displayed if a suppressed problem is being shown (see Show suppressed
problems option in the filter). Rolling a mouse over the icon will display more details:
Hovering on the icon after the problem name will bring up the trigger description (for those
problems that have it).
Operational data Operational data are displayed containing latest item values.
Operational data can be a combination of text and item value macros if configured on a trigger
level. If no operational data is configured on a trigger level, the latest values of all items from the
expression are displayed.
This column is only displayed if Separately is selected for Show operational data in the filter.
Duration Problem duration is displayed.
See also: Negative problem duration
Update Click on the Update link to go to the problem update screen where various actions can be taken
on the problem, including commenting and acknowledging the problem.
Actions History of activities about the problem is displayed using symbolic icons:
- problem severity has been changed, but returned to the original level (e.g. Warning →
Information → Warning)
- actions have been taken, at least one is in progress. The number of actions is also
displayed.
- actions have been taken, at least one has failed. The number of actions is also displayed.
When rolling the mouse over the icons, popups with details about the activity are displayed. See
viewing details to learn more about icons used in the popup for actions taken.
Tags Tags are displayed (if any).
In addition, tags from an external ticketing system may also be displayed (see the Process tags
option when configuring webhooks).
It is possible to display operational data for current problems, i.e. the latest item values as opposed to the item values at the time
of the problem.
Operational data display can be configured in the filter of Monitoring → Problems or in the configuration of the respective dashboard
widget, by selecting one of the three options:
• None - no operational data is displayed;
• Separately - operational data is displayed in a separate column;
• With problem name - operational data is appended to the problem name, in parentheses. Operational data are appended to the problem name only if the Operational data field is non-empty in the trigger configuration.
The content of operational data can be configured with each trigger, in the Operational data field. This field accepts an arbitrary
string with macros, most importantly, the {ITEM.LASTVALUE<1-9>} macro.
{ITEM.LASTVALUE<1-9>} in this field will always resolve to the latest values of items in the trigger expression. {ITEM.VALUE<1-9>}
in this field will resolve to the item values at the moment of trigger status change (i.e. change into problem, change into OK, being
closed manually by a user or being closed by correlation).
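For illustration only (the label text is made up, while the macros are standard), a trigger's Operational data field could contain a string such as:
Available memory: {ITEM.LASTVALUE1}, CPU load: {ITEM.LASTVALUE2}
which always resolves to the current values of the first and second item of the trigger expression.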
In some common situations it is possible to have a negative problem duration, i.e. when the problem resolution time is earlier than the problem creation time, e.g.:
• If some host is monitored by proxy and a network error happens, leading to no data received from the proxy for a while,
the nodata(/host/key) trigger will be fired by the server. When the connection is restored, the server will receive item data
from the proxy having a time from the past. Then, the nodata(/host/key) problem will be resolved and it will have a negative
problem duration;
• When item data that resolve the problem event are sent by Zabbix sender and contain a timestamp earlier than the problem
creation time, a negative problem duration will also be displayed.
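As a rough sketch of these two cases (host name, item keys and values are placeholders), the first scenario typically involves an availability trigger with an expression like:
nodata(/Example host/agent.ping,5m)=1
while the second can be reproduced by submitting timestamped values with the sender utility, for example:
zabbix_sender -z 127.0.0.1 -s "Example host" -T -i values.txt
where values.txt contains lines in the "<host> <key> <timestamp> <value>" format, allowing values with past timestamps to be sent.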
Note:
Negative problem duration does not affect SLA calculation or the Availability report of a particular trigger in any way; it neither reduces nor expands problem time.
• Mass update - update the selected problems by navigating to the problem update screen
To use this option, mark the checkboxes before the respective problems, then click on the Mass update button.
Buttons
View mode buttons, being common for all sections, are described on the Monitoring page.
Using filter
You can use the filter to display only the problems you are interested in. For better search performance, data is searched with
macros unresolved.
The filter is located above the table. Favorite filter settings can be saved as tabs and then quickly accessed by clicking on the tabs
above the filter.
Tabs for favorite filters
To save a new set of filter parameters, open the main tab, and configure the filter settings, then press the Save as button. In a
new popup window, define Filter properties.
When saved, the filter is created as a named filter tab and immediately activated.
To edit the filter properties of an existing filter, press the gear symbol next to the active tab name.
Notes:
• To hide the filter area, click on the name of the current tab. Click on the active tab name again to open the filter area again.
• Keyboard navigation is supported: use arrows to switch between tabs, press Enter to open.
• The left/right buttons above the filter may be used to switch between saved filters. Alternatively, the downward pointing
button opens a drop-down menu with all saved filters and you can click on the one you need.
• Filter tabs can be re-arranged by dragging and dropping.
• If the settings of a saved filter have been changed (but not saved), a green dot is displayed after the filter name. To update
the filter according to the new settings, click on the Update button, which is displayed instead of the Save as button.
• Current filter settings are remembered in the user profile. When the user opens the page again, the filter settings will have
stayed the same.
Note:
To share filters, copy and send to others a URL of an active filter. After opening this URL, other users will be able to save
this set of parameters as a permanent filter in their Zabbix account.
See also: Page parameters.
Filter buttons
Apply specified filtering criteria (without saving).
Reset current filter and return to saved parameters of the current tab. On the main tab, this will
clear the filter.
Save current filter parameters in a new tab. Only available on the main tab.
Replace tab parameters with currently specified parameters. Not available on the main tab.
Viewing details
The times for problem start and recovery in Monitoring → Problems are links. Clicking on them opens more details of the event.
Note that the problem severity may differ for the trigger and the problem event - if it has been updated for the problem event
using the Update problem screen.
In the action list, the following icons are used to denote the activity type:
• - problem severity has been changed, but returned to the original level (e.g. Warning → Information → Warning)
• - the problem has been closed manually
Overview
By default all new problems are classified as cause problems. It is possible to manually reclassify certain problems as symptom
problems of the cause problem.
For example, power outage may be the actual root cause why some host is unreachable or some service is down. In this case,
”host is unreachable” and ”service is down” problems must be classified as symptom problems of ”power outage” - the cause
problem.
The cause-symptom hierarchy supports only two levels. A problem that is already a symptom cannot be assigned ”subordinate”
symptom problems; any problems assigned as symptoms to a symptom problem will become symptoms of the same cause problem.
Only cause problems are counted in problem totals in maps, dashboard widgets such as Problems by severity or Problem hosts,
etc. However, problem ranking does not affect services.
A symptom problem can be linked to only one cause problem. Symptom problems are not automatically resolved if the cause problem is resolved or closed.
Configuration
To reclassify a problem as a symptom problem, first select it in the list of problems. One or several problems can be selected.
Then go to the cause problem, and in its context menu click on the Mark selected as symptoms option.
After that, the selected problems will be updated by the server to symptom problems of the cause problem.
While the status of the problem is being updated, it is displayed in one of two ways:
• A blinking icon in the Info column (this is in effect if Problems only is selected in the filter and thus the Status column is not shown).
Display
Symptom problems are displayed below the cause problem and marked accordingly in Monitoring -> Problems (and the Problems
dashboard widget) - with an icon, smaller font and different background.
In collapsed view, only the cause problem is seen; the existence of symptom problems is indicated by the number in the beginning
of the line and the icon for expanding the view.
It is also possible to additionally display symptom problems in normal font and in their own line. For that select Show symptoms
in the filter settings or the widget configuration.
To reclassify a symptom problem back to a cause problem, use one of the following options:
• click on the Mark as cause option in the context menu of the symptom problem;
• mark the Convert to cause option in the problem update screen and click on Update (this option will also work if several problems are selected).
2 Hosts
Overview
The Monitoring → Hosts section displays a full list of monitored hosts with detailed information about host interface, availability,
tags, current problems, status (enabled/disabled), and links to easily navigate to the host’s latest data, problem history, graphs,
dashboards and web scenarios.
Column Description
Name The visible host name. Clicking on the name brings up the host menu.
An orange wrench icon after the name indicates that this host is in maintenance.
Click on the column header to sort hosts by name in ascending or descending order.
Interface The main interface of the host is displayed.
Buttons
Create host allows to create a new host. This button is available for Admin and Super Admin users only.
View mode buttons being common for all sections are described on the Monitoring page.
Using filter
You can use the filter to display only the hosts you are interested in. For better search performance, data is searched with macros
unresolved.
The filter is located above the table. It is possible to filter hosts by name, host group, IP or DNS, interface port, tags, problem
severity, status (enabled/disabled/any); you can also select whether to display suppressed problems and hosts that are currently
in maintenance.
Saving filter
Favorite filter settings can be saved as tabs and then quickly accessed by clicking on the respective tab above the filter.
1 Graphs
Overview
Host graphs can be accessed from Monitoring → Hosts by clicking on Graphs for the respective host.
Any custom graph that has been configured for the host can be displayed, as well as any simple graph.
Graphs are sorted by:
Take note of the time period selector above the graph. It allows selecting often required periods with one mouse click.
Using filter
To view a specific graph, select it in the filter. The filter allows to specify the host, the graph name and the Show option (all/host
graphs/simple graphs).
Using subfilter
The subfilter is useful for a quick one-click access to related graphs. The subfilter operates autonomously from the main filter -
results are filtered immediately, no need to click on Apply in the main filter.
Note that the subfilter only allows to further modify the filtering from the main filter.
Unlike the main filter, the subfilter is updated together with each table refresh request to always get up-to-date information of
available filtering options and their counter numbers.
The subfilter shows clickable links allowing to filter graphs based on a common entity - the tag name or tag value. As soon as the
entity is clicked, graphs are immediately filtered; the selected entity is highlighted with gray background. To remove the filtering,
click on the entity again. To add another entity to the filtered results, click on another entity.
The number of entities displayed is limited to 100 horizontally. If there are more, a three-dot icon is displayed at the end; it is not
clickable. Vertical lists (such as tags with their values) are limited to 20 entries. If there are more, a three-dot icon is displayed; it
is not clickable.
A number next to each clickable entity indicates the number of graphs it has in the results of the main filter.
Once one entity is selected, the numbers with other available entities are displayed with a plus sign indicating how many graphs
may be added to the current selection.
Buttons
View mode buttons, being common for all sections, are described on the Monitoring page.
2 Host dashboards
Overview
Host dashboards look similar to global dashboards; however, host dashboards lack an owner and display data only for the selected
host.
When viewing host dashboards, you can switch between the configured dashboards by clicking:
• the arrow button under the header, which will display the full list of host dashboards available.
To switch to the Monitoring → Hosts section, click the All hosts navigation link under the header in the upper left corner.
Configuration
Host dashboards are configured at the template level. Once a template is linked to a host, host dashboards are generated for that
host. Note that host dashboards cannot be configured in the Dashboards section, which is reserved for global dashboards.
Widgets of host dashboards can also be configured only at the template level, except for changing the refresh interval. Moreover,
widgets of host dashboards can only be copied to other host dashboards within the same template. Note that widgets from global
dashboards cannot be copied to host dashboards.
Note:
Host dashboards used to be host screens before Zabbix 5.2. When importing an older template that contains screens, the
screen import will be ignored.
Access
• after searching for a host name in global search (click the Dashboards link provided in the search results);
• after clicking a host name in Inventory → Hosts (click the Dashboards link provided in the host overview);
• from the host menu by clicking Dashboards.
Note that host dashboards cannot be directly accessed in the Dashboards section, which is reserved for global dashboards.
3 Web scenarios
Overview
Host web scenario information can be accessed from Monitoring → Hosts by clicking on Web for the respective host.
Clicking on the host name brings up the host menu. Data of disabled hosts is also accessible. The name of a disabled host is listed
in red.
The maximum number of scenarios displayed per page depends on the Rows per page user profile setting.
By default, only values that fall within the last 24 hours are displayed. This limit has been introduced with the aim of improving
initial loading times for large pages of latest data. You can extend this time period by changing the value of Max history display
period parameter in the Administration → General → GUI menu section.
Using filter
The page shows a list of all web scenarios of the selected host. To view web scenarios for another host or host group without
returning to the Monitoring → Hosts page, select that host or group in the filter. You may also filter scenarios based on tags.
Buttons
View mode buttons being common for all sections are described on the Monitoring page.
3 Latest data
Overview
The Monitoring → Latest data section displays the latest values gathered by items.
• Filter
• Subfilter
• Item list
Note:
The subfilter and item list are displayed only if the filter is set and there are results to display.
Column Description
If a host is in maintenance, an orange wrench icon is displayed after the host’s name.
If a host is disabled, the name of the host is displayed in red. Note that data of disabled hosts
(including graphs and item value lists) is accessible in the Latest data section.
Name Name of the item.
Clicking on the name brings up the item context menu.
A question mark icon is displayed next to the item name for all items that have a
description. Hover over the icon to display a tooltip with the item description.
Last check Time since the last item check.
Last value Most recent value for the item.
Values are displayed with unit conversion and value mapping applied. Hover over the value to
display raw data.
By default, only values received in the last 24 hours are displayed. This limit improves initial
loading times for large pages of latest data; to extend it, update the value of the Max history
display period parameter in Administration → General → GUI.
Change Difference between the previous value and the most recent value.
For items with an update frequency of 1 day or more, the change amount will never be displayed
(with the default setting). In this case, the last value will not be displayed at all if it was received
more than 24 hours ago.
Tags Tags associated with the item.
Tags in the item list are clickable. Clicking a tag enables it in the subfilter, making the item list
display only items containing this tag (and any other tags previously selected in the subfilter).
Note that once items have been filtered this way, tags in the item list are no longer clickable.
Further modification based on tags (for example, to remove tags or specify other filters) must be
done in the subfilter.
Graph/History Link to simple graph/history of item values.
Info Additional information about the item.
If an item has errors (for example, has become unsupported), an information icon is
displayed. Hover over the icon for details.
Buttons
View mode buttons being common for all sections are described on the Monitoring page.
Mass actions
Buttons below the list offer mass actions with one or several selected items:
• Display stacked graph - display a stacked ad-hoc graph.
• Display graph - display a simple ad-hoc graph.
• Execute now - execute a check for new item values immediately. Supported for passive checks only (see more details). This
option is available only for hosts with read-write access. Accessing this option for hosts with read-only permissions depends
on the user role option called Invoke ”Execute now” on read-only hosts.
To use these options, mark the checkboxes before the respective items, then click on the required button.
Using filter
You can use the filter to display only the items you are interested in. For better search performance, data is searched with macros
unresolved.
The filter icon is located above the item list and the subfilter. Click it to expand the filter.
The filter allows to narrow the item list by host group, host, item name, tag, state and other settings. Specifying a parent host
group in the filter implicitly selects all nested host groups. See Monitoring → Problems for details on filtering by tags.
The Show details filter option allows to extend the information displayed for the items. Mark it to display such details as the item
refresh interval, history and trends settings, item type, and item errors (fine/unsupported).
Saving filter
Favorite filter settings can be saved as tabs and then quickly accessed by clicking on the respective tab above the filter.
Using subfilter
The subfilter allows to further modify the filtering from the main filter. It is useful for a quick one-click access to filter groups of
related items.
Unlike the main filter, the subfilter is updated with each table refresh request to always have up-to-date information of available
filtering options and their counter numbers.
The subfilter shows clickable links, allowing to filter items based on a common entity group - host, tag name or value, item state
or data status. When an entity is clicked, the entity is highlighted with a gray background, and items are immediately filtered (no
need to click Apply in the main filter). Clicking another entity adds it to the filtered results. Clicking the entity again removes the
filtering.
For each entity group (hosts, tags, tag values, etc.), up to 10 rows of entities are displayed. If there are more entities, this list
can be expanded to display a maximum of 1000 entries (the value of SUBFILTER_VALUES_PER_GROUP in frontend definitions) by
clicking the three-dot icon at the end of the list. For Tag values, the list can be expanded to display a maximum of 200 tag
names with their corresponding values. Note that once fully expanded, the list cannot be collapsed.
A number next to each clickable entity indicates the number of items grouped in it (based on the results of the main filter). When
an entity is clicked, the numbers with other available entities are displayed with a plus sign indicating how many items may be
added to the current selection. Entities without items are not displayed unless selected in the subfilter before.
The Graph/History column in the item list offers the following links:
• History - for all textual items, leading to listings (Values/500 latest values) displaying the history of previous item values.
• Graph - for all numeric items, leading to a simple graph. Note that when the graph is displayed, a dropdown on the upper
right offers a possibility to switch to Values/500 latest values as well.
The values displayed in this list are raw, that is, no postprocessing is applied.
Note:
The total amount of values displayed is defined by the value of Limit for search and filter results parameter, set in Administration → General → GUI.
4 Maps
Overview
In the Monitoring → Maps section you can configure, manage and view network maps.
When you open this section, you will either see the last map you accessed or a listing of all maps you have access to.
All maps can be either public or private. Public maps are available to all users, while private maps are accessible only to their
owner and the users the map is shared with.
Map listing
Displayed data:
Column Description
Name Name of the map. Click on the name to view the map.
Width Map width is displayed.
Height Map height is displayed.
Actions Two actions are available:
Properties - set general map properties
Edit - access the grid for adding map elements
To configure a new map, click on the Create map button in the top right-hand corner. To import a map from a YAML, XML, or JSON
file, click on the Import button in the top right-hand corner. The user who imports the map will be set as its owner.
To use these options, mark the checkboxes before the respective maps, then click on the required button.
Using filter
You can use the filter to display only the maps you are interested in. For better search performance, data is searched with macros
unresolved.
Viewing maps
You can use the drop-down in the map title bar to select the lowest severity level of the problem triggers to display. The severity
marked as default is the level set in the map configuration. If the map contains a sub-map, navigating to the sub-map will retain
the higher-level map severity (except if it is Not classified, in this case, it will not be passed to the sub-map).
Icon highlighting
If a map element is in problem status, it is highlighted with a round circle. The fill color of the circle corresponds to the severity
color of the problem. Only problems on or above the selected severity level will be displayed with the element. If all problems are
acknowledged, a thick green border around the circle is displayed.
Additionally:
• a host in maintenance is highlighted with an orange, filled square. Note that maintenance highlighting has priority over the problem severity highlighting if the map element is a host.
• a disabled (not-monitored) host is highlighted with a gray, filled square.
Inward pointing red triangles around an element indicate a recent trigger status change - one that’s happened within the last 30
minutes. These triangles are shown if the Mark elements on trigger status change check-box is marked in map configuration.
Links
Clicking on a map element opens a menu with some available links. Clicking on the host name brings up the host menu.
Buttons
Go to editing of the map content.
The map is in the favorites widget in Dashboards. Click to remove map from the favorites widget.
View mode buttons being common for all sections are described on the Monitoring page.
A hidden ”aria-label” property is available allowing map information to be read with a screen reader. Both the general map description
and individual element descriptions are available, in the following format:
• for map description: <Map name>, <* of * items in problem state>, <* problems in total>.
• for describing one element with one problem: <Element type>, Status <Element status>, <Element name>,
<Problem description>.
• for describing one element with multiple problems: <Element type>, Status <Element status>, <Element
name>, <* problems>.
• for describing one element without problems: <Element type>, Status <Element status>, <Element name>.
For example, the following description could be read for a map: 'Local network, 1 of 6 elements in problem state, 1 problem in total. Host, Status problem, My host, Free ...'
Network maps can be referenced by both sysmapid and mapname GET parameters. For example,
http://zabbix/zabbix/zabbix.php?action=map.view&mapname=Local%20network
will open the map with that name (Local network).
If both sysmapid (map ID) and mapname (map name) are specified, mapname has higher priority.
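For comparison (the ID value is hypothetical), the same map could be opened by its ID instead:
http://zabbix/zabbix/zabbix.php?action=map.view&sysmapid=1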
5 Discovery
Overview
In the Monitoring → Discovery section results of network discovery are shown. Discovered devices are sorted by the discovery rule.
Displayed data:
Column Description
Discovered device Discovered devices are listed, grouped by the discovery rule. Clicking on the discovery rule
brings up the rule menu containing the link to the discovery rule configuration form.
Monitored host If a device is already monitored, the host name will be listed in this column.
Clicking on the host name brings up the host menu.
Uptime/Downtime The duration of the device being discovered or lost after previous discovery is displayed in this column.
Discovery check The state of the individual service (discovery check) for each discovered device is displayed. A
red cell shows that the service is down. Service uptime or downtime is included within the cell.
This column is displayed only if the service has been found on at least one discovered device.
Buttons
View mode buttons being common for all sections are described on the Monitoring page.
Using filter
You can use the filter to display only the discovery rules you are interested in. For better search performance, data is searched
with macros unresolved.
With nothing selected in the filter, all enabled discovery rules are displayed. To select a specific discovery rule for display, start
typing its name in the filter. All matching enabled discovery rules will be listed for selection. More than one discovery rule can be
selected.
3 Services
Overview
1 Services
Overview
In this section you can see a high-level status of the services that have been configured in Zabbix, based on your infrastructure.
A service may be a hierarchy consisting of several levels of other services, called ”child” services, which contribute to the
overall status of the service (see also an overview of the service monitoring functionality).
The main categories of service status are OK or Problem, where the Problem status is expressed by the corresponding problem
severity name and color.
While the view mode allows to monitor services with their status and other details, you can also configure the service hierarchy in
this section (add/edit services, child services) by switching to the edit mode.
To switch from the view to the edit mode (and back) click on the respective button in the upper right corner:
• - view services
Viewing services
Displayed data:
Buttons
View mode buttons being common for all sections are described on the Monitoring page.
Using filter
You can use the filter to display only the services you are interested in.
Editing services
Click on the Edit button to access the edit mode. When in edit mode, the listing is complemented with checkboxes before the
entries and also these additional options:
To configure a new service, click on the Create service button in the top right-hand corner.
Service details
To access service details, click on the service name. To return to the list of all services, click on All services.
Service details include the info box and the list of child services.
To access the info box, click on the Info tab. The info box contains the following entries:
To use the filter for child services, click on the Filter tab.
When in edit mode, the child service listing is complemented with additional editing options:
2 SLA
Overview
SLAs
A list of the configured SLAs is displayed. Note that only the SLAs related to services accessible to the user will be displayed (as
read-only, unless Manage SLA is enabled for the user role).
Displayed data:
3 SLA report
Overview
This section allows to view SLA reports, based on the criteria selected in the filter.
Report
The filter allows to select the report based on the SLA name as well as the service name. It is also possible to limit the displayed
period.
Each column (period) displays the SLI for that period. SLIs that are in breach of the set SLO are highlighted in red.
20 periods are displayed in the report. A maximum of 100 periods can be displayed, if both the From date and To date are specified.
Report details
If you click on the service name in the report, you can access another report that displays a more detailed view.
Note that negative problem duration does not affect SLA calculation or reporting.
4 Inventory
Overview
The Inventory menu features sections providing an overview of host inventory data by a chosen parameter as well as the ability
to view host inventory details.
1 Overview
Overview
The Inventory → Overview section provides ways of having an overview of host inventory data.
For an overview to be displayed, choose host groups (or none) and the inventory field by which to display data. The number of
hosts corresponding to each entry of the chosen field will be displayed.
The completeness of an overview depends on how much inventory information is maintained with the hosts.
Numbers in the Host count column are links; they lead to the Host Inventories table filtered to these hosts.
2 Hosts
Overview
You can filter the hosts by host group(s) and by any inventory field to display only the hosts you are interested in.
To display all host inventories, select no host group in the filter, clear the comparison field in the filter and press ”Filter”.
While only some key inventory fields are displayed in the table, you can also view all available inventory information for that host.
To do that, click on the host name in the first column.
Inventory details
The Overview tab contains some general information about the host along with links to predefined scripts, latest monitoring data
and host configuration options:
The Details tab contains all available inventory details for the host:
The completeness of inventory data depends on how much inventory information is maintained with the host. If no information is
maintained, the Details tab is disabled.
5 Reports
Overview
The Reports menu features several sections that contain a variety of predefined and user-customizable reports focused on displaying an overview of such parameters as system information, triggers and gathered data.
1 System information
Overview
In Reports → System information, a summary of key Zabbix server and system data is displayed. System data is collected using
internal items.
Note that in a high availability setup, it is possible to redirect the system information source (server instance). To do this, edit the
zabbix.conf.php file - uncomment and set $ZBX_SERVER or both $ZBX_SERVER and $ZBX_SERVER_PORT to a server other than
the one shown active. Note that when setting $ZBX_SERVER only, a default value (10051) for $ZBX_SERVER_PORT will be used.
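A minimal sketch of such an override in zabbix.conf.php, assuming a node reachable at 192.168.1.20 (address and port are examples):
$ZBX_SERVER      = '192.168.1.20';
$ZBX_SERVER_PORT = '10051';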
With the high availability setup enabled, a separate block is displayed below the system stats with details of high availability nodes.
This block is visible to Zabbix Super Admin users only.
System stats
Displayed data:
Zabbix server is running
Value: Status of Zabbix server: Yes - server is running; No - server is not running.
Details: Location and port of Zabbix server.
Note: To display the rest of the information, the web frontend needs the server to be running and there must be at least one trapper process started on the server (StartTrappers parameter in zabbix_server.conf file > 0).

Zabbix server version
Value: Current server version number is displayed. Note: It is only displayed when Zabbix server is running.
Details: Server version status is displayed: Up to date - using the latest version; New update available - a more up-to-date version is available; Outdated - the full support period for this version has expired. This information is only available if software update check is enabled in Zabbix server configuration. Nothing is displayed if the last software update check was performed more than a week ago or no data exist about the current version.

Zabbix frontend version
Value: Zabbix frontend version number is displayed.
Details: Zabbix frontend version status is displayed: Up to date - using the latest version; New update available - a more up-to-date version is available; Outdated - the full support period for this version has expired. This information is only available if software update check is enabled in Zabbix server configuration. Nothing is displayed if the last software update check was performed more than a week ago or no data exist about the current version.

Software update last checked
Value: The date of the last Zabbix software update check is displayed.
Details: This information is only available if software update check is enabled in Zabbix server configuration.

Latest release
Value: The number of a newer release (if available) for the current Zabbix version is displayed.
Details: A link to the release notes of the latest available Zabbix release is displayed. This information is only available if software update check is enabled in Zabbix server configuration. Nothing is displayed if the last software update check was performed more than a week ago or no data exist about the current version.

Number of hosts
Value: Total number of hosts configured is displayed.
Details: Number of monitored hosts/not monitored hosts.

Number of templates
Value: Total number of templates is displayed.

Number of items
Value: Total number of items is displayed.
Details: Number of monitored/disabled/unsupported host-level items. Items on disabled hosts are counted as disabled.

Number of triggers
Value: Total number of triggers is displayed.
Details: Number of enabled/disabled host-level triggers; split of the enabled triggers according to ”Problem”/”OK” states.

Required server performance, new values per second
Value: The expected number of new values processed by Zabbix server per second is displayed.
Details: Required server performance is an estimate and can be useful as a guideline. For precise numbers of values processed, use the zabbix[wcache,values,all] internal item.
System information will also display an error message in the following conditions:
• The database used does not have the required character set or collation (UTF-8).
• The version of the database is below or above the supported range (available only to users with the Super admin role type).
• Housekeeping for TimescaleDB is incorrectly configured (history or trend tables contain compressed chunks, but Override
item history period or Override item trend period options are disabled).
If a high availability cluster is enabled, another block of data is displayed with the status of each high availability node.
Displayed data:
2 Scheduled reports
Overview
In Reports → Scheduled reports, users with sufficient permissions can configure scheduled generation of PDF versions of the
dashboards, which will be sent by email to specified recipients.
The opening screen displays information about scheduled reports, which can be filtered for easy navigation - see the Using filter section below.
Displayed data:
Column Description
Name Name of the report. Clicking it opens the report configuration form.
Owner User who created the report.
Repeats Report generation frequency (daily/weekly/monthly/yearly).
Period Period for which the report is prepared.
Last sent The date and time when the latest report has been sent.
Status Current status of the report (enabled/disabled/expired). Users with sufficient permissions can
change the status by clicking it - from ”Enabled” to ”Disabled” (and back); from ”Expired” to
”Disabled” (and back). For users with insufficient rights, the status is not clickable.
Info Displays informative icons:
A red icon indicates that report generation has failed; hovering over it will display a tooltip with
the error information.
A yellow icon indicates that a report was generated, but sending to some (or all) recipients has
failed or that a report is expired; hovering over it will display a tooltip with additional information.
Using filter
You may use the filter to narrow down the list of reports. For better search performance, data is searched with macros unresolved.
The filter is located above the Scheduled reports bar. It can be opened and collapsed by clicking the Filter tab in the upper right
corner.
Mass update
Sometimes you may want to delete or change the status of a number of reports at once. Instead of opening each individual report
for editing, you may use the mass update function for that.
3 Availability report
Overview
In Reports → Availability report you can see what proportion of time each trigger has been in problem/ok state. The percentage of
time for each state is displayed.
Thus it is easy to determine the availability situation of various elements on your system.
From the drop-down in the upper right corner, you can choose the selection mode - whether to display triggers by hosts or by
triggers belonging to a template.
The name of the trigger is a link to the latest events of that trigger.
Using filter
The filter can help narrow down the number of hosts and/or triggers displayed. For better search performance, data is searched
with macros unresolved.
The filter is located below the Availability report bar. It can be opened and collapsed by clicking on the Filter tab on the right.
In the By trigger template mode results can be filtered by one or several parameters listed below.
Parameter Description
Template group Filter hosts by triggers that are inherited from templates belonging to the selected template
group.
Template Filter hosts by triggers that are inherited from the selected template, including nested templates.
If a nested template has its own triggers, those triggers will not be displayed.
Template trigger Filter hosts by the selected trigger. Other triggers of the filtered hosts will not be displayed.
Host group Filter hosts belonging to the selected host group.
Filtering by host
In the By host mode results can be filtered by a host or by the host group. Specifying a parent host group implicitly selects all
nested host groups.
The Time period selector allows to select often required periods with one mouse click. The Time period selector can be expanded
and collapsed by clicking the Time period tab next to the filter.
Clicking on Show in the Graph column displays a bar graph where availability information is displayed in bar format, with each bar representing a past week of the current year.
The green part of a bar stands for OK time and red for problem time.
4 Top 100 triggers
Overview
In Reports → Top 100 triggers, you can see the triggers with the highest number of problems detected during the selected period.
Both host and trigger column entries are links that offer some useful options:
• for host - clicking on the host name brings up the host menu
• for trigger - clicking on the trigger name brings up links to the latest events, simple graph for each trigger item, and the
configuration forms of the trigger itself and each trigger item
Using filter
You may use the filter to display triggers by host group, host, problem name, tags, or trigger severity. Specifying a parent host
group implicitly selects all nested host groups. For better search performance, data is searched with macros unresolved.
The filter is located below the Top 100 triggers title. It can be opened and collapsed by clicking on the Filter tab on the left.
The Time period selector allows to select often required periods with one mouse click. The Time period selector can be expanded
and collapsed by clicking the Time period tab next to the filter.
5 Audit log
Overview
In the Reports → Audit log section, the records of user and system activity can be viewed.
Note:
For audit records to be collected and displayed, the Enable audit logging checkbox has to be marked in the Administration
→ Audit log section. Without this setting enabled, the history of activities will not be recorded in the database and will not
be shown in the audit log.
Note:
When a trapper item or an HTTP agent item (with trapping enabled) has received some data, an entry in the audit log will
be added only if the data was sent using the history.push API method, and not the Zabbix sender utility.
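For orientation only (item ID, value, timestamp and token are placeholders), a history.push request that would produce such an audit log entry is a JSON-RPC call roughly like:
{
    "jsonrpc": "2.0",
    "method": "history.push",
    "params": [
        {"itemid": 10600, "value": "1", "clock": 1700000000}
    ],
    "id": 1
}
sent to the frontend's api_jsonrpc.php endpoint with a valid API token in the Authorization header.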
Using filter
The filter is located below the Audit log bar. It can be opened and collapsed by clicking the Filter tab in the upper right corner.
You may use the filter to narrow the records by user, affected resource, resource ID, and performed operation (Recordset ID).
Depending on the resource, one or more specific actions can be selected in the filter.
For better search performance, all data is searched with macros unresolved.
The Time period selector allows to select often required periods with one mouse click. The Time period selector can be expanded
and collapsed by clicking the Time period tab in the upper right corner.
6 Action log
Overview
In the Reports → Action log section users can view details of operations (notifications, remote commands) executed within an
action.
Displayed data:
Buttons
The button at the top right corner of the page offers the following option:
Export action log records from all pages to a CSV file. If a filter is applied, only the filtered
records will be exported.
In the exported CSV file the columns ”Recipient” and ”Message” are divided into several columns
- ”Recipient’s Zabbix username”, ”Recipient’s name”, ”Recipient’s surname”, ”Recipient”, and
”Subject”, ”Message”, ”Command”.
Using filter
The filter is located below the Action log bar. It can be opened and collapsed by clicking on the Filter tab at the top right corner of
the page.
You may use the filter to narrow down the records by notification recipients, actions, media types, status, or by the message/remote
command content (Search string). For better search performance, data is searched with macros unresolved.
The Time period selector allows to select often required periods with one mouse click. The Time period selector can be expanded
and collapsed by clicking the Time period tab next to the filter.
7 Notifications
Overview
In the Reports → Notifications section a report on the number of notifications sent to each user is displayed.
From the dropdowns in the top right-hand corner you can choose the media type (or all), period (data for each day/week/month/year)
and year for the notifications sent.
6 Data collection
Overview
This menu features sections that are related to configuring data collection.
1 Items
Overview
The item list for a template can be accessed from Data collection → Templates by clicking on Items for the respective template.
Displayed data:
Column Description
Item menu Click on the three-dot icon to open the menu for this specific item with these options:
Create trigger - create a trigger based on this item
Triggers - click to see a list with links to already-configured triggers of this item
Create dependent item - create a dependent item for this item
Create dependent discovery rule - create a dependent discovery rule for this item
Template Template the item belongs to.
This column is displayed only if multiple templates are selected in the filter.
Name Name of the item displayed as a blue link to item details.
Clicking on the item name link opens the item configuration form.
If the item is inherited from another template, the template name is displayed before the item
name, as a gray link. Clicking on the template link will open the item list on that template level.
Triggers Moving the mouse over Triggers will display an infobox displaying the triggers associated with
the item.
The number of the triggers is displayed in gray.
Key Item key is displayed.
Interval Frequency of the check is displayed.
History How many days item data history will be kept is displayed.
Trends How many days item trends history will be kept is displayed.
Type Item type is displayed (Zabbix agent, SNMP agent, simple check, etc).
Status Item status is displayed - Enabled or Disabled. By clicking on the status you can change it - from
Enabled to Disabled (and back).
Tags Item tags are displayed.
Up to three tags (name:value pairs) can be displayed. If there are more tags, a ”...” link is
displayed that allows to see all tags on mouseover.
To configure a new item, click on the Create item button at the top right corner.
To use these options, mark the checkboxes before the respective items, then click on the required button.
Using filter
The item list may contain a lot of items. By using the filter, you can filter out some of them to quickly locate the items you're
looking for. For better search performance, data is searched with macros unresolved.
The Filter icon is available at the top right corner. Clicking on it will open a filter where you can specify the desired filtering criteria.
The Subfilter below the filter offers further filtering options (for the data already filtered). You can select groups of items with a
common parameter value. Upon clicking on a group, it gets highlighted and only the items with this parameter value remain in
the list.
2 Triggers
Overview
The trigger list for a template can be accessed from Data collection → Templates by clicking on Triggers for the respective template.
Displayed data:
Column Description
Severity Severity of the trigger is displayed by both name and cell background color.
Template Template the trigger belongs to.
This column is displayed only if multiple templates are selected in the filter.
Name Name of the trigger displayed as a blue link to trigger details.
Clicking on the trigger name link opens the trigger configuration form.
If the trigger is inherited from another template, the template name is displayed before the
trigger name, as a gray link. Clicking on the template link will open the trigger list on that
template level.
Operational data Operational data definition of the trigger, containing arbitrary strings and macros that will
resolve dynamically in Monitoring → Problems.
Expression Trigger expression is displayed. The template-item part of the expression is displayed as a link,
leading to the item configuration form.
Status Trigger status is displayed - Enabled or Disabled. By clicking on the status you can change it -
from Enabled to Disabled (and back).
Tags If a trigger contains tags, tag name and value are displayed in this column.
To configure a new trigger, click on the Create trigger button at the top right corner.
To use these options, mark the checkboxes before the respective triggers, then click on the required button.
Using filter
You can use the filter to display only the triggers you are interested in. For better search performance, data is searched with macros
unresolved.
The Filter icon is available at the top right corner. Clicking on it will open a filter where you can specify the desired filtering criteria.
Parameter Description
Tags Filter by trigger tag name and value. It is possible to include as well as exclude specific tags and
tag values. Several conditions can be set. Tag name matching is always case-sensitive.
There are several operators available for each condition:
Exists - include the specified tag names
Equals - include the specified tag names and values (case-sensitive)
Contains - include the specified tag names where the tag values contain the entered string
(substring match, case-insensitive)
Does not exist - exclude the specified tag names
Does not equal - exclude the specified tag names and values (case-sensitive)
Does not contain - exclude the specified tag names where the tag values contain the entered
string (substring match, case-insensitive)
There are two calculation types for conditions:
And/Or - all conditions must be met, conditions having the same tag name will be grouped by
the Or condition
Or - enough if one condition is met
Macros and macro functions are supported in tag name and tag value fields.
Inherited Filter triggers inherited (or not inherited) from linked templates.
With dependencies Filter triggers with (or without) dependencies.
3 Graphs
Overview
The custom graph list for a template can be accessed from Data collection → Templates by clicking on Graphs for the respective
template.
Displayed data:
To configure a new graph, click on the Create graph button at the top right corner.
To use these options, mark the checkboxes before the respective graphs, then click on the required button.
Using filter
You can filter graphs by template group and template. For better search performance, data is searched with macros unresolved.
4 Discovery rules
Overview
The list of low-level discovery rules for a template can be accessed from Data collection → Templates by clicking on Discovery for
the respective template.
A list of existing low-level discovery rules is displayed. It is also possible to see all discovery rules independently of the template,
or all discovery rules of a specific template group by changing the filter settings.
Displayed data:
Column Description
To configure a new low-level discovery rule, click on the Create discovery rule button at the top right corner.
To use these options, mark the checkboxes before the respective discovery rules, then click on the required button.
Using filter
You can use the filter to display only the discovery rules you are interested in. For better search performance, data is searched
with macros unresolved.
The Filter icon is available at the top right corner. Clicking on it will open a filter where you can specify the desired filtering criteria
such as template, discovery rule name, item key, item type, etc.
Parameter Description
1 Item prototypes
Overview
In this section the configured item prototypes of a low-level discovery rule on the template are displayed.
If the template is linked to the host, item prototypes will become the basis of creating real host items during low-level discovery.
Displayed data:
Column Description
To configure a new item prototype, click on the Create item prototype button at the top right corner.
To use these options, mark the checkboxes before the respective item prototypes, then click on the required button.
2 Trigger prototypes
Overview
In this section the configured trigger prototypes of a low-level discovery rule on the template are displayed.
If the template is linked to the host, trigger prototypes will become the basis of creating real host triggers during low-level discovery.
Displayed data:
Column Description
To configure a new trigger prototype, click on the Create trigger prototype button at the top right corner.
To use these options, mark the checkboxes before the respective trigger prototypes, then click on the required button.
3 Graph prototypes
Overview
In this section the configured graph prototypes of a low-level discovery rule on the template are displayed.
If the template is linked to the host, graph prototypes will become the basis of creating real host graphs during low-level discovery.
Displayed data:
Column Description
To configure a new graph prototype, click on the Create graph prototype button at the top right corner.
To use these options, mark the checkboxes before the respective graph prototypes, then click on the required button.
4 Host prototypes
Overview
In this section the configured host prototypes of a low-level discovery rule on the template are displayed.
If the template is linked to the host, host prototypes will become the basis of creating real hosts during low-level discovery.
Displayed data:
Column Description
To configure a new host prototype, click on the Create host prototype button at the top right corner.
To use these options, mark the checkboxes before the respective host prototypes, then click on the required button.
5 Web scenarios
Overview
The web scenario list for a template can be accessed from Data collection → Templates by clicking on Web for the respective
template.
Displayed data:
Column Description
Name Name of the web scenario. Clicking on the web scenario name opens the web scenario
configuration form.
If the web scenario is inherited from another template, the template name is displayed before
the web scenario name, as a gray link. Clicking on the template link will open the web scenarios
list on that template level.
Number of steps The number of steps the scenario contains.
Update interval How often the scenario is performed.
Attempts How many attempts for executing web scenario steps are performed.
Authentication Authentication method is displayed - Basic, NTLM, or None.
HTTP proxy Displays HTTP proxy or ’No’ if not used.
Status Web scenario status is displayed - Enabled or Disabled.
By clicking on the status you can change it.
Tags Web scenario tags are displayed.
Up to three tags (name:value pairs) can be displayed. If there are more tags, a ”...” link is
displayed that allows to see all tags on mouseover.
To configure a new web scenario, click on the Create web scenario button at the top right corner.
To use these options, mark the checkboxes before the respective web scenarios, then click on the required button.
Using filter
You can use the filter to display only the scenarios you are interested in. For better search performance, data is searched with
macros unresolved.
The Filter link is available above the list of web scenarios. If you click on it, a filter becomes available where you can filter scenarios
by template group, template, status and tags.
1 Template groups
Overview
In the Data collection → Templates groups section users can configure and maintain template groups.
A listing of existing template groups with their details is displayed. You can search and filter template groups by name.
Displayed data:
Column Description
Name Name of the template group. Clicking on the group name opens the group configuration form.
Templates Number of templates in the group (displayed in gray) followed by the list of group members.
Clicking on a template name will open the template configuration form.
Clicking on the number opens the list of templates in this group.
To delete several template groups at once, mark the checkboxes before the respective groups, then click on the Delete button
below the list.
Using filter
You can use the filter to display only the template groups you are interested in. For better search performance, data is searched
with macros unresolved.
2 Host groups
Overview
In the Data collection → Host groups section users can configure and maintain host groups.
A listing of existing host groups with their details is displayed. You can search and filter host groups by name.
Displayed data:
Column Description
Name Name of the host group. Clicking on the group name opens the group configuration form.
Discovered host groups are displayed with low-level discovery rule names as a prefix. Clicking on
the LLD rule name opens the host prototype configuration form. Note that discovered host
groups are deleted when all LLD rules that discovered them are deleted.
Hosts Number of hosts in the group (displayed in gray) followed by the list of group members.
Clicking on a host name will open the host configuration form.
Clicking on the number will, in the whole listing of hosts, filter out those that belong to the group.
Info Error information (if any) regarding the host group is displayed.
• Enable hosts - change the status of all hosts in the group to ”Monitored”
• Disable hosts - change the status of all hosts in the group to ”Not monitored”
• Delete - delete the host groups
To use these options, mark the checkboxes before the respective host groups, then click on the required button.
Using filter
You can use the filter to display only the host groups you are interested in. For better search performance, data is searched with
macros unresolved.
3 Templates
Overview
In the Data collection → Templates section users can configure and maintain templates.
Displayed data:
Column Description
To configure a new template, click on the Create template button in the top right-hand corner.
To import a template from a YAML, XML, or JSON file, click on the Import button in the top right-hand corner.
Using filter
You can use the filter to display only the templates you are interested in. For better search performance, data is searched with
macros unresolved.
The Filter link is available below Create template and Import buttons. If you click on it, a filter becomes available where you can
filter templates by template group, linked templates, name and tags.
Parameter Description
To use these options, mark the checkboxes before the respective templates, then click on the required button.
4 Hosts
Overview
In the Data collection → Hosts section users can configure and maintain hosts.
Displayed data:
Column Description
An orange wrench icon before the host status indicates that this host is in maintenance.
Maintenance details are displayed when the mouse pointer is positioned on the icon.
Discovered hosts that have been lost are marked with an info icon. The tooltip text provides
details on their status.
Availability icons represent only those interface types (Zabbix agent, SNMP, IPMI, JMX) that are
configured. If you position the mouse pointer on the icon, a pop-up list appears listing all
interfaces of this type with details, status and errors (for the agent interface, availability of active
checks is also listed).
The column is empty for hosts with no interfaces.
The current status of all interfaces of one type is displayed by the respective icon color:
Green - all interfaces available;
Yellow - at least one interface is unavailable and at least one is either available or unknown;
others can have any status, including ’unknown’;
Red - no interfaces available;
Gray - at least one interface unknown (none unavailable).
Active check availability. Since Zabbix 6.2 active checks also affect host availability if there is
at least one active check enabled on the host. To determine active check availability, heartbeat
messages are sent in the agent active check thread. The frequency of the heartbeat messages is
set by the HeartbeatFrequency parameter in Zabbix agent and agent 2 configurations (60
seconds by default, 0-3600 range). Active checks are considered unavailable when the active
check heartbeat is older than 2 x HeartbeatFrequency seconds (see the configuration sketch below this table).
Note: If Zabbix agents older than 6.2.x are used, they are not sending any active check
heartbeats, so the availability of their hosts will remain unknown.
Active agent availability is counted towards the total Zabbix agent availability in the same way
as a passive interface is. For example, if a passive interface is available while the active checks
are unknown, the total agent availability is set to gray (unknown).
Agent encryption Encryption status for connections to the host is displayed:
None - no encryption;
PSK - using pre-shared key;
Cert - using certificate.
Info Error information (if any) regarding the host is displayed.
Tags Tags of the host with macros unresolved.
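As an illustration of the active check availability logic described above, the heartbeat frequency is set on the agent side; a minimal sketch of the relevant line in zabbix_agentd.conf or zabbix_agent2.conf (60 is the default value):
HeartbeatFrequency=60
With this value, active checks are reported as unavailable when no heartbeat has been received for more than 120 seconds (2 x HeartbeatFrequency).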
To configure a new host, click on the Create host button in the top right-hand corner. To import a host from a YAML, XML, or JSON
file, click on the Import button in the top right-hand corner.
To use these options, mark the checkboxes before the respective hosts, then click on the required button.
Using filter
You can use the filter to display only the hosts you are interested in. For better search performance, data is searched with macros
unresolved.
The Filter icon is available at the top right corner. Clicking on it will open a filter where you can specify the desired filtering criteria.
Parameter Description
Host availability icons reflect the current host interface status on Zabbix server. Therefore, in the frontend:
• If you disable a host, availability icons will not immediately turn gray (unknown status), because the server has to synchronize
the configuration changes first.
• If you enable a host, availability icons will not immediately turn green (available), because the server has to synchronize the
configuration changes and start polling the host first.
Zabbix server determines an ’unknown’ status for the corresponding agent interface (Zabbix, SNMP, IPMI, JMX) in the following
cases:
• There are no enabled items on the interface (they were removed or disabled).
• There are only active Zabbix agent items.
• There are no pollers for that type of the interface (e.g. StartAgentPollers=0).
• Host is disabled.
• Host is set to be monitored by a proxy, by a different proxy, or by the server if it was previously monitored by a proxy.
• Host is monitored by a proxy that appears to be offline (no updates received from the proxy during the maximum heartbeat
interval - 1 hour).
Setting interface availability to ’unknown’ is done after server configuration cache synchronization. Restoring interface availability
(available/unavailable) on hosts monitored by proxies is done after proxy configuration cache synchronization.
For more details about host interface unreachability, see Unreachable/unavailable host interface settings.
1 Items
Overview
The item list for a host can be accessed from Data collection → Hosts by clicking on Items for the respective host.
Displayed data:
Column Description
Item context menu Click on the three-dot icon to open the item context menu.
Host Host of the item.
This column is displayed only if multiple hosts are selected in the filter.
Name Name of the item displayed as a blue link to item details.
Clicking on the item name link opens the item configuration form.
If the host item belongs to a template, the template name is displayed before the item name as
a gray link. Clicking on the template link will open the item list on the template level.
If the item has been created from an item prototype, its name is preceded by the low-level
discovery rule name, in orange. Clicking on the discovery rule name will open the item prototype
list.
Triggers Moving the mouse over Triggers will display an infobox displaying the triggers associated with
the item.
The number of the triggers is displayed in gray.
Key Item key is displayed.
Interval Frequency of the check is displayed.
Note that passive items can also be checked immediately by pushing the Execute now button.
History How many days item data history will be kept is displayed.
Trends How many days item trends history will be kept is displayed.
Type Item type is displayed (Zabbix agent, SNMP agent, simple check, etc).
Status Item status is displayed - Enabled, Disabled or Not supported. You can change the status
manually by clicking on it - from Enabled to Disabled (and back); from Not supported to Disabled
(and back).
Discovered items that have been lost are marked with an info icon. The tooltip text provides
details on their status.
Tags Item tags are displayed.
Up to three tags (name:value pairs) can be displayed. If there are more tags, a ”...” link is
displayed that allows to see all tags on mouseover.
Info If the item is working correctly, no icon is displayed in this column. In case of errors, a square
icon with the letter ”i” is displayed. Hover over the icon to see a tooltip with the error description.
To configure a new item, click on the Create item button at the top right corner.
• Mass update - update several properties for a number of items at once.
• Delete - delete the items.
To use these options, mark the checkboxes before the respective items, then click on the required button.
Using filter
You can use the filter to display only the items you are interested in. For better search performance, data is searched with macros
unresolved.
The Filter icon is available at the top right corner. Clicking on it will open a filter where you can specify the desired filtering criteria.
Parameter Description
The Subfilter below the filter offers further filtering options (for the data already filtered). You can select groups of items with a
common parameter value. Upon clicking on a group, it gets highlighted and only the items with this parameter value remain in
the list.
2 Triggers
Overview
The trigger list for a host can be accessed from Data collection → Hosts by clicking on Triggers for the respective host.
Displayed data:
Column Description
Severity Severity of the trigger is displayed by both name and cell background color.
Value Trigger value is displayed:
OK - the trigger is in the OK state
PROBLEM - the trigger is in the Problem state
Host Host of the trigger.
This column is displayed only if multiple hosts are selected in the filter.
Name Name of the trigger, displayed as a blue link to trigger details.
Clicking on the trigger name link opens the trigger configuration form.
If the host trigger belongs to a template, the template name is displayed before the trigger
name, as a gray link. Clicking on the template link will open the trigger list on the template level.
If the trigger has been created from a trigger prototype, its name is preceded by the low-level
discovery rule name, in orange. Clicking on the discovery rule name will open the trigger
prototype list.
Operational data Operational data definition of the trigger, containing arbitrary strings and macros that will
resolve dynamically in Monitoring → Problems.
Expression Trigger expression is displayed. The host-item part of the expression is displayed as a link,
leading to the item configuration form.
Status Trigger status is displayed - Enabled, Disabled or Unknown. By clicking on the status you can
manually change it - from Enabled to Disabled (and back); from Unknown to Disabled (and back).
Problems of a disabled trigger are no longer displayed in the frontend, but are not deleted.
Discovered triggers that have been lost are marked with an info icon. The tooltip text provides
details on their status.
Info If everything is working correctly, no icon is displayed in this column. In case of errors, a square
icon with the letter ”i” is displayed. Hover over the icon to see a tooltip with the error description.
Tags If a trigger contains tags, tag name and value are displayed in this column.
To configure a new trigger, click on the Create trigger button at the top right corner.
To use these options, mark the checkboxes before the respective triggers, then click on the required button.
Using filter
You can use the filter to display only the triggers you are interested in. For better search performance, data is searched with macros
unresolved.
The Filter icon is available at the top right corner. Clicking on it will open a filter where you can specify the desired filtering criteria.
Parameter Description
3 Graphs
Overview
The custom graph list for a host can be accessed from Data collection → Hosts by clicking on Graphs for the respective host.
Displayed data:
Column Description
Name Name of the custom graph, displayed as a blue link to graph details.
Clicking on the graph name link opens the graph configuration form.
If the host graph belongs to a template, the template name is displayed before the graph name,
as a gray link. Clicking on the template link will open the graph list on the template level.
If the graph has been created from a graph prototype, its name is preceded by the low-level
discovery rule name, in orange. Clicking on the discovery rule name will open the graph
prototype list.
Width Graph width is displayed.
Height Graph height is displayed.
Graph type Graph type is displayed - Normal, Stacked, Pie or Exploded.
Info If the graph is working correctly, no icon is displayed in this column. In case of errors, a square
icon with the letter ”i” is displayed. Hover over the icon to see a tooltip with the error description.
To configure a new graph, click on the Create graph button at the top right corner.
To use these options, mark the checkboxes before the respective graphs, then click on the required button.
Using filter
You can filter graphs by host group and host. For better search performance, data is searched with macros unresolved.
4 Discovery rules
Overview
The list of low-level discovery rules for a host can be accessed from Data collection → Hosts by clicking on Discovery for the
respective host.
A list of existing low-level discovery rules is displayed. It is also possible to see all discovery rules independently of the host, or all
discovery rules of a specific host group by changing the filter settings.
Displayed data:
Column Description
To configure a new low-level discovery rule, click on the Create discovery rule button at the top right corner.
To use these options, mark the checkboxes before the respective discovery rules, then click on the required button.
Using filter
You can use the filter to display only the discovery rules you are interested in. For better search performance, data is searched
with macros unresolved.
The Filter link is available above the list of discovery rules. If you click on it, a filter becomes available where you can filter discovery
rules by host group, host, name, item key, item type, and other parameters.
Parameter Description
1 Item prototypes
Overview
In this section the item prototypes of a low-level discovery rule on the host are displayed. Item prototypes are the basis of real
host items that are created during low-level discovery.
Displayed data:
Column Description
To configure a new item prototype, click on the Create item prototype button at the top right corner.
To use these options, mark the checkboxes before the respective item prototypes, then click on the required button.
2 Trigger prototypes
Overview
In this section the trigger prototypes of a low-level discovery rule on the host are displayed. Trigger prototypes are the basis of
real host triggers that are created during low-level discovery.
Displayed data:
Column Description
To configure a new trigger prototype, click on the Create trigger prototype button at the top right corner.
To use these options, mark the checkboxes before the respective trigger prototypes, then click on the required button.
3 Graph prototypes
Overview
In this section the graph prototypes of a low-level discovery rule on the host are displayed. Graph prototypes are the basis of real
host graphs that are created during low-level discovery.
Displayed data:
Column Description
To configure a new graph prototype, click on the Create graph prototype button at the top right corner.
Buttons below the list offer some mass-editing options.
To use these options, mark the checkboxes before the respective graph prototypes, then click on the required button.
4 Host prototypes
Overview
In this section the host prototypes of a low-level discovery rule on the host are displayed. Host prototypes are the basis of real
hosts that are created during low-level discovery.
Displayed data:
Column Description
To configure a new host prototype, click on the Create host prototype button at the top right corner.
To use these options, mark the checkboxes before the respective host prototypes, then click on the required button.
5 Web scenarios
Overview
The web scenario list for a host can be accessed from Data collection → Hosts by clicking on Web for the respective host.
Displayed data:
Column Description
Name Name of the web scenario. Clicking on the web scenario name opens the web scenario
configuration form.
If the host web scenario belongs to a template, the template name is displayed before the web
scenario name as a gray link. Clicking on the template link will open the web scenario list on the
template level.
Number of steps The number of steps the scenario contains.
Update interval How often the scenario is performed.
Attempts How many attempts for executing web scenario steps are performed.
Authentication Authentication method is displayed - Basic, NTLM, or None.
HTTP proxy Displays HTTP proxy or ’No’ if not used.
Status Web scenario status is displayed - Enabled or Disabled.
By clicking on the status you can change it.
Tags Web scenario tags are displayed.
Up to three tags (name:value pairs) can be displayed. If there are more tags, a ”...” link is
displayed that allows to see all tags on mouseover.
Info If everything is working correctly, no icon is displayed in this column. In case of errors, a square
icon with the letter ”i” is displayed. Hover over the icon to see a tooltip with the error description.
To configure a new web scenario, click on the Create web scenario button at the top right corner.
To use these options, mark the checkboxes before the respective web scenarios, then click on the required button.
Using filter
You can use the filter to display only the scenarios you are interested in. For better search performance, data is searched with
macros unresolved.
The Filter link is available above the list of web scenarios. If you click on it, a filter becomes available where you can filter scenarios
by host group, host, status and tags.
5 Maintenance
Overview
In the Data collection → Maintenance section users can configure and maintain maintenance periods for hosts.
Displayed data:
Column Description
Name Name of the maintenance period. Clicking on the maintenance period name opens the
maintenance period configuration form.
Type The type of maintenance is displayed: With data collection or No data collection
Active since The date and time when executing maintenance periods becomes active.
Note: This time does not activate a maintenance period; maintenance periods need to be set
separately.
Active till The date and time when executing maintenance periods stops being active.
State The state of the maintenance period:
Approaching - will become active soon
Active - is active
Expired - is not active any more
Description Description of the maintenance period is displayed.
To configure a new maintenance period, click on the Create maintenance period button in the top right-hand corner.
To use this option, mark the checkboxes before the respective maintenance periods and click on Delete.
Using filter
You can use the filter to display only the maintenance periods you are interested in. For better search performance, data is searched
with macros unresolved.
The Filter link is available above the list of maintenance periods. If you click on it, a filter becomes available where you can filter
maintenance periods by host group, name and state.
6 Event correlation
Overview
In the Data collection → Event correlation section users can configure and maintain global correlation rules for Zabbix events.
Displayed data:
Column Description
Name Name of the correlation rule. Clicking on the correlation rule name opens the rule configuration
form.
Conditions Correlation rule conditions are displayed.
Operations Correlation rule operations are displayed.
To configure a new correlation rule, click on the Create event correlation button in the top right-hand corner.
To use these options, mark the checkboxes before the respective correlation rules, then click on the required button.
Using filter
You can use the filter to display only the correlation rules you are interested in. For better search performance, data is searched
with macros unresolved.
The Filter link is available above the list of correlation rules. If you click on it, a filter becomes available where you can filter
correlation rules by name and status.
7 Discovery
Overview
In the Data collection → Discovery section users can configure and maintain discovery rules.
Displayed data:
Column Description
Name Name of the discovery rule. Clicking on the discovery rule name opens the discovery rule
configuration form.
IP range The range of IP addresses to use for network scanning is displayed.
Proxy The proxy name is displayed, if discovery is performed by the proxy.
Interval The frequency of performing discovery is displayed.
Checks The types of checks used for discovery are displayed.
Status The discovery rule status is displayed - Enabled or Disabled.
By clicking on the status you can change it.
Info If everything is working correctly, nothing is displayed in this column. In case of errors, a red info
icon with the letter ”i” is displayed. Hover over the icon to see a tooltip with the error description.
To configure a new discovery rule, click on the Create discovery rule button in the top right-hand corner.
• Enable - change the discovery rule status to Enabled
• Disable - change the discovery rule status to Disabled
• Delete - delete the discovery rules
To use these options, mark the checkboxes before the respective discovery rules, then click on the required button.
Using filter
You can use the filter to display only the discovery rules you are interested in. For better search performance, data is searched
with macros unresolved.
The Filter link is available above the list of discovery rules. If you click on it, a filter becomes available where you can filter discovery
rules by name and status.
7 Alerts
Overview
This menu features sections that are related to configuring alerts in Zabbix.
1 Actions
Overview
In the Alerts → Actions section users can configure and maintain actions.
The actions displayed are those assigned to the selected event source (trigger, service, discovery, autoregistration, or internal).
Actions are grouped into subsections by event source (trigger, service, discovery, autoregistration, internal actions). The list of
available subsections appears upon clicking on Actions in the Alerts menu section. It is also possible to switch between subsections
by using the title dropdown in the top left corner.
After selecting a subsection, a list of existing actions with their details will be displayed.
Displayed data:
Column Description
Name Name of the action. Clicking on the action name opens the action configuration form.
Conditions Action conditions are displayed.
Operations Action operations are displayed.
The operation list also displays the media type (email, SMS or script) used for notification as well
as the name and surname (in parentheses after the username) of a notification recipient.
An action operation can be either a notification or a remote command, depending on the selected type of operation.
Status Action status is displayed - Enabled or Disabled.
By clicking on the status you can change it.
See the Escalations section for more details as to what happens if an action is disabled during an
escalation in progress.
To configure a new action, click on the Create action button in the top right-hand corner.
For users without Super admin rights actions are displayed according to the permission settings. That means in some cases a
user without Super admin rights isn’t able to view the complete action list because of certain permission restrictions. An action is
displayed to the user without Super admin rights if the following conditions are fulfilled:
• The user has read-write access to host groups, hosts, templates, and triggers in action conditions
• The user has read-write access to host groups, hosts, and templates in action operations, recovery operations, and update
operations
• The user has read access to user groups and users in action operations, recovery operations, and update operations
To use these options, mark the checkboxes before the respective actions, then click on the required button.
Using filter
You can use the filter to display only the actions you are interested in. For better search performance, data is searched with macros
unresolved.
The Filter link is available above the list of actions. If you click on it, a filter becomes available where you can filter actions by name
and status.
2 Media types
Overview
In the Alerts → Media types section users can configure and maintain media type information.
Media type information contains general instructions for using a medium as a delivery channel for notifications. Specific details,
such as the individual email addresses to send a notification to, are kept with individual users.
Displayed data:
Column Description
Name Name of the media type. Clicking on the name opens the media type configuration form.
Type Type of the media (email, SMS, etc) is displayed.
Status Media type status is displayed - Enabled or Disabled.
By clicking on the status you can change it.
Used in actions All actions where the media type is used are displayed. Clicking on the action name opens the
action configuration form.
Details Detailed information of the media type is displayed.
Actions The following action is available:
Test - click to open a testing form where you can enter media type parameters (e.g. a recipient
address with test subject and body) and send a test message to verify that the configured media
type works. See also: Media type testing for Email, Webhook, or Script.
To configure a new media type, click on the Create media type button in the top right-hand corner.
To import a media type from XML, click on the Import button in the top right-hand corner.
To use these options, mark the checkboxes before the respective media types, then click on the required button.
Using filter
You can use the filter to display only the media types you are interested in. For better search performance, data is searched with
macros unresolved.
The Filter link is available above the list of media types. If you click on it, a filter becomes available where you can filter media
types by name and status.
3 Scripts
Overview
In the Alerts → Scripts section user-defined global scripts can be configured and maintained.
Global scripts, depending on the configured scope and also user permissions, are available for execution:
• from the host menu in various frontend locations (Dashboard, Problems, Latest data, Maps, etc.)
• from the event menu
• as an action operation
The scripts are executed on Zabbix agent, Zabbix server (proxy) or Zabbix server only. See also Command execution.
Both on Zabbix agent and Zabbix proxy remote scripts are disabled by default. They can be enabled by:
• adding an AllowKey=system.run[<command>,*] parameter for each allowed command in agent configuration;
• setting the EnableRemoteCommands parameter to '1' in proxy configuration.
Global script execution on Zabbix server can be disabled by setting EnableGlobalScripts=0 in server configuration. For new instal-
lations, since Zabbix 7.0, global script execution on Zabbix server is disabled by default.
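As an illustration of the parameters mentioned above, the following configuration lines enable script execution on each component (a minimal sketch; in production, restrict the allowed agent commands to what is actually needed instead of using a wildcard):
# zabbix_agentd.conf or zabbix_agent2.conf - allow remote commands via system.run
AllowKey=system.run[*]
# zabbix_proxy.conf - allow execution of remote commands on the proxy
EnableRemoteCommands=1
# zabbix_server.conf - allow global script execution on the server (disabled by default in new 7.0 installations)
EnableGlobalScripts=1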
Displayed data:
Column Description
Name Name of the script. Clicking on the script name opens the script configuration form.
Scope Scope of the script - action operation, manual host action or manual event action. This setting
determines where the script is available.
Used in actions Actions where the script is used are displayed.
Type Script type is displayed - URL, Webhook, Script, SSH, Telnet or IPMI command.
Execute on It is displayed whether the script will be executed on Zabbix agent, Zabbix proxy or server, or
Zabbix server only.
Commands All commands to be executed within the script are displayed.
User group The user group that the script is available to is displayed (or All for all user groups).
Host group The host group that the script is available for is displayed (or All for all host groups).
Host access The permission level for the host group is displayed - Read or Write. Only users with the required
permission level will have access to executing the script.
To configure a new script, click on the Create script button in the top right-hand corner.
To use this option, mark the checkboxes before the respective scripts and click on Delete.
Using filter
You can use the filter to display only the scripts you are interested in. For better search performance, data is searched with macros
unresolved.
The Filter link is available above the list of scripts. If you click on it, a filter becomes available where you can filter scripts by name
and scope.
Script attributes:
Parameter Description
URL Specify the URL for quick access from the host menu or event menu.
Macros and custom user macros are supported. Macro support depends on the scope of the
script (see Scope above).
Use the {MANUALINPUT} macro in this field to be able to specify manual input at script
execution time, for example:
http://{MANUALINPUT}/zabbix/zabbix.php?action=dashboard.view
Macro values must not be URL-encoded.
Open in new window Determines whether the URL should be opened in a new or the same browser tab.
Script type: Webhook
Parameters Specify the webhook variables as attribute-value pairs.
See also: Webhook media configuration.
Macros and custom user macros are supported in parameter values. Macro support depends
on the scope of the script (see Scope above).
Script Enter the JavaScript code in the block that appears when clicking in the parameter field (or
on the view/edit button next to it).
Macro support depends on the scope of the script (see Scope above).
See also: Webhook media configuration, Additional Javascript objects.
Timeout JavaScript execution timeout (1-60s, default 30s).
Time suffixes are supported, e.g. 30s, 1m.
Script type: Script
Execute on Click the respective button to execute the shell script on:
Zabbix agent - the script will be executed by Zabbix agent (if the system.run item is
allowed) on the host
Zabbix proxy or server - the script will be executed by Zabbix proxy or server - depending
on whether the host is monitored by proxy or server.
It will be executed on the proxy if enabled by EnableRemoteCommands.
It will be executed on the server if global scripts are enabled by the EnableGlobalScripts
server parameter.
Zabbix server - the script will be executed by Zabbix server only.
This option will not be available if global scripts are disabled by the EnableGlobalScripts
server parameter.
Commands Enter full path to the commands to be executed within the script.
Macro support depends on the scope of the script (see Scope above). Custom user macros
are supported.
Script type: SSH
Authentication method Select authentication method - password or public key.
Username Enter the username.
Password Enter the password.
This field is available if ’Password’ is selected as the authentication method.
Public key file Enter the path to the public key file.
This field is available if ’Public key’ is selected as the authentication method.
Private key file Enter the path to the private key file.
This field is available if ’Public key’ is selected as the authentication method.
Passphrase Enter the passphrase.
This field is available if ’Public key’ is selected as the authentication method.
Port Enter the port.
Commands Enter the commands.
Macro support depends on the scope of the script (see Scope above). Custom user macros
are supported.
Script type: Telnet
Username Enter the username.
Password Enter the password.
Port Enter the port.
Commands Enter the commands.
Macro support depends on the scope of the script (see Scope above). Custom user macros
are supported.
Script type: IPMI
Command Enter the IPMI command.
Macro support depends on the scope of the script (see Scope above). Custom user macros
are supported.
Description Enter a description for the script.
Host group Select the host group that the script will be available for (or All for all host groups).
User group Select the user group that the script will be available to (or All for all user groups).
This field is displayed only if ’Manual host action’ or ’Manual event action’ is selected as Scope.
Required host permissions Select the permission level for the host group - Read or Write. Only users with the required
permission level will have access to executing the script.
This field is displayed only if ’Manual host action’ or ’Manual event action’ is selected as Scope.
Advanced configuration Click on the Advanced configuration label to display advanced configuration options.
This field is displayed only if ’Manual host action’ or ’Manual event action’ is selected as Scope.
Advanced configuration
Parameter Description
Enable user input Mark the checkbox to enable manual user input before executing the script.
Manual user input will replace the {MANUALINPUT} macro value in the script.
See also: Manual user input.
Input prompt Enter custom text prompting for custom user input. This text will be displayed above the input
field in the Manual input popup.
To see a preview of the Manual input popup, click on Test user input. The preview also allows to
test if the input string complies with the input validation rule (see parameters below).
Macro and user macro support depends on the scope of the script (see Scope in general script
configuration parameters).
Input type Select the manual input type:
String - single string;
Dropdown - value is selected from multiple dropdown options.
Dropdown options Enter unique values for the user input dropdown in a comma-delimited list.
To include an empty option in the dropdown, add an extra comma at the beginning, middle, or
end of the list.
This field is displayed only if ’Dropdown’ is selected as Input type.
Default input string Enter the default string for user input (or none).
This field will be validated against the regular expression provided in the Input validation rule
field.
The value entered here will be displayed by default in the Manual input popup.
This field is displayed only if ’String’ is selected as Input type.
Input validation rule Enter a regular expression to validate the user input string.
Global regular expressions are supported.
This field is displayed only if ’String’ is selected as Input type.
Enable confirmation Mark the checkbox to display a confirmation message before executing the script. This feature
might be especially useful with potentially dangerous operations (like a reboot script) or ones
that might take a long time.
Confirmation text Enter custom confirmation text for the confirmation popup enabled with the checkbox above (for
example, Remote system will be rebooted. Are you sure?). To see how the text will look, click
on Test confirmation next to the field.
Macros and custom user macros are supported.
Note: the macros will not be expanded when testing the confirmation message.
If both manual user input and a confirmation message are configured, they will be displayed in consecutive popup windows.
Manual user input allows to supply a custom parameter on each execution of the script. This removes the need to create multiple
similar user scripts that differ only by a single parameter.
For example, you may want to supply a different integer or a different URL address to the script during execution.
To enable manual user input:
• use the {MANUALINPUT} macro in the script (commands, script, script parameter) where required; or in the URL field of URL
scripts;
• in advanced script configuration, enable manual user input and configure input options.
With user input enabled, before script execution, a Manual input popup will appear to the user asking to supply a custom value.
The supplied value will replace {MANUALINPUT} in the script.
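For example, a manual host action script could ping a host a user-defined number of times; the command, prompt, and validation rule below are illustrative only, not part of any default configuration:
Commands: ping -c {MANUALINPUT} {HOST.CONN}
Input prompt: Enter the number of ping requests
Input validation rule: \b[1-9]\b
At execution time, the user is prompted for a value between 1 and 9, which then replaces {MANUALINPUT} in the command.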
Depending on the configuration, the user will be asked to enter a string value or to select a value from a dropdown.
Manual user input is available only for scripts where the scope is ’Manual host action’ or ’Manual event action’.
Script execution and result
Scripts run by Zabbix server are executed in the order described in the Command execution page (including exit code checking).
The script result will be displayed in a pop-up window that will appear after the script is run.
The return value of the script is standard output together with standard error.
The return value is limited to 16MB (including trailing whitespace that is truncated); database limits also apply. When data has to
pass through Zabbix proxy, it must be stored in the database, thus subjecting it to the same database limits.
Example script commands:
uname -v
/tmp/non_existing_script.sh
echo "This script was started by {USER.USERNAME}"
Script timeout
Zabbix agent
You may encounter a situation when a timeout occurs while executing a script.
See an example of a script running on Zabbix agent and the result window below:
sleep 5
df -h
To avoid such a situation, it is advised to optimize the script itself; if the timeout still needs to be raised, increase the Timeout parameter
both in Zabbix server configuration and in Zabbix agent configuration. This ensures that the server has enough time to receive the active check results from the agent.
Note that script execution on an active agent is supported since Zabbix agent 7.0.
If the Timeout parameter has been increased only in Zabbix agent configuration, the following error message may appear:
Get value from agent failed: ZBX_TCP_READ() timed out.
It means that the modification has been made in Zabbix agent configuration, but it is required to modify the Timeout parameter
in Zabbix server configuration as well.
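For example, if a script needs up to 10 seconds to complete, the timeout could be raised consistently on both sides (a sketch; the Timeout parameter accepts values from 1 to 30 seconds):
# zabbix_agentd.conf
Timeout=15
# zabbix_server.conf
Timeout=15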
Zabbix server/proxy
See an example of a script running on Zabbix server and the result window below:
sleep 11
df -h
It is also advised to optimize the script itself, rather than adjusting the TrapperTimeout parameter in Zabbix server configuration
to a correspondingly larger value (in this case, > 11).
8 Users
Overview
This menu features sections that are related to configuring users in Zabbix. This menu is available only to users of the Super admin
user type.
1 User groups
Overview
In the Users → User groups section user groups of the system are maintained.
Displayed data:
Column Description
Name Name of the user group. Clicking on the user group name opens the user group configuration
form.
# The number of users in the group. Clicking on Users will display the respective users filtered out
in the user list.
Members Usernames of individual users in the user group (with name and surname in parentheses).
Clicking on the username will open the user configuration form. Users from disabled groups are
displayed in red.
Frontend access Frontend access level is displayed:
System default - users are authenticated by Zabbix, LDAP or HTTP (depending on the
authentication method set globally);
Internal - users are authenticated by Zabbix; ignored if HTTP authentication is the global default;
LDAP - users are authenticated by LDAP; ignored if HTTP authentication is the global default;
Disabled - access to Zabbix frontend is forbidden for this group.
By clicking on the current level, you can change it.
Debug mode Debug mode status is displayed - Enabled or Disabled. By clicking on the status you can change
it.
Status User group status is displayed - Enabled or Disabled. By clicking on the status you can change it.
To configure a new user group, click on the Create user group button in the top right-hand corner.
To use these options, mark the checkboxes before the respective user groups, then click on the required button.
Using filter
You can use the filter to display only the user groups you are interested in. For better search performance, data is searched with
macros unresolved.
The Filter link is available above the list of user groups. If you click on it, a filter becomes available where you can filter user groups
by name and status.
2 User roles
Overview
In the Users → User roles section you may create user roles.
User roles allow to create fine-grained permissions based on the initially selected user type (User, Admin, Super admin).
Upon selecting a user type, all available permissions for this user type are granted (checked by default).
Permissions can only be revoked from the subset that is available for the user type; they cannot be extended beyond what is
available for the user type.
Checkboxes for unavailable permissions are grayed out; users will not be able to access the element even by entering a direct URL
to this element into the browser.
User roles can be assigned to system users. Each user may have only one role assigned.
By default, Zabbix is configured with four user roles, which have a pre-defined set of permissions:
• Guest role
• User role
• Admin role
• Super admin role
These are based on the main user types in Zabbix. For each role, the list of all users assigned to it is displayed; users included
in disabled groups are displayed in red. The Guest role is a User-type role whose permissions are limited to viewing some frontend sections.
Note:
The default Super admin role cannot be modified or deleted, because at least one Super admin user with unlimited privileges
must exist in Zabbix. Users of type Super admin can modify settings of their own role, but not the user type.
Configuration
To create a new role, click on the Create user role button at the top right corner. To update an existing role, click on the role name
to open the configuration form.
Available permissions are displayed. To revoke a certain permission, unmark its checkbox.
Available permissions along with the defaults for each pre-configured user role in Zabbix are described below.
Default permissions
Access to UI elements
The default access to menu sections depends on the user type. See the Permissions page for details.
Access to modules
<Module name> Allow/deny access to a specific module. Only enabled modules are shown in this section. It is not possible to
grant or restrict access to a module that is currently disabled.
Default for all pre-configured roles: Yes.
Default access to new modules Enable/disable access to modules that may be added in the future.
Access to API
Enabled Enable/disable access to API.
Default: Yes for the User, Admin, and Super admin roles; No for the Guest role.
API methods Select either Allow list to allow, or Deny list to deny the API methods specified in the search field. Note
that it is not possible to allow some API methods and deny others.
Default access to new actions Enable/disable access to new actions.
See also:
• Configuring a user
3 Users
Overview
In the Users → Users section the user accounts of the system are configured and maintained.
Displayed data:
Column Description
Username Username for logging into Zabbix. Clicking on the username opens the user configuration form.
Name First name of the user.
Last name Last name of the user.
User role User role is displayed.
Groups Groups that the user is a member of are listed. Clicking on the user group name opens the user
group configuration form. Disabled groups are displayed in red.
Is online? The on-line status of the user is displayed - Yes or No. The time of last user activity is displayed
in parentheses.
Login The login status of the user is displayed - Ok or Blocked. A user can become temporarily blocked
upon exceeding the number of unsuccessful login attempts set in the Administration → General
→ Other section (five by default). By clicking on Blocked you can unblock the user.
Frontend access Frontend access level is displayed - System default, Internal, LDAP, or Disabled, depending on
the one set for the whole user group.
API access API access status is displayed - Enabled or Disabled, depending on the one set for the user role.
Debug mode Debug mode status is displayed - Enabled or Disabled, depending on the one set for the whole
user group.
Status User status is displayed - Enabled or Disabled, depending on the one set for the whole user
group.
Provisioned The date when the user was last provisioned is displayed.
Used for users created by JIT provisioning from LDAP/SAML.
Info Information about errors is displayed.
A yellow warning is displayed for users without user groups.
A red warning is displayed for users without roles, and for users without roles and user groups.
To configure a new user, click on the Create user button in the top right-hand corner.
• Provision now - update user information from LDAP (this option is only enabled if an LDAP user is selected)
• Reset TOTP secret - reset user TOTP secrets for all TOTP methods and delete the user session (this option is only enabled if
MFA is enabled; for users without TOTP secrets, their session will not be deleted)
• Unblock - re-enable system access to blocked users
• Delete - delete the users
To use these options, mark the check-boxes before the respective users, then click on the required button.
Using filter
You can use the filter to display only the users you are interested in. For better search performance, data is searched with macros
unresolved.
The Filter link is available above the list of users. If you click on it, a filter becomes available where you can filter users by username,
name, last name, user role and user group.
4 API tokens
Overview
In the Users → API tokens section it is possible to create and manage tokens used for authentication in the Zabbix API.
You may filter API tokens by name, users to whom the tokens are assigned, expiry date, users that created tokens, or status
(enabled/disabled). Click on the token status in the list to quickly enable/disable a token. You may also mass enable/disable tokens
by selecting them in the list and then clicking on the Enable/Disable buttons below the list.
To create a new token, press the Create API token button at the top right corner, then fill out the required fields in the token configuration
screen:
Parameter Description
Press Add to create a token. On the next screen, copy the Auth token value and save it in a safe place before closing the page, then
press Close. The token will appear in the list.
Warning:
Auth token value cannot be viewed again later. It is only available immediately after creating a token. If you lose a saved
token you will have to regenerate it and doing so will create a new authorization string.
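As a hedged usage example (assuming a hypothetical frontend URL https://zabbix.example.com/zabbix), the saved token can be passed to the Zabbix API in the Authorization header:
curl -s -X POST https://zabbix.example.com/zabbix/api_jsonrpc.php \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer <Auth token value>' \
  -d '{"jsonrpc":"2.0","method":"host.get","params":{"output":["host"]},"id":1}'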
Click on the token name to edit the name, description, expiry date settings, or token status. Note that it is not possible to change
to which user the token is assigned. Press the Update button to save changes. If a token has been lost or exposed, you may press the
Regenerate button to generate a new token value. A confirmation dialog box will appear, asking you to confirm this operation, since
after proceeding the previously generated token will become invalid.
Users without access to the Administration menu section can see and modify details of tokens assigned to them in the User profile
→ API tokens section only if Manage API tokens is allowed in their user role permissions.
5 Authentication
Overview
The Users → Authentication section allows to specify the user authentication method for Zabbix and internal password requirements.
The available authentication methods are internal, HTTP, LDAP, SAML, and MFA authentication.
Default authentication
It is possible to change the default authentication method to LDAP system-wide. To do so, navigate to the LDAP tab and configure
LDAP parameters, then return to the Authentication tab and switch the Default authentication selector to LDAP.
Note that the authentication method can be fine-tuned on the user group level. Even if LDAP authentication is set globally, some
user groups can still be authenticated by Zabbix. These groups must have frontend access set to Internal.
It is also possible to enable LDAP authentication only for specific user groups, if internal authentication is used globally. In this case
LDAP authentication details can be specified and used for specific user groups whose frontend access must then be set to LDAP. If
a user is included into at least one user group with LDAP authentication, this user will not be able to use the internal authentication
method.
HTTP, SAML 2.0, and MFA authentication methods can be used in addition to the default authentication method.
Zabbix supports just-in-time (JIT) provisioning, which allows creating user accounts in Zabbix the first time an external user
authenticates, and updating these user accounts later on. JIT provisioning is supported for LDAP and SAML.
See also:
• HTTP authentication
• LDAP authentication
• SAML authentication
• MFA authentication
Configuration
The Authentication tab allows to set the default authentication method, specify a group for deprovisioned users and set password
complexity requirements for Zabbix users.
Configuration parameters:
Parameter Description
Default authentication Select the default authentication method for Zabbix - Internal or LDAP.
Deprovisioned users group Specify a user group for deprovisioned users. This setting is required only for JIT provisioning,
regarding users that were created in Zabbix from LDAP or SAML systems, but no longer need to be provisioned.
A disabled user group must be specified.
Minimum password length By default, the minimum password length is set to 8. Supported range: 1-70. Note that
passwords longer than 72 characters will be truncated.
Password must contain Mark one or several checkboxes to require usage of specified characters in a password:
- an uppercase and a lowercase Latin letter
- a digit
- a special character
Hover over the question mark to see a hint with the list of characters for each option.
Avoid easy-to-guess passwords If marked, a password will be checked against the following requirements:
- must not contain user’s name, surname, or username
- must not be one of the common or context-specific passwords.
The list of common and context-specific passwords is generated automatically from the NCSC ”Top 100k passwords” list, the
SecLists ”Top 1M passwords” list, and the list of Zabbix context-specific passwords. Internal users will not be allowed to set
passwords included in this list, as such passwords are considered weak due to their common use.
Changes in password complexity requirements will not affect existing user passwords, but if an existing user chooses to change a
password, the new password will have to meet current requirements. A hint with the list of requirements will be displayed next to
the Password field in the user profile and in the user configuration form accessible from the Users → Users menu.
1 HTTP
Overview
HTTP or web server-based authentication (for example: Basic Authentication, NTLM/Kerberos) can be used to check user names
and passwords. Note that a user must exist in Zabbix as well, however its Zabbix password will not be used.
Attention:
Be careful! Make sure that web server authentication is configured and works properly before switching it on.
HTTP authentication can be disabled in the frontend configuration file by setting $ALLOW_HTTP_AUTH=false in zabbix.conf.php.
In this case the tab with HTTP authentication options will not be displayed in the frontend. Note that reinstalling the frontend
(running setup.php) will remove this parameter.
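For example, the following line in zabbix.conf.php disables it (using the parameter named in the paragraph above):
// Hide the HTTP authentication options in the frontend.
$ALLOW_HTTP_AUTH = false;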
Configuration
Configuration parameters:
Parameter Description
Enable HTTP Mark the checkbox to enable HTTP authentication. Hovering the mouse over will bring up a
authentication hint box warning that in the case of web server authentication, all users (even with frontend
access set to LDAP/Internal) will be authenticated by the web server, not by Zabbix.
Default login form Specify whether to direct non-authenticated users to:
Zabbix login form - standard Zabbix login page.
HTTP login form - HTTP login page.
It is recommended to enable web-server based authentication for the index_http.php page only. If Default login form is set to ’HTTP login page’, the user will be logged in automatically if the web server authentication module sets a valid user login in the $_SERVER variable.
Supported $_SERVER keys are PHP_AUTH_USER, REMOTE_USER, AUTH_USER.
Remove domain name A comma-delimited list of domain names that should be removed from the username.
E.g. comp,any - if username is ’Admin@any’, ’comp\Admin’, user will be logged in as ’Admin’; if
username is ’notacompany\Admin’, login will be denied.
Case sensitive login Unmark the checkbox to disable case-sensitive login (enabled by default) for usernames.
E.g. with case-sensitive login disabled, it is possible to log in as the ’ADMIN’ user even if the Zabbix user is ’Admin’.
Note that with case-sensitive login disabled the login will be denied if multiple users exist in the Zabbix database with similar usernames (e.g. Admin, admin).
Note:
For internal users who are unable to log in using HTTP credentials (with HTTP login form set as default), leading to the 401 error, you may want to add an ErrorDocument 401 /index.php?form=default line to the basic authentication directives, which will redirect them to the regular Zabbix login form.
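For illustration, a hedged sketch of Apache basic authentication directives including this line (the directory path and password file location are assumptions):
<Directory "/usr/share/zabbix">
    AuthType Basic
    AuthName "Zabbix HTTP authentication"
    AuthUserFile /etc/httpd/.htpasswd
    Require valid-user
    # Send internal users who fail HTTP authentication back to the regular Zabbix login form.
    ErrorDocument 401 /index.php?form=default
</Directory>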
2 LDAP
Overview
External LDAP authentication can be used to check user names and passwords.
Zabbix LDAP authentication works at least with Microsoft Active Directory and OpenLDAP.
If only LDAP sign-in is configured, then the user must also exist in Zabbix, however, its Zabbix password will not be used. If
authentication is successful, then Zabbix will match a local username with the username attribute returned by LDAP.
User provisioning
It is possible to configure JIT (just-in-time) user provisioning for LDAP users. In this case, it is not required that a user already
exists in Zabbix. The user account can be created when the user logs into Zabbix for the first time.
When an LDAP user enters their LDAP login and password, Zabbix checks the default LDAP server if this user exists. If the user
exists and does not have an account in Zabbix yet, a new user is created in Zabbix and the user is able to log in.
Attention:
If JIT provisioning is enabled, a user group for deprovisioned users must be specified in the Authentication tab.
JIT provisioning also allows to update provisioned user accounts based on changes in LDAP. For example, if a user is moved from
one LDAP group to another, the user will also be moved from one group to another in Zabbix; if a user is removed from an LDAP
group, the user will also be removed from the group in Zabbix and, if not belonging to any other group, added to the user group
for deprovisioned users. Note that provisioned user accounts are updated based on the configured provisioning period or when
the user logs into Zabbix.
LDAP JIT provisioning is available only when LDAP is configured to use ”anonymous” or ”special user” binding. For direct user binding, provisioning is performed only at user login, because the password of the user who is logging in is used for this type of binding.
Multiple servers
Several LDAP servers can be defined, if necessary. For example, a different server can be used to authenticate a different user
group. Once LDAP servers are configured, in user group configuration it becomes possible to select the required LDAP server for
the respective user group.
If a user is in multiple user groups and multiple LDAP servers, the first server in the list of LDAP servers sorted by name in ascending
order will be used for authentication.
Configuration
Configuration parameters:
Parameter Description
LDAP server configuration parameters:
Parameter Description
Note:
To configure an LDAP server for direct user binding, append an attribute uid=%{user} to the Base DN parameter (for example, uid=%{user},dc=example,dc=com) and leave the Bind DN and Bind password parameters empty. When authenticating, the placeholder %{user} will be replaced by the username entered during login.
The following fields are specific to ”groupOfNames” as the Group configuration method:
Parameter Description
Warning:
In case of trouble with certificates, to make a secure LDAP connection (ldaps) work you may need to add a TLS_REQCERT
allow line to the /etc/openldap/ldap.conf configuration file. It may decrease the security of connection to the LDAP catalog.
Note:
It is recommended to create a separate LDAP account (Bind DN) with minimal privileges in the LDAP server to perform binding and searching, instead of using real user accounts (used for logging into the Zabbix frontend).
Such an approach provides more security and does not require changing the Bind password when the user changes their own password in the LDAP server.
In the table above it’s the ldap_search account name.
Testing access
Parameter Description
Login LDAP user name to test (prefilled with the current user name from Zabbix frontend). This user
name must exist in the LDAP server.
Zabbix will not activate LDAP authentication if it is unable to authenticate the test user.
User password LDAP user password to test.
3 SAML
Overview
If only SAML sign-in is configured, then the user must also exist in Zabbix, however, its Zabbix password will not be used. If
authentication is successful, then Zabbix will match a local username with the username attribute returned by SAML.
User provisioning
It is possible to configure JIT (just-in-time) user provisioning for SAML users. In this case, it is not required that a user already
exists in Zabbix. The user account can be created when the user logs into Zabbix for the first time.
Attention:
If JIT provisioning is enabled, a user group for deprovisioned users must be specified in the Authentication tab.
On top of JIT provisioning, it is also possible to enable and configure SCIM (System for Cross-domain Identity Management) provisioning - continuous user account management for those users that have been created by user provisioning. SCIM provisioning requires a Zabbix API token (with Super admin permissions) for authentication into Zabbix.
For example, if a user is moved from one SAML group to another, the user will also be moved from one group to another in Zabbix;
if a user is removed from a SAML group, the user will also be removed from the group in Zabbix and, if not belonging to any other
group, added to the user group for deprovisioned users.
If SCIM is enabled and configured, a SAML user will be provisioned at the moment the user logs into Zabbix and continuously updated
based on changes in SAML. Already existing SAML users will not be provisioned, and only provisioned users will be updated. Note
that only valid media will be added to a user when the user is provisioned or updated.
If SCIM is not enabled, a SAML user will be provisioned (and later updated) at the moment the user logs into Zabbix.
Note:
If SAML authentication is enabled, users will be able to choose between logging in locally or via SAML single sign-on. If JIT
provisioning is used, then only single sign-on is possible.
In order to work with Zabbix, a SAML identity provider (onelogin.com, auth0.com, okta.com, etc.) needs to be configured in the
following way:
Attention:
It is required to install php-openssl if you want to use SAML authentication in the frontend.
1. Private key and certificate should be stored in the ui/conf/certs/ folder, unless custom paths are provided in zabbix.conf.php.
2. All of the most important settings can be configured in the Zabbix frontend. However, it is possible to specify additional settings
in the configuration file.
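A hedged sketch of such custom paths in zabbix.conf.php (the $SSO key names are assumptions based on a typical setup; verify them against the configuration file comments of your version):
// Custom locations of the SP private key, SP certificate and IdP certificate.
$SSO['SP_KEY']   = '/etc/zabbix/web/sp.key';
$SSO['SP_CERT']  = '/etc/zabbix/web/sp.crt';
$SSO['IDP_CERT'] = '/etc/zabbix/web/idp.crt';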
Configuration parameters, available in the Zabbix frontend:
Parameter Description
SLO service URL The URL users will be redirected to when logging out. If left empty, the SLO service will not be
used.
Username attribute SAML attribute to be used as a username when logging into Zabbix.
The list of supported values is determined by the identity provider.
Examples:
uid
userprincipalname
samaccountname
username
userusername
urn:oid:0.9.2342.19200300.100.1.1
urn:oid:1.3.6.1.4.1.5923.1.1.1.13
urn:oid:0.9.2342.19200300.100.1.44
SP entity ID The unique service provider identifier (if not matching, the operation will be rejected).
It is possible to specify a URL or any string of data.
SP name ID format Defines which name identifier format should be used.
Examples:
urn:oasis:names:tc:SAML:2.0:nameid-format:persistent
urn:oasis:names:tc:SAML:2.0:nameid-format:transient
urn:oasis:names:tc:SAML:2.0:nameid-format:kerberos
urn:oasis:names:tc:SAML:2.0:nameid-format:entity
Sign Mark the checkboxes to select entities for which SAML signature should be enabled:
Messages
Assertions
AuthN requests
Logout requests
Logout responses
Encrypt Mark the checkboxes to select entities for which SAML encryption should be enabled:
Name ID
Assertions
Case-sensitive login Mark the checkbox to enable case-sensitive login (disabled by default) for usernames.
E.g. with case-sensitive login disabled, it is possible to log in as the ’ADMIN’ user even if the Zabbix user is ’Admin’.
Note that with case-sensitive login disabled the login will be denied if multiple users exist in the Zabbix database with similar usernames (e.g. Admin, admin).
Configure JIT Mark this checkbox to show options related to JIT user provisioning.
provisioning
Group name attribute Specify the group name attribute for JIT user provisioning.
User name attribute Specify the user name attribute for JIT user provisioning.
User last name attribute Specify the user last name attribute for JIT user provisioning.
User group mapping Map a SAML user group pattern to Zabbix user group and user role.
This is required to determine what user group/role the provisioned user will get in Zabbix.
Click on Add to add a mapping.
The SAML group pattern field supports wildcards. The group name must match an existing group.
If a SAML user matches several Zabbix user groups, the user becomes a member of all of them.
If a user matches several Zabbix user roles, the user will get the highest permission level among
them.
Media type mapping Map the user’s SAML media attributes (e.g. email) to Zabbix user media for sending notifications.
Enable SCIM Mark this checkbox to enable SCIM 2.0 provisioning.
provisioning
See examples of configuring SAML identity providers for sign-in and user provisioning into Zabbix with:
• Microsoft Azure AD
• Okta
• Onelogin
For SCIM provisioning, specify on the identity provider side the path to the Zabbix frontend with api_scim.php appended to it, i.e.:
https://<your-zabbix-url>/zabbix/api_scim.php
User attributes that are used in Zabbix (username, user name, user last name and media attributes) need to be added as custom attributes and, if necessary, the external namespace should be the same as the user schema: urn:ietf:params:scim:schemas:core:2.0:User.
Advanced settings
Additional SAML parameters can be configured in the Zabbix frontend configuration file (zabbix.conf.php):
Note:
Zabbix uses OneLogin’s SAML PHP Toolkit library (version 3.4.1). The structure of $SSO[’SETTINGS’] section should be
similar to the structure used by the library. For the description of configuration options, see official library documentation.
• strict
• baseurl
• compress
• contactPerson
• organization
• sp (only options specified in this list)
– attributeConsumingService
– x509certNew
• idp (only options specified in this list)
– singleLogoutService (only one option)
∗ responseUrl
– certFingerprint
– certFingerprintAlgorithm
– x509certMulti
• security (only options specified in this list)
– signMetadata
– wantNameId
– requestedAuthnContext
– requestedAuthnContextComparison
– wantXMLValidation
– relaxDestinationValidation
– destinationStrictlyMatches
– rejectUnsolicitedResponsesWithInResponseTo
– signatureAlgorithm
– digestAlgorithm
– lowercaseUrlencoding
All other options will be taken from the database and cannot be overridden. The debug option will be ignored.
In addition, if Zabbix UI is behind a proxy or a load balancer, the custom use_proxy_headers option can be used.
If a load balancer is used to connect to the Zabbix instance, where the load balancer uses TLS/SSL and Zabbix does not, the ’baseurl’, ’strict’ and ’use_proxy_headers’ parameters must be indicated as follows:
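A minimal sketch, assuming the frontend is reachable at https://zabbix.example.com/zabbix/ (the URL is illustrative):
$SSO['SETTINGS'] = [
    'strict' => false,
    'baseurl' => 'https://zabbix.example.com/zabbix/',
    // Trust the X-Forwarded-* headers set by the load balancer when building URLs.
    'use_proxy_headers' => true,
];
Options from the security list above, such as the signature and digest algorithms, can also be overridden in the same $SSO['SETTINGS'] array, for example: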
$SSO['SETTINGS'] = [
'security' => [
'signatureAlgorithm' => 'http://www.w3.org/2001/04/xmldsig-more#rsa-sha384',
'digestAlgorithm' => 'http://www.w3.org/2001/04/xmldsig-more#sha384',
// ...
],
// ...
];
4 MFA
Overview
Multi-factor authentication (MFA) can be used to sign in to Zabbix, providing an additional layer of security beyond just a username
and password.
With MFA, the user must exist in Zabbix, must provide Zabbix credentials when logging in, and must also prove their identity by
other means, usually, a code generated by an authenticator app on the user’s phone.
Multiple MFA methods are available, allowing users to choose the option that best fits their security requirements and preferences.
These methods are Time-Based One-Time Password (TOTP) and Duo Universal Prompt.
Configuration
Configuration parameters:
Parameter Description
Method configuration
Parameter Description
API hostname Enter the API hostname provided by the Duo authentication service.
This parameter is available if MFA method type is set to ”Duo Universal Prompt”.
Client ID Enter the client ID provided by the Duo authentication service.
This parameter is available if MFA method type is set to ”Duo Universal Prompt”.
Client secret Enter the client secret provided by the Duo authentication service.
This parameter is available if MFA method type is set to ”Duo Universal Prompt”.
Configuration examples
This section provides examples of configuring MFA using Time-Based One-Time Password (TOTP) and Duo Universal Prompt.
TOTP
For TOTP, users must verify their identity using an authenticator app (for example, the Google Authenticator app).
1. Go to the MFA settings in Zabbix under Users → Authentication and enable multi-factor authentication.
• Type: TOTP
• Name: Zabbix TOTP
• Hash function: SHA-1
• Code length: 6
3. Go to Users → User groups and create a new user group with the following configuration:
4. Log out of Zabbix and log back in using your credentials. Upon successful login, you will be prompted to enroll in MFA, displaying
a QR code and a secret key.
5. Scan the QR code or enter the secret key into the Google Authenticator app. The app will generate a verification code which
you should enter to complete the login process.
6. For subsequent logins, retrieve the verification code from the Google Authenticator app and enter it during login.
Duo Universal Prompt
For Duo Universal Prompt, users must verify their identity using the Duo Mobile authenticator app.
Attention:
The Duo Universal Prompt MFA method requires the installation of the php-curl extension, access to Zabbix over HTTPS,
and permission for outbound connections to Duo servers. Moreover, if you have enabled Content Security Policy (CSP) on
the web server, make sure to add ”duo.com” to the CSP directive in your virtual host’s configuration file.
2. Open the Duo Admin Panel, go to Applications → Protect an Application, search for the Web SDK application, and click Protect.
3. Note the credentials (Client ID, Client secret, API hostname) required for configuring the MFA method in Zabbix.
4. Go to MFA settings in Zabbix under Users → Authentication and enable multi-factor authentication.
• Client secret: (use Client secret from Duo)
6. Go to Users → User groups and create a new user group with the following configuration:
7. Log out of Zabbix and log back in using your credentials. Upon successful login, you will be prompted to enroll in MFA and
redirected to Duo. Complete the Duo setup and verify your user with your phone’s Duo app to log in.
8. For subsequent logins, use the appropriate MFA method provided by the Duo app (such as retrieving a verification code,
responding to push notifications, or using hard keys), and enter the required information during login.
9 Administration
Overview
The Administration menu is for administrative functions of Zabbix. This menu is available to Super admin user type users only.
1 General
Overview
The Administration → General section contains a number of subsections for setting frontend-related defaults and customizing
Zabbix.
The list of available subsections appears upon pressing on General in the Administration menu section. It is also possible to switch
between subsections by using the title dropdown in the top left corner.
1 GUI
Configuration parameters:
Parameter Description
Default language Default language for users who have not specified a language in their profiles and guest users.
For more information, see Installation of additional frontend languages.
Default time zone Default time zone for users who have not specified a time zone in their profiles and guest users.
Default theme Default theme for users who have not specified a theme in their profiles and guest users.
Limit for search and Maximum amount of elements (rows) that will be displayed in a web-interface list, for example,
filter results in Data collection → Hosts.
Note: If set to, for example, ’50’, only the first 50 elements will be displayed in all affected
frontend lists. If some list contains more than fifty elements, the indication of that will be the ’+’
sign in ”Displaying 1 to 50 of 50+ found”. Also, if filtering is used and still there are more than
50 matches, only the first 50 will be displayed.
Max number of columns and rows in overview tables Maximum number of columns and rows to display in Data overview and Trigger overview dashboard widgets. The same limit applies to both columns and rows. If more rows and/or columns than shown exist, the system will display a warning at the bottom of the table: ”Not all results are displayed. Please provide more specific search criteria.”
Max count of elements to show inside table cell For entries that are displayed in a single table cell, no more than configured here will be shown.
Show warning if Zabbix This parameter enables a warning message to be displayed in a browser window if the Zabbix
server is down server cannot be reached (possibly down). The message remains visible even if the user scrolls
down the page. When hovered over, the message is temporarily hidden to reveal the contents
underneath it.
Working time This system-wide parameter defines working hours. In graphs, working time is displayed as a
white background and non-working time is displayed as gray.
See Time period specification page for description of the time format.
User macros are supported.
Show technical errors If checked, all registered users will be able to see technical errors (PHP/SQL). If unchecked, the
information is only available to Zabbix Super Admins and users belonging to the user groups with
enabled debug mode.
Max history display Maximum time period for which to display historical data in Monitoring subsections: Latest data,
period Web, and in the Data overview dashboard widget.
Allowed range: 24 hours (default) - 1 week. Time suffixes, e.g. 1w (one week), 36h (36 hours),
are supported.
Time filter default period Time period to be used in graphs and dashboards by default. Allowed range: 1 minute - 10 years
(default: 1 hour).
Time suffixes, e.g. 10m (ten minutes), 5w (five weeks), are supported.
Note: when a user changes the time period while viewing a graph, this time period is stored as
user preference, replacing the global default or a previous user selection.
Max period for time Maximum available time period for graphs and dashboards. Users will not be able to visualize
selector older data. Allowed range: 1 year - 10 years (default: 2 years).
Time suffixes, e.g. 1y (one year), 365w (365 weeks), are supported.
2 Autoregistration
In this section, you can configure the encryption level for active agent autoregistration.
Configuration parameters:
Parameter Description
3 Timeouts
In this section, it is possible to set global item-type timeouts and network timeouts. All fields in this form are mandatory.
Parameter Description
Timeouts for item types How many seconds to wait for a response from a monitored item (based on its type).
Allowed range: 1 - 600s (default: 3s; default for Browser item type: 60s).
Time suffixes, e.g. 30s, 1m, and user macros are supported.
Note that if a proxy is used and has timeouts configured, the timeout settings of the proxy will override the global ones. If there are timeouts set for specific items, they will override the proxy and global settings.
Network timeouts for UI
Communication How many seconds to wait before closing an idle socket (if a connection to Zabbix server has been established earlier, but frontend cannot finish data reading/sending operation during this time, the connection will be dropped). Allowed range: 1 - 300s (default: 3s).
Connection How many seconds to wait before stopping an attempt to connect to Zabbix server. Allowed range: 1 - 30s (default: 3s).
Media type test How many seconds to wait for a response when testing a media type. Allowed range: 1 - 300s (default: 65s).
Script execution How many seconds to wait for a response when executing a script. Allowed range: 1 - 300s (default: 60s).
Item test How many seconds to wait for returned data when testing an item. Allowed range: 1 - 600s (default: 60s).
Scheduled report test How many seconds to wait for returned data when testing a scheduled report. Allowed range: 1 - 300s (default: 60s).
4 Images
The Images section displays all the images available in Zabbix. Images are stored in the database.
The Type dropdown allows you to switch between icon and background images:
Adding image
You can add your own image by clicking on the Create icon or Create background button in the top right corner.
Image attributes:
Parameter Description
Note:
Maximum size of the upload file is limited by the value of ZBX_MAX_IMAGE_SIZE that is 1024x1024 bytes or 1 MB.
The upload of an image may fail if the image size is close to 1 MB and the max_allowed_packet MySQL configu-
ration parameter is at a default of 1MB. In this case, increase the max_allowed_packet parameter.
5 Icon mapping
This section allows creating the mapping of certain hosts with certain icons. Host inventory field information is used to create the
mapping.
The mappings can then be used in network map configuration to assign appropriate icons to matching hosts automatically.
To create a new icon map, click on Create icon map in the top right corner.
Configuration parameters:
Parameter Description
6 Regular expressions
This section allows creating custom regular expressions that can be used in several places in the frontend. See Regular expressions
section for details.
7 Trigger displaying options
This section allows customizing how trigger status is displayed in the frontend, as well as trigger severity names and colors.
Parameter Description
Use custom event Checking this parameter turns on the customization of colors for acknowledged/unacknowledged
status colors problems.
Unacknowledged PROBLEM events,
Acknowledged PROBLEM events,
Unacknowledged RESOLVED events,
Acknowledged RESOLVED events Enter new color code or click on the color to select a new one from the provided palette.
If blinking checkbox is marked, triggers will blink for some time upon the status change to become more visible.
Display OK triggers for Time period for displaying OK triggers. Allowed range: 0 - 24 hours. Time suffixes, e.g. 5m, 2h, 1d, are supported.
On status change triggers blink for Length of trigger blinking. Allowed range: 0 - 24 hours. Time suffixes, e.g. 5m, 2h, 1d, are supported.
Not classified,
Information,
Warning,
Average,
High,
Disaster Custom severity names and/or colors to display instead of system default.
Enter new color code or click on the color to select a new one from the provided palette.
Note that custom severity names entered here will be used in all locales. If you need to translate them to other languages for certain users, see the Customizing trigger severities page.
8 Geographical maps
This section allows selecting geographical map tile service provider and configuring service provider settings for the Geomap
dashboard widget. To provide visualization using the geographical maps, Zabbix uses open-source JavaScript interactive maps
library Leaflet. Please note that Zabbix has no control over the quality of images provided by third-party tile providers, including
the predefined tile providers.
Parameter Description
Tile provider Select one of the available tile service providers or select Other to add another tile provider or
self-hosted tiles (see Using a custom tile service provider).
Tile URL The URL template (up to 2048 characters) for loading and displaying the tile layer on
geographical maps.
This field is editable only if Tile provider is set to Other.
Example: https://{s}.example.com/{z}/{x}/{y}{r}.png
Attribution text Tile provider attribution text to be displayed in a small text box on the map. This field is visible
only if Tile provider is set to Other.
Max zoom level Maximum zoom level of the map. This field is editable only if Tile provider is set to Other.
The Geomap widget is capable of loading raster tile images from a custom self-hosted or third-party tile provider service. To use a custom third-party tile provider service or a self-hosted tile folder or server, select Other in the Tile provider field and specify the custom URL in the Tile URL field using proper placeholders.
9 Modules
Click on Scan directory to register/unregister any custom modules. Registered modules will appear in the list; unregistered modules
will be removed from the list.
Click on the module status in the list to enable/disable a module. You may also mass enable/disable modules by selecting them in
the list and then clicking on the Enable/Disable buttons below the list.
Click on the module name in the list to view its details in a pop-up window.
Module status can also be updated in the module details pop-up window; to do this, mark/unmark the Enabled checkbox and then
click on Update.
10 Connectors
This section allows to configure connectors for Zabbix data streaming to external systems over HTTP.
You may filter connectors by name or status (enabled/disabled). Click on the connector status in the list to enable/disable a
connector. You may also mass enable/disable connectors by selecting them in the list and then clicking on the Enable/Disable
buttons below the list.
11 Other
Parameter Description
Authorization
Parameter Description
Login attempts Number of unsuccessful login attempts before the possibility to log in gets blocked.
Login blocking interval Period of time for which logging in will be prohibited when Login attempts limit is exceeded.
Allowed range: 0 - 3600 seconds. Time suffixes, e.g. 90s, 5m, 1h, are supported.
Storage of secrets
The Vault provider parameter allows selecting secret management software for storing user macro values. Supported options are HashiCorp Vault (default) and CyberArk Vault.
Security
Parameter Description
Validate URI schemes Unmark this checkbox to disable URI scheme validation (enabled by default).
If marked, you can specify a comma-separated list of allowed URI schemes (default:
http,https,ftp,file,mailto,tel,ssh). Applies to all fields in the frontend where URIs are used (for
example, map element URLs).
Use X-Frame-Options Unmark this checkbox to disable the HTTP X-Frame-options header (not recommended).
HTTP header If marked, you can specify the value of the HTTP X-Frame-options header. Supported values:
SAMEORIGIN (default) or ’self’ (must be single-quoted) - the page can only be displayed in a
frame on the same origin as the page itself;
DENY or ’none’ (must be single-quoted) - the page cannot be displayed in a frame, regardless
of the site attempting to do so;
a string of space-separated hostnames; adding ’self’ (must be single-quoted) to the list
allows the page to be displayed in a frame on the same origin as the page itself.
Note that using ’self’ or ’none’ without single quotes will result in them being regarded as
hostnames.
Use iframe sandboxing Unmark this checkbox to disable putting the retrieved URL content into sandbox (not
recommended).
If marked, you can specify the iframe sandboxing exceptions; unspecified restrictions will still be
applied. If this field is empty, all sandbox attribute restrictions apply.
For more information, see the description of the sandbox attribute.
2 Audit log
Overview
Parameter Description
3 Housekeeping
Overview
The housekeeper is a periodical process, executed by Zabbix server. The process removes outdated information and information deleted by the user.
In this section housekeeping tasks can be enabled or disabled on a per-task basis separately for: events and alerts/IT services/user
sessions/history/trends. Audit housekeeping settings are available in a separate menu section.
If housekeeping is enabled, it is possible to set for how many days data records will be kept before being removed by the house-
keeper.
Also, an event will only be deleted by the housekeeper if it is not associated with a problem in any way. This means that if an
event is either a problem or recovery event, it will not be deleted until the related problem record is removed. The housekeeper
will delete problems first and events after, to avoid potential problems with stale events or problem records.
For history and trends an additional option is available: Override item history period and Override item trend period. This option
allows setting globally for how many days item history/trends will be stored (1 hour to 25 years; or ”0”), overriding the respective
Store up to values set for individual items in the item configuration form. Note that the storage period will not be overridden for
items that have configuration option Do not store enabled.
It is possible to override the history/trend storage period even if internal housekeeping is disabled. Thus, when using an external
housekeeper, the history storage period could be set using the history Data storage period field.
Attention:
If using TimescaleDB, in order to take full advantage of TimescaleDB automatic partitioning of history and trends tables,
Override item history period and Override item trend period options must be enabled as well as Enable internal house-
keeping option for history and trends. Otherwise, data kept in these tables will still be stored in partitions, however, the
housekeeper will not drop outdated partitions, and warnings about incorrect configuration will be displayed. When drop-
ping of outdated partitions is enabled, Zabbix server and frontend will no longer keep track of deleted items, and history
for deleted items will be cleared when an outdated partition is deleted.
Time suffixes are supported in the period fields, e.g., 1d (one day), 1w (one week). The minimum is 1 day (1 hour for history), the
maximum - 25 years.
4 Proxies
Overview
In the Administration → Proxies section proxies for distributed monitoring can be configured in the Zabbix frontend.
Proxies
Displayed data:
Column Description
Item count The number of enabled items on enabled hosts assigned to the proxy is displayed.
Required vps Required proxy performance is displayed (the number of values that need to be collected per
second).
Hosts Count of enabled hosts assigned to the proxy is displayed and hosts monitored by the proxy are
listed.
Clicking on the host name opens the host configuration form.
To configure a new proxy, click on the Create proxy button in the top right-hand corner.
To use these options, mark the checkboxes before the respective proxies, then click on the required button.
Using filter
You can use the filter to display only the proxies you are interested in. For better search performance, data is searched with macros
unresolved.
The Filter link is available above the list of proxies. If you click on it, a filter becomes available where you can filter proxies by name,
mode and version. Note that the filter option Outdated displays both outdated (partially supported) and unsupported proxies.
5 Proxy groups
Overview
Proxy groups are used in proxy load balancing with automated distribution of hosts between proxies and high availability between
proxies.
Proxy groups
Displayed data:
Column Description
Name Name of the proxy group. Clicking on the proxy group name opens the proxy group configuration
form.
To configure a new proxy group, click on the Create proxy group button in the top right-hand corner.
To use these options, mark the checkboxes before the respective proxy groups, then click on the required button.
Using filter
You can use the filter to display only the proxy groups you are interested in. For better search performance, data is searched with
macros unresolved.
The Filter link is available above the list of proxy groups. If you click on it, a filter becomes available where you can filter proxy
groups by name and status.
6 Macros
Overview
This section allows to define system-wide user macros as name-value pairs. Note that macro values can be kept as plain text,
secret text or Vault secret. Adding a description is also supported.
7 Queue
Overview
In the Administration → Queue section items that are waiting to be updated are displayed.
Ideally, when you open this section it should all be ”green” meaning no items in the queue. If all items are updated without delay,
there are none waiting. However, due to lacking server performance, connection problems or problems with agents, some items
may get delayed and the information is displayed in this section. For more details, see the Queue section.
Note:
Queue is available only if Zabbix server is running.
The list of available pages appears upon pressing on Queue in the Administration menu section. It is also possible to switch between
pages by using a title dropdown in the top left corner.
Overview by item type
In this screen it is easy to locate if the problem is related to one or several item types.
Each line contains an item type. Each column shows the number of waiting items - waiting for 5-10 seconds/10-30 seconds/30-60
seconds/1-5 minutes/5-10 minutes or over 10 minutes respectively.
Overview by proxy
In this screen it is easy to locate if the problem is related to one of the proxies or the server.
Each line contains a proxy, with the server last in the list. Each column shows the number of waiting items - waiting for 5-10
seconds/10-30 seconds/30-60 seconds/1-5 minutes/5-10 minutes or over 10 minutes respectively.
Displayed data:
Column Description
Scheduled check The time when the check was due is displayed.
Delayed by The length of the delay is displayed.
Host Host of the item is displayed.
Name Name of the waiting item is displayed.
Proxy The proxy name is displayed, if the host is monitored by proxy.
You may encounter a situation when no data is displayed and the following error message appears:
3 User settings
Overview
Depending on user role permissions, the User settings section may contain the following pages:
The list of available pages appears upon clicking the user icon near the bottom of the Zabbix menu (not available for the
guest user). It is also possible to switch between pages by using the title dropdown in the top left corner.
Third-level menu. Title dropdown.
User profile
The User profile section provides options to set a custom interface language, color theme, number of rows displayed in the lists,
etc. The changes made here will be applied to the current user only.
Parameter Description
Password Click on the Change password button to open three fields: Old password, New password, New
password (once again).
On a successful password change, the user will be logged out of all active sessions.
Note that the password can only be changed for users using Zabbix internal authentication.
Language Select the interface language of your choice or select System default to use default system
settings.
For more information, see Installation of additional frontend languages.
Time zone Select the time zone to override the global time zone on the user level or select System default
to use global time zone settings.
Theme Select a color theme specifically for your profile:
System default - use default system settings;
Blue - standard blue theme;
Dark - alternative dark theme;
High-contrast light - light theme with high contrast;
High-contrast dark - dark theme with high contrast.
Auto-login Mark this checkbox to make Zabbix remember you and log you in automatically for 30 days.
Browser cookies are used for this.
Auto-logout With this checkbox marked, you will be logged out automatically after the set amount of seconds
(minimum 90 seconds, maximum 1 day).
Time suffixes are supported, for example: 90s, 5m, 2h, 1d.
Note that this option will not work in the following cases:
* When the Monitoring menu pages perform background information refreshes. If pages that refresh data at a specific time interval (dashboards, graphs, latest data, etc.) are left open, the session lifetime is extended, which effectively disables the auto-logout feature.
* If logging in with the Remember me for 30 days option checked.
Auto-logout can also accept ”0”, meaning that the auto-logout feature becomes disabled after
profile settings update.
Refresh Set how often the information on the Monitoring menu pages will be refreshed (minimum 0
seconds, maximum 1 hour).
Time suffixes are supported, for example: 30s, 90s, 1m, 1h.
Rows per page Set how many rows will be displayed per page in the lists. Fewer rows (and fewer records to
display) result in faster loading times.
URL (after login) Set a specific URL to be displayed after login. Instead of the default Dashboards, it can be, for
example, the URL of Monitoring → Triggers.
The Media tab allows you to specify media details for the user, such as media types and addresses to use and when to use them
to deliver notifications.
Note:
Only Admin and Super admin type users can change their own media details.
API tokens
The API tokens section allows you to view tokens assigned to the user, edit token details and create new tokens. This section is
available to a user only if the Manage API tokens action is allowed in the user role settings.
You can filter API tokens by name, expiry date, or status (Enabled/Disabled). Click on the token status in the list to quickly en-
able/disable a token. You can also enable/disable multiple tokens at once by selecting them in the list and then clicking on the
Enable/Disable buttons below the list.
Attention:
Users cannot view the Auth token value of the tokens assigned to them in Zabbix. The Auth token value is displayed only
once - immediately after creating a token. If the token has been lost, it has to be regenerated.
1 Global notifications
Overview
Global notifications provide a way to display real-time issues directly on your current screen within Zabbix frontend.
Without global notifications, when working outside the Problems or Dashboard sections, you would not receive any information
about current issues. Global notifications ensure that this information is displayed, regardless of your current location within the
Zabbix frontend.
Attention:
The autoplay of sounds might be disabled (by default) in recent browser versions. In such cases, you need to enable this
setting manually.
Configuration
Global notifications can be enabled per user in the Frontend notifications tab of profile configuration.
Parameter Description
Message timeout Set the duration for which the message will be displayed. By default, messages remain on the
screen for 60 seconds.
Time suffixes are supported, for example: 30s, 5m, 2h, 1d.
Play sound Set the duration for which the sound will be played.
Once - sound is played once and fully;
10 seconds - sound is repeated for 10 seconds;
Message timeout - sound is repeated while the message is visible.
Trigger severity Set the trigger severities for which global notifications and sounds will be activated. You can also
select sounds appropriate for various severities.
If no severity is marked, no messages will be displayed.
Additionally, recovery messages will only be displayed for marked severities. For instance, if
Recovery and Disaster are marked, global notifications will be displayed for problems and
recoveries of Disaster severity triggers.
Show suppressed Mark the checkbox to display notifications for problems that would otherwise be suppressed (not
problems shown) due to host maintenance.
As messages arrive, they are displayed in a floating section on the right-hand side. You can freely reposition this section by dragging
the section header.
• Mute/Unmute button switches between playing and not playing the alarm sounds at all.
2 Sound in browsers
Overview
For the sounds to be played in Zabbix frontend, Frontend notifications must be enabled in the user profile’s Frontend notifications
tab, with all trigger severities checked. Additionally, sounds should be enabled in the global notification pop-up window.
If, for any reason, audio cannot be played on the device, the button in the global notification pop-up window will remain permanently in the ”mute” state, accompanied by the message ”Cannot support notification audio for this device” upon hovering over it.
Sounds, including the default audio clips, are supported in MP3 format only.
The sounds of Zabbix frontend have been successfully tested in recent Firefox and Opera browsers on Linux, and in Chrome, Firefox,
Microsoft Edge, and Opera browsers on Windows.
Attention:
The autoplay of sounds might be disabled (by default) in recent browser versions. In such cases, you need to enable this
setting manually.
4 Global search
It is possible to search Zabbix frontend for hosts, host groups, templates and template groups.
The search input box is located below the Zabbix logo in the menu. The search can be started by pressing Enter or clicking on the
search icon.
If there is a host that contains the entered string in any part of the name, a dropdown will appear, listing all such hosts (with the
matching part highlighted in orange). The dropdown will also list a host if that host’s visible name is a match to the technical name
entered as a search string; the matching host will be listed, but without any highlighting.
Searchable attributes
• Host name
• Visible name
• IP address
• DNS name
Templates can be searched by name or visible name. If you search by a name that is different from the visible name (of a
template/host), in the search results it is displayed below the visible name in parentheses.
Host and template groups can be searched by name. Specifying a parent group implicitly selects all nested groups.
Search results
Search results consist of four separate blocks for hosts, host groups, templates and template groups.
It is possible to collapse/expand each individual block. The entry count is displayed at the bottom of each block, for example,
Displaying 13 of 13 found. If there are no entries, the entry count is not displayed. Total entries displayed within one block are
limited to 100.
Each entry provides links to monitoring and configuration data. See the full list of links.
For all configuration data (such as items, triggers, graphs) the amount of entities found is displayed by a number next to the entity
name, in gray. Note that if there are zero entities, no number is displayed.
Links available
• Hosts
– Monitoring
∗ Latest data
∗ Problems
∗ Graphs
∗ Host dashboards
∗ Web scenarios
– Configuration
∗ Items
∗ Triggers
∗ Graphs
∗ Discovery rules
∗ Web scenarios
• Host groups
– Monitoring
∗ Latest data
∗ Problems
∗ Web scenarios
– Configuration
∗ Hosts
• Templates
– Configuration
∗ Items
∗ Triggers
∗ Graphs
∗ Template dashboards
∗ Discovery rules
∗ Web scenarios
• Template groups
– Configuration
∗ Templates
5 Frontend maintenance mode
Overview
Zabbix web frontend can be temporarily disabled in order to prohibit access to it. This can be useful for protecting the Zabbix
database from any changes initiated by users, thus protecting the integrity of database.
Zabbix database can be stopped and maintenance tasks can be performed while Zabbix frontend is in maintenance mode.
Users from defined IP addresses will be able to work with the frontend normally during maintenance mode.
Configuration
In order to enable maintenance mode, the maintenance.inc.php file (located in /conf of Zabbix HTML document directory on
the web server) must be modified to uncomment the following lines:
// Maintenance mode.
define('ZBX_DENY_GUI_ACCESS', 1);
Note:
Usually the maintenance.inc.php file is located in /conf of the Zabbix HTML document directory on the web server. However, the location of the directory may differ depending on the operating system and the web server it uses.
For example, the location for:
• SUSE and RedHat is /etc/zabbix/web/maintenance.inc.php.
• Debian-based systems is /usr/share/zabbix/conf/.
See also Copying PHP files.
Parameter Details
Note that the location of the /conf directory will vary based on the operating system and web server.
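For illustration, a hedged sketch of a complete maintenance.inc.php that also keeps the frontend accessible from one IP address (the $ZBX_GUI_ACCESS_IP_RANGE parameter name is an assumption; verify it against the comments shipped in the file itself):
// Maintenance mode.
define('ZBX_DENY_GUI_ACCESS', 1);

// IP addresses that are still allowed to use the frontend during maintenance (assumed parameter name).
$ZBX_GUI_ACCESS_IP_RANGE = array('127.0.0.1');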
Display
The following screen will be displayed when trying to access the Zabbix frontend while in maintenance mode. The screen is
refreshed every 30 seconds in order to return to a normal state without user intervention when the maintenance is over.
6 Page parameters
Overview
Most Zabbix web interface pages support various HTTP GET parameters that control what will be displayed. They may be passed
by specifying parameter=value pairs after the URL, separated from the URL by a question mark (?) and from each other by
ampersands (&).
Monitoring → Problems
The kiosk mode in supported frontend pages can be activated using URL parameters. For example, in dashboards:
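For example, kiosk mode can typically be toggled with the kiosk URL parameter (the parameter is assumed here; see the kiosk mode documentation for the exact options):
https://<server_ip_or_name>/zabbix.php?action=dashboard.view&kiosk=1 - activate kiosk mode
https://<server_ip_or_name>/zabbix.php?action=dashboard.view&kiosk=0 - exit kiosk mode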
7 Definitions
Overview
While many things in the frontend can be configured using the frontend itself, some customizations are currently only possible by
editing a definitions file.
This file is defines.inc.php located in /include of the Zabbix HTML document directory.
Parameters
• ZBX_MIN_PERIOD
Minimum graph period, in seconds. One minute by default.
• GRAPH_YAXIS_SIDE_DEFAULT
Default location of Y axis in simple graphs and default value for drop down box when adding items to custom graphs. Possible
values: 0 - left, 1 - right.
Default: 0
• ZBX_SESSION_NAME
Default: zbx_sessionid
• ZBX_DATA_CACHE_TTL
TTL timeout in seconds used to invalidate data cache of Vault response. Set 0 to disable Vault response caching.
Default: 60
• SUBFILTER_VALUES_PER_GROUP
Number of subfilter values per group (For example, in the latest data subfilter).
Default: 1000
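For example, a hedged sketch of changing one of these values by editing its define in defines.inc.php (note that such edits may be overwritten when the frontend is upgraded):
// Raise the minimum graph period from one minute to five minutes.
define('ZBX_MIN_PERIOD', 300);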
8 Creating your own theme
Overview
By default, Zabbix provides a number of predefined themes. You may follow the step-by-step procedure provided here in order to create your own. Feel free to share the result of your work with the Zabbix community if you have created something nice.
Step 1
To define your own theme you’ll need to create a CSS file and save it in the assets/styles/ folder (for example, custom-theme.css). You can either copy the files from a different theme and create your theme based on it or start from scratch.
Step 2
Add your theme to the list of themes returned by the APP::getThemes() method. You can do this by overriding the
ZBase::getThemes() method in the APP class. This can be done by adding the following code before the closing brace in
include/classes/core/APP.php:
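A hedged sketch of such an override (the theme key 'custom-theme' is a placeholder and must match the name of the CSS file created in step 1):
public static function getThemes() {
    // Extend the default theme list with the custom theme.
    return array_merge(parent::getThemes(), [
        'custom-theme' => _('Custom theme'),
    ]);
}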
Attention:
Note that the name you specify within the first pair of quotes must match the name of the theme file without extension.
To add multiple themes, just list them under the first theme, for example:
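A sketch with placeholder theme names:
        'custom-theme-1' => _('Custom theme 1'),
        'custom-theme-2' => _('Custom theme 2'),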
Note:
To change graph colors, the entry must be added in the graph_theme database table.
Step 3
Activate the new theme.
In Zabbix frontend, you may either set this theme to be the default one or change your theme in the user profile.
9 Debug mode
Overview
Debug mode may be used to diagnose performance problems with frontend pages.
Configuration
Debug mode can be activated for individual users who belong to a user group:
When Debug mode is enabled for a user group, its users will see a Debug button in the lower right corner of the browser window:
Clicking on the Debug button opens a new window below the page contents which contains the SQL statistics of the page, along
with a list of API calls and individual SQL statements:
In case of performance problems with the page, this window may be used to search for the root cause of the problem.
Warning:
Enabled Debug mode negatively affects frontend performance.
10 Cookies
Overview
Cookies used by the Zabbix frontend:
Name Description Values Expires/Max-Age HttpOnly Secure(a)
ZBX_SESSION_NAME Zabbix frontend session data, stored as JSON encoded by base64 Session (expires when the browsing session ends) + + (only if HTTPS is enabled on a web server)
tab Active tab number; this cookie is only used on pages with multiple tabs (e.g. Host, Trigger or Action configuration page) and is created when a user navigates from a primary tab to another tab (such as Tags or Dependencies tab). Example: 1 Session (expires when the browsing session ends) - -
(a) Secure indicates that the cookie should only be transmitted over a secure HTTPS connection from the client. When set to ’true’, the cookie will only be set if a secure connection exists.
Note:
Forcing ’HttpOnly’ flag on Zabbix cookies by a webserver directive is not supported.
11 Time zones
Overview
The frontend time zone can be set globally in the frontend and adjusted for individual users.
If System is selected, the web server time zone will be used for the frontend (including the value of ’date.timezone’ of php.ini, if
set), while Zabbix server will use the time zone of the machine it is running on.
Note:
Zabbix server will only use the specified global/user time zone when expanding macros in notifications (e.g. {EVENT.TIME}
can expand to a different time zone per user) and for the time limit when notifications are sent (see ”When active” setting
in user media configuration).
Configuration
12 Resetting password
Overview This section describes the steps for resetting user passwords in Zabbix.
Steps Turn to your Zabbix administrator if you have forgotten your Zabbix password and cannot log in.
A Super administrator user can change passwords for all users in the user configuration form.
If a Super administrator has forgotten their password and cannot log in, the following SQL query must be run to apply the default
password to the Super admin user (replace ’Admin’ with the appropriate Super admin username):
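A hedged sketch of such a query (the hash is a placeholder; Zabbix stores passwords as bcrypt hashes, so generate one first, for example with PHP: php -r 'echo password_hash("zabbix", PASSWORD_BCRYPT);'):
-- Replace <bcrypt-hash> with the bcrypt hash of the desired password.
UPDATE users SET passwd='<bcrypt-hash>' WHERE username='Admin';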
13 Time period selector
Overview
The Time period selector allows selecting frequently required periods with one mouse click. It can be expanded or collapsed by clicking the Time period tab in the top right corner.
Options such as Today, This week, etc., display the whole period, including the hours/days in the future. Options such as Today so
far, This week so far, etc., display only the hours passed.
Once a period is selected, it can be moved back and forth in time by clicking the arrow buttons. The Zoom out button
allows to zoom out the period by 50% in each direction.
Note:
For graphs, selecting the displayed time period is also possible by highlighting an area in the graph with the left mouse
button. Once you release the left mouse button, the graph will zoom into the highlighted area. Zooming out is also possible
by double-clicking in the graph.
The From/To fields display the selected period in either absolute time syntax (in format Y-m-d H:i:s) or relative time syntax. A
relative time period can contain one or several mathematical operations (- or +), for example, now-1d or now-1d-2h+5m.
The following relative time abbreviations are supported:
• now
• s (seconds)
• m (minutes)
• h (hours)
• d (days)
• w (weeks)
• M (months)
• y (years)
Precision is supported in the Time period selector (for example, /M in now-1d/M). Details of precision:
Precision From To
It is also possible to select a time period using the Date picker. To open it, click the calendar icon next to the From/To fields.
Note:
Within the date picker, you can navigate between year/month/date using Tab, Shift+Tab, and keyboard arrow buttons.
Pressing Enter confirms the selection.
Examples
19 API
Overview The Zabbix API allows you to programmatically retrieve and modify configuration of Zabbix and provides access to
historical data. It is widely used to:
The Zabbix API is an HTTP-based API, and it is shipped as a part of the web frontend. It uses the JSON-RPC 2.0 protocol, which
means two things:
For more information about the protocol and JSON, see the JSON-RPC 2.0 specification and the JSON format homepage.
For more information about integrating Zabbix functionality into your Python applications, see the zabbix_utils Python library for
Zabbix API.
Structure The API consists of a number of methods that are nominally grouped into separate APIs. Each of the methods performs
one specific task. For example, the host.create method belongs to the host API and is used to create new hosts. Historically,
APIs are sometimes referred to as ”classes”.
Note:
Most APIs contain at least four methods: get, create, update and delete for retrieving, creating, updating and deleting
data respectively, but some APIs may provide a totally different set of methods.
Performing requests Once you have set up the frontend, you can use remote HTTP requests to call the API. To do that, you
need to send HTTP POST requests to the api_jsonrpc.php file located in the frontend directory. For example, if your Zabbix
frontend is installed under https://example.com/zabbix, an HTTP request to call the apiinfo.version method may look
like this:
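For illustration, such a request could be sent with curl as follows (a sketch; apiinfo.version is one of the few methods that does not require authentication):
curl --request POST \
     --url 'https://example.com/zabbix/api_jsonrpc.php' \
     --header 'Content-Type: application/json-rpc' \
     --data '{"jsonrpc":"2.0","method":"apiinfo.version","params":{},"id":1}'
The request object contains the following properties: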
• jsonrpc - the version of the JSON-RPC protocol used by the API (Zabbix API implements JSON-RPC version 2.0);
• method - the API method being called;
• params - the parameters that will be passed to the API method;
• id - an arbitrary identifier of the request.
If the request is correct, the response returned by the API should look like this:
{
"jsonrpc": "2.0",
"result": "7.0.0",
"id": 1
}
Example workflow The following section will walk you through some examples of usage in greater detail.
Authentication To access Zabbix data, you can either:
• use an existing API token (created in Zabbix frontend or using the Token API);
• use an authentication token obtained with the user.login method.
For example, if you wanted to obtain a new authentication token by logging in as a standard Admin user, then a JSON request
would look like this:
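A sketch of such a request (assuming the default Admin credentials):
{
    "jsonrpc": "2.0",
    "method": "user.login",
    "params": {
        "username": "Admin",
        "password": "zabbix"
    },
    "id": 1
}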
If you provided the credentials correctly, the response returned by the API should contain the user authentication token:
{
"jsonrpc": "2.0",
"result": "0424bd59b807674191e7d77572075f33",
"id": 1
}
All API requests require an authentication token or an API token. You can provide the credentials by using the ”Authorization” request header:
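For example, the header can be passed with curl like this (the token value is the one obtained above; an API token can be used in the same way):
--header 'Authorization: Bearer 0424bd59b807674191e7d77572075f33'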
By ”auth” property
Attention:
Note that the ”auth” property is deprecated. It will be removed in the future releases.
By Zabbix cookie
A ”zbx_session” cookie is used to authorize an API request from Zabbix UI performed using JavaScript (from a module or a custom
widget).
Retrieving hosts Now you have a valid user authentication token that can be used to access the data in Zabbix. For example,
you can use the host.get method to retrieve the IDs, host names and interfaces of all the configured hosts:
Request:
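A minimal sketch using curl (the Authorization header carries the token obtained earlier):
curl --request POST \
  --url 'https://example.com/zabbix/api_jsonrpc.php' \
  --header 'Authorization: Bearer ${AUTHORIZATION_TOKEN}' \
  --header 'Content-Type: application/json-rpc' \
  --data @data.json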
Note:
data.json is a file that contains a JSON query. Instead of a file, you can pass the query in the --data argument.
data.json
{
"jsonrpc": "2.0",
"method": "host.get",
"params": {
"output": [
"hostid",
"host"
],
"selectInterfaces": [
"interfaceid",
"ip"
]
},
"id": 2
}
The response object will contain the requested data about the hosts:
{
"jsonrpc": "2.0",
"result": [
{
"hostid": "10084",
"host": "Zabbix server",
"interfaces": [
{
"interfaceid": "1",
"ip": "127.0.0.1"
}
]
}
],
"id": 2
}
Note:
For performance reasons, it is always recommended to list only the object properties you want to retrieve, thus avoiding the retrieval of everything.
Creating a new item Now, create a new item on the host ”Zabbix server” using the data you have obtained from the previous
host.get request. This can be done using the item.create method:
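A minimal sketch of such a request (the item name, key and 30-second update interval are illustrative; the hostid and interfaceid values are taken from the host.get response above; type 0 stands for a Zabbix agent item and value_type 3 for numeric unsigned data):
{
    "jsonrpc": "2.0",
    "method": "item.create",
    "params": {
        "name": "Free disk space on /home/joe/",
        "key_": "vfs.fs.size[/home/joe/,free]",
        "hostid": "10084",
        "type": 0,
        "value_type": 3,
        "interfaceid": "1",
        "delay": "30s"
    },
    "id": 3
}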
A successful response will contain the ID of the newly created item, which can be used to reference the item in the following
requests:
{
"jsonrpc": "2.0",
"result": {
"itemids": [
"24759"
]
},
"id": 3
}
Note:
The item.create method as well as other create methods can also accept arrays of objects and create multiple items
with one API call.
Creating multiple triggers Since create methods accept arrays of objects, you can also add multiple triggers in one call, for example:
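A minimal sketch creating two triggers in a single call (the trigger names, host name and expressions are illustrative):
{
    "jsonrpc": "2.0",
    "method": "trigger.create",
    "params": [
        {
            "description": "Processor load is too high on {HOST.NAME}",
            "expression": "last(/Linux server/system.cpu.load[percpu,avg1])>5"
        },
        {
            "description": "Too many processes on {HOST.NAME}",
            "expression": "avg(/Linux server/proc.num[],5m)>300"
        }
    ],
    "id": 4
}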
The successful response will contain the IDs of the newly created triggers:
{
"jsonrpc": "2.0",
"result": {
"triggerids": [
"17369",
"17370"
]
},
"id": 4
}
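Updating an item Existing objects can be changed in a similar way with the update methods. For example, the following item.update request changes the update interval of an item (a minimal sketch; the item ID matches the response below, and the 30-second interval is illustrative):
{
    "jsonrpc": "2.0",
    "method": "item.update",
    "params": {
        "itemid": "10092",
        "delay": "30s"
    },
    "id": 5
}
The successful response will contain the ID of the updated item: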
{
"jsonrpc": "2.0",
"result": {
"itemids": [
"10092"
]
},
"id": 5
}
Note:
The item.update method as well as other update methods can also accept arrays of objects and update multiple items
with one API call.
Updating multiple triggers Enable multiple triggers by setting their status to ”0”:
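A minimal sketch of such a trigger.update request (the trigger IDs match the response below):
{
    "jsonrpc": "2.0",
    "method": "trigger.update",
    "params": [
        {
            "triggerid": "13938",
            "status": 0
        },
        {
            "triggerid": "13939",
            "status": 0
        }
    ],
    "id": 6
}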
The successful response will contain the IDs of the updated triggers:
{
"jsonrpc": "2.0",
"result": {
"triggerids": [
"13938",
"13939"
]
},
"id": 6
}
Note:
This is the preferred method of updating. Some API methods, such as host.massupdate, allow writing simpler code. However, using those methods is not recommended, as they will be removed in future releases.
Error handling So far everything you have tried has worked fine. But what happens if you try making an incorrect call to the API? For example, try to create another host by calling host.create but omitting the mandatory groups parameter:
curl --request POST \
--url 'https://example.com/zabbix/api_jsonrpc.php' \
--header 'Authorization: Bearer ${AUTHORIZATION_TOKEN}' \
--header 'Content-Type: application/json-rpc' \
--data '{"jsonrpc":"2.0","method":"host.create","params":{"host":"Linux server","interfaces":[{"type":1,
{
"jsonrpc": "2.0",
"error": {
"code": -32602,
"message": "Invalid params.",
"data": "No groups for host \"Linux server\"."
},
"id": 7
}
If an error has occurred, instead of the result property the response object will contain the error property with the following data: code (an error code), message (a short error summary) and data (a more detailed error message), as shown in the response above.
Errors can occur in various cases, such as using incorrect input values, a session timeout, or trying to access non-existing objects. Your application should be able to gracefully handle these kinds of errors.
API versions To simplify API versioning, since Zabbix 2.0.4, the version of the API matches the version of Zabbix itself. You
can use the apiinfo.version method to find out the version of the API you are working with. This can be useful for adjusting your
application to use version-specific features.
Zabbix guarantees feature backward compatibility inside a major version. When making backward incompatible changes between
major releases, Zabbix usually leaves the old features as deprecated in the next release, and only removes them in the release
after that. Occasionally, Zabbix may remove features between major releases without providing any backward compatibility. It is
important that you never rely on any deprecated features and migrate to newer alternatives as soon as possible.
Note:
You can follow all the changes made to the API in the API changelog.
Further reading Now, you have enough knowledge to start working with the Zabbix API, however, do not stop here. For further
reading you are advised to have a look at the list of available APIs.
Method reference
This section provides an overview of the functions provided by the Zabbix API and will help you find your way around the available
classes and methods.
Monitoring The Zabbix API allows you to access history and other data gathered during monitoring.
Dashboards
History
Retrieve historical values gathered by Zabbix monitoring processes for presentation or further processing.
History API
Trends
Retrieve trend values calculated by Zabbix server for presentation or further processing.
Trend API
Events
Retrieve events generated by triggers, network discovery and other Zabbix systems for more flexible situation management or
third-party tool integration.
Event API
Problems
Problem API
Maps
Map API
Tasks
Interact with the Zabbix server task manager, creating tasks and retrieving responses.
Task API
Services The Zabbix API allows you to access data gathered during service monitoring.
Define Service Level Objectives (SLO), retrieve detailed Service Level Indicators (SLI) information about service performance.
SLA API
Services
Manage services for service-level monitoring and retrieve detailed SLA information about any service.
Service API
Data collection The Zabbix API allows you to manage the configuration of your monitoring system.
Manage host groups, hosts and everything related to them, including host interfaces, host macros and maintenance periods.
Host API | Host group API | Host interface API | User macro API | Value map API | Maintenance API
Items
Item API
Triggers
Configure triggers to notify you about problems in your system. Manage trigger dependencies.
Trigger API
Graphs
Edit graphs or separate graph items for better presentation of the gathered data.
Low-level discovery
Configure low-level discovery rules as well as item, trigger and graph prototypes to monitor dynamic entities.
LLD rule API | Item prototype API | Trigger prototype API | Graph prototype API | Host prototype API
Event correlation
Correlation API
Network discovery
Manage network-level discovery rules to automatically find and monitor new hosts. Gain full access to information about discovered
services and hosts.
Discovery rule API | Discovery check API | Discovered host API | Discovered service API
Export and import Zabbix configuration data for configuration backups, migration or large-scale configuration updates.
Configuration API
Web monitoring
Alerts The Zabbix API allows you to manage the actions and alerts of your monitoring system.
Define actions and operations to notify users about certain events or automatically execute remote commands. Gain access to
information about generated alerts and their receivers.
Media types
Configure media types and multiple ways users will receive alerts.
Scripts
Configure and execute scripts to help you with your daily tasks.
Script API
Users The Zabbix API allows you to manage users of your monitoring system.
Add users that will have access to Zabbix, assign them to user groups and grant permissions. Make roles for granular management
of user rights.
User API | User group API | User directory API | User role API
API Tokens
Token API
Authentication
Authentication API
Administration With the Zabbix API you can change administration settings of your monitoring system.
General
Autoregistration API | Icon map API | Image API | Settings API | Regular expression API | Module API | Connector API
Audit log
Housekeeping
Configure housekeeping.
Housekeeping API
Macros
Manage macros.
API information Retrieve the version of the Zabbix API so that your application could use version-specific features.
Action
Object references:
• Action
• Action operation
• Action operation message
• Action operation condition
• Action recovery operation
• Action update operation
• Action filter
• Action filter condition
Available methods:
Action object
actionid ID ID of the action.
Property behavior:
- read-only
- required for update operations
esc_period string Default operation step duration. Must be at least 60 seconds. Accepts
seconds, time unit with suffix, or a user macro.
Property behavior:
- supported if eventsource is set to ”event created by a trigger”,
”internal event”, or ”event created on service status update”
eventsource integer Type of events that the action will handle.
Refer to the event source property for a list of supported event types.
Property behavior:
- constant
- required for create operations
name string Name of the action.
Property behavior:
- required for create operations
status integer Whether the action is enabled or disabled.
Possible values:
0 - (default) enabled;
1 - disabled.
Property Type Description
pause_symptoms integer Whether to pause escalation for symptom problems.
Possible values:
0 - Don’t pause escalation for symptom problems;
1 - (default) Pause escalation for symptom problems.
Property behavior:
- supported if eventsource is set to ”event created by a trigger”
pause_suppressed integer Whether to pause escalation during maintenance periods or not.
Possible values:
0 - Don’t pause escalation;
1 - (default) Pause escalation.
Property behavior:
- supported if eventsource is set to ”event created by a trigger”
notify_if_canceled integer Whether to notify when escalation is canceled.
Possible values:
0 - Don’t notify when escalation is canceled;
1 - (default) Notify when escalation is canceled.
Property behavior:
- supported if eventsource is set to ”event created by a trigger”
Action operation
The action operation object defines an operation that will be performed when an action is executed. It has the following properties.
operationtype integer Type of operation.
Possible values:
0 - send message;
1 - global script;
2 - add host;
3 - remove host;
4 - add to host group;
5 - remove from host group;
6 - link template;
7 - unlink template;
8 - enable host;
9 - disable host;
10 - set host inventory mode;
13 - add host tags;
14 - remove host tags.
Property behavior:
- required
Property Type Description
Default: 0s.
Property behavior:
- supported if eventsource of Action object is set to ”event created by
a trigger”, ”internal event”, or ”event created on service status
update”
esc_step_from integer Step to start escalation from.
Default: 1.
Property behavior:
- supported if eventsource of Action object is set to ”event created by
a trigger”, ”internal event”, or ”event created on service status
update”
esc_step_to integer Step to end escalation at.
Default: 1.
Property behavior:
- supported if eventsource of Action object is set to ”event created by
a trigger”, ”internal event”, or ”event created on service status
update”
evaltype integer Operation condition evaluation method.
Possible values:
0 - (default) AND / OR;
1 - AND;
2 - OR.
opcommand object Global script to execute.
Property behavior:
- required if operationtype is set to ”global script”
opcommand_grp array Host groups to run global scripts on.
Property behavior:
- required if operationtype is set to ”global script” and opcommand_hst is not set
opcommand_hst array Host to run global scripts on.
Property behavior:
- required if operationtype is set to ”global script” and opcommand_grp is not set
opconditions array Operation conditions used for trigger actions.
opgroup array Host groups to add hosts to or remove hosts from.
Property behavior:
- required if operationtype is set to ”add to host group” or ”remove from host group”
Property Type Description
opmessage object Object containing the data about the message sent by the operation.
Property behavior:
- required if operationtype is set to ”send message”
opmessage_grp array User groups to send messages to.
Property behavior:
- required if operationtype is set to ”send message” and opmessage_usr is not set
opmessage_usr array Users to send messages to.
Property behavior:
- required if operationtype is set to ”send message” and opmessage_grp is not set
optemplate array Templates to link to the hosts.
Property behavior:
- required if operationtype is set to ”link template” or ”unlink
template”
opinventory object Inventory mode to set the host to.
Property behavior:
- required if operationtype is set to ”set host inventory mode”
optag array Host tags to add or remove.
Property behavior:
- supported if operationtype is set to ”add host tags” or ”remove
host tags”.
The operation message object contains data about the message that will be sent by the operation. It has the following properties.
default_msg integer Whether to use the default action message text and subject.
Possible values:
0 - use the data from the operation;
1 - (default) use the data from the media type.
mediatypeid ID ID of the media type that will be used to send the message.
Property behavior:
- supported if operationtype of Action operation object, Action
recovery operation object, or Action update operation object is set to
”send message”, or if operationtype of Action update operation
object is set to ”notify all involved”
Property Type Description
message string Operation message text.
Property behavior:
- supported if default_msg is set to ”use the data from the operation”
subject string Operation message subject.
Property behavior:
- supported if default_msg is set to ”use the data from the operation”
The action operation condition object defines a condition that must be met to perform the current operation. It has the following
properties.
conditiontype integer Type of operation condition.
Possible values:
14 - event acknowledged.
Property behavior:
- required
value string Value to compare with.
Property behavior:
- required
operator integer Condition operator.
Possible values:
0 - (default) =
The following operators and values are supported for each operation condition type.
Possible values:
0 - not acknowledged;
1 - acknowledged.
The action recovery operation object defines an operation that will be performed when a problem is resolved. Recovery operations
are possible only for trigger, internal and service actions. It has the following properties.
Property Type Description
operationtype integer Type of operation.
Possible values:
0 - send message;
1 - global script;
11 - notify all involved.
Property behavior:
- required
opcommand object Global script to execute.
Property behavior:
- required if operationtype is set to ”global script”
opcommand_grp array Host groups to run global scripts on.
Property behavior:
- required if eventsource of Action object is set to ”event created by a trigger”, and operationtype is set to ”global script”, and opcommand_hst is not set
opcommand_hst array Host to run global scripts on.
Property behavior:
- required if eventsource of Action object is set to ”event created by a trigger”, and operationtype is set to ”global script”, and opcommand_grp is not set
opmessage object Object containing the data about the message sent by the recovery
operation.
Property behavior:
- required if operationtype is set to ”send message”
opmessage_grp array User groups to send messages to.
Property behavior:
- required if operationtype is set to ”send message” and opmessage_usr is not set
opmessage_usr array Users to send messages to.
Property behavior:
- required if operationtype is set to ”send message” and opmessage_grp is not set
The action update operation object defines an operation that will be performed when a problem is updated (commented upon,
acknowledged, severity changed, or manually closed). Update operations are possible only for trigger and service actions. It has
the following properties.
operationtype integer Type of operation.
Possible values:
0 - send message;
1 - global script;
12 - notify all involved.
Property behavior:
- required
opcommand object Global script to execute.
Property behavior:
- required if operationtype is set to ”global script”
opcommand_grp array Host groups to run global scripts on.
Property behavior:
- required if eventsource of Action object is set to ”event created by a trigger”, and operationtype is set to ”global script”, and opcommand_hst is not set
opcommand_hst array Host to run global scripts on.
Property behavior:
- required if eventsource of Action object is set to ”event created by a trigger”, and operationtype is set to ”global script”, and opcommand_grp is not set
opmessage object Object containing the data about the message sent by the update
operation.
Property behavior:
- required if operationtype is set to ”send message” and opmessage_usr is not set
opmessage_usr array Users to send messages to.
Property behavior:
- required if operationtype is set to ”send message” and opmessage_grp is not set
Action filter
The action filter object defines a set of conditions that must be met to perform the configured action operations. It has the following
properties.
Property Type Description
conditions array Set of filter conditions to use for filtering results. The conditions will be
sorted in the order of their placement in the formula.
Property behavior:
- required
evaltype integer Filter condition evaluation method.
Possible values:
0 - and/or;
1 - and;
2 - or;
3 - custom expression.
Property behavior:
- required
eval_formula string Generated expression that will be used for evaluating filter conditions.
The expression contains IDs that reference specific filter conditions by its formulaid. The value of eval_formula is equal to the value of formula for filters with a custom expression.
Property behavior:
- read-only
formula string User-defined expression to be used for evaluating conditions of filters
with a custom expression. The expression must contain IDs that
reference specific filter conditions by its formulaid. The IDs used in
the expression must exactly match the ones defined in the filter
conditions: no condition can remain unused or omitted.
Property behavior:
- required if evaltype is set to ”custom expression”
The action filter condition object defines a specific condition that must be checked before running the action operations.
Property Type Description
Property behavior:
- required
value string Value to compare with.
Property behavior:
- required
Property Type Description
value2 string Secondary value to compare with.
Property behavior:
- required if eventsource of Action object is set to ”event created by a trigger”, conditiontype is set to any possible value for trigger actions, and the type of condition (see below) is ”26”
- required if eventsource of Action object is set to ”internal event”, conditiontype is set to any possible value for internal actions, and the type of condition (see below) is ”26”
- required if eventsource of Action object is set to ”event created on service status update”, conditiontype is set to any possible value for service actions, and the type of condition (see below) is ”26”
formulaid string Arbitrary unique ID that is used to reference the condition from a
custom expression. Can only contain capital-case letters. The ID must
be defined by the user when modifying filter conditions, but will be
generated anew when requesting them afterward.
operator integer Condition operator.
Possible values:
0 - (default) equals;
1 - does not equal;
2 - contains;
3 - does not contain;
4 - in;
5 - is greater than or equals;
6 - is less than or equals;
7 - not in;
8 - matches;
9 - does not match;
10 - Yes;
11 - No.
Note:
To better understand how to use filters with various types of expressions, see examples on the action.get and action.create
method pages.
The following operators and values are supported for each condition type.
Condition | Condition name | Supported operators | Expected value
8 | Discovered service type | equals, does not equal | Type of discovered service. The type of service matches the type of the discovery check used to detect the service. Refer to the discovery check type property for a list of supported types.
9 | Discovered service port | equals, does not equal | One or several port ranges, separated by commas.
10 | Discovery status | equals | Status of a discovered object. Possible values: 0 - host or service up; 1 - host or service down; 2 - host or service discovered; 3 - host or service lost.
11 | Uptime or downtime duration | is greater than or equals, is less than or equals | Time indicating how long the discovered object has been in the current status, in seconds.
12 | Received value | equals, does not equal, is greater than or equals, is less than or equals, contains, does not contain | Value returned when performing a Zabbix agent, SNMPv1, SNMPv2 or SNMPv3 discovery check.
13 | Host template | equals, does not equal | Linked template ID.
16 | Problem is suppressed | Yes, No | No value required: using the ”Yes” operator means that the problem must be suppressed, ”No” - not suppressed.
18 | Discovery rule | equals, does not equal | ID of the discovery rule.
19 | Discovery check | equals, does not equal | ID of the discovery check.
20 | Proxy | equals, does not equal | ID of the proxy.
21 | Discovery object | equals | Type of object that triggered the discovery event. Possible values: 1 - discovered host; 2 - discovered service.
22 | Host name | contains, does not contain, matches, does not match | Host name. Using a regular expression is supported for operators matches and does not match in autoregistration conditions.
23 | Event type | equals | Specific internal event. Possible values: 0 - item in ”not supported” state; 1 - item in ”normal” state; 2 - LLD rule in ”not supported” state; 3 - LLD rule in ”normal” state; 4 - trigger in ”unknown” state; 5 - trigger in ”normal” state.
24 | Host metadata | contains, does not contain, matches, does not match | Metadata of the auto-registered host. Using a regular expression is supported for operators matches and does not match.
25 | Tag | equals, does not equal, contains, does not contain | Event tag.
action.create
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
(object/array) Actions to create. The method accepts actions with the standard action properties.
Return values
(object) Returns an object containing the IDs of the created actions under the actionids property. The order of the returned
IDs matches the order of the passed actions.
Examples
Create a trigger action that will begin once a trigger (with the word ”memory” in its name) from host ”10084” goes into a PROBLEM
state. The action will have 4 configured operations. The first and immediate operation will send a message to all users in user
group ”7” via media type ”1”. If the event is not resolved in 30 minutes, the second operation will run script ”5” (script with scope
”Action operation”) on all hosts in group ”2”. If the event is resolved, a recovery operation will notify all users who received any
messages regarding the problem. If the event is updated, an acknowledge/update operation will notify (with a custom subject and
message) all users who received any messages regarding the problem.
Request:
{
"jsonrpc": "2.0",
"method": "action.create",
"params": {
"name": "Trigger action",
"eventsource": 0,
"esc_period": "30m",
"filter": {
"evaltype": 0,
"conditions": [
{
"conditiontype": 1,
"operator": 0,
"value": "10084"
},
{
"conditiontype": 3,
"operator": 2,
"value": "memory"
}
]
},
"operations": [
{
"operationtype": 0,
"esc_step_from": 1,
"esc_step_to": 1,
"opmessage_grp": [
{
"usrgrpid": "7"
}
],
"opmessage": {
"default_msg": 1,
"mediatypeid": "1"
}
},
{
"operationtype": 1,
"esc_step_from": 2,
"esc_step_to": 2,
"opconditions": [
{
"conditiontype": 14,
"operator": 0,
"value": "0"
}
],
"opcommand_grp": [
{
"groupid": "2"
}
],
"opcommand": {
"scriptid": "5"
}
}
],
"recovery_operations": [
{
"operationtype": "11",
"opmessage": {
"default_msg": 1
}
}
],
"update_operations": [
{
"operationtype": "12",
"opmessage": {
"default_msg": 0,
"message": "Custom update operation message body",
"subject": "Custom update operation message subject"
}
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"actionids": [
"17"
]
},
"id": 1
}
Create a discovery action that will link template ”10001” to discovered hosts.
Request:
{
"jsonrpc": "2.0",
"method": "action.create",
"params": {
"name": "Discovery action",
"eventsource": 1,
"filter": {
"evaltype": 0,
"conditions": [
{
"conditiontype": 21,
"operator": 0,
"value": "1"
},
{
"conditiontype": 10,
"operator": 0,
"value": "2"
}
]
},
"operations": [
{
"operationtype": 6,
"optemplate": [
{
"templateid": "10001"
}
]
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"actionids": [
"18"
]
},
"id": 1
}
Create a trigger action that uses a custom expression - ”A and (B or C)” - for evaluating action conditions. Once a trigger with
a severity higher or equal to ”Warning” from host ”10084” or host ”10106” goes into a PROBLEM state, the action will send a
message to all users in user group ”7” via media type ”1”. The formula IDs ”A”, ”B” and ”C” have been chosen arbitrarily.
Request:
{
"jsonrpc": "2.0",
"method": "action.create",
"params": {
"name": "Trigger action",
"eventsource": 0,
"esc_period": "15m",
"filter": {
"evaltype": 3,
"formula": "A and (B or C)",
"conditions": [
{
"conditiontype": 4,
"operator": 5,
"value": "2",
"formulaid": "A"
},
{
"conditiontype": 1,
"operator": 0,
"value": "10084",
"formulaid": "B"
},
{
"conditiontype": 1,
"operator": 0,
"value": "10106",
"formulaid": "C"
}
]
},
"operations": [
{
"operationtype": 0,
"esc_step_from": 1,
"esc_step_to": 1,
"opmessage_grp": [
{
"usrgrpid": "7"
}
],
"opmessage": {
"default_msg": 1,
"mediatypeid": "1"
}
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"actionids": [
"18"
]
},
"id": 1
}
Create an autoregistration action that adds a host to host group ”2” when the host name contains ”SRV” or metadata contains
”AlmaLinux”.
Request:
{
"jsonrpc": "2.0",
"method": "action.create",
"params": {
"name": "Register Linux servers",
"eventsource": "2",
"filter": {
"evaltype": "2",
"conditions": [
{
"conditiontype": "22",
"operator": "2",
"value": "SRV"
},
{
"conditiontype": "24",
"operator": "2",
"value": "AlmaLinux"
}
]
},
"operations": [
{
"operationtype": "4",
"opgroup": [
{
"groupid": "2"
}
]
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"actionids": [
"19"
]
},
"id": 1
}
Create an autoregistration action that adds a host to host group ”2” and adds two host tags.
Request:
{
"jsonrpc": "2.0",
"method": "action.create",
"params": {
"name": "Register Linux servers with tags",
"eventsource": "2",
"operations": [
{
"operationtype": "4",
"opgroup": [
{
"groupid": "2"
}
]
},
{
"operationtype": "13",
"optag": [
{
"tag": "Location",
"value": "Office"
},
{
"tag": "City",
"value": "Riga"
}
]
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"actionids": [
"20"
]
},
"id": 1
}
See also
• Action filter
• Action operation
• Script
Source
CAction::create() in ui/include/classes/api/services/CAction.php.
action.delete
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
(array) IDs of the actions to delete.
Return values
(object) Returns an object containing the IDs of the deleted actions under the actionids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "action.delete",
"params": [
"17",
"18"
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"actionids": [
"17",
"18"
]
},
"id": 1
}
Source
CAction::delete() in ui/include/classes/api/services/CAction.php.
action.get
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
Parameter Type Description
triggerids ID/array Return only actions that use the given triggers in action conditions.
mediatypeids ID/array Return only actions that use the given media types to send messages.
usrgrpids ID/array Return only actions that are configured to send messages to the given
user groups.
userids ID/array Return only actions that are configured to send messages to the given
users.
scriptids ID/array Return only actions that are configured to run the given scripts.
selectFilter query Return a filter property with the action condition filter.
selectOperations query Return an operations property with action operations.
selectRecoveryOperations query Return a recovery_operations property with action recovery
operations.
selectUpdateOperations query Return an update_operations property with action update
operations.
sortfield string/array Sort the result by the given properties.
Return values
Retrieve all configured trigger actions together with action conditions and operations.
Request:
{
"jsonrpc": "2.0",
"method": "action.get",
"params": {
"output": "extend",
"selectOperations": "extend",
"selectRecoveryOperations": "extend",
"selectUpdateOperations": "extend",
"selectFilter": "extend",
"filter": {
"eventsource": 0
}
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"actionid": "3",
"name": "Report problems to Zabbix administrators",
"eventsource": "0",
"status": "1",
"esc_period": "1h",
"pause_suppressed": "1",
"filter": {
"evaltype": "0",
"formula": "",
"conditions": [],
"eval_formula": ""
},
"operations": [
{
"operationid": "3",
"actionid": "3",
"operationtype": "0",
"esc_period": "0",
"esc_step_from": "1",
"esc_step_to": "1",
"evaltype": "0",
"opconditions": [],
"opmessage": [
{
"default_msg": "1",
"subject": "",
"message": "",
"mediatypeid": "0"
}
],
"opmessage_grp": [
{
"usrgrpid": "7"
}
]
}
],
"recovery_operations": [
{
"operationid": "7",
"actionid": "3",
"operationtype": "11",
"evaltype": "0",
"opconditions": [],
"opmessage": {
"default_msg": "0",
"subject": "{TRIGGER.STATUS}: {TRIGGER.NAME}",
"message": "Trigger: {TRIGGER.NAME}\r\nTrigger status: {TRIGGER.STATUS}\r\nTrigger
"mediatypeid": "0"
}
}
],
"update_operations": [
{
"operationid": "31",
"operationtype": "12",
"evaltype": "0",
"opmessage": {
"default_msg": "1",
"subject": "",
"message": "",
"mediatypeid": "0"
}
},
{
"operationid": "32",
"operationtype": "0",
"evaltype": "0",
"opmessage": {
"default_msg": "0",
"subject": "Updated: {TRIGGER.NAME}",
"message": "{USER.FULLNAME} updated problem at {EVENT.UPDATE.DATE} {EVENT.UPDATE.T
"mediatypeid": "1"
},
"opmessage_grp": [
{
"usrgrpid": "7"
}
],
"opmessage_usr": []
},
{
"operationid": "33",
"operationtype": "1",
"evaltype": "0",
"opcommand": {
"scriptid": "3"
},
"opcommand_hst": [
{
"hostid": "10084"
}
],
"opcommand_grp": []
}
]
}
],
"id": 1
}
Retrieve all configured discovery actions together with action conditions and operations. The filter uses the ”and” evaluation type,
so the formula property is empty and eval_formula is generated automatically.
Request:
{
"jsonrpc": "2.0",
"method": "action.get",
"params": {
"output": "extend",
"selectOperations": "extend",
"selectFilter": "extend",
"filter": {
"eventsource": 1
}
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"actionid": "2",
"name": "Auto discovery. Linux servers.",
"eventsource": "1",
"status": "1",
"esc_period": "0s",
"pause_suppressed": "1",
"filter": {
"evaltype": "0",
"formula": "",
"conditions": [
{
"conditiontype": "10",
"operator": "0",
"value": "0",
"value2": "",
"formulaid": "B"
},
{
"conditiontype": "8",
"operator": "0",
"value": "9",
"value2": "",
"formulaid": "C"
},
{
"conditiontype": "12",
"operator": "2",
"value": "Linux",
"value2": "",
"formulaid": "A"
}
],
"eval_formula": "A and B and C"
},
"operations": [
{
"operationid": "1",
"actionid": "2",
"operationtype": "6",
"esc_period": "0s",
"esc_step_from": "1",
"esc_step_to": "1",
"evaltype": "0",
"opconditions": [],
"optemplate": [
{
"templateid": "10001"
}
]
},
{
"operationid": "2",
"actionid": "2",
"operationtype": "4",
"esc_period": "0s",
"esc_step_from": "1",
"esc_step_to": "1",
"evaltype": "0",
"opconditions": [],
"opgroup": [
{
"groupid": "2"
}
]
}
]
}
],
"id": 1
}
See also
• Action filter
• Action operation
Source
CAction::get() in ui/include/classes/api/services/CAction.php.
action.update
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
In addition to the standard action properties, the method accepts the following parameters.
recovery_operations array Action recovery operations to replace existing recovery operations.
Parameter behavior:
- supported if eventsource of Action object is set to ”event created by
a trigger”, ”internal event”, or ”event created on service status
update”
update_operations array Action update operations to replace existing update operations.
Parameter behavior:
- supported if eventsource of Action object is set to ”event created by
a trigger” or ”event created on service status update”
Return values
(object) Returns an object containing the IDs of the updated actions under the actionids property.
Examples
Disable action
Request:
{
"jsonrpc": "2.0",
"method": "action.update",
"params": {
"actionid": "2",
"status": "1"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"actionids": [
"2"
]
},
"id": 1
}
See also
• Action filter
• Action operation
Source
CAction::update() in ui/include/classes/api/services/CAction.php.
Alert
Object references:
• Alert
Available methods:
Alert object
Note:
Alerts are created by Zabbix server and cannot be modified via the API.
The alert object contains information about whether certain action operations have been executed successfully. It has the following
properties.
alerttype integer Alert type.
Possible values:
0 - message;
1 - remote command.
clock timestamp Time when the alert was generated.
error string Error text if there are problems sending a message or running a
command.
Property Type Description
esc_step integer Action escalation step during which the alert was generated.
eventid ID ID of the event that triggered the action.
mediatypeid ID ID of the media type that was used to send the message.
message text Message text.
Property behavior:
- supported if alerttype is set to ”message”
retries integer Number of times Zabbix tried to send the message.
sendto string Address, user name or other identifier of the recipient.
Property behavior:
- supported if alerttype is set to ”message”
status integer Status indicating whether the action operation has been executed
successfully.
Property behavior:
- supported if alerttype is set to ”message”
userid ID ID of the user that the message was sent to.
p_eventid ID ID of problem event, which generated the alert.
acknowledgeid ID ID of acknowledgment, which generated the alert.
alert.get
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
Parameter Type Description
eventobject integer Return only alerts generated by events related to objects of the given
type.
Default: 0 - trigger.
eventsource integer Return only alerts generated by events of the given type.
Return values
Request:
{
"jsonrpc": "2.0",
"method": "alert.get",
"params": {
"output": "extend",
"actionids": "3"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"alertid": "1",
"actionid": "3",
"eventid": "21243",
"userid": "1",
"clock": "1362128008",
"mediatypeid": "1",
"sendto": "[email protected]",
"subject": "PROBLEM: Zabbix agent on Linux server is unreachable for 5 minutes: ",
"message": "Trigger: Zabbix agent on Linux server is unreachable for 5 minutes: \nTrigger stat
"status": "0",
"retries": "3",
"error": "",
"esc_step": "1",
"alerttype": "0",
"p_eventid": "0",
"acknowledgeid": "0"
}
],
"id": 1
}
See also
• Host
• Media type
• User
Source
CAlert::get() in ui/include/classes/api/services/CAlert.php.
API info
Available methods:
apiinfo.version
Description
string apiinfo.version(array)
This method allows retrieving the version of the Zabbix API.
Attention:
This method is only available to unauthenticated users and must be called without the auth parameter in the JSON-RPC
request.
Parameters
(array) The method accepts an empty parameter array.
Note:
Starting from Zabbix 2.0.4 the version of the API matches the version of Zabbix.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "apiinfo.version",
"params": [],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": "7.0.0",
"id": 1
}
Source
CAPIInfo::version() in ui/include/classes/api/services/CAPIInfo.php.
Audit log
Object references:
• Audit log
Available methods:
The audit log object contains information about user actions. It has the following properties.
action integer Audit log entry action.
Possible values:
0 - Add;
1 - Update;
2 - Delete;
4 - Logout;
7 - Execute;
8 - Login;
9 - Failed login;
10 - History clear;
11 - Config refresh;
12 - Push.
Property Type Description
resourcetype integer Audit log entry resource type.
Possible values:
0 - User;
3 - Media type;
4 - Host;
5 - Action;
6 - Graph;
11 - User group;
13 - Trigger;
14 - Host group;
15 - Item;
16 - Image;
17 - Value map;
18 - Service;
19 - Map;
22 - Web scenario;
23 - Discovery rule;
25 - Script;
26 - Proxy;
27 - Maintenance;
28 - Regular expression;
29 - Macro;
30 - Template;
31 - Trigger prototype;
32 - Icon mapping;
33 - Dashboard;
34 - Event correlation;
35 - Graph prototype;
36 - Item prototype;
37 - Host prototype;
38 - Autoregistration;
39 - Module;
40 - Settings;
41 - Housekeeping;
42 - Authentication;
43 - Template dashboard;
44 - User role;
45 - API token;
46 - Scheduled report;
47 - High availability node;
48 - SLA;
49 - User directory;
50 - Template group;
51 - Connector;
52 - LLD rule;
53 - History.
resourceid ID Audit log entry resource identifier.
resourcename string Audit log entry resource human readable name.
recordsetid ID Audit log entry recordset ID. The audit log records created during the
same operation will have the same recordset ID. Generated using CUID
algorithm.
Property Type Description
details text Audit log entry details. The details are stored as a JSON object, where
each property name is a path to the property or nested object in which
the change occurred, and where each value contains the data (in array
format) about the change in this property or nested object.
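For example, a details value recording a change of a user's name could look like this (a sketch based on the format shown in the auditlog.get response below, where each key is the path to the changed property and the array describes the change):
{
    "user.name": ["update", "Jim", ""]
}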
auditlog.get
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
auditids ID/array Return only audit log entries with the given IDs.
userids ID/array Return only audit log entries that were created by the given users.
time_from timestamp Returns only audit log entries that have been created after or at the
given time.
time_till timestamp Returns only audit log entries that have been created before or at the
given time.
sortfield string/array Sort the result by the given properties.
Return values
Examples
Request:
{
"jsonrpc": "2.0",
"method": "auditlog.get",
"params": {
"output": "extend",
"sortfield": "clock",
"sortorder": "DESC",
"limit": 2
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"auditid": "cksstgfam0001yhdcc41y20q2",
"userid": "1",
"username": "Admin",
"clock": "1629975715",
"ip": "127.0.0.1",
"action": "1",
"resourcetype": "0",
"resourceid": "0",
"resourcename": "Jim",
"recordsetid": "cksstgfal0000yhdcso67ondl",
"details": "{\"user.name\":[\"update\",\"Jim\",\"\"],\"user.medias[37]\":[\"add\"],\"user.medi
},
{
"auditid": "ckssofl0p0001yhdcqxclsg8r",
"userid": "1",
"username": "Admin",
"clock": "1629967278",
"ip": "127.0.0.1",
"action": "0",
"resourcetype": "0",
"resourceid": "20",
"resourcename": "John",
"recordsetid": "ckssofl0p0000yhdcpxyo1jgo",
"details": "{\"user.username\":[\"add\",\"John\"], \"user.userid:\":[\"add\",\"20\"],\"user.us
}
],
"id": 1
}
See also
Source
CAuditLog::get() in ui/include/classes/api/services/CAuditLog.php.
Authentication
Object references:
• Authentication
Available methods:
Authentication object
authentication_type integer Default authentication.
Possible values:
0 - (default) Internal;
1 - LDAP.
http_auth_enabled integer HTTP authentication.
Possible values:
0 - (default) Disabled;
1 - Enabled.
Property behavior:
- supported if $ALLOW_HTTP_AUTH is enabled in the frontend
configuration file (zabbix.conf.php).
http_login_form integer Default login form.
Possible values:
0 - (default) Zabbix login form;
1 - HTTP login form.
Property behavior:
- supported if $ALLOW_HTTP_AUTH is enabled in the frontend
configuration file (zabbix.conf.php).
http_strip_domains string Domain name to remove.
Property behavior:
- supported if $ALLOW_HTTP_AUTH is enabled in the frontend
configuration file (zabbix.conf.php).
http_case_sensitive integer HTTP case sensitive login.
Possible values:
0 - Off;
1 - (default) On.
Property behavior:
- supported if $ALLOW_HTTP_AUTH is enabled in the frontend
configuration file (zabbix.conf.php).
ldap_auth_enabled integer LDAP authentication.
Possible values:
0 - (default) Disabled;
1 - Enabled.
ldap_case_sensitive integer LDAP case sensitive login.
Possible values:
0 - Off;
1 - (default) On.
Property Type Description
ldap_userdirectoryid ID ID of the default LDAP user directory.
Property behavior:
- required if ldap_auth_enabled is set to ”Enabled”
saml_auth_enabled integer SAML authentication.
Possible values:
0 - (default) Disabled;
1 - Enabled.
saml_case_sensitive integer SAML case sensitive login.
Possible values:
0 - Off;
1 - (default) On.
passwd_min_length integer Password minimal length requirement.
Default: 8.
passwd_check_rules integer Password checking rules.
This is a bitmask field; any combination of the possible bitmask values is acceptable.
ldap_jit_status integer Status of LDAP provisioning.
Possible values:
0 - Disabled for configured LDAP IdPs;
1 - Enabled for configured LDAP IdPs.
saml_jit_status integer Status of SAML provisioning.
Possible values:
0 - Disabled for configured SAML IdPs;
1 - Enabled for configured SAML IdPs.
jit_provision_interval string Time interval between JIT provision requests for logged-in user.
Accepts seconds and time unit with suffix with month and year support
(3600s,60m,1h,1d,1M,1y). Minimum value: 1h.
Default: 1h.
Property behavior:
- required if ldap_jit_status is set to ”Enabled for configured LDAP
IdPs”, or saml_jit_status is set to ”Enabled for configured SAML
IdPs”
mfa_status integer Multi-factor authentication.
Possible values:
0 - Disabled (for all configured MFA methods);
1 - Enabled (for all configured MFA methods).
Property Type Description
mfaid ID Default MFA method for user groups with MFA enabled.
Property behavior:
- required if mfa_status is set to ”Enabled”
authentication.get
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
output query This parameter is common for all get methods and is described in the reference commentary.
Return values
Request:
{
"jsonrpc": "2.0",
"method": "authentication.get",
"params": {
"output": "extend"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"authentication_type": "0",
"http_auth_enabled": "0",
"http_login_form": "0",
"http_strip_domains": "",
"http_case_sensitive": "1",
"ldap_auth_enabled": "0",
"ldap_case_sensitive": "1",
"ldap_userdirectoryid": "0",
"saml_auth_enabled": "0",
"saml_case_sensitive": "0",
"passwd_min_length": "8",
"passwd_check_rules": "15",
"jit_provision_interval": "1h",
"saml_jit_status": "0",
"ldap_jit_status": "0",
"disabled_usrgrpid": "9",
"mfa_status": "0",
"mfaid": "0"
},
"id": 1
}
Source
CAuthentication::get() in ui/include/classes/api/services/CAuthentication.php.
authentication.update
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
Request:
{
"jsonrpc": "2.0",
"method": "authentication.update",
"params": {
"http_auth_enabled": 1,
"http_case_sensitive": 0,
"http_login_form": 1
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
"http_auth_enabled",
"http_case_sensitive",
"http_login_form"
],
"id": 1
}
Source
CAuthentication::update() in ui/include/classes/api/services/CAuthentication.php.
Autoregistration
Object references:
• Autoregistration
Available methods:
Autoregistration object
tls_accept integer Type of allowed incoming connections for autoregistration.
Possible values:
1 - allow insecure connections;
2 - allow TLS with PSK;
3 - allow both insecure and TLS with PSK connections.
tls_psk_identity string PSK identity string.
Property behavior:
- write-only
tls_psk string PSK value string (an even number of hexadecimal characters).
Property behavior:
- write-only
autoregistration.get
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
output query This parameter is common for all get methods and is described in the reference commentary.
Return values
Request:
{
"jsonrpc": "2.0",
"method": "autoregistration.get",
"params": {
"output": "extend"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"tls_accept": "3"
},
"id": 1
}
Source
CAutoregistration::get() in ui/include/classes/api/services/CAutoregistration.php.
autoregistration.update
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
Request:
{
"jsonrpc": "2.0",
"method": "autoregistration.update",
"params": {
"tls_accept": "3",
"tls_psk_identity": "PSK 001",
"tls_psk": "11111595725ac58dd977beef14b97461a7c1045b9a1c923453302c5473193478"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": true,
"id": 1
}
Source
CAutoregistration::update() in ui/include/classes/api/services/CAutoregistration.php.
Configuration
Available methods:
configuration.export
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
(object) Parameters defining the objects to be exported and the format to use.
format string Format in which the data should be exported.
Possible values:
yaml - YAML;
xml - XML;
json - JSON;
raw - unprocessed PHP array.
Parameter behavior:
- required
prettyprint boolean Make the output more human readable by adding indentation.
Possible values:
true - add indentation;
false - (default) do not add indentation.
options object Objects to be exported.
Parameter behavior:
- required
Return values
Exporting a template
Request:
{
"jsonrpc": "2.0",
"method": "configuration.export",
"params": {
"options": {
"templates": [
"10571"
]
},
"format": "xml"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<zabbix_export><version>7.0</version><template_
"id": 1
}
Source
CConfiguration::export() in ui/include/classes/api/services/CConfiguration.php.
configuration.import
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
(object) Parameters containing the data to import and rules how the data should be handled.
format string Format of the serialized string.
Possible values:
yaml - YAML;
xml - XML;
json - JSON.
Parameter behavior:
- required
source string Serialized string containing the configuration data.
Parameter behavior:
- required
rules object Rules on how new and existing objects should be imported.
Parameter behavior:
- required
Note:
If no rules are given, the configuration will not be updated.
discoveryRules object Rules on how to import LLD rules.
Supported parameters:
createMissing - (boolean) if set to true, new LLD rules will be
created; default: false;
updateExisting - (boolean) if set to true, existing LLD rules will
be updated; default: false;
deleteMissing - (boolean) if set to true, LLD rules not present in
the imported data will be deleted from the database; default: false.
graphs object Rules on how to import graphs.
Supported parameters:
createMissing - (boolean) if set to true, new graphs will be
created; default: false;
updateExisting - (boolean) if set to true, existing graphs will be
updated; default: false;
deleteMissing - (boolean) if set to true, graphs not present in
the imported data will be deleted from the database; default: false.
host_groups object Rules on how to import host groups.
Supported parameters:
createMissing - (boolean) if set to true, new host groups will be
created; default: false;
updateExisting - (boolean) if set to true, existing host groups
will be updated; default: false.
template_groups object Rules on how to import template groups.
Supported parameters:
createMissing - (boolean) if set to true, new template groups
will be created; default: false;
updateExisting - (boolean) if set to true, existing template
groups will be updated; default: false.
hosts object Rules on how to import hosts.
Supported parameters:
createMissing - (boolean) if set to true, new hosts will be
created; default: false;
updateExisting - (boolean) if set to true, existing hosts will be
updated; default: false.
httptests object Rules on how to import web scenarios.
Supported parameters:
createMissing - (boolean) if set to true, new web scenarios will
be created; default: false;
updateExisting - (boolean) if set to true, existing web scenarios
will be updated; default: false;
deleteMissing - (boolean) if set to true, web scenarios not
present in the imported data will be deleted from the database;
default: false.
images object Rules on how to import images.
Supported parameters:
createMissing - (boolean) if set to true, new images will be
created; default: false;
updateExisting - (boolean) if set to true, existing images will be
updated; default: false.
Parameter Type Description
items object Rules on how to import items.
Supported parameters:
createMissing - (boolean) if set to true, new items will be
created; default: false;
updateExisting - (boolean) if set to true, existing items will be
updated; default: false;
deleteMissing - (boolean) if set to true, items not present in the
imported data will be deleted from the database; default: false.
maps object Rules on how to import maps.
Supported parameters:
createMissing - (boolean) if set to true, new maps will be
created; default: false;
updateExisting - (boolean) if set to true, existing maps will be
updated; default: false.
mediaTypes object Rules on how to import media types.
Supported parameters:
createMissing - (boolean) if set to true, new media types will be
created; default: false;
updateExisting - (boolean) if set to true, existing media types
will be updated; default: false.
templateLinkage object Rules on how to import template links.
Supported parameters:
createMissing - (boolean) if set to true, templates that are not linked to the host or template being imported, but are present in the imported data, will be linked; default: false;
deleteMissing - (boolean) if set to true, templates that are
linked to the host or template being imported, but are not present in
the imported data, will be unlinked without removing entities (items,
triggers, etc.) inherited from the unlinked templates; default: false.
templates object Rules on how to import templates.
Supported parameters:
createMissing - (boolean) if set to true, new templates will be
created; default: false;
updateExisting - (boolean) if set to true, existing templates will
be updated; default: false.
templateDashboards object Rules on how to import template dashboards.
Supported parameters:
createMissing - (boolean) if set to true, new template
dashboards will be created; default: false;
updateExisting - (boolean) if set to true, existing template
dashboards will be updated; default: false;
deleteMissing - (boolean) if set to true, template dashboards
not present in the imported data will be deleted from the database;
default: false.
triggers object Rules on how to import triggers.
Supported parameters:
createMissing - (boolean) if set to true, new triggers will be
created; default: false;
updateExisting - (boolean) if set to true, existing triggers will be
updated; default: false;
deleteMissing - (boolean) if set to true, triggers not present in
the imported data will be deleted from the database; default: false.
Parameter Type Description
valueMaps object Rules on how to import value maps.
Supported parameters:
createMissing - (boolean) if set to true, new value maps will be
created; default: false;
updateExisting - (boolean) if set to true, existing value maps
will be updated; default: false;
deleteMissing - (boolean) if set to true, value maps not present
in the imported data will be deleted from the database; default: false.
Return values
Importing a template
Import the template configuration contained in the XML string. If any items or triggers in the XML string are missing, they will be
deleted from the database, and everything else will be left unchanged.
Request:
{
"jsonrpc": "2.0",
"method": "configuration.import",
"params": {
"format": "xml",
"rules": {
"templates": {
"createMissing": true,
"updateExisting": true
},
"items": {
"createMissing": true,
"updateExisting": true,
"deleteMissing": true
},
"triggers": {
"createMissing": true,
"updateExisting": true,
"deleteMissing": true
},
"valueMaps": {
"createMissing": true,
"updateExisting": false
}
},
"source": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<zabbix_export><version>7.0</version><templ
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": true,
"id": 1
}
Source
CConfiguration::import() in ui/include/classes/api/services/CConfiguration.php.
configuration.importcompare
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
(object) Parameters containing the possible data to import and rules how the data should be handled.
format string Format of the serialized string.
Possible values:
yaml - YAML;
xml - XML;
json - JSON.
Parameter behavior:
- required
source string Serialized string containing the configuration data.
Parameter behavior:
- required
rules object Rules on how new and existing objects should be compared.
Parameter behavior:
- required
Note:
If no rules are given, there will be nothing to update and the result will be empty.
Note:
Comparison will be done only for host groups and templates. Triggers and graphs will be compared only for imported templates; any others will be considered ”new”.
Supported parameters:
createMissing - (boolean) if set to true, new LLD rules will be
created; default: false;
updateExisting - (boolean) if set to true, existing LLD rules will
be updated; default: false;
deleteMissing - (boolean) if set to true, LLD rules not present in
the imported data will be deleted from the database; default: false.
Parameter Type Description
Supported parameters:
createMissing - (boolean) if set to true, new graphs will be
created; default: false;
updateExisting - (boolean) if set to true, existing graphs will be
updated; default: false;
deleteMissing - (boolean) if set to true, graphs not present in
the imported data will be deleted from the database; default: false.
host_groups object Rules on how to import host groups.
Supported parameters:
createMissing - (boolean) if set to true, new host groups will be
created; default: false;
updateExisting - (boolean) if set to true, existing host groups
will be updated; default: false.
template_groups object Rules on how to import template groups.
Supported parameters:
createMissing - (boolean) if set to true, new template groups will be created; default: false;
updateExisting - (boolean) if set to true, existing template
groups will be updated; default: false.
hosts object Rules on how to import hosts.
Supported parameters:
createMissing - (boolean) if set to true, new hosts will be
created; default: false;
updateExisting - (boolean) if set to true, existing hosts will be
updated; default: false.
Supported parameters:
createMissing - (boolean) if set to true, new web scenarios will
be created; default: false;
updateExisting - (boolean) if set to true, existing web scenarios
will be updated; default: false;
deleteMissing - (boolean) if set to true, web scenarios not
present in the imported data will be deleted from the database;
default: false.
images object Rules on how to import images.
Supported parameters:
createMissing - (boolean) if set to true, new images will be
created; default: false;
updateExisting - (boolean) if set to true, existing images will be
updated; default: false.
Supported parameters:
createMissing - (boolean) if set to true, new items will be
created; default: false;
updateExisting - (boolean) if set to true, existing items will be
updated; default: false;
deleteMissing - (boolean) if set to true, items not present in the
imported data will be deleted from the database; default: false.
Parameter Type Description
Supported parameters:
createMissing - (boolean) if set to true, new maps will be
created; default: false;
updateExisting - (boolean) if set to true, existing maps will be
updated; default: false.
Supported parameters:
createMissing - (boolean) if set to true, new media types will be
created; default: false;
updateExisting - (boolean) if set to true, existing media types
will be updated; default: false.
Supported parameters:
createMissing - (boolean) if set to true, templates that are not linked to the host or template being imported, but are present in the imported data, will be linked; default: false;
deleteMissing - (boolean) if set to true, templates that are
linked to the host or template being imported, but are not present in
the imported data, will be unlinked without removing entities (items,
triggers, etc.) inherited from the unlinked templates; default: false.
templates object Rules on how to import templates.
Supported parameters:
createMissing - (boolean) if set to true, new templates will be
created; default: false;
updateExisting - (boolean) if set to true, existing templates will
be updated; default: false.
templateDashboards object Rules on how to import template dashboards.
Supported parameters:
createMissing - (boolean) if set to true, new template
dashboards will be created; default: false;
updateExisting - (boolean) if set to true, existing template
dashboards will be updated; default: false;
deleteMissing - (boolean) if set to true, template dashboards
not present in the imported data will be deleted from the database;
default: false.
triggers object Rules on how to import triggers.
Supported parameters:
createMissing - (boolean) if set to true, new triggers will be
created; default: false;
updateExisting - (boolean) if set to true, existing triggers will be
updated; default: false;
deleteMissing - (boolean) if set to true, triggers not present in
the imported data will be deleted from the database; default: false.
valueMaps object Rules on how to import value maps.
Supported parameters:
createMissing - (boolean) if set to true, new value maps will be
created; default: false;
updateExisting - (boolean) if set to true, existing value maps
will be updated; default: false;
deleteMissing - (boolean) if set to true, value maps not present
in the imported data will be deleted from the database; default: false.
Return values
(object) Returns an object describing the changes that importing the data would produce, grouped by entity type (as shown in the example response below).
Examples
Compare the template contained in the XML string to the current system elements, and show what would be changed if this template were imported.
Request:
{
"jsonrpc": "2.0",
"method": "configuration.importcompare",
"params": {
"format": "xml",
"rules": {
"discoveryRules": {
"createMissing": true,
"updateExisting": true,
"deleteMissing": true
},
"graphs": {
"createMissing": true,
"updateExisting": true,
"deleteMissing": true
},
"host_groups": {
"createMissing": true,
"updateExisting": true
},
"template_groups": {
"createMissing": true,
"updateExisting": true
},
"httptests": {
"createMissing": true,
"updateExisting": true,
"deleteMissing": true
},
"items": {
"createMissing": true,
"updateExisting": true,
"deleteMissing": true
},
"templateLinkage": {
"createMissing": true,
"deleteMissing": true
},
"templates": {
"createMissing": true,
"updateExisting": true
},
"templateDashboards": {
"createMissing": true,
"updateExisting": true,
"deleteMissing": true
},
"triggers": {
"createMissing": true,
"updateExisting": true,
"deleteMissing": true
},
"valueMaps": {
"createMissing": true,
"updateExisting": true,
"deleteMissing": true
}
},
"source": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<zabbix_export><version>7.0</version><templ
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"templates": {
"updated": [
{
"before": {
"uuid": "5aef0444a82a4d8cb7a95dc4c0c85330",
"template": "New template",
"name": "New template",
"groups": [
{
"name": "Templates"
}
]
},
"after": {
"uuid": "5aef0444a82a4d8cb7a95dc4c0c85330",
"template": "New template",
"name": "New template",
"groups": [
{
"name": "Templates"
}
]
},
"items": {
"added": [
{
"after": {
"uuid": "648006da5971424ead0c47ddbbf1ea2e",
"name": "CPU utilization",
"key": "system.cpu.util",
"value_type": "FLOAT",
"units": "%"
},
"triggers": {
"added": [
{
"after": {
"uuid": "736225012c534ec480c2a66a91322ce0",
"expression": "avg(/New template/system.cpu.util,3m)>70",
"name": "CPU utilization too high on 'New host' for 3 minu
"priority": "WARNING"
}
}
]
}
}
],
"removed": [
{
"before": {
"uuid": "6805d4c39a624a8bab2cc8ab63df1ab3",
"name": "CPU load",
"key": "system.cpu.load",
"value_type": "FLOAT"
},
"triggers": {
"removed": [
{
"before": {
"uuid": "ab4c2526c2bc42e48a633082255ebcb3",
"expression": "avg(/New template/system.cpu.load,3m)>2",
"name": "CPU load too high on 'New host' for 3 minutes",
"priority": "WARNING"
}
}
]
}
}
],
"updated": [
{
"before": {
"uuid": "7f1e6f1e48aa4a128e5b6a958a5d11c3",
"name": "Zabbix agent ping",
"key": "agent.ping"
},
"after": {
"uuid": "7f1e6f1e48aa4a128e5b6a958a5d11c3",
"name": "Zabbix agent ping",
"key": "agent.ping",
"delay": "3m"
}
}
]
}
}
]
}
},
"id": 1
}
Source
CConfiguration::importcompare() in ui/include/classes/api/services/CConfiguration.php.
Connector
Object references:
• Connector
• Tag filter
Available methods:
Connector object
connectorid ID ID of the connector.
Property behavior:
- read-only
- required for update operations
name string Name of the connector.
Property behavior:
- required for create operations
url string Endpoint URL, that is, URL of the receiver.
User macros are supported.
Property behavior:
- required for create operations
protocol integer Communication protocol.
Possible values:
0 - (default) Zabbix Streaming Protocol v1.0.
data_type integer Data type.
Possible values:
0 - (default) Item values;
1 - Events.
item_value_type integer A sum of item value types to be sent.
Possible values:
1 - Numeric (float);
2 - Character;
4 - Log;
8 - Numeric (unsigned);
16 - Text.
Property behavior:
- supported if data_type is set to ”Item values”.
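Note:
Since item_value_type is a sum of item value types, the listed values can be added together: for example, 9 (1 + 8) selects only Numeric (float) and Numeric (unsigned) values, while 31 (1 + 2 + 4 + 8 + 16) selects all value types.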
max_records integer Maximum number of events or items that can be sent within one
message.
Default: 0 - Unlimited.
max_senders integer Number of sender processes to run in parallel.
Default: 1.
max_attempts integer Number of attempts.
Default: 1.
attempt_interval string The interval between retry attempts.
Accepts seconds.
Default: 5s.
Property behavior:
- supported if max_attempts is greater than 1.
timeout string Timeout.
Time suffixes are supported (e.g., 30s, 1m).
User macros are supported.
Default: 5s.
http_proxy string HTTP(S) proxy connection string given as
[protocol]://[username[:password]@]proxy.example.com[:port].
authtype integer HTTP authentication method.
Possible values:
0 - (default) None;
1 - Basic;
2 - NTLM;
3 - Kerberos;
4 - Digest;
5 - Bearer.
username string User name.
User macros are supported.
Property behavior:
- supported if authtype is set to ”Basic”, ”NTLM”, ”Kerberos”, or
”Digest”
password string Password.
User macros are supported.
Property behavior:
- supported if authtype is set to ”Basic”, ”NTLM”, ”Kerberos”, or
”Digest”
token string Bearer token.
User macros are supported.
Property behavior:
- required if authtype is set to ”Bearer”
verify_peer integer Whether to validate that the host’s certificate is authentic.
Possible values:
0 - Do not validate;
1 - (default) Validate.
verify_host integer Whether to validate that the host name for the connection matches the
one in the host’s certificate.
Possible values:
0 - Do not validate;
1 - (default) Validate.
ssl_cert_file string Public SSL Key file path.
User macros are supported.
ssl_key_file string Private SSL Key file path.
User macros are supported.
ssl_key_password string Password for SSL Key file.
User macros are supported.
description text Description of the connector.
status integer Whether the connector is enabled.
Possible values:
0 - Disabled;
1 - (default) Enabled.
tags_evaltype integer Tag evaluation method.
Possible values:
0 - (default) And/Or;
2 - Or.
Tag filter
The tag filter allows exporting only matching item values or events. If no tag filter is set, everything will be exported. The tag filter object has the following properties.
tag string Tag name.
Property behavior:
- required
operator integer Condition operator.
Possible values:
0 - (default) Equals;
1 - Does not equal;
2 - Contains;
3 - Does not contain;
12 - Exists;
13 - Does not exist.
value string Tag value.
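For illustration, a minimal sketch of a tag filter as it could appear in a connector object, matching values whose tag ”component” equals ”memory” or whose tag ”error” exists (the tag names and values are hypothetical examples):
"tags_evaltype": 2,
"tags": [
    {
        "tag": "component",
        "operator": 0,
        "value": "memory"
    },
    {
        "tag": "error",
        "operator": 12,
        "value": ""
    }
]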
connector.create
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
Additionally to the standard connector properties, the method accepts the following parameters.
Return values
(object) Returns an object containing the IDs of the created connectors under the connectorids property. The order of the
returned IDs matches the order of the passed connectors.
Examples
Creating a connector
Create a connector to export trigger events with a tag filter. HTTP authentication will be performed using Bearer token.
Request:
{
"jsonrpc": "2.0",
"method": "connector.create",
"params": [
{
"name": "Export of events",
"data_type": 1,
"url": "{$DATA_EXPORT_URL}",
"authtype": 5,
"token": "{$DATA_EXPORT_BEARER_TOKEN}",
"tags": [
{
"tag": "service",
"operator": 0,
"value": "mysqld"
}
]
}
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"connectorid": [
"3"
]
},
"id": 1
}
Source
CConnector::create() in ui/include/classes/api/services/CConnector.php.
connector.delete
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
(array) IDs of the connectors to delete.
Return values
(object) Returns an object containing the IDs of the deleted connectors under the connectorids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "connector.delete",
"params": [
3,
5
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"connectorids": [
"3",
"5"
]
},
"id": 1
}
Source
CConnector::delete() in ui/include/classes/api/services/CConnector.php.
connector.get
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
Supports count.
sortfield string/array Sort the result by the given properties.
countOutput boolean These parameters being common for all get methods are described in
detail in the reference commentary.
excludeSearch boolean
filter object
limit integer
output query
preservekeys boolean
search object
searchByAny boolean
searchWildcardsEnabled boolean
sortorder string/array
startSearch boolean
Return values
(integer/array) Returns either an array of objects or the count of retrieved objects, if the countOutput parameter has been used.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "connector.get",
"params": {
"output": "extend",
"selectTags": ["tag", "operator", "value"],
"preservekeys": true
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"connectorid": "1",
"name": "Export of item values",
"protocol": "0",
"data_type": "0",
"url": "{$DATA_EXPORT_VALUES_URL}",
"item_value_type": "31",
"authtype": "4",
"username": "{$DATA_EXPORT_VALUES_USERNAME}",
"password": "{$DATA_EXPORT_VALUES_PASSWORD}",
"token": "",
"max_records": "0",
"max_senders": "4",
"max_attempts": "2",
"attempt_interval": "10s",
"timeout": "10s",
"http_proxy": "{$DATA_EXPORT_VALUES_PROXY}",
"verify_peer": "1",
"verify_host": "1",
"ssl_cert_file": "{$DATA_EXPORT_VALUES_SSL_CERT_FILE}",
"ssl_key_file": "{$DATA_EXPORT_VALUES_SSL_KEY_FILE}",
"ssl_key_password": "",
"description": "",
"status": "1",
"tags_evaltype": "0",
"tags": [
{
"tag": "component",
"operator": "0",
"value": "memory"
}
]
},
{
"connectorid": "2",
"name": "Export of events",
"protocol": "0",
"data_type": "1",
"url": "{$DATA_EXPORT_EVENTS_URL}",
"item_value_type": "31",
"authtype": "5",
"username": "",
"password": "",
"token": "{$DATA_EXPORT_EVENTS_BEARER_TOKEN}",
"max_records": "0",
"max_senders": "2",
"max_attempts": "1",
"attempt_interval": "5s",
"timeout": "5s",
"http_proxy": "",
"verify_peer": "1",
"verify_host": "1",
"ssl_cert_file": "",
"ssl_key_file": "",
"ssl_key_password": "",
"description": "",
"status": "1",
"tags_evaltype": "0",
"tags": [
{
"tag": "scope",
"operator": "0",
"value": "performance"
}
]
}
],
"id": 1
}
Source
CConnector::get() in ui/include/classes/api/services/CConnector.php.
connector.update
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
Additionally to the standard connector properties, the method accepts the following parameters.
tags array Connector tag filter to replace the current tag filter.
Return values
(object) Returns an object containing the IDs of the updated connectors under the connectorids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "connector.update",
"params": {
"connectorid": 3,
"authtype": 5,
"token": "{$DATA_EXPORT_BEARER_TOKEN}"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"connectorids": [
"3"
]
},
"id": 1
}
Request:
{
"jsonrpc": "2.0",
"method": "connector.update",
"params": [
{
"connectorid": 5,
"tags_evaltype": 2,
"tags": [
{
"tag": "service",
"operator": 0,
"value": "mysqld"
},
{
"tag": "error",
"operator": 12,
"value": ""
}
]
}
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"connectorids": [
"5"
]
},
"id": 1
}
Source
CConnector::update() in ui/include/classes/api/services/CConnector.php.
Correlation
Object references:
• Correlation
• Correlation operation
• Correlation filter
• Correlation filter condition
Available methods:
Correlation object
correlationid ID ID of the correlation.
Property behavior:
- read-only
- required for update operations
name string Name of the correlation.
Property behavior:
- required for create operations
description string Description of the correlation.
status integer Whether the correlation is enabled or disabled.
Possible values:
0 - (default) enabled;
1 - disabled.
Correlation operation
The correlation operation object defines an operation that will be performed when a correlation is executed. It has the following
properties.
type integer Type of operation.
Possible values:
0 - close old events;
1 - close new event.
Property behavior:
- required
Correlation filter
The correlation filter object defines a set of conditions that must be met to perform the configured correlation operations. It has
the following properties.
conditions array Set of filter conditions to use for filtering results. The conditions will be
sorted in the order of their placement in the formula.
Property behavior:
- required
evaltype integer Filter condition evaluation method.
Possible values:
0 - and/or;
1 - and;
2 - or;
3 - custom expression.
Property behavior:
- required
eval_formula string Generated expression that will be used for evaluating filter conditions.
The expression contains IDs that reference specific filter conditions by its formulaid. The value of eval_formula is equal to the value of formula for filters with a custom expression.
Property behavior:
- read-only
formula string User-defined expression to be used for evaluating conditions of filters with a custom expression. The expression must contain IDs that reference specific filter conditions by its formulaid.
Property behavior:
- required if evaltype is set to ”custom expression”
Correlation filter condition
The correlation filter condition object defines a specific condition that must be checked before running the correlation operations. It has the following properties.
type integer Type of condition.
Possible values:
0 - old event tag;
1 - new event tag;
2 - new event host group;
3 - event tag pair;
4 - old event tag value;
5 - new event tag value.
Property behavior:
- required
tag string Event tag (old or new).
Property behavior:
- required if type is set to ”old event tag”, ”new event tag”, ”old event
tag value”, or ”new event tag value”
groupid ID ID of the host group.
Property behavior:
- required if type is set to ”new event host group”
oldtag string Old event tag.
Property behavior:
- required if type is set to ”event tag pair”
newtag string New event tag.
Property behavior:
- required if type is set to ”event tag pair”
value string Event tag (old or new) value.
Property behavior:
- required if type is set to ”old event tag value” or ”new event tag
value”
formulaid string Arbitrary unique ID that is used to reference the condition from a
custom expression. Can only contain capital-case letters. The ID must
be defined by the user when modifying filter conditions, but will be
generated anew when requesting them afterward.
operator integer Condition operator.
Property behavior:
- required if type is set to ”new event host group”, ”old event tag
value”, or ”new event tag value”
Note:
To better understand how to use filters with various types of expressions, see examples on the correlation.get and correla-
tion.create method pages.
The following operators and values are supported for each condition type.
correlation.create
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
Parameter behavior:
- required
filter object Correlation filter object for the correlation.
Parameter behavior:
- required
Return values
(object) Returns an object containing the IDs of the created correlations under the correlationids property. The order of the
returned IDs matches the order of the passed correlations.
Examples
Create a correlation using evaluation method AND/OR with one condition and one operation. By default the correlation will be
enabled.
Request:
{
"jsonrpc": "2.0",
"method": "correlation.create",
"params": {
"name": "new event tag correlation",
"filter": {
"evaltype": 0,
"conditions": [
{
"type": 1,
"tag": "ok"
}
]
},
"operations": [
{
"type": 0
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"correlationids": [
"1"
]
},
"id": 1
}
Create a correlation that will use a custom filter condition. The formula IDs ”A” and ”B” have been chosen arbitrarily. Condition type
will be ”Host group” with operator ”<>”.
Request:
{
"jsonrpc": "2.0",
"method": "correlation.create",
"params": {
"name": "new host group correlation",
"description": "a custom description",
"status": 0,
"filter": {
"evaltype": 3,
"formula": "A or B",
"conditions": [
{
"type": 2,
"operator": 1,
"formulaid": "A"
},
{
"type": 2,
"operator": 1,
"formulaid": "B"
}
]
},
"operations": [
{
"type": 1
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"correlationids": [
"2"
]
},
"id": 1
}
See also
• Correlation filter
• Correlation operation
Source
CCorrelation::create() in ui/include/classes/api/services/CCorrelation.php.
correlation.delete
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
(array) IDs of the correlations to delete.
Return values
(object) Returns an object containing the IDs of the deleted correlations under the correlationids property.
Example
Request:
{
"jsonrpc": "2.0",
"method": "correlation.delete",
"params": [
"1",
"2"
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"correlationids": [
"1",
"2"
]
},
"id": 1
}
Source
CCorrelation::delete() in ui/include/classes/api/services/CCorrelation.php.
correlation.get
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
Return values
(integer/array) Returns either an array of objects or the count of retrieved objects, if the countOutput parameter has been used.
Examples
Retrieve correlations
Retrieve all configured correlations together with correlation conditions and operations. The filter uses the ”and/or” evaluation
type, so the formula property is empty and eval_formula is generated automatically.
Request:
{
"jsonrpc": "2.0",
"method": "correlation.get",
"params": {
"output": "extend",
"selectOperations": "extend",
"selectFilter": "extend"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"correlationid": "1",
"name": "Correlation 1",
"description": "",
"status": "0",
"filter": {
"evaltype": "0",
"formula": "",
"conditions": [
{
"type": "3",
"oldtag": "error",
"newtag": "ok",
"formulaid": "A"
}
],
"eval_formula": "A"
},
"operations": [
{
"type": "0"
}
]
}
],
"id": 1
}
See also
• Correlation filter
• Correlation operation
Source
CCorrelation::get() in ui/include/classes/api/services/CCorrelation.php.
correlation.update
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
Additionally to the standard correlation properties, the method accepts the following parameters.
Return values
(object) Returns an object containing the IDs of the updated correlations under the correlationids property.
Examples
Disable correlation
Request:
{
"jsonrpc": "2.0",
"method": "correlation.update",
"params": {
"correlationid": "1",
"status": "1"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"correlationids": [
"1"
]
},
"id": 1
}
Request:
{
"jsonrpc": "2.0",
"method": "correlation.update",
"params": {
"correlationid": "1",
"filter": {
"conditions": [
{
"type": 3,
"oldtag": "error",
"newtag": "ok"
}
]
}
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"correlationids": [
"1"
]
},
"id": 1
}
See also
• Correlation filter
• Correlation operation
Source
CCorrelation::update() in ui/include/classes/api/services/CCorrelation.php.
Dashboard
Object references:
• Dashboard
• Dashboard page
• Dashboard widget
• Dashboard widget field
• Dashboard user group
• Dashboard user
Available methods:
Dashboard object
dashboardid ID ID of the dashboard.
Property behavior:
- read-only
- required for update operations
name string Name of the dashboard.
Property behavior:
- required for create operations
userid ID ID of the user that is the owner of the dashboard.
private integer Type of dashboard sharing.
Possible values:
0 - public dashboard;
1 - (default) private dashboard.
display_period integer Default page display period (in seconds).
Default: 30.
auto_start integer Auto start slideshow.
Possible values:
0 - do not auto start slideshow;
1 - (default) auto start slideshow.
Dashboard page
dashboard_pageid ID ID of the dashboard page.
Property behavior:
- read-only
name string Dashboard page name.
Dashboard widget
widgetid ID ID of the dashboard widget.
Property behavior:
- read-only
type string Type of the dashboard widget.
Possible values:
actionlog - Action log;
clock - Clock;
(deprecated) dataover - Data overview;
discovery - Discovery status;
favgraphs - Favorite graphs;
favmaps - Favorite maps;
gauge - Gauge;
geomap - Geomap;
graph - Graph (classic);
graphprototype - Graph prototype;
honeycomb - Honeycomb;
hostavail - Host availability;
hostnavigator - Host navigator;
itemhistory - Item history;
itemnavigator - Item navigator;
item - Item value;
map - Map;
navtree - Map Navigation Tree;
piechart - Pie chart;
problemhosts - Problem hosts;
problems - Problems;
problemsbysv - Problems by severity;
slareport - SLA report;
svggraph - Graph;
systeminfo - System information;
tophosts - Top hosts;
toptriggers - Top triggers;
trigover - Trigger overview;
url - URL;
web - Web monitoring.
Property behavior:
- required
name string Custom widget name.
x integer A horizontal position from the left side of the dashboard.
view_mode integer Widget view mode.
Possible values:
0 - (default) default widget view;
1 - with hidden header.
fields array Array of the dashboard widget field objects.
Property behavior:
- see individual widgets in Dashboard widget fields
Dashboard widget field
The dashboard widget field object has the following properties.
type integer Type of the widget field.
Possible values:
0 - Integer;
1 - String;
2 - Host group;
3 - Host;
4 - Item;
5 - Item prototype;
6 - Graph;
7 - Graph prototype;
8 - Map;
9 - Service;
10 - SLA;
11 - User;
12 - Action;
13 - Media type.
Property behavior:
- required
name string Widget field name.
Property behavior:
- required
value mixed Widget field value depending on the type.
Property behavior:
- required
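For illustration, a minimal sketch of a fields array combining an integer field and a host group field (the field names shown, rf_rate and groupids.0, are examples taken from individual widgets; see the Dashboard widget fields pages for the names each widget supports):
"fields": [
    {
        "type": 0,
        "name": "rf_rate",
        "value": 600
    },
    {
        "type": 2,
        "name": "groupids.0",
        "value": 4
    }
]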
Dashboard user group
List of dashboard permissions based on user groups. It has the following properties.
usrgrpid ID ID of the user group.
Property behavior:
- required
permission integer Type of permission level.
Possible values:
2 - read only;
3 - read-write.
Property behavior:
- required
Dashboard user
List of dashboard permissions based on users. It has the following properties.
userid ID ID of the user.
Property behavior:
- required
permission integer Type of permission level.
Possible values:
2 - read only;
3 - read-write.
Property behavior:
- required
dashboard.create
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
pages array Dashboard pages to be created for the dashboard. Dashboard pages
will be ordered in the same order as specified.
Parameter behavior:
- required
users array Dashboard user shares to be created on the dashboard.
userGroups array Dashboard user group shares to be created on the dashboard.
Return values
(object) Returns an object containing the IDs of the created dashboards under the dashboardids property. The order of the
returned IDs matches the order of the passed dashboards.
Examples
Creating a dashboard
Create a dashboard named ”My dashboard” with one Problems widget with tags and using two types of sharing (user group and
user) on a single dashboard page.
Request:
{
"jsonrpc": "2.0",
"method": "dashboard.create",
"params": {
"name": "My dashboard",
"display_period": 30,
"auto_start": 1,
"pages": [
{
"widgets": [
{
"type": "problems",
"x": 0,
"y": 0,
"width": 36,
"height": 5,
"view_mode": 0,
"fields": [
{
"type": 1,
"name": "tags.0.tag",
"value": "service"
},
{
"type": 0,
"name": "tags.0.operator",
"value": 1
},
{
"type": 1,
"name": "tags.0.value",
"value": "zabbix_server"
}
]
}
]
}
],
"userGroups": [
{
"usrgrpid": "7",
"permission": 2
}
],
"users": [
{
"userid": "4",
"permission": 3
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"dashboardids": [
"2"
]
},
"id": 1
}
See also
• Dashboard page
• Dashboard widget
• Dashboard widget field
• Dashboard user
• Dashboard user group
Source
CDashboard::create() in ui/include/classes/api/services/CDashboard.php.
dashboard.delete
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
(array) IDs of the dashboards to delete.
Return values
(object) Returns an object containing the IDs of the deleted dashboards under the dashboardids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "dashboard.delete",
"params": [
"2",
"3"
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"dashboardids": [
"2",
"3"
]
},
"id": 1
}
Source
CDashboard::delete() in ui/include/classes/api/services/CDashboard.php.
dashboard.get
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
Return values
(integer/array) Returns either an array of objects or the count of retrieved objects, if the countOutput parameter has been used.
Examples
Retrieving a dashboard by ID
Request:
{
"jsonrpc": "2.0",
"method": "dashboard.get",
"params": {
"output": "extend",
"selectPages": "extend",
"selectUsers": "extend",
"selectUserGroups": "extend",
"dashboardids": [
"1",
"2"
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"dashboardid": "1",
"name": "Dashboard",
"userid": "1",
"private": "0",
"display_period": "30",
"auto_start": "1",
"users": [],
"userGroups": [],
"pages": [
{
"dashboard_pageid": "1",
"name": "",
"display_period": "0",
"widgets": [
{
"widgetid": "9",
"type": "systeminfo",
"name": "",
"x": "12",
"y": "8",
"width": "12",
"height": "5",
"view_mode": "0",
"fields": []
},
{
"widgetid": "8",
"type": "problemsbysv",
"name": "",
"x": "12",
"y": "4",
"width": "12",
"height": "4",
"view_mode": "0",
"fields": []
},
{
"widgetid": "7",
"type": "problemhosts",
"name": "",
"x": "12",
"y": "0",
"width": "12",
"height": "4",
"view_mode": "0",
"fields": []
},
{
"widgetid": "6",
"type": "discovery",
"name": "",
"x": "6",
"y": "9",
"width": "18",
"height": "4",
"view_mode": "0",
"fields": []
},
{
"widgetid": "5",
"type": "web",
"name": "",
"x": "0",
"y": "9",
"width": "18",
"height": "4",
"view_mode": "0",
"fields": []
},
{
"widgetid": "4",
"type": "problems",
"name": "",
"x": "0",
"y": "3",
"width": "12",
"height": "6",
"view_mode": "0",
"fields": []
},
{
"widgetid": "3",
"type": "favmaps",
"name": "",
"x": "8",
"y": "0",
"width": "12",
"height": "3",
"view_mode": "0",
"fields": []
},
{
"widgetid": "1",
"type": "favgraphs",
"name": "",
"x": "0",
"y": "0",
"width": "12",
"height": "3",
"view_mode": "0",
"fields": []
}
]
},
{
"dashboard_pageid": "2",
"name": "",
"display_period": "0",
"widgets": []
},
{
"dashboard_pageid": "3",
"name": "Custom page name",
"display_period": "60",
"widgets": []
}
]
},
{
"dashboardid": "2",
"name": "My dashboard",
"userid": "1",
"private": "1",
"display_period": "60",
"auto_start": "1",
"users": [
{
"userid": "4",
"permission": "3"
}
],
"userGroups": [
{
"usrgrpid": "7",
"permission": "2"
}
],
"pages": [
{
"dashboard_pageid": "4",
"name": "",
"display_period": "0",
"widgets": [
{
"widgetid": "10",
"type": "problems",
"name": "",
"x": "0",
"y": "0",
"width": "12",
"height": "5",
"view_mode": "0",
"fields": [
{
"type": "2",
"name": "groupids",
"value": "4"
}
]
}
]
}
]
}
],
"id": 1
}
See also
• Dashboard page
• Dashboard widget
• Dashboard widget field
• Dashboard user
• Dashboard user group
Source
CDashboard::get() in ui/include/classes/api/services/CDashboard.php.
dashboard.update
Description
object dashboard.update(object/array dashboards)
This method allows to update existing dashboards.
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
Additionally to the standard dashboard properties, the method accepts the following parameters.
Return values
(object) Returns an object containing the IDs of the updated dashboards under the dashboardids property.
Examples
Renaming a dashboard
Request:
{
"jsonrpc": "2.0",
"method": "dashboard.update",
"params": {
"dashboardid": "2",
"name": "SQL server status"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"dashboardids": [
"2"
]
},
"id": 1
}
Rename the first dashboard page, replace widgets on the second dashboard page and add a new page as the third one. Delete all
other dashboard pages.
Request:
{
"jsonrpc": "2.0",
"method": "dashboard.update",
"params": {
"dashboardid": "2",
"pages": [
{
"dashboard_pageid": 1,
"name": "Renamed Page"
},
{
"dashboard_pageid": 2,
"widgets": [
{
"type": "clock",
"x": 0,
"y": 0,
"width": 12,
"height": 3
}
]
},
{
"display_period": 60
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"dashboardids": [
"2"
]
},
"id": 1
}
Request:
{
"jsonrpc": "2.0",
"method": "dashboard.update",
"params": {
"dashboardid": "2",
"userid": "1"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"dashboardids": [
"2"
]
},
"id": 1
}
See also
• Dashboard page
• Dashboard widget
• Dashboard widget field
• Dashboard user
• Dashboard user group
Source
CDashboard::update() in ui/include/classes/api/services/CDashboard.php.
This page contains navigation links for dashboard widget parameters and possible property values for the respective dashboard
widget field objects.
To see the parameters and property values for each widget, go to individual widget pages for:
• Action log
• Clock
• Discovery status
• Favorite graphs
• Favorite maps
• Gauge
• Geomap
• Graph
• Graph (classic)
• Graph prototype
• Honeycomb
• Host availability
• Host navigator
• Item history
• Item navigator
• Item value
• Map
• Map navigation tree
• Pie chart
• Problem hosts
• Problems
• SLA report
• System information
• Problems by severity
• Top hosts
• Top triggers
• Trigger overview
• URL
• Web monitoring
Deprecated widgets:
• Data overview
Attention:
Deprecated widgets will be removed in the upcoming major release.
1 Action log
Description
These parameters and the possible property values for the respective dashboard widget field objects allow to configure the Action
log widget in dashboard.create and dashboard.update methods.
Attention:
Widget fields properties are not validated during the creation or update of a dashboard. This allows users to modify
built-in widgets and create custom widgets, but also introduces the risk of creating or updating widgets incorrectly. To
ensure the successful creation or update of the Action log widget, please refer to the parameter behavior outlined in the
tables below.
Parameters
The following parameters are supported for the Action log widget.
Default: DASHBOARD._timeperiod
Alternatively, you can set the time period only in the From and To
parameters.
From 1 time_period.from Valid time string in absolute (YYYY-MM-DD hh:mm:ss) or relative
time syntax (now, now/d, now/w-1w, etc.).
Parameter behavior:
- supported if Time period is not set
To 1 time_period.to Valid time string in absolute (YYYY-MM-DD hh:mm:ss) or relative time syntax (now, now/d, now/w-1w, etc.).
Parameter behavior:
- supported if Time period is not set
Sort entries by 0 sort_triggers 3 - Time (ascending);
4 - (default) Time (descending);
5 - Type (ascending);
6 - Type (descending);
7 - Status (ascending);
8 - Status (descending);
11 - Recipient (ascending);
12 - Recipient (descending).
Show lines 0 show_lines Possible values range from 1-100.
Default: 25.
Examples
The following examples aim to only describe the configuration of the dashboard widget field objects for the Action log widget. For
more information on configuring a dashboard, see dashboard.create.
Configuring an Action log widget
Configure an Action log widget that displays 10 entries of action operation details, sorted by time (in ascending order). In addition,
display details only for those action operations that attempted to send an email to user ”1”, but were unsuccessful.
Request:
{
"jsonrpc": "2.0",
"method": "dashboard.create",
"params": {
"name": "My dashboard",
"display_period": 30,
"auto_start": 1,
"pages": [
{
"widgets": [
{
"type": "actionlog",
"name": "Action log",
"x": 0,
"y": 0,
"width": 36,
"height": 5,
"view_mode": 0,
"fields": [
{
"type": 0,
"name": "show_lines",
"value": 10
},
{
"type": 0,
"name": "sort_triggers",
"value": 3
},
{
"type": 11,
"name": "userids.0",
"value": 1
},
{
"type": 13,
"name": "mediatypeids.0",
"value": 1
},
{
"type": 0,
"name": "statuses.0",
"value": 2
}
]
}
]
}
],
"userGroups": [
{
"usrgrpid": 7,
"permission": 2
}
],
"users": [
{
"userid": 1,
"permission": 3
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"dashboardids": [
"3"
]
},
"id": 1
}
See also
2 Clock
Description
These parameters and the possible property values for the respective dashboard widget field objects allow to configure the Clock
widget in dashboard.create and dashboard.update methods.
Attention:
Widget fields properties are not validated during the creation or update of a dashboard. This allows users to modify
built-in widgets and create custom widgets, but also introduces the risk of creating or updating widgets incorrectly. To
ensure the successful creation or update of the Clock widget, please refer to the parameter behavior outlined in the tables
below.
Parameters
The following parameters are supported for the Clock widget.
The following parameters are supported if Time type is set to ”Host time”.
Advanced configuration
The following advanced configuration parameters are supported if Clock type is set to ”Digital”.
Date
The following advanced configuration parameters are supported if Clock type is set to ”Digital”, and Show is set to ”Date”.
Default: 20.
Bold 0 date_bold 0 - (default) Disabled;
1 - Enabled.
Color 1 date_color Hexadecimal color code (e.g. FF0000).
Time
The following advanced configuration parameters are supported if Clock type is set to ”Digital”, and Show is set to ”Time”.
Default: 30.
Bold 0 time_bold 0 - (default) Disabled;
1 - Enabled.
Color 1 time_color Hexadecimal color code (e.g. FF0000).
Time zone
The following advanced configuration parameters are supported if Clock type is set to ”Digital”, and Show is set to ”Time zone”.
Default: 20.
Bold 0 tzone_bold 0 - (default) Disabled;
1 - Enabled.
Color 1 tzone_color Hexadecimal color code (e.g. FF0000).
Default: local.
Parameter behavior:
- supported if Time type is set to ”Local time” or ”Server time”
Format 0 tzone_format 0 - (default) Short;
1 - Full.
Parameter behavior:
- supported if Time type is set to ”Local time” or ”Server time”
Examples
The following examples aim to only describe the configuration of the dashboard widget field objects for the Clock widget. For more
information on configuring a dashboard, see dashboard.create.
Configuring a Clock widget
Configure a Clock widget that displays local date, time and time zone in a customized digital clock.
Request:
{
"jsonrpc": "2.0",
"method": "dashboard.create",
"params": {
"name": "My dashboard",
"display_period": 30,
"auto_start": 1,
"pages": [
{
"widgets": [
{
"type": "clock",
"name": "Clock",
"x": 0,
"y": 0,
"width": 12,
"height": 3,
"view_mode": 0,
"fields": [
{
"type": 0,
"name": "clock_type",
"value": 1
},
{
"type": 0,
"name": "show.0",
"value": 1
},
{
"type": 0,
"name": "show.1",
"value": 2
},
{
"type": 0,
"name": "show.2",
"value": 3
},
{
"type": 0,
"name": "date_size",
"value": 20
},
{
"type": 1,
"name": "date_color",
"value": "E1E1E1"
},
{
"type": 0,
"name": "time_bold",
"value": 1
},
{
"type": 0,
"name": "tzone_size",
"value": 10
},
{
"type": 1,
"name": "tzone_color",
"value": "E1E1E1"
},
{
"type": 1,
"name": "tzone_timezone",
"value": "Europe/Riga"
},
{
"type": 0,
"name": "tzone_format",
"value": 1
}
]
}
]
}
],
"userGroups": [
{
"usrgrpid": 7,
"permission": 2
}
],
"users": [
{
"userid": 1,
"permission": 3
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"dashboardids": [
"3"
]
},
"id": 1
}
See also
3 Data overview
Attention:
This widget is deprecated and will be removed in the upcoming major release. Consider using the Top hosts widget instead.
Description
These parameters and the possible property values for the respective dashboard widget field objects allow to configure the Data
overview widget in dashboard.create and dashboard.update methods.
Attention:
Widget fields properties are not validated during the creation or update of a dashboard. This allows users to modify
built-in widgets and create custom widgets, but also introduces the risk of creating or updating widgets incorrectly. To
ensure the successful creation or update of the Data overview widget, please refer to the parameter behavior outlined in
the tables below.
Parameters
The following parameters are supported for the Data overview widget.
Note: The number in the property name references tag order in the
tag evaluation list.
Parameter behavior:
- required if configuring Item tags
Note: The number in the property name references tag order in the
tag evaluation list.
Parameter behavior:
- required if configuring Item tags
Tag value 1 tags.0.value Any string value.
Note: The number in the property name references tag order in the
tag evaluation list.
Parameter behavior:
- required if configuring Item tags
Show suppressed problems 0 show_suppressed 0 - (default) Disabled;
1 - Enabled.
Hosts location 0 style 0 - (default) Left;
1 - Top.
Examples
The following examples aim to only describe the configuration of the dashboard widget field objects for the Data overview widget.
For more information on configuring a dashboard, see dashboard.create.
Configuring a Data overview widget
Configure a Data overview widget that displays data for host ”10084” and only for items for which the tag with the name ”component” contains value ”cpu”. In addition, display the data with hosts located on top.
Request:
{
"jsonrpc": "2.0",
"method": "dashboard.create",
"params": {
"name": "My dashboard",
"display_period": 30,
"auto_start": 1,
"pages": [
{
"widgets": [
{
"type": "dataover",
"name": "Data overview",
"x": 0,
"y": 0,
"width": 36,
"height": 5,
"view_mode": 0,
"fields": [
{
"type": 3,
"name": "hostids.0",
"value": 10084
},
{
"type": 1,
"name": "tags.0.tag",
"value": "component"
},
{
"type": 0,
"name": "tags.0.operator",
"value": 0
},
{
"type": 1,
"name": "tags.0.value",
"value": "cpu"
},
{
"type": 0,
"name": "style",
"value": 1
}
]
}
]
}
],
"userGroups": [
{
"usrgrpid": 7,
"permission": 2
}
],
"users": [
{
"userid": 1,
"permission": 3
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"dashboardids": [
"3"
]
},
"id": 1
}
See also
4 Discovery status
Description
These parameters and the possible property values for the respective dashboard widget field objects allow to configure the Discovery status widget in dashboard.create and dashboard.update methods.
Parameters
The following parameters are supported for the Discovery status widget.
Examples
The following examples aim to only describe the configuration of the dashboard widget field objects for the Discovery status widget.
For more information on configuring a dashboard, see dashboard.create.
Configuring Discovery status widget
Configure a Discovery status widget with the refresh interval set to 15 minutes.
Request:
{
"jsonrpc": "2.0",
"method": "dashboard.create",
"params": {
"name": "My dashboard",
"display_period": 30,
"auto_start": 1,
"pages": [
{
"widgets": [
{
"type": "discovery",
"name": "Discovery status",
"x": 0,
"y": 0,
"width": 18,
"height": 3,
"view_mode": 0,
"fields": [
{
"type": 0,
"name": "rf_rate",
"value": 900
}
]
}
]
}
],
"userGroups": [
{
"usrgrpid": 7,
"permission": 2
}
],
"users": [
{
"userid": 1,
"permission": 3
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"dashboardids": [
"3"
]
},
"id": 1
}
See also
5 Favorite graphs
Description
These parameters and the possible property values for the respective dashboard widget field objects allow to configure the Favorite
graphs widget in dashboard.create and dashboard.update methods.
Parameters
The following parameters are supported for the Favorite graphs widget.
Examples
The following examples aim to only describe the configuration of the dashboard widget field objects for the Favorite graphs widget.
For more information on configuring a dashboard, see dashboard.create.
Configuring a Favorite graphs widget
Configure a Favorite graphs widget with the refresh interval set to 10 minutes.
Request:
{
"jsonrpc": "2.0",
"method": "dashboard.create",
"params": {
"name": "My dashboard",
"display_period": 30,
"auto_start": 1,
"pages": [
{
"widgets": [
{
"type": "favgraphs",
"name": "Favorite graphs",
"x": 0,
"y": 0,
"width": 12,
"height": 3,
"view_mode": 0,
"fields": [
{
"type": 0,
"name": "rf_rate",
"value": 600
}
]
}
]
}
],
"userGroups": [
{
"usrgrpid": 7,
"permission": 2
}
],
"users": [
{
"userid": 1,
"permission": 3
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"dashboardids": [
"3"
]
},
"id": 1
}
See also
6 Favorite maps
Description
These parameters and the possible property values for the respective dashboard widget field objects allow to configure the Favorite
maps widget in dashboard.create and dashboard.update methods.
Parameters
The following parameters are supported for the Favorite maps widget.
Examples
The following examples aim to only describe the configuration of the dashboard widget field objects for the Favorite maps widget.
For more information on configuring a dashboard, see dashboard.create.
Configuring a Favorite maps widget
Configure a Favorite maps widget with the refresh interval set to 10 minutes.
Request:
{
"jsonrpc": "2.0",
"method": "dashboard.create",
"params": {
"name": "My dashboard",
"display_period": 30,
"auto_start": 1,
"pages": [
{
"widgets": [
{
"type": "favmaps",
"name": "Favorite maps",
"x": 0,
"y": 0,
"width": 12,
"height": 3,
"view_mode": 0,
"fields": [
{
"type": 0,
"name": "rf_rate",
"value": 600
}
]
}
]
}
],
"userGroups": [
{
"usrgrpid": 7,
"permission": 2
}
],
"users": [
{
"userid": 1,
"permission": 3
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"dashboardids": [
"3"
]
},
"id": 1
}
See also
7 Gauge
Description
These parameters and the possible property values for the respective dashboard widget field objects allow to configure the Gauge
widget in dashboard.create and dashboard.update methods.
Attention:
Widget fields properties are not validated during the creation or update of a dashboard. This allows users to modify built-
in widgets and create custom widgets, but also introduces the risk of creating or updating widgets incorrectly. To ensure
the successful creation or update of the Gauge widget, please refer to the parameter behavior outlined in the tables below.
Parameters
The following parameters are supported for the Gauge widget.
Item 4 itemid.0 Item ID.
Parameter behavior:
- required if Item (Widget) is not set
Item (Widget) 1 itemid._reference Instead of Item ID:
ABCDE._itemid - set a compatible widget (with its Reference
parameter set to ”ABCDE”) as the data source for items.
Parameter behavior:
- required if Item is not set
Min 1 min Any numeric value. Suffixes (e.g. ”1d”, ”2w”, ”4K”, ”8G”) are
supported.
Default: ”0”.
Max 1 max Any numeric value. Suffixes (e.g. ”1d”, ”2w”, ”4K”, ”8G”) are
supported.
Default: ”100”.
Default: 1, 2, 4, 5.
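For illustration, a minimal sketch of the Item (Widget) variant, where the Gauge widget takes its item from a compatible widget whose Reference parameter is set to ”ABCDE” (the reference value is a hypothetical example):
"fields": [
    {
        "type": 1,
        "name": "itemid._reference",
        "value": "ABCDE._itemid"
    }
]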
Advanced configuration
The following advanced configuration parameters are supported for the Gauge widget.
Note:
The number in the Thresholds property name (e.g. thresholds.0.color) references the threshold place in a list, sorted in ascending order. However, if thresholds are configured in a different order, the values will be sorted in ascending order after updating the widget configuration in Zabbix frontend (e.g. "thresholds.0.threshold":"5" → "thresholds.0.threshold":"1"; "thresholds.1.threshold":"1" → "thresholds.1.threshold":"5").
Default: {ITEM.NAME}.
Size 0 desc_size Possible values range from 1-100.
Default: 15.
Vertical position 0 desc_v_pos 0 - Top;
1 - (default) Bottom.
Default: 2.
Size 0 value_size Possible values range from 1-100.
Default: 25.
Bold 0 value_bold 0 - (default) Disabled;
1 - Enabled.
Color 1 value_color Hexadecimal color code (e.g. FF0000).
Parameter behavior:
- supported if Units (checkbox) is set to ”Enabled”
Size 0 units_size Possible values range from 1-100.
Default: 25.
Parameter behavior:
- supported if Units (checkbox) is set to ”Enabled”
Bold 0 units_bold 0 - (default) Disabled;
1 - Enabled.
Parameter behavior:
- supported if Units (checkbox) is set to ”Enabled”
Position 0 units_pos 0 - Before value;
1 - Above value;
2 - (default) After value;
3 - Below value.
Parameter behavior:
- supported if Units (checkbox) is set to ”Enabled”
Default: 20.
Needle
Color 1 needle_color Hexadecimal color code (e.g. FF0000).
Parameter behavior:
- supported if a dashboard widget field object for Show with the value
”Value arc” is set, or Show arc is set to ”Enabled”
Scale
Parameter behavior:
- supported if Units (checkbox) is set to ”Enabled” and either a
dashboard widget field object for Show with the value ”Value arc” is set,
or Show arc is set to ”Enabled”
Size 0 scale_size Possible values range from 1-100.
Default: 15.
Parameter behavior:
- supported if a dashboard widget field object for Show with the value
”Value arc” is set, or Show arc is set to ”Enabled”
Decimal places 0 scale_decimal_places Possible values range from 1-10.
Default: 0.
Parameter behavior:
- supported if a dashboard widget field object for Show with the value
”Value arc” is set, or Show arc is set to ”Enabled”
Thresholds
Color 1 thresholds.0.color Hexadecimal color code (e.g. FF0000).
Threshold 1 thresholds.0.threshold Any numeric value. Suffixes (e.g. ”1d”, ”2w”, ”4K”, ”8G”) are supported.
Show labels 0 th_show_labels 0 - (default) Disabled;
1 - Enabled.
Parameter behavior:
- supported if Thresholds are set and either a dashboard widget field
object for Show with the value ”Value arc” is set or Show arc is set to
”Enabled”
Show arc 0 th_show_arc 0 - (default) Disabled;
1 - Enabled.
Parameter behavior:
- supported if Thresholds are set
Arc size 0 th_arc_size Possible values range from 1-100.
Default: 5.
Parameter behavior:
- supported if Show arc is set to ”Enabled”
Examples
The following examples aim to only describe the configuration of the dashboard widget field objects for the Gauge widget. For
more information on configuring a dashboard, see dashboard.create.
Configuring a Gauge widget
Configure a Gauge widget that displays the item value for the item ”44474” (Interface enp0s3: Bits sent). In addition, visually
fine-tune the widget with multiple advanced options, including thresholds.
Request:
{
"jsonrpc": "2.0",
"method": "dashboard.create",
"params": {
"name": "My dashboard",
"display_period": 30,
"auto_start": 1,
"pages": [
{
"widgets": [
{
"type": "gauge",
"name": "Gauge",
"x": 0,
"y": 0,
"width": 18,
"height": 5,
"view_mode": 0,
"fields": [
{
"type": 4,
"name": "itemid.0",
"value": 44474
},
{
"type": 1,
"name": "min",
"value": "100000"
},
{
"type": 1,
"name": "max",
"value": "1000000"
},
{
"type": 0,
"name": "show.0",
"value": 1
},
{
"type": 0,
"name": "show.1",
"value": 2
},
{
"type": 0,
"name": "show.2",
"value": 3
},
{
"type": 0,
"name": "show.4",
"value": 4
},
{
"type": 0,
"name": "show.5",
"value": 5
},
{
"type": 0,
"name": "angle",
"value": 270
},
{
"type": 0,
"name": "desc_size",
"value": 10
},
{
"type": 0,
"name": "desc_bold",
"value": 1
},
{
"type": 0,
"name": "decimal_places",
"value": 0
},
{
"type": 0,
"name": "value_bold",
"value": 1
},
{
"type": 0,
"name": "units_size",
"value": 15
},
{
"type": 0,
"name": "units_pos",
"value": 3
},
{
"type": 1,
"name": "needle_color",
"value": "3C3C3C"
},
{
"type": 1,
"name": "thresholds.0.color",
"value": "FF465C"
},
{
"type": 1,
"name": "thresholds.0.threshold",
"value": "700000"
},
{
"type": 1,
"name": "thresholds.1.color",
"value": "FFD54F"
},
{
"type": 1,
"name": "thresholds.1.threshold",
"value": "500000"
},
{
"type": 1,
"name": "thresholds.2.color",
"value": "0EC9AC"
},
{
"type": 1,
"name": "thresholds.2.threshold",
"value": "100000"
},
{
"type": 0,
"name": "th_show_labels",
"value": 1
},
{
"type": 0,
"name": "th_show_arc",
"value": 1
},
{
"type": 0,
"name": "th_arc_size",
"value": 15
}
]
}
]
}
],
"userGroups": [
{
"usrgrpid": 7,
"permission": 2
}
],
"users": [
{
"userid": 1,
"permission": 3
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"dashboardids": [
"3"
]
},
"id": 1
}
See also
8 Geomap
Description
These parameters and the possible property values for the respective dashboard widget field objects allow to configure the Geomap
widget in dashboard.create and dashboard.update methods.
Attention:
Widget fields properties are not validated during the creation or update of a dashboard. This allows users to modify
built-in widgets and create custom widgets, but also introduces the risk of creating or updating widgets incorrectly. To
ensure the successful creation or update of the Geomap widget, please refer to the parameter behavior outlined in the
tables below.
Parameters
The following parameters are supported for the Geomap widget.
Note: The number in the property name references tag order in the
tag evaluation list.
Parameter behavior:
- required if configuring Tags
Note: The number in the property name references tag order in the
tag evaluation list.
Parameter behavior:
- required if configuring Tags
Note: The number in the property name references tag order in the
tag evaluation list.
Parameter behavior:
- required if configuring Tags
Parameter behavior:
- required
Examples
The following examples aim to only describe the configuration of the dashboard widget field objects for the Geomap widget. For
more information on configuring a dashboard, see dashboard.create.
Configuring a Geomap widget
Configure a Geomap widget that displays hosts from host groups ”2” and ”22” based on the following tag configuration: tag with
the name ”component” contains value ”node”, and tag with the name ”location” equals value ”New York”. In addition, set the map
initial view to coordinates ”40.6892494” (latitude), ”-74.0466891” (longitude) with the zoom level ”10”.
Request:
{
"jsonrpc": "2.0",
"method": "dashboard.create",
"params": {
"name": "My dashboard",
"display_period": 30,
"auto_start": 1,
"pages": [
{
"widgets": [
{
"type": "geomap",
"name": "Geomap",
"x": 0,
"y": 0,
"width": 36,
"height": 5,
"view_mode": 0,
"fields": [
{
"type": 2,
"name": "groupids.0",
"value": 22
},
{
"type": 2,
"name": "groupids.1",
"value": 2
},
{
"type": 1,
"name": "default_view",
"value": "40.6892494,-74.0466891,10"
},
{
"type": 0,
"name": "evaltype",
"value": 2
},
{
"type": 1,
"name": "tags.0.tag",
"value": "component"
},
{
"type": 0,
"name": "tags.0.operator",
"value": 0
},
{
"type": 1,
"name": "tags.0.value",
"value": "node"
},
{
"type": 1,
"name": "tags.1.tag",
"value": "location"
},
{
"type": 0,
"name": "tags.1.operator",
"value": 1
},
{
"type": 1,
"name": "tags.1.value",
"value": "New York"
}
]
}
]
}
],
"userGroups": [
{
"usrgrpid": 7,
"permission": 2
}
],
"users": [
{
"userid": 1,
"permission": 3
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"dashboardids": [
"3"
]
},
"id": 1
}
See also
9 Graph
Description
These parameters and the possible property values for the respective dashboard widget field objects allow to configure the Graph
widget in dashboard.create and dashboard.update methods.
Attention:
Widget fields properties are not validated during the creation or update of a dashboard. This allows users to modify
built-in widgets and create custom widgets, but also introduces the risk of creating or updating widgets incorrectly. To
ensure the successful creation or update of the Graph widget, please refer to the parameter behavior outlined in the tables
below.
Parameters
The following parameters are supported for the Graph widget.
Parameter behavior:
- required
Data set
Note:
The first number in the property name (e.g. ds.0.hosts.0, ds.0.items.0) represents the particular data set, while the second
number, if present, represents the configured host or item.
Items 4 ds.0.itemids.0 Item ID.
Parameter behavior:
- required if Data set type is set to ”Item list” and Items (Widget) is
not set
Items (Widget) 1 ds.0.itemids.0._reference
Instead of Item ID:
ABCDE._itemid - set a compatible widget (with its Reference
parameter set to ”ABCDE”) as the data source for items.
Parameter behavior:
- required if Data set type is set to ”Item list” and Items is not set
Color 1 ds.0.color.0 Hexadecimal color code (e.g. FF0000).
Parameter behavior:
- required if Data set type is set to ”Item list”
Host patterns 1 ds.0.hosts.0 Host name or pattern (e.g., ”Zabbix*”).
Parameter behavior:
- required if Data set type is set to ”Item patterns”
Item patterns 1 ds.0.items.0 Item name or pattern (e.g., ”*: Number of processed *values per
second”).
Parameter behavior:
- required if Data set type is set to ”Item patterns”
Color 1 ds.0.color Hexadecimal color code (e.g. FF0000).
Default: FF465C.
Parameter behavior:
- supported if Data set type is set to ”Item patterns”
Draw 0 ds.0.type 0 - (default) Line;
1 - Points;
2 - Staircase;
3 - Bar.
Stacked 0 ds.0.stacked 0 - (default) Disabled;
1 - Enabled.
Parameter behavior:
- supported if Draw is set to ”Line”, ”Staircase”, or ”Bar”
Width 0 ds.0.width Possible values range from 1-10.
Default: 1.
Parameter behavior:
- supported if Draw is set to ”Line” or ”Staircase”
Point size 0 ds.0.pointsize Possible values range from 1-10.
Default: 3.
Parameter behavior:
- supported if Draw is set to ”Points”
Transparency 0 ds.0.transparency Possible values range from 1-10.
Default: 5.
Fill 0 ds.0.fill Possible values range from 1-10.
Default: 3.
Parameter behavior:
- supported if Draw is set to ”Line” or ”Staircase”
Missing data 0 ds.0.missingdatafunc 0 - (default) None;
1 - Connected;
2 - Treat as 0;
3 - Last known.
Parameter behavior:
- supported if Draw is set to ”Line” or ”Staircase”
Y-axis 0 ds.0.axisy 0 - (default) Left;
1 - Right.
Time shift 1 ds.0.timeshift Valid time string (e.g. 3600, 1h, etc.).
You may use time suffixes. Negative values are also allowed.
Aggregation function 0 ds.0.aggregate_function 0 - (default) not used;
1 - min;
2 - max;
3 - avg;
4 - count;
5 - sum;
6 - first;
7 - last.
Aggregation interval 1 ds.0.aggregate_interval Valid time string (e.g. 3600, 1h, etc.).
You may use time suffixes.
Default: 1h.
Aggregate 0 ds.0.aggregate_grouping 0 - (default) Each item;
1 - Data set.
Parameter behavior:
- supported if Aggregation function is set to ”min”, ”max”, ”avg”, ”count”, ”sum”, ”first”, or ”last”
Approximation 0 ds.0.approximation 1 - min;
2 - (default) avg;
4 - max;
7 - all.
Data set label 1 ds.0.data_set_label Any string value.
Default: "" (empty).
Display options
Parameter behavior:
- supported if Y-axis (in Data set configuration) is set to ”Left”
Value 0 percentile_left_value Possible values range from 1-100.
Parameter behavior:
- supported if Y-axis (in Data set configuration) is set to ”Left”
Percentile line (right)
Parameter behavior:
- supported if Y-axis (in Data set configuration) is set to ”Right”
Value 0 percentile_right_value Possible values range from 1-100.
Parameter behavior:
- supported if Y-axis (in Data set configuration) is set to ”Right”
Time period
Default: DASHBOARD._timeperiod
Alternatively, you can set the time period only in the From and To
parameters.
From 1 time_period.from Valid time string in absolute (YYYY-MM-DD hh:mm:ss) or relative
time syntax (now, now/d, now/w-1w, etc.).
Parameter behavior:
- supported if Time period is not set
To 1 time_period.to Valid time string in absolute (YYYY-MM-DD hh:mm:ss) or relative
time syntax (now, now/d, now/w-1w, etc.).
Parameter behavior:
- supported if Time period is not set
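For illustration, a minimal sketch of custom From and To fields that make the widget show the last hour, using the relative time syntax described above (the values are examples):
"fields": [
    {
        "type": 1,
        "name": "time_period.from",
        "value": "now-1h"
    },
    {
        "type": 1,
        "name": "time_period.to",
        "value": "now"
    }
]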
Axes
Parameter behavior:
- supported if Y-axis (in Data set configuration) is set to ”Left”
Right Y 0 righty 0 - (default) Disabled;
1 - Enabled.
Parameter behavior:
- supported if Y-axis (in Data set configuration) is set to ”Right”
Min 1 lefty_min Any numeric value.
Legend
Parameter behavior:
- supported if Show legend is set to ”Enabled”
Display 0 legend_statistic 0 - (default) Disabled;
min/avg/max 1 - Enabled.
Parameter behavior:
- supported if Show legend is set to ”Enabled”
Show aggregation 0 legend_aggregation 0 - (default) Disabled;
function 1 - Enabled.
Parameter behavior:
- supported if Show legend is set to ”Enabled”
Rows 0 legend_lines_mode 0 - (default) Fixed;
1 - Variable.
Parameter behavior:
- supported if Show legend is set to ”Enabled”
Number of rows/Maximum number of rows 0 legend_lines Possible values range from 1-10.
Default: 1.
Parameter behavior:
- supported if Show legend is set to ”Enabled”
Number of columns 0 legend_columns Possible values range from 1-4.
Default: 4.
Parameter behavior:
- supported if Show legend is set to ”Enabled”, and Display min/avg/max
is set to ”Disabled”
Problems
Note: The number in the property name references tag order in the
tag evaluation list.
Parameter behavior:
- required if configuring Problem tags
Operator 0 tags.0.operator 0 - Contains;
1 - Equals;
2 - Does not contain;
3 - Does not equal;
4 - Exists;
5 - Does not exist.
Note: The number in the property name references tag order in the
tag evaluation list.
Parameter behavior:
- required if configuring Problem tags
Tag value 1 tags.0.value Any string value.
Note: The number in the property name references tag order in the
tag evaluation list.
Parameter behavior:
- required if configuring Problem tags
Overrides
Note:
The first number in the property name (e.g. or.0.hosts.0, or.0.items.0) represents the particular data set, while the second
number, if present, represents the configured host or item.
Parameter behavior:
- required if configuring Overrides
Item patterns 1 or.0.items.0 Item name or pattern (e.g. *: Number of processed *values per
second).
When configuring the widget on a template dashboard, only the
patterns for items configured on the template should be set.
Parameter behavior:
- required if configuring Overrides
Base color 1 or.0.color Hexadecimal color code (e.g. FF0000).
Width 0 or.0.width Possible values range from 1-10.
Draw 0 or.0.type 0 - Line;
1 - Points;
2 - Staircase;
3 - Bar.
Transparency 0 or.0.transparency Possible values range from 1-10.
Fill 0 or.0.fill Possible values range from 1-10.
Point size 0 or.0.pointsize Possible values range from 1-10.
Missing data 0 or.0.missingdatafunc 0 - None;
1 - Connected;
2 - Treat as 0;
3 - Last known.
Y-axis 0 or.0.axisy 0 - Left;
1 - Right.
Time shift 1 or.0.timeshift Valid time string (e.g. 3600, 1h, etc.).
You may use time suffixes. Negative values are allowed.
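For illustration, an override that matches one host's items and redraws them as bars on the right Y-axis could be sketched with field objects like the following, placed in the widget's fields property. The host and item patterns are only examples, and or.0.hosts.0 is assumed to take a host pattern string (type 1), by analogy with the ds.*.hosts.* fields used in the example below:
[
    {
        "type": 1,
        "name": "or.0.hosts.0",
        "value": "Zabbix server"
    },
    {
        "type": 1,
        "name": "or.0.items.0",
        "value": "*: Number of processed *values per second"
    },
    {
        "type": 0,
        "name": "or.0.type",
        "value": 3
    },
    {
        "type": 0,
        "name": "or.0.axisy",
        "value": 1
    }
]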
Examples
The following examples aim to only describe the configuration of the dashboard widget field objects for the Graph widget. For more
information on configuring a dashboard, see dashboard.create.
Configuring a Graph widget
Request:
{
"jsonrpc": "2.0",
"method": "dashboard.create",
"params": {
"name": "My dashboard",
"display_period": 30,
"auto_start": 1,
"pages": [
{
"widgets": [
{
"type": "svggraph",
"name": "Graph",
"x": 0,
"y": 0,
"width": 36,
"height": 5,
"view_mode": 0,
"fields": [
{
"type": 0,
"name": "ds.0.dataset_type",
"value": 0
},
{
"type": 4,
"name": "ds.0.itemids.1",
"value": 23264
},
{
"type": 1,
"name": "ds.0.color.1",
"value": "FF0000"
},
{
"type": 4,
"name": "ds.0.itemids.2",
"value": 23269
},
{
"type": 1,
"name": "ds.0.color.2",
"value": "BF00FF"
},
{
"type": 4,
"name": "ds.0.itemids.3",
"value": 23257
},
{
"type": 1,
"name": "ds.0.color.3",
"value": "0040FF"
},
{
"type": 0,
"name": "ds.0.width",
"value": 3
},
{
"type": 0,
"name": "ds.0.transparency",
"value": 3
},
{
"type": 0,
"name": "ds.0.fill",
"value": 1
},
{
"type": 1,
"name": "ds.1.hosts.0",
"value": "Zabbix server"
},
{
"type": 1,
"name": "ds.1.items.0",
"value": "*: Number of processed *values per second"
},
{
"type": 1,
"name": "ds.1.color",
"value": "000000"
},
{
"type": 0,
"name": "ds.1.transparency",
"value": 0
},
{
"type": 0,
"name": "ds.1.fill",
"value": 0
},
{
"type": 0,
"name": "ds.1.axisy",
"value": 1
},
{
"type": 0,
"name": "ds.1.aggregate_function",
"value": 3
},
{
"type": 1,
"name": "ds.1.aggregate_interval",
"value": "1m"
},
{
"type": 0,
"name": "ds.1.aggregate_grouping",
"value": 1
},
{
"type": 1,
"name": "ds.1.data_set_label",
"value": "Number of processed values per second"
},
{
"type": 0,
"name": "graph_time",
"value": 1
},
{
"type": 1,
"name": "time_period.from",
"value": "now-3h"
},
{
"type": 0,
"name": "legend_statistic",
"value": 1
},
{
"type": 0,
"name": "legend_lines",
"value": 4
},
{
"type": 0,
"name": "show_problems",
"value": 1
},
{
"type": 1,
"name": "reference",
"value": "YZABC"
}
]
}
]
}
],
"userGroups": [
{
"usrgrpid": 7,
"permission": 2
}
],
"users": [
{
"userid": 1,
"permission": 3
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"dashboardids": [
"3"
]
},
"id": 1
}
See also
10 Graph (classic)
Description
These parameters and the possible property values for the respective dashboard widget field objects allow to configure the Graph
(classic) widget in dashboard.create and dashboard.update methods.
Attention:
Widget fields properties are not validated during the creation or update of a dashboard. This allows users to modify
built-in widgets and create custom widgets, but also introduces the risk of creating or updating widgets incorrectly. To
ensure the successful creation or update of the Graph (classic) widget, please refer to the parameter behavior outlined in
the tables below.
Parameters
The following parameters are supported for the Graph (classic) widget.
Parameter behavior:
- required if Source is set to ”Graph”
Graph (Widget) 1 graphid._reference Instead of Graph ID:
ABCDE._graphid - set a compatible widget (with its Reference
parameter set to ”ABCDE”) as the data source for graphs.
Parameter behavior:
- required if Source is set to ”Graph” and Graph is not set
Item 4 itemid.0 Item ID.
Parameter behavior:
- required if Source is set to ”Simple graph” and Item (Widget) is not
set
Item (Widget) 1 itemid._reference Instead of Item ID:
ABCDE._itemid - set a compatible widget (with its Reference
parameter set to ”ABCDE”) as the data source for items.
Parameter behavior:
- required if Source is set to ”Simple graph” and Item is not set
Time period 1 time_period._reference DASHBOARD._timeperiod - set the Time period selector as the data source;
ABCDE._timeperiod - set a compatible widget (with its Reference parameter set to ”ABCDE”) as the data source.
Default: DASHBOARD._timeperiod
Alternatively, you can set the time period only in the From and To parameters.
From 1 time_period.from Valid time string in absolute (YYYY-MM-DD hh:mm:ss) or relative
time syntax (now, now/d, now/w-1w, etc.).
Parameter behavior:
- supported if Time period is not set
To 1 time_period.to Valid time string in absolute (YYYY-MM-DD hh:mm:ss) or relative
time syntax (now, now/d, now/w-1w, etc.).
Parameter behavior:
- supported if Time period is not set
Parameter behavior:
- required
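As a sketch, a Graph (classic) widget can also receive its item from a compatible widget instead of a fixed Item ID. Here ”ABCDE” stands for the Reference parameter of the source widget and ”RSTUV” for this widget's own Reference; both values are illustrative:
[
    {
        "type": 0,
        "name": "source_type",
        "value": 1
    },
    {
        "type": 1,
        "name": "itemid._reference",
        "value": "ABCDE._itemid"
    },
    {
        "type": 1,
        "name": "reference",
        "value": "RSTUV"
    }
]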
Examples
The following examples aim to only describe the configuration of the dashboard widget field objects for the Graph (classic) widget.
For more information on configuring a dashboard, see dashboard.create.
Configuring a Graph (classic) widget
Configure a Graph (classic) widget that displays a simple graph for the item ”42269”.
Request:
{
"jsonrpc": "2.0",
"method": "dashboard.create",
"params": {
"name": "My dashboard",
"display_period": 30,
"auto_start": 1,
"pages": [
{
"widgets": [
{
"type": "graph",
"name": "Graph (classic)",
"x": 0,
"y": 0,
"width": 36,
"height": 5,
"view_mode": 0,
"fields": [
{
"type": 0,
"name": "source_type",
"value": 1
},
{
"type": 4,
"name": "itemid.0",
"value": 42269
},
{
"type": 1,
"name": "reference",
"value": "RSTUV"
}
]
}
]
}
],
"userGroups": [
{
"usrgrpid": 7,
"permission": 2
}
],
"users": [
{
"userid": 1,
"permission": 3
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"dashboardids": [
"3"
]
},
"id": 1
}
See also
11 Graph prototype
Description
These parameters and the possible property values for the respective dashboard widget field objects allow to configure the Graph
prototype widget in dashboard.create and dashboard.update methods.
Attention:
Widget fields properties are not validated during the creation or update of a dashboard. This allows users to modify
built-in widgets and create custom widgets, but also introduces the risk of creating or updating widgets incorrectly. To
ensure the successful creation or update of the Graph prototype widget, please refer to the parameter behavior outlined
in the tables below.
Parameters
The following parameters are supported for the Graph prototype widget.
Parameter type name value
Default: DASHBOARD._timeperiod
Alternatively, you can set the time period only in the From and To
parameters.
From 1 time_period.from Valid time string in absolute (YYYY-MM-DD hh:mm:ss) or relative
time syntax (now, now/d, now/w-1w, etc.).
Parameter behavior:
- supported if Time period is not set
To 1 time_period.to Valid time string in absolute (YYYY-MM-DD hh:mm:ss) or relative
time syntax (now, now/d, now/w-1w, etc.).
Parameter behavior:
- supported if Time period is not set
Show legend 0 show_legend 0 - Disabled;
1 - (default) Enabled.
Override host 1 override_hostid._reference ABCDE._hostid - set a compatible widget (with its Reference parameter set to ”ABCDE”) as the data source for hosts;
DASHBOARD._hostid - set the dashboard Host selector as the data source for hosts.
Default: 2.
Rows 0 rows Possible values range from 1-16.
Default: 1.
Reference 1 reference Any string value consisting of 5 characters (e.g., ABCDE or JBPNL).
This value must be unique within the dashboard to which the widget
belongs.
Parameter behavior:
- required
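For example, instead of fixed host or item prototypes, the Override host parameter can bind the widget to the dashboard Host selector. A minimal sketch of the corresponding field object:
{
    "type": 1,
    "name": "override_hostid._reference",
    "value": "DASHBOARD._hostid"
}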
Examples
The following examples aim to only describe the configuration of the dashboard widget field objects for the Graph prototype widget.
For more information on configuring a dashboard, see dashboard.create.
Configuring a Graph prototype widget
Configure a Graph prototype widget that displays a grid of 3 graphs (3 columns, 1 row) created from an item prototype (ID: ”42316”)
by low-level discovery.
Request:
{
"jsonrpc": "2.0",
"method": "dashboard.create",
"params": {
"name": "My dashboard",
"display_period": 30,
"auto_start": 1,
"pages": [
{
"widgets": [
{
"type": "graphprototype",
"name": "Graph prototype",
"x": 0,
"y": 0,
"width": 48,
"height": 5,
"view_mode": 0,
"fields": [
{
"type": 0,
"name": "source_type",
"value": 3
},
{
"type": 5,
"name": "itemid.0",
"value": 42316
},
{
"type": 0,
"name": "columns",
"value": 3
},
{
"type": 1,
"name": "reference",
"value": "OPQWX"
}
]
}
]
}
],
"userGroups": [
{
"usrgrpid": 7,
"permission": 2
}
],
"users": [
{
"userid": 1,
"permission": 3
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"dashboardids": [
"3"
]
},
"id": 1
}
See also
12 Honeycomb
Description
These parameters and the possible property values for the respective dashboard widget field objects allow to configure the Honeycomb widget in dashboard.create and dashboard.update methods.
Attention:
Widget fields properties are not validated during the creation or update of a dashboard. This allows users to modify
built-in widgets and create custom widgets, but also introduces the risk of creating or updating widgets incorrectly. To
ensure the successful creation or update of the Honeycomb widget, please refer to the parameter behavior outlined in the
tables below.
Parameters
Parameter type name value
Note: The number in the property name references tag order in the
tag evaluation list.
Parameter behavior:
- required if configuring Host tags
Note: The number in the property name references tag order in the
tag evaluation list.
Parameter behavior:
- required if configuring Host tags
Note: The number in the property name references tag order in the
tag evaluation list.
Parameter behavior:
- required if configuring Host tags
Parameter behavior:
- required
Item tags
Evaluation type 0 evaltype_item 0 - (default) And/Or;
2 - Or.
Tag name 1 item_tags.0.tag Any string value.
Note: The number in the property name references tag order in the
tag evaluation list.
Parameter behavior:
- required if configuring Item tags
Operator 0 item_tags.0.operator 0 - Contains;
1 - Equals;
2 - Does not contain;
3 - Does not equal;
4 - Exists;
5 - Does not exist.
Note: The number in the property name references tag order in the
tag evaluation list.
Parameter behavior:
- required if configuring Item tags
Tag value 1 item_tags.0.value Any string value.
Note: The number in the property name references tag order in the
tag evaluation list.
Parameter behavior:
- required if configuring Item tags
Show hosts in maintenance 0 maintenance 0 - (default) Disabled;
1 - Enabled.
Show 0 show.0 1 - Primary label;
2 - Secondary label.
Default: 1, 2.
Reference 1 reference Any string value consisting of 5 characters (e.g., ABCDE or JBPNL).
This value must be unique within the dashboard to which the widget
belongs.
Parameter behavior:
- required
Advanced configuration
The following advanced configuration parameters are supported for the Honeycomb widget.
Note:
The number in the Thresholds property name (e.g. thresholds.0.color) references the threshold place in a list, sorted in ascending order. However, if thresholds are configured in a different order, the values will be sorted in ascending order after updating widget configuration in Zabbix frontend (e.g. "thresholds.0.threshold":"5" → "thresholds.0.threshold":"1"; "thresholds.1.threshold":"1" → "thresholds.1.threshold":"5").
Primary label
Type 0 primary_label_type 0 - (default) Text;
1 - Value.
Text 1 primary_label Any string value, including macros.
Supported macros: {HOST.*}, {ITEM.*}, {INVENTORY.*}, user macros.
Default: {HOST.NAME}
Parameter behavior:
- supported if Type is set to ”Text”
Decimal places 0 primary_label_decimal_places
Possible values range from 0-6.
Default: 2.
Parameter behavior:
- supported if Type is set to ”Value”
Size (type) 0 primary_label_size_type
0 - (default) Auto;
1 - Custom.
Size 0 primary_label_size Possible values range from 1-100.
Default: 20.
Parameter behavior:
- supported if Size (type) is set to ”Custom”
Bold 0 primary_label_bold 0 - (default) Disabled;
1 - Enabled.
Color 1 primary_label_color Hexadecimal color code (e.g. FF0000).
Parameter behavior:
- supported if Type is set to ”Value”
Units (value) 1 primary_label_units Any string value.
"" (empty)
Parameter behavior:
- supported if Type is set to ”Value” and Units (checkbox) is set to
”Enabled”
Position 0 primary_label_units_pos
0 - Before value;
1 - (default) After value.
Parameter behavior:
- supported if Type is set to ”Value” and Units (checkbox) is set to
”Enabled”
Default: {{ITEM.LASTVALUE}.fmtnum(2)}
Parameter behavior:
- supported if Type is set to ”Text”
Decimal places 0 secondary_label_decimal_places
Possible values range from 0-6.
Default: 2.
Parameter behavior:
- supported if Type is set to ”Value”
Size (type) 0 secondary_label_size_type
0 - (default) Auto;
1 - Custom.
Size 0 secondary_label_size Possible values range from 1-100.
Default: 30.
Parameter behavior:
- supported if Size (type) is set to ”Custom”
Bold 0 secondary_label_bold 0 - Disabled;
1 - (default) Enabled.
Color 1 secondary_label_color Hexadecimal color code (e.g. FF0000).
Parameter behavior:
- supported if Type is set to ”Value”
Units (value) 1 secondary_label_units Any string value.
"" (empty)
Parameter behavior:
- supported if Type is set to ”Value” and Units (checkbox) is set to
”Enabled”
Position 0 secondary_label_position
0 - Before value;
1 - (default) After value.
Parameter behavior:
- supported if Type is set to ”Value” and Units (checkbox) is set to
”Enabled”
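A minimal sketch of secondary label field objects that switch the label to a custom size and disable bold text; the size value is illustrative only:
[
    {
        "type": 0,
        "name": "secondary_label_size_type",
        "value": 1
    },
    {
        "type": 0,
        "name": "secondary_label_size",
        "value": 50
    },
    {
        "type": 0,
        "name": "secondary_label_bold",
        "value": 0
    }
]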
Examples
The following examples aim to only describe the configuration of the dashboard widget field objects for the Honeycomb widget.
For more information on configuring a dashboard, see dashboard.create.
Configuring a Honeycomb widget
Configure a Honeycomb widget that displays the utilization of Zabbix server processes. In addition, change the primary label of
honeycomb cells and visually fine-tune the widget with thresholds.
Request:
{
"jsonrpc": "2.0",
"method": "dashboard.create",
"params": {
"name": "My dashboard",
"display_period": "30",
"auto_start": "1",
"pages": [
{
"widgets": [
{
"type": "honeycomb",
"name": "Honeycomb",
"x": "0",
"y": "0",
"width": "24",
"height": "5",
"view_mode": "0",
"fields": [
{
"type": 2,
"name": "groupids.0",
"value": 4
},
{
"type": 3,
"name": "hostids.0",
"value": 10084
},
{
"type": 1,
"name": "items.0",
"value": "Zabbix server: Utilization*"
},
{
"type": 1,
"name": "primary_label",
"value": "{ITEM.NAME}"
},
{
"type": 1,
"name": "thresholds.0.color",
"value": "0EC9AC"
},
{
"type": 1,
"name": "thresholds.0.threshold",
"value": "0"
},
{
"type": 1,
"name": "thresholds.1.color",
"value": "FFD54F"
},
{
"type": 1,
"name": "thresholds.1.threshold",
"value": "70"
},
{
"type": 1,
"name": "thresholds.2.color",
"value": "FF465C"
},
{
"type": 1,
"name": "thresholds.2.threshold",
"value": "90"
},
{
"type": 1,
"name": "reference",
"value": "KSTMQ"
}
]
}
]
}
],
"userGroups": [
{
"usrgrpid": 7,
"permission": 2
}
],
"users": [
{
"userid": 1,
"permission": 3
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"dashboardids": [
"3"
]
},
"id": 1
}
See also
• dashboard.create
• dashboard.update
13 Host availability
Description
These parameters and the possible property values for the respective dashboard widget field objects allow to configure the Host
availability widget in dashboard.create and dashboard.update methods.
Parameters
The following parameters are supported for the Host availability widget.
Examples
The following examples aim to only describe the configuration of the dashboard widget field objects for the Host availability widget.
For more information on configuring a dashboard, see dashboard.create.
Configuring a Host availability widget
Configure a Host availability widget that displays availability information (in a vertical layout) for hosts in host group ”4” with
”Zabbix agent” and ”SNMP” interfaces configured.
Request:
{
"jsonrpc": "2.0",
"method": "dashboard.create",
"params": {
"name": "My dashboard",
"display_period": 30,
"auto_start": 1,
"pages": [
{
"widgets": [
{
"type": "hostavail",
"name": "Host availability",
"x": 0,
"y": 0,
"width": 18,
"height": 3,
"view_mode": 0,
"fields": [
{
"type": 2,
"name": "groupids.0",
"value": 4
},
{
"type": 0,
"name": "interface_type",
"value": 1
},
{
"type": 0,
"name": "interface_type",
"value": 2
},
{
"type": 0,
"name": "layout",
"value": 1
}
]
}
]
}
],
"userGroups": [
{
"usrgrpid": 7,
"permission": 2
}
],
"users": [
{
"userid": 1,
"permission": 3
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"dashboardids": [
"3"
]
},
"id": 1
}
See also
14 Host navigator
Description
These parameters and the possible property values for the respective dashboard widget field objects allow to configure the Host
navigator widget in dashboard.create and dashboard.update methods.
Attention:
Widget fields properties are not validated during the creation or update of a dashboard. This allows users to modify
built-in widgets and create custom widgets, but also introduces the risk of creating or updating widgets incorrectly. To
ensure the successful creation or update of the Host navigator widget, please refer to the parameter behavior outlined in
the tables below.
Parameters
The following parameters are supported for the Host navigator widget.
Parameter type name value
Note: The number in the property name references tag order in the
tag evaluation list.
Parameter behavior:
- required if configuring Host tags
Note: The number in the property name references tag order in the
tag evaluation list.
Parameter behavior:
- required if configuring Host tags
Note: The number in the property name references tag order in the
tag evaluation list.
Parameter behavior:
- required if configuring Host tags
Parameter behavior:
- required if configuring Group by
Value 1 group_by.0.tag_name Any string value.
Parameter behavior:
- required if configuring Group by and Attribute is set to ”Tag value”
Host limit 0 show_lines Possible values range from 1-9999.
Default: 100.
Parameter behavior:
- required
Examples
The following examples aim to only describe the configuration of the dashboard widget field objects for the Host navigator widget.
For more information on configuring a dashboard, see dashboard.create.
Configuring a Host navigator widget
Configure a Host navigator widget that displays hosts grouped by their host group and, then, by the ”City” tag value.
Request:
{
"jsonrpc": "2.0",
"method": "dashboard.create",
"params": {
"name": "My dashboard",
"display_period": "30",
"auto_start": "1",
"pages": [
{
"widgets": [
{
"type": "hostnavigator",
"name": "Host navigator",
"x": "0",
"y": "0",
"width": "12",
"height": "5",
"view_mode": "0",
"fields": [
{
"type": 2,
"name": "groupids.0",
"value": 2
},
{
"type": 2,
"name": "groupids.1",
"value": 4
},
{
"type": 0,
"name": "group_by.0.attribute",
"value": 0
},
{
"type": 0,
"name": "group_by.1.attribute",
"value": 1
},
{
"type": 1,
"name": "group_by.1.tag_name",
"value": "City"
},
{
"type": 1,
"name": "reference",
"value": "SWKLB"
}
]
}
]
}
],
"userGroups": [
{
"usrgrpid": 7,
"permission": 2
}
],
"users": [
{
"userid": 1,
"permission": 3
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"dashboardids": [
"3"
]
},
"id": 1
}
See also
15 Item history
Description
These parameters and the possible property values for the respective dashboard widget field objects allow to configure the Item
history widget in dashboard.create and dashboard.update methods.
Attention:
Widget fields properties are not validated during the creation or update of a dashboard. This allows users to modify
built-in widgets and create custom widgets, but also introduces the risk of creating or updating widgets incorrectly. To
ensure the successful creation or update of the Item history widget, please refer to the parameter behavior outlined in the
tables below.
Parameters
The following parameters are supported for the Item history widget.
Parameter type name value
Default: 25.
Override host 1 override_hostid._reference ABCDE._hostid - set a compatible widget (with its Reference parameter set to ”ABCDE”) as the data source for hosts;
DASHBOARD._hostid - set the dashboard Host selector as the data source for hosts.
Parameter behavior:
- required
Columns
Columns have common parameters and additional parameters depending on the configuration of the Item parameter.
Note:
For all parameters related to columns, the number in the property name (e.g. columns.0.name) references a column for
which the parameter is configured.
Parameter behavior:
- required
Item 4 columns.0.itemid Item ID.
Parameter behavior:
- required
Base color 1 columns.0.base_color Hexadecimal color code (e.g. FF0000).
The following column parameters are supported if the configured Item is a numeric type item.
Parameter behavior:
- supported if Display is set to ”Bar” or ”Indicators”
Max 1 columns.0.max Any numeric value.
Parameter behavior:
- supported if Display is set to ”Bar” or ”Indicators”
Thresholds
Color 1 columns.0.thresholds.0.color Hexadecimal color code (e.g. FF0000).
Threshold 1 columns.0.thresholds.0.threshold Any numeric value. Suffixes (e.g. ”1d”, ”2w”, ”4K”, ”8G”) are supported.
History data 0 columns.0.history 0 - (default) Auto;
1 - History;
2 - Trends.
The following column parameters are supported if the configured Item is a character, text, or log type item.
Highlights
Highlight 1 columns.0.highlights.0.color
Hexadecimal color code (e.g. FF0000).
Threshold 1 columns.0.highlights.0.pattern
Any regular expression.
Display 0 columns.0.display 1 - (default) As is;
4 - HTML;
5 - Single line.
Single line 0 columns.0.max_length
Possible values range from 1-500.
Default: 100.
Parameter behavior:
- supported if Display is set to ”Single line”
Use monospace font 0 columns.0.monospace_font 0 - (default) Use default font;
1 - Use monospace font.
Display local time 0 columns.0.local_time 0 - (default) Display timestamp;
1 - Display local time.
Parameter behavior:
- supported if Item is set to log type item, and Show timestamp is set to ”Enabled”
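For a log type item column, a highlight could be sketched with field objects like the following; the column name, item ID, and regular expression are illustrative only:
[
    {
        "type": 1,
        "name": "columns.0.name",
        "value": "Log messages"
    },
    {
        "type": 4,
        "name": "columns.0.itemid",
        "value": 12345
    },
    {
        "type": 1,
        "name": "columns.0.highlights.0.color",
        "value": "FF465C"
    },
    {
        "type": 1,
        "name": "columns.0.highlights.0.pattern",
        "value": "error|fail"
    }
]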
The following column parameters are supported if the configured Item is a binary type item.
Advanced configuration
The following advanced configuration parameters are supported for the Item history widget.
Parameter type name value
Default: DASHBOARD._timeperiod
Alternatively, you can set the time period only in the From and To
parameters.
From 1 time_period.from Valid time string in absolute (YYYY-MM-DD hh:mm:ss) or relative
time syntax (now, now/d, now/w-1w, etc.).
Parameter behavior:
- supported if Time period is not set
To 1 time_period.to Valid time string in absolute (YYYY-MM-DD hh:mm:ss) or relative
time syntax (now, now/d, now/w-1w, etc.).
Parameter behavior:
- supported if Time period is not set
Examples
The following examples aim to only describe the configuration of the dashboard widget field objects for the Item history widget.
For more information on configuring a dashboard, see dashboard.create.
Configuring an Item history widget
Configure an Item history widget that displays latest data for two numeric items ”42269” and ”42270”. In addition, configure the
item columns to be displayed vertically, with column names displayed horizontally; limit the display to 15 lines of data and include
a separate timestamp column.
Request:
{
"jsonrpc": "2.0",
"method": "dashboard.create",
"params": {
"name": "My dashboard",
"display_period": 30,
"auto_start": 1,
"pages": [
{
"widgets": [
{
"type": "itemhistory",
"name": "Item history",
"x": "0",
"y": "0",
"width": "18",
"height": "6",
"view_mode": "0",
"fields": [
{
"type": "0",
"name": "layout",
"value": "1"
},
{
"type": "1",
"name": "columns.0.name",
"value": "CPU utilization"
},
{
"type": "4",
"name": "columns.0.itemid",
"value": "42269"
},
{
"type": "1",
"name": "columns.1.name",
"value": "Memory utilization"
},
{
"type": "4",
"name": "columns.1.itemid",
"value": "42270"
},
{
"type": "0",
"name": "show_lines",
"value": "15"
},
{
"type": "0",
"name": "show_timestamp",
"value": "1"
},
{
"type": "0",
"name": "show_column_header",
"value": "1"
},
{
"type": "1",
"name": "reference",
"value": "KIVKD"
}
]
}
]
}
],
"userGroups": [
{
"usrgrpid": 7,
"permission": 2
}
],
"users": [
{
"userid": 1,
"permission": 3
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"dashboardids": [
"3"
]
},
"id": 1
}
See also
16 Item navigator
Description
These parameters and the possible property values for the respective dashboard widget field objects allow to configure the Item
navigator widget in dashboard.create and dashboard.update methods.
Attention:
Widget fields properties are not validated during the creation or update of a dashboard. This allows users to modify
built-in widgets and create custom widgets, but also introduces the risk of creating or updating widgets incorrectly. To
ensure the successful creation or update of the Item navigator widget, please refer to the parameter behavior outlined in
the tables below.
Parameters
The following parameters are supported for the Item navigator widget.
Parameter type name value
Note: The number in the property name references tag order in the
tag evaluation list.
Parameter behavior:
- required if configuring Host tags
Note: The number in the property name references tag order in the
tag evaluation list.
Parameter behavior:
- required if configuring Host tags
Note: The number in the property name references tag order in the
tag evaluation list.
Parameter behavior:
- required if configuring Host tags
Note: The number in the property name references tag order in the
tag evaluation list.
Parameter behavior:
- required if configuring Item tags
Operator 0 item_tags.0.operator 0 - Contains;
1 - Equals;
2 - Does not contain;
3 - Does not equal;
4 - Exists;
5 - Does not exist.
Note: The number in the property name references tag order in the
tag evaluation list.
Parameter behavior:
- required if configuring Item tags
Tag value 1 item_tags.0.value Any string value.
Note: The number in the property name references tag order in the
tag evaluation list.
Parameter behavior:
- required if configuring Item tags
State 0 state -1 - (default) All;
0 - Normal;
1 - Not supported.
Show problems 0 show_problems 0 - All;
1 - (default) Unsuppressed;
2 - None.
Group by
Attribute 0 group_by.0.attribute 0 - Host group;
1 - Host name;
2 - Host tag value;
3 - Item tag value.
Parameter behavior:
- required if configuring Group by
Value 1 group_by.0.tag_name Any string value.
Parameter behavior:
- required if configuring Group by and Attribute is set to ”Host tag
value” or ”Item tag value”
Item limit 0 show_lines Possible values range from 1-9999.
Default: 100.
Reference 1 reference Any string value consisting of 5 characters (e.g., ABCDE or JBPNL).
This value must be unique within the dashboard to which the widget
belongs.
Parameter behavior:
- required
Examples
The following examples aim to only describe the configuration of the dashboard widget field objects for the Item navigator widget.
For more information on configuring a dashboard, see dashboard.create.
Configuring an Item navigator widget
Configure an Item navigator widget that displays up to 1000 items grouped by their host and, then, by the ”component” item tag
value.
Request:
{
"jsonrpc": "2.0",
"method": "dashboard.create",
"params": {
"name": "My dashboard",
"display_period": "30",
"auto_start": "1",
"pages": [
{
"widgets": [
{
"type": "itemnavigator",
"name": "Item navigator",
"x": "0",
"y": "0",
"width": "12",
"height": "5",
"view_mode": "0",
"fields": [
{
"type": 0,
"name": "group_by.0.attribute",
"value": 0
},
{
"type": 0,
"name": "group_by.1.attribute",
"value": 3
},
{
"type": 1,
"name": "group_by.1.tag_name",
"value": "component"
},
{
"type": 0,
"name": "show_lines",
"value": 1000
},
{
"type": 1,
"name": "reference",
"value": "DFNLK"
}
]
}
]
}
],
"userGroups": [
{
"usrgrpid": 7,
"permission": 2
}
],
"users": [
{
"userid": 1,
"permission": 3
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"dashboardids": [
"3"
]
},
"id": 1
}
See also
17 Item value
Description
These parameters and the possible property values for the respective dashboard widget field objects allow to configure the Item
value widget in dashboard.create and dashboard.update methods.
Attention:
Widget fields properties are not validated during the creation or update of a dashboard. This allows users to modify
built-in widgets and create custom widgets, but also introduces the risk of creating or updating widgets incorrectly. To
ensure the successful creation or update of the Item value widget, please refer to the parameter behavior outlined in the
tables below.
Parameters
The following parameters are supported for the Item value widget.
Parameter behavior:
- required if Item (Widget) is not set
Parameter behavior:
- required if Item is not set
Show 0 show.0 1 - Description;
2 - Value;
3 - Time;
4 - Change indicator.
Advanced configuration
The following advanced configuration parameters are supported for the Item value widget.
Note:
The number in the Thresholds property name (e.g. thresholds.0.color) references the threshold place in a list, sorted in ascending order. However, if thresholds are configured in a different order, the values will be sorted in ascending order after updating widget configuration in Zabbix frontend (e.g. "thresholds.0.threshold":"5" → "thresholds.0.threshold":"1"; "thresholds.1.threshold":"1" → "thresholds.1.threshold":"5").
Default: DASHBOARD._timeperiod
Alternatively, you can set the time period only in the From and To
parameters.
Parameter behavior:
- supported if Aggregation function is set to ”min”, ”max”, ”avg”,
”count”, ”sum”, ”first”, ”last”
From 1 time_period.from Valid time string in absolute (YYYY-MM-DD hh:mm:ss) or relative
time syntax (now, now/d, now/w-1w, etc.).
Parameter behavior:
- supported if Time period is not set and Aggregation function is set
to ”min”, ”max”, ”avg”, ”count”, ”sum”, ”first”, ”last”
To 1 time_period.to Valid time string in absolute (YYYY-MM-DD hh:mm:ss) or relative
time syntax (now, now/d, now/w-1w, etc.).
Parameter behavior:
- supported if Time period is not set and Aggregation function is set
to ”min”, ”max”, ”avg”, ”count”, ”sum”, ”first”, ”last”
History data 0 history 0 - (default) Auto;
1 - History;
2 - Trends.
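For instance, when an aggregation function is configured, the aggregation window can be pinned with the From and To fields and the data source can be forced to history values. A minimal sketch (the period is illustrative):
[
    {
        "type": 1,
        "name": "time_period.from",
        "value": "now-1h"
    },
    {
        "type": 1,
        "name": "time_period.to",
        "value": "now"
    },
    {
        "type": 0,
        "name": "history",
        "value": 1
    }
]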
Description
The following advanced configuration parameters are supported if Show is set to ”Description”.
Default: {ITEM.NAME}.
Horizontal position 0 desc_h_pos 0 - Left;
1 - (default) Center;
2 - Right.
Two or more elements (Description, Value, Time) cannot share the same
Horizontal position and Vertical position.
Vertical position 0 desc_v_pos 0 - Top;
1 - Middle;
2 - (default) Bottom.
Two or more elements (Description, Value, Time) cannot share the same
Horizontal position and Vertical position.
Size 0 desc_size Possible values range from 1-100.
Default: 15.
Bold 0 desc_bold 0 - (default) Disabled;
1 - Enabled.
Color 1 desc_color Hexadecimal color code (e.g. FF0000).
Value
The following advanced configuration parameters are supported if Show is set to ”Value”.
Decimal places 0 decimal_places Possible values range from 1-10.
Default: 2.
Size 0 decimal_size Possible values range from 1-100.
Default: 35.
Position
Horizontal position 0 value_h_pos 0 - Left;
1 - (default) Center;
2 - Right.
Default: 45.
Bold 0 value_bold 0 - Disabled;
1 - (default) Enabled.
Color 1 value_color Hexadecimal color code (e.g. FF0000).
Default: 35.
Bold 0 units_bold 0 - Disabled;
1 - (default) Enabled.
Color 1 units_color Hexadecimal color code (e.g. FF0000).
Time
The following advanced configuration parameters are supported if Show is set to ”Time”.
Two or more elements (Description, Value, Time) cannot share the same
Horizontal position and Vertical position.
Two or more elements (Description, Value, Time) cannot share the same
Horizontal position and Vertical position.
Size 0 time_size Possible values range from 1-100.
Default: 15.
Bold 0 time_bold 0 - (default) Disabled;
1 - Enabled.
Color 1 time_color Hexadecimal color code (e.g. FF0000).
Change indicator
The following advanced configuration parameters are supported if Show is set to ”Change indicator”.
Examples
The following examples aim to only describe the configuration of the dashboard widget field objects for the Item value widget. For
more information on configuring a dashboard, see dashboard.create.
Configuring an Item value widget
Configure an Item value widget that displays the item value for the item ”42266” (Zabbix agent availability). In addition, visually
fine-tune the widget with multiple advanced options, including a dynamic background color that changes based on the availability
status of Zabbix agent.
Request:
{
"jsonrpc": "2.0",
"method": "dashboard.create",
"params": {
"name": "My dashboard",
"display_period": 30,
"auto_start": 1,
"pages": [
{
"widgets": [
{
"type": "item",
"name": "Item value",
"x": 0,
"y": 0,
"width": 12,
"height": 3,
"view_mode": 0,
"fields": [
{
"type": 4,
"name": "itemid.0",
"value": 42266
},
{
"type": 0,
"name": "show.0",
"value": 1
},
{
"type": 0,
"name": "show.1",
"value": 2
},
{
"type": 0,
"name": "show.2",
"value": 3
},
{
"type": 1,
"name": "description",
"value": "Agent status"
},
{
"type": 0,
"name": "desc_h_pos",
"value": 0
},
{
"type": 0,
"name": "desc_v_pos",
"value": 0
},
{
"type": 0,
"name": "desc_bold",
"value": 1
},
{
"type": 1,
"name": "desc_color",
"value": "F06291"
},
{
"type": 0,
"name": "value_h_pos",
"value": 0
},
{
"type": 0,
"name": "value_size",
"value": 25
},
{
"type": 1,
"name": "value_color",
"value": "FFFF00"
},
{
"type": 0,
"name": "units_show",
"value": 0
},
{
"type": 0,
"name": "time_h_pos",
"value": 2
},
{
"type": 0,
"name": "time_v_pos",
"value": 2
},
{
"type": 0,
"name": "time_size",
"value": 10
},
{
"type": 0,
"name": "time_bold",
"value": 1
},
{
"type": 1,
"name": "time_color",
"value": "9FA8DA"
},
{
"type": 1,
"name": "thresholds.0.color",
"value": "E1E1E1"
},
{
"type": 1,
"name": "thresholds.0.threshold",
"value": "0"
},
{
"type": 1,
"name": "thresholds.1.color",
"value": "D1C4E9"
},
{
"type": 1,
"name": "thresholds.1.threshold",
"value": "1"
}
]
}
]
}
],
"userGroups": [
{
"usrgrpid": 7,
"permission": 2
}
],
"users": [
{
"userid": 1,
"permission": 3
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"dashboardids": [
"3"
]
},
"id": 1
}
See also
18 Map
Description
These parameters and the possible property values for the respective dashboard widget field objects allow to configure the Map
widget in dashboard.create and dashboard.update methods.
Attention:
Widget fields properties are not validated during the creation or update of a dashboard. This allows users to modify
built-in widgets and create custom widgets, but also introduces the risk of creating or updating widgets incorrectly. To
ensure the successful creation or update of the Map widget, please refer to the parameter behavior outlined in the tables
below.
Parameters
Parameter behavior:
- required if Map (Widget) is not set
Map (Widget) 1 sysmapid._reference ABCDE._mapid - set a Map navigation tree widget (with its Reference parameter set to ”ABCDE”) as the data source for maps.
Parameter behavior:
- required if Map is not set
Reference 1 reference Any string value consisting of 5 characters (e.g., ABCDE or JBPNL).
This value must be unique within the dashboard to which the widget
belongs.
Parameter behavior:
- required
Examples
The following examples aim to only describe the configuration of the dashboard widget field objects for the Map widget. For more
information on configuring a dashboard, see dashboard.create.
Configuring a Map widget
Request:
{
"jsonrpc": "2.0",
"method": "dashboard.create",
"params": {
"name": "My dashboard",
"display_period": 30,
"auto_start": 1,
"pages": [
{
"widgets": [
{
"type": "map",
"name": "Map",
"x": 0,
"y": 0,
"width": 54,
"height": 5,
"view_mode": 0,
"fields": [
{
"type": 8,
"name": "sysmapid.0",
"value": 1
}
]
}
]
}
],
"userGroups": [
{
"usrgrpid": 7,
"permission": 2
}
],
"users": [
{
"userid": 1,
"permission": 3
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"dashboardids": [
"3"
]
},
"id": 1
}
Request:
{
"jsonrpc": "2.0",
"method": "dashboard.create",
"params": {
"name": "My dashboard",
"display_period": 30,
"auto_start": 1,
"pages": [
{
"widgets": [
{
"type": "map",
"name": "Map",
"x": 0,
"y": 5,
"width": 54,
"height": 5,
"view_mode": 0,
"fields": [
{
"type": 1,
"name": "sysmapid._reference",
"value": "ABCDE._mapid"
}
]
},
{
"type": "navtree",
"name": "Map navigation tree",
"x": 0,
"y": 0,
"width": 18,
"height": 5,
"view_mode": 0,
"fields": [
{
"type": 1,
"name": "navtree.1.name",
"value": "Element A"
},
{
"type": 1,
"name": "navtree.2.name",
"value": "Element B"
},
{
"type": 1,
"name": "navtree.3.name",
"value": "Element C"
},
{
"type": 1,
"name": "navtree.4.name",
"value": "Element A1"
},
{
"type": 1,
"name": "navtree.5.name",
"value": "Element A2"
},
{
"type": 1,
"name": "navtree.6.name",
"value": "Element B1"
},
{
"type": 1,
"name": "navtree.7.name",
"value": "Element B2"
},
{
"type": 0,
"name": "navtree.4.parent",
"value": 1
},
{
"type": 0,
"name": "navtree.5.parent",
"value": 1
},
{
"type": 0,
"name": "navtree.6.parent",
"value": 2
},
{
"type": 0,
"name": "navtree.7.parent",
"value": 2
},
{
"type": 0,
"name": "navtree.1.order",
"value": 1
},
{
"type": 0,
"name": "navtree.2.order",
"value": 2
},
{
"type": 0,
"name": "navtree.3.order",
"value": 3
},
{
"type": 0,
"name": "navtree.4.order",
"value": 1
},
{
"type": 0,
"name": "navtree.5.order",
"value": 2
},
{
"type": 0,
"name": "navtree.6.order",
"value": 1
},
{
"type": 0,
"name": "navtree.7.order",
"value": 2
},
{
"type": 8,
"name": "navtree.6.sysmapid",
"value": 1
},
{
"type": 1,
"name": "reference",
"value": "ABCDE"
}
]
}
]
}
],
"userGroups": [
{
"usrgrpid": 7,
"permission": 2
}
],
"users": [
{
"userid": 1,
"permission": 3
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"dashboardids": [
"3"
]
},
"id": 1
}
See also
19 Map navigation tree
Description
These parameters and the possible property values for the respective dashboard widget field objects allow to configure the Map
navigation tree widget in dashboard.create and dashboard.update methods.
Attention:
Widget fields properties are not validated during the creation or update of a dashboard. This allows users to modify built-
in widgets and create custom widgets, but also introduces the risk of creating or updating widgets incorrectly. To ensure
the successful creation or update of the Map navigation tree widget, please refer to the parameter behavior outlined in
the tables below.
Parameters
The following parameters are supported for the Map navigation tree widget.
Parameter behavior:
- required
The following parameters are supported for configuring map navigation tree elements.
Note: The number in the property name sets the element number.
Linked map 8 navtree.1.sysmapid Map ID.
Examples
The following examples aim to only describe the configuration of the dashboard widget field objects for the Map navigation tree
widget. For more information on configuring a dashboard, see dashboard.create.
Configuring a Map navigation tree widget
Configure a Map navigation tree widget that displays the following map navigation tree:
• Element A
– Element A1
– Element A2
• Element B
– Element B1 (contains linked map ”1” that can be displayed in a linked Map widget)
– Element B2
• Element C
Request:
{
"jsonrpc": "2.0",
"method": "dashboard.create",
"params": {
"name": "My dashboard",
"display_period": 30,
"auto_start": 1,
"pages": [
{
"widgets": [
{
"type": "navtree",
"name": "Map navigation tree",
"x": 0,
"y": 0,
"width": 18,
"height": 5,
"view_mode": 0,
"fields": [
{
"type": 1,
"name": "navtree.1.name",
"value": "Element A"
},
{
"type": 1,
"name": "navtree.2.name",
"value": "Element B"
},
{
"type": 1,
"name": "navtree.3.name",
"value": "Element C"
},
{
"type": 1,
"name": "navtree.4.name",
"value": "Element A1"
},
{
"type": 1,
"name": "navtree.5.name",
"value": "Element A2"
},
{
"type": 1,
"name": "navtree.6.name",
"value": "Element B1"
},
{
"type": 1,
"name": "navtree.7.name",
"value": "Element B2"
},
{
"type": 0,
"name": "navtree.4.parent",
"value": 1
},
{
"type": 0,
"name": "navtree.5.parent",
"value": 1
},
{
"type": 0,
"name": "navtree.6.parent",
"value": 2
},
{
"type": 0,
"name": "navtree.7.parent",
"value": 2
},
{
"type": 0,
"name": "navtree.1.order",
"value": 1
},
{
"type": 0,
"name": "navtree.2.order",
"value": 2
},
{
"type": 0,
"name": "navtree.3.order",
"value": 3
},
{
"type": 0,
"name": "navtree.4.order",
"value": 1
},
{
"type": 0,
"name": "navtree.5.order",
"value": 2
},
{
"type": 0,
"name": "navtree.6.order",
"value": 1
},
{
"type": 0,
"name": "navtree.7.order",
"value": 2
},
{
"type": 8,
"name": "navtree.6.sysmapid",
"value": 1
},
{
"type": 1,
"name": "reference",
"value": "HJQXF"
}
]
}
]
}
],
"userGroups": [
{
"usrgrpid": 7,
"permission": 2
}
],
"users": [
{
"userid": 1,
"permission": 3
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"dashboardids": [
"3"
]
},
"id": 1
}
See also
20 Pie chart
Description
These parameters and the possible property values for the respective dashboard widget field objects allow to configure the Pie
chart widget in dashboard.create and dashboard.update methods.
Attention:
Widget fields properties are not validated during the creation or update of a dashboard. This allows users to modify
built-in widgets and create custom widgets, but also introduces the risk of creating or updating widgets incorrectly. To
ensure the successful creation or update of the Pie chart widget, please refer to the parameter behavior outlined in the
tables below.
Parameters
The following parameters are supported for the Pie chart widget.
Parameter type name value
Data set
Note:
The first number in the property name (e.g. ds.0.hosts.0, ds.0.items.0) represents the particular data set, while the second
number, if present, represents the configured host or item.
Parameter behavior:
- required if Data set type is set to ”Item list” and Items (Widget) is
not set
Items (Widget) 1 ds.0.itemids.0._reference
Instead of Item ID:
ABCDE._itemid - set a compatible widget (with its Reference
parameter set to ”ABCDE”) as the data source for items.
Parameter behavior:
- required if Data set type is set to ”Item list” and Items is not set
Color 1 ds.0.color.0 Hexadecimal color code (e.g. FF0000).
Parameter behavior:
- supported if Data set type is set to ”Item list”
Item type 0 ds.0.type.0 0 - (default) Normal;
1 - Total.
The value ”Total” can be set only for one item in the whole chart.
Parameter behavior:
- supported if Data set type is set to ”Item list”
Parameter behavior:
- required if Data set type is set to ”Item patterns”
Parameter behavior:
- required if Data set type is set to ”Item patterns”
Color 1 ds.0.color Hexadecimal color code (e.g. FF0000).
Parameter behavior:
- supported if Data set type is set to ”Item patterns”
Aggregation function 0 ds.0.aggregate_function 1 - min;
2 - max;
3 - avg;
4 - count;
5 - sum;
6 - first;
7 - (default) last.
Data set aggregation 0 ds.0.dataset_aggregation 0 - (default) none;
1 - min;
2 - max;
3 - avg;
4 - count;
5 - sum.
Parameter behavior:
- supported if Item type is set to ”Total”
Data set label 1 ds.0.data_set_label Any string value.
Default: "" (empty).
Displaying options
Parameter behavior:
- supported if Draw is set to ”Doughnut”
Default: 0.
Parameter behavior:
- supported if Draw is set to ”Doughnut”
Show total value 0 total_show 0 - (default) Disabled;
1 - Enabled.
Parameter behavior:
- supported if Draw is set to ”Doughnut”
Size 0 value_size_type 0 - (default) Auto;
1 - Custom.
Parameter behavior:
- supported if Show total value is set to ”Enabled”
Size (value for 0 value_size Possible values range from 1-100.
custom size)
Default: 20.
Parameter behavior:
- supported if Show total value is set to ”Enabled”
Decimal places 0 decimal_places Possible values range from 0-6.
Default: 2.
Parameter behavior:
- supported if Show total value is set to ”Enabled”
Units (checkbox) 0 units_show 0 - (default) Disabled;
1 - Enabled.
Parameter behavior:
- supported if Show total value is set to ”Enabled”
Units (value) 1 units Any string value.
Parameter behavior:
- supported if Units (checkbox) is set to ”Enabled”
Bold 0 value_bold 0 - (default) Disabled;
1 - Enabled.
Parameter behavior:
- supported if Show total value is set to ”Enabled”
Color 1 value_color Hexadecimal color code (e.g. FF0000).
Parameter behavior:
- supported if Show total value is set to ”Enabled”
Space between sectors 0 space Possible values range from 0-10.
Default: 1.
Merge sectors smaller than N% (checkbox) 0 merge 0 - (default) Disabled;
1 - Enabled.
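As a sketch, a doughnut chart with the total value enabled, a custom value size, and illustrative units could use field objects such as the following; draw_type=1 is assumed to select the doughnut form, as in the example at the end of this section:
[
    {
        "type": 0,
        "name": "draw_type",
        "value": 1
    },
    {
        "type": 0,
        "name": "total_show",
        "value": 1
    },
    {
        "type": 0,
        "name": "value_size_type",
        "value": 1
    },
    {
        "type": 0,
        "name": "value_size",
        "value": 30
    },
    {
        "type": 0,
        "name": "units_show",
        "value": 1
    },
    {
        "type": 1,
        "name": "units",
        "value": "vps"
    }
]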
Time period
Default: DASHBOARD._timeperiod
Alternatively, you can set the time period only in the From and To
parameters.
From 1 time_period.from Valid time string in absolute (YYYY-MM-DD hh:mm:ss) or relative
time syntax (now, now/d, now/w-1w, etc.).
Parameter behavior:
- supported if Time period is not set
To 1 time_period.to Valid time string in absolute (YYYY-MM-DD hh:mm:ss) or relative
time syntax (now, now/d, now/w-1w, etc.).
Parameter behavior:
- supported if Time period is not set
Legend
Parameter behavior:
- supported if Show legend is set to ”Enabled”
Show aggregation function 0 legend_aggregation 0 - (default) Disabled;
1 - Enabled.
Parameter behavior:
- supported if Show legend is set to ”Enabled”
Parameter behavior:
- supported if Show legend is set to ”Enabled”
Number of rows/Maximum number of rows 0 legend_lines Possible values range from 1-10.
Default: 1.
Parameter behavior:
- supported if Show legend is set to ”Enabled”
Number of columns 0 legend_columns Possible values range from 1-4.
Default: 4.
Parameter behavior:
- supported if Show legend is set to ”Enabled”, and Show value is set to
”Disabled”
Examples
The following examples aim to only describe the configuration of the dashboard widget field objects for the Pie chart widget. For
more information on configuring a dashboard, see dashboard.create.
Configuring a Pie chart widget
Request:
{
"jsonrpc": "2.0",
"method": "dashboard.create",
"params": {
"name": "My dashboard",
"display_period": 30,
"auto_start": 1,
"pages": [
{
"widgets": [
{
"type": "piechart",
"name": "Pie chart",
"x": 0,
"y": 0,
"width": 24,
"height": 5,
"view_mode": 0,
"fields": [
{
"type": 0,
"name": "ds.0.dataset_type",
"value": 0
},
{
"type": 4,
"name": "ds.0.itemids.1",
"value": 23264
},
{
"type": 1,
"name": "ds.0.color.1",
"value": "FF0000"
},
{
"type": 0,
"name": "ds.0.type.1",
"value": 0
},
{
"type": 4,
"name": "ds.0.itemids.2",
"value": 23269
},
{
"type": 1,
"name": "ds.0.color.2",
"value": "BF00FF"
},
{
"type": 0,
"name": "ds.0.type.2",
"value": 0
},
{
"type": 4,
"name": "ds.0.itemids.3",
"value": 23257
},
{
"type": 1,
"name": "ds.0.color.3",
"value": "0040FF"
},
{
"type": 0,
"name": "ds.0.type.3",
"value": 0
},
{
"type": 1,
"name": "ds.1.hosts.0",
"value": "Zabbix server"
},
{
"type": 1,
"name": "ds.1.items.0",
"value": "*: Number of processed *values per second"
},
{
"type": 1,
"name": "ds.1.color",
"value": "000000"
},
{
"type": 0,
"name": "ds.1.aggregate_function",
"value": 3
},
{
"type": 1,
"name": "ds.1.data_set_label",
"value": "Number of processed values per second"
},
{
"type": 0,
"name": "draw_type",
"value": 1
},
{
"type": 0,
"name": "width",
"value": 30
},
{
"type": 0,
"name": "total_show",
"value": 1
},
{
"type": 0,
"name": "units_show",
"value": 1
},
{
"type": 0,
"name": "graph_time",
"value": 1
},
{
"type": 1,
"name": "time_period.from",
"value": "now-3h"
},
{
"type": 0,
"name": "legend_lines",
"value": 4
}
]
}
]
}
],
"userGroups": [
{
"usrgrpid": 7,
"permission": 2
}
],
"users": [
{
"userid": 1,
"permission": 3
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"dashboardids": [
"3"
]
},
"id": 1
}
See also
21 Problem hosts
Description
These parameters and the possible property values for the respective dashboard widget field objects allow to configure the Problem
hosts widget in dashboard.create and dashboard.update methods.
Attention:
Widget fields properties are not validated during the creation or update of a dashboard. This allows users to modify
built-in widgets and create custom widgets, but also introduces the risk of creating or updating widgets incorrectly. To
ensure the successful creation or update of the Problem hosts widget, please refer to the parameter behavior outlined in
the tables below.
Parameters
The following parameters are supported for the Problem hosts widget.
Parameter type name value
Note: The number in the property name references tag order in the
tag evaluation list.
Parameter behavior:
- required if configuring Problem tags
Operator 0 tags.0.operator 0 - Contains;
1 - Equals;
2 - Does not contain;
3 - Does not equal;
4 - Exists;
5 - Does not exist.
Note: The number in the property name references tag order in the
tag evaluation list.
Parameter behavior:
- required if configuring Problem tags
Note: The number in the property name references tag order in the
tag evaluation list.
Parameter behavior:
- required if configuring Problem tags
Show suppressed problems 0 show_suppressed 0 - (default) Disabled;
1 - Enabled.
Hide groups without problems 0 hide_empty_groups 0 - (default) Disabled;
1 - Enabled.
This parameter is not supported if configuring the widget on a template dashboard.
Problem display 0 ext_ack 0 - (default) All;
1 - Unacknowledged only;
2 - Separated.
Reference 1 reference Any string value consisting of 5 characters (e.g., ABCDE or JBPNL).
This value must be unique within the dashboard to which the widget
belongs.
Parameter behavior:
- required
Examples
The following examples aim to only describe the configuration of the dashboard widget field objects for the Problem hosts widget.
For more information on configuring a dashboard, see dashboard.create.
Configuring a Problem hosts widget
Configure a Problem hosts widget that displays hosts from host groups ”2” and ”4” that have problems with a name that includes
the string ”CPU” and that have the following severities: ”Warning”, ”Average”, ”High”, ”Disaster”.
Request:
{
"jsonrpc": "2.0",
"method": "dashboard.create",
"params": {
"name": "My dashboard",
"display_period": 30,
"auto_start": 1,
"pages": [
{
"widgets": [
{
"type": "problemhosts",
"name": "Problem hosts",
"x": 0,
"y": 0,
"width": 36,
"height": 5,
"view_mode": 0,
"fields": [
{
"type": 2,
"name": "groupids.0",
"value": 2
},
{
"type": 2,
"name": "groupids.1",
"value": 4
},
{
"type": 1,
"name": "problem",
"value": "cpu"
},
{
"type": 0,
"name": "severities.0",
"value": 2
},
{
"type": 0,
"name": "severities.1",
"value": 3
},
{
"type": 0,
"name": "severities.2",
"value": 4
},
{
"type": 0,
"name": "severities.3",
"value": 5
}
]
}
]
}
],
"userGroups": [
{
"usrgrpid": 7,
"permission": 2
}
],
"users": [
{
"userid": 1,
"permission": 3
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"dashboardids": [
"3"
]
},
"id": 1
}
See also
22 Problems
Description
These parameters and the possible property values for the respective dashboard widget field objects allow to configure the Problems
widget in dashboard.create and dashboard.update methods.
Attention:
Widget fields properties are not validated during the creation or update of a dashboard. This allows users to modify
built-in widgets and create custom widgets, but also introduces the risk of creating or updating widgets incorrectly. To
ensure the successful creation or update of the Problems widget, please refer to the parameter behavior outlined in the
tables below.
Parameters
Parameter type name value
Note: The number in the property name references tag order in the
tag evaluation list.
Parameter behavior:
- required if configuring Problem tags
Operator 0 tags.0.operator 0 - Contains;
1 - Equals;
2 - Does not contain;
3 - Does not equal;
4 - Exists;
5 - Does not exist.
Note: The number in the property name references tag order in the
tag evaluation list.
Parameter behavior:
- required if configuring Problem tags
Tag value 1 tags.0.value Any string value.
Note: The number in the property name references tag order in the
tag evaluation list.
Parameter behavior:
- required if configuring Problem tags
Parameter behavior:
- required
Examples
The following examples aim to only describe the configuration of the dashboard widget field objects for the Problems widget. For
more information on configuring a dashboard, see dashboard.create.
Configuring a Problems widget
Configure a Problems widget that displays problems for host group ”4” that satisfy the following conditions:
• Problems that have a tag with the name ”scope” that contains values ”performance” or ”availability”, or ”capacity”.
• Problems that have the following severities: ”Warning”, ”Average”, ”High”, ”Disaster”.
Request:
{
"jsonrpc": "2.0",
"method": "dashboard.create",
"params": {
"name": "My dashboard",
"display_period": 30,
"auto_start": 1,
"pages": [
{
"widgets": [
{
"type": "problems",
"name": "Problems",
"x": 0,
"y": 0,
"width": 36,
"height": 5,
"view_mode": 0,
"fields": [
{
"type": 2,
"name": "groupids.0",
"value": 4
},
{
"type": 1,
"name": "tags.0.tag",
"value": "scope"
},
{
"type": 0,
"name": "tags.0.operator",
"value": 0
},
{
"type": 1,
"name": "tags.0.value",
"value": "performance"
},
{
"type": 1,
"name": "tags.1.tag",
"value": "scope"
},
{
"type": 0,
"name": "tags.1.operator",
"value": 0
},
{
"type": 1,
"name": "tags.1.value",
"value": "availability"
},
{
"type": 1,
"name": "tags.2.tag",
"value": "scope"
},
{
"type": 0,
"name": "tags.2.operator",
"value": 0
},
{
"type": 1,
"name": "tags.2.value",
"value": "capacity"
},
{
"type": 0,
"name": "severities.0",
"value": 2
},
{
"type": 0,
"name": "severities.1",
"value": 3
},
{
"type": 0,
"name": "severities.2",
"value": 4
},
{
"type": 0,
"name": "severities.3",
"value": 5
},
{
"type": 0,
"name": "show_tags",
"value": 1
},
{
"type": 0,
"name": "show_opdata",
"value": 1
}
]
}
]
}
],
"userGroups": [
{
"usrgrpid": 7,
"permission": 2
}
],
"users": [
{
"userid": 1,
"permission": 3
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"dashboardids": [
"3"
]
},
"id": 1
}
See also
23 Problems by severity
Description
These parameters and the possible property values for the respective dashboard widget field objects allow to configure the Problems
by severity widget in dashboard.create and dashboard.update methods.
Attention:
Widget fields properties are not validated during the creation or update of a dashboard. This allows users to modify built-
in widgets and create custom widgets, but also introduces the risk of creating or updating widgets incorrectly. To ensure
the successful creation or update of the Problems by severity widget, please refer to the parameter behavior outlined in
the tables below.
Parameters
The following parameters are supported for the Problems by severity widget.
Parameter type name value
Tag name 1 tags.0.tag Any string value.
Note: The number in the property name references tag order in the tag evaluation list.
Parameter behavior:
- required if configuring Problem tags
Operator 0 tags.0.operator 0 - Contains;
1 - Equals;
2 - Does not contain;
3 - Does not equal;
4 - Exists;
5 - Does not exist.
Note: The number in the property name references tag order in the tag evaluation list.
Parameter behavior:
- required if configuring Problem tags
Tag value 1 tags.0.value Any string value.
Note: The number in the property name references tag order in the tag evaluation list.
Parameter behavior:
- required if configuring Problem tags
Show 0 show_type 0 - (default) Host groups;
1 - Totals.
Parameter behavior:
- supported if Show is set to ”Totals”
Show operational data 0 show_opdata 0 - (default) None;
1 - Separately;
2 - With problem name.
Show suppressed problems 0 show_suppressed 0 - (default) Disabled;
1 - Enabled.
Hide groups without problems 0 hide_empty_groups 0 - (default) Disabled;
1 - Enabled.
Parameter behavior:
- supported if Show is set to ”Host groups”
This parameter is not supported if configuring the widget on a template dashboard.
Problem display 0 ext_ack 0 - (default) All;
1 - Unacknowledged only;
2 - Separated.
Show timeline 0 show_timeline 0 - Disabled;
1 - (default) Enabled.
Reference 1 reference Any string value consisting of 5 characters (e.g., ABCDE or JBPNL).
This value must be unique within the dashboard to which the widget belongs.
Parameter behavior:
- required
Examples
The following examples aim to only describe the configuration of the dashboard widget field objects for the Problems by severity
widget. For more information on configuring a dashboard, see dashboard.create.
Configuring a Problems by severity widget
Configure a Problems by severity widget that displays problem totals for all host groups.
Request:
{
"jsonrpc": "2.0",
"method": "dashboard.create",
"params": {
"name": "My dashboard",
"display_period": 30,
"auto_start": 1,
"pages": [
{
"widgets": [
{
"type": "problemsbysv",
"name": "Problems by severity",
"x": 0,
"y": 0,
"width": 36,
"height": 5,
"view_mode": 0,
"fields": [
{
"type": 0,
"name": "show_type",
"value": 1
}
]
}
]
}
],
"userGroups": [
{
"usrgrpid": 7,
"permission": 2
}
],
"users": [
{
"userid": 1,
"permission": 3
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"dashboardids": [
"3"
]
},
"id": 1
}
See also
Description
These parameters and the possible property values for the respective dashboard widget field objects allow configuring the SLA
report widget in the dashboard.create and dashboard.update methods.
Attention:
Widget field properties are not validated during the creation or update of a dashboard. This allows users to modify
built-in widgets and create custom widgets, but also introduces the risk of creating or updating widgets incorrectly. To
ensure the successful creation or update of the SLA report widget, please refer to the parameter behavior outlined in the
tables below.
Parameters
The following parameters are supported for the SLA report widget.
Parameter type name value
SLA 10 slaid.0 SLA ID.
Parameter behavior:
- required
Service 9 serviceid.0 Service ID.
Show periods 0 show_periods Possible values range from 1-100.
Default: 20.
From 1 date_from Valid date string in format YYYY-MM-DD.
Relative dates with modifiers d, w, M, y (e.g. now, now/d, now/w-1w, etc.) are supported.
To 1 date_to Valid date string in format YYYY-MM-DD.
Relative dates with modifiers d, w, M, y (e.g. now, now/d, now/w-1w, etc.) are supported.
Examples
The following examples aim to only describe the configuration of the dashboard widget field objects for the SLA report widget. For
more information on configuring a dashboard, see dashboard.create.
Configuring an SLA report widget
Configure an SLA report widget that displays the SLA report for SLA ”4” service ”2” for the period of the last 30 days.
Request:
{
"jsonrpc": "2.0",
"method": "dashboard.create",
"params": {
"name": "My dashboard",
"display_period": 30,
"auto_start": 1,
"pages": [
{
"widgets": [
{
"type": "slareport",
"name": "SLA report",
"x": 0,
"y": 0,
"width": 36,
"height": 5,
"view_mode": 0,
"fields": [
{
"type": 10,
"name": "slaid.0",
"value": 4
},
{
"type": 9,
"name": "serviceid.0",
"value": 2
},
{
"type": 1,
"name": "date_from",
"value": "now-30d"
},
{
"type": 1,
"name": "date_to",
"value": "now"
}
]
}
]
}
],
"userGroups": [
{
"usrgrpid": 7,
"permission": 2
}
],
"users": [
{
"userid": 1,
"permission": 3
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"dashboardids": [
"3"
]
},
"id": 1
}
See also
Description
These parameters and the possible property values for the respective dashboard widget field objects allow configuring the System
Information widget in the dashboard.create and dashboard.update methods.
Attention:
Widget field properties are not validated during the creation or update of a dashboard. This allows users to modify built-
in widgets and create custom widgets, but also introduces the risk of creating or updating widgets incorrectly. To ensure
the successful creation or update of the System information widget, please refer to the parameter behavior outlined in the
tables below.
Parameters
The following parameters are supported for the System Information widget.
Examples
The following examples aim to only describe the configuration of the dashboard widget field objects for the System information
widget. For more information on configuring a dashboard, see dashboard.create.
Configuring a System information widget
Configure a System information widget that displays system stats with a refresh interval of 10 minutes and software update check
enabled.
Request:
{
"jsonrpc": "2.0",
"method": "dashboard.create",
"params": {
"name": "My dashboard",
"display_period": 30,
"auto_start": 1,
"pages": [
{
"widgets": [
{
"type": "systeminfo",
"name": "System information",
"x": 0,
"y": 0,
"width": 36,
"height": 5,
"view_mode": 0,
"fields": [
{
"type": 0,
"name": "rf_rate",
"value": 600
},
{
"type": 0,
"name": "show_software_update_check_details",
"value": 1
}
]
}
]
}
],
"userGroups": [
{
"usrgrpid": 7,
"permission": 2
}
],
"users": [
{
"userid": 1,
"permission": 3
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"dashboardids": [
"3"
]
},
"id": 1
}
See also
Description
These parameters and the possible property values for the respective dashboard widget field objects allow configuring the Top
hosts widget in the dashboard.create and dashboard.update methods.
Attention:
Widget field properties are not validated during the creation or update of a dashboard. This allows users to modify
built-in widgets and create custom widgets, but also introduces the risk of creating or updating widgets incorrectly. To
ensure the successful creation or update of the Top hosts widget, please refer to the parameter behavior outlined in the
tables below.
Parameters
The following parameters are supported for the Top hosts widget.
Parameter type name value
Tag name 1 tags.0.tag Any string value.
Note: The number in the property name references tag order in the tag evaluation list.
Parameter behavior:
- required if configuring Host tags
Operator 0 tags.0.operator 0 - Contains;
1 - Equals;
2 - Does not contain;
3 - Does not equal;
4 - Exists;
5 - Does not exist.
Note: The number in the property name references tag order in the tag evaluation list.
Parameter behavior:
- required if configuring Host tags
Tag value 1 tags.0.value Any string value.
Note: The number in the property name references tag order in the tag evaluation list.
Parameter behavior:
- required if configuring Host tags
Columns
Columns have common parameters and additional parameters depending on the configuration of the parameter Data.
Note:
For all parameters related to columns the number in the property name (e.g. columns.0.name) references a column for
which the parameter is configured.
Parameter type name value
Name 1 columns.0.name Any string value.
Parameter behavior:
- required
Data 0 columns.0.data 1 - Item value;
2 - Host name;
3 - Text.
Parameter behavior:
- required
Base color 1 columns.0.base_color Hexadecimal color code (e.g. FF0000).
Parameter behavior:
- required
Item value
Note:
The first number in the Thresholds property name (e.g. columnsthresholds.0.color.0) references the column for which
thresholds are configured, while the second number references the threshold place in a list, sorted in ascending order.
However, if thresholds are configured in a different order, the values will be sorted in ascending order after updating
the widget configuration in Zabbix frontend (e.g. "threshold.0.threshold":"5" → "threshold.0.threshold":"1";
"threshold.1.threshold":"1" → "threshold.1.threshold":"5").
Min 1 columns.0.min Any numeric value.
Parameter behavior:
- supported if Display is set to ”Bar” or ”Indicators”
Max 1 columns.0.max Any numeric value.
Parameter behavior:
- supported if Display is set to ”Bar” or ”Indicators”
Decimal places 0 columns.0.decimal_places Possible values range from 0-10.
Default: 2.
Thresholds
Color 1 columnsthresholds.0.color.0 Hexadecimal color code (e.g. FF0000).
Aggregation function 0 columns.0.aggregate_function 0 - (default) not used;
1 - min;
2 - max;
3 - avg;
4 - count;
5 - sum;
6 - first;
7 - last.
Time period 1 columns.0.time_period._reference DASHBOARD._timeperiod - set the Time period selector as the data source;
ABCDE._timeperiod - set a compatible widget (with its reference parameter equal to ABCDE) as the data source.
Default: DASHBOARD._timeperiod
Alternatively, you can set the time period only in the From and To parameters.
Parameter behavior:
- supported if Aggregation function is set to ”min”, ”max”, ”avg”, ”count”, ”sum”, ”first”, ”last”
From 1 columns.0.time_period.from Valid time string in absolute (YYYY-MM-DD hh:mm:ss) or relative time syntax (now, now/d, now/w-1w, etc.).
Parameter behavior:
- supported if Time period is not set and Aggregation function is set to ”min”, ”max”, ”avg”, ”count”, ”sum”, ”first”, ”last”
To 1 columns.0.time_period.to Valid time string in absolute (YYYY-MM-DD hh:mm:ss) or relative time syntax (now, now/d, now/w-1w, etc.).
Parameter behavior:
- supported if Time period is not set and Aggregation function is set to ”min”, ”max”, ”avg”, ”count”, ”sum”, ”first”, ”last”
History data 0 columns.0.history 0 - (default) Auto;
1 - History;
2 - Trends.
Reference 1 reference Any string value consisting of 5 characters (e.g., ABCDE or JBPNL).
This value must be unique within the dashboard to which the widget belongs.
Parameter behavior:
- required
Text
Text 1 columns.0.text Any string value.
Parameter behavior:
- required if Data is set to ”Text”
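The example below configures columns that show the latest item values without aggregation. As an illustrative sketch only (the column index, item name, and time period values are hypothetical and not taken from the example), a column showing a daily average would combine the Aggregation function, From, and To fields like this:
[
    {
        "type": 1,
        "name": "columns.6.name",
        "value": "CPU utilization (daily avg)"
    },
    {
        "type": 0,
        "name": "columns.6.data",
        "value": 1
    },
    {
        "type": 1,
        "name": "columns.6.base_color",
        "value": "FFFFFF"
    },
    {
        "type": 1,
        "name": "columns.6.item",
        "value": "CPU utilization"
    },
    {
        "type": 0,
        "name": "columns.6.aggregate_function",
        "value": 3
    },
    {
        "type": 1,
        "name": "columns.6.time_period.from",
        "value": "now-1d"
    },
    {
        "type": 1,
        "name": "columns.6.time_period.to",
        "value": "now"
    }
]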
Examples
The following examples aim to only describe the configuration of the dashboard widget field objects for the Top hosts widget. For
more information on configuring a dashboard, see dashboard.create.
Configuring a Top hosts widget
Configure a Top hosts widget that displays top hosts by CPU utilization in host group ”4”. In addition, configure the following custom
columns: ”Host name”, ”CPU utilization in %”, ”1m avg”, ”5m avg”, ”15m avg”, ”Processes”.
Request:
{
"jsonrpc": "2.0",
"method": "dashboard.create",
"params": {
"name": "My dashboard",
"display_period": 30,
"auto_start": 1,
"pages": [
{
"widgets": [
{
"type": "tophosts",
"name": "Top hosts",
"x": 0,
"y": 0,
"width": 36,
"height": 5,
"view_mode": 0,
"fields": [
{
"type": 2,
"name": "groupids.0",
"value": 4
},
{
"type": 1,
"name": "columns.0.name",
"value": "Host"
},
{
"type": 0,
"name": "columns.0.data",
"value": 2
},
{
"type": 1,
"name": "columns.0.base_color",
"value": "FFFFFF"
},
{
"type": 1,
"name": "columns.1.name",
"value": "CPU utilization in %"
},
{
"type": 0,
"name": "columns.1.data",
"value": 1
},
{
"type": 1,
"name": "columns.1.base_color",
"value": "4CAF50"
},
{
"type": 1,
"name": "columns.1.item",
"value": "CPU utilization"
},
{
"type": 0,
"name": "columns.1.display",
"value": 3
},
{
"type": 1,
"name": "columns.1.min",
"value": "0"
},
{
"type": 1,
"name": "columns.1.max",
"value": "100"
},
{
"type": 1,
"name": "columnsthresholds.1.color.0",
"value": "FFFF00"
},
{
"type": 1,
"name": "columnsthresholds.1.threshold.0",
"value": "50"
},
{
"type": 1,
"name": "columnsthresholds.1.color.1",
"value": "FF8000"
},
{
"type": 1,
"name": "columnsthresholds.1.threshold.1",
"value": "80"
},
{
"type": 1,
"name": "columnsthresholds.1.color.2",
"value": "FF4000"
},
{
"type": 1,
"name": "columnsthresholds.1.threshold.2",
"value": "90"
},
{
"type": 1,
"name": "columns.2.name",
"value": "1m avg"
},
{
"type": 0,
"name": "columns.2.data",
"value": 1
},
{
"type": 1,
"name": "columns.2.base_color",
"value": "FFFFFF"
},
{
"type": 1,
"name": "columns.2.item",
"value": "Load average (1m avg)"
},
{
"type": 1,
"name": "columns.3.name",
"value": "5m avg"
},
{
"type": 0,
"name": "columns.3.data",
"value": 1
},
{
"type": 1,
"name": "columns.3.base_color",
"value": "FFFFFF"
},
{
"type": 1,
"name": "columns.3.item",
"value": "Load average (5m avg)"
},
{
"type": 1,
"name": "columns.4.name",
"value": "15m avg"
},
{
"type": 0,
"name": "columns.4.data",
"value": 1
},
{
"type": 1,
"name": "columns.4.base_color",
"value": "FFFFFF"
},
{
"type": 1,
"name": "columns.4.item",
"value": "Load average (15m avg)"
},
{
"type": 1,
"name": "columns.5.name",
"value": "Processes"
},
{
"type": 0,
"name": "columns.5.data",
"value": 1
},
{
"type": 1,
"name": "columns.5.base_color",
"value": "FFFFFF"
},
{
"type": 1,
"name": "columns.5.item",
"value": "Number of processes"
},
{
"type": 0,
"name": "columns.5.decimal_places",
"value": 0
},
{
"type": 0,
"name": "column",
"value": 1
}
]
}
]
}
],
"userGroups": [
{
"usrgrpid": 7,
"permission": 2
}
],
"users": [
{
"userid": 1,
"permission": 3
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"dashboardids": [
"3"
]
},
"id": 1
}
See also
Description
These parameters and the possible property values for the respective dashboard widget field objects allow configuring the Top
triggers widget in the dashboard.create and dashboard.update methods.
Attention:
Widget field properties are not validated during the creation or update of a dashboard. This allows users to modify
built-in widgets and create custom widgets, but also introduces the risk of creating or updating widgets incorrectly. To
ensure the successful creation or update of the Top triggers widget, please refer to the parameter behavior outlined in the
tables below.
Parameters
The following parameters are supported for the Top triggers widget.
Parameter type name value
Severity 0 severities.0 0 - Not classified;
1 - Information;
2 - Warning;
3 - Average;
4 - High;
5 - Disaster.
Tag name 1 tags.0.tag Any string value.
Note: The number in the property name references tag order in the tag evaluation list.
Parameter behavior:
- required if configuring Problem tags
Operator 0 tags.0.operator 0 - Contains;
1 - Equals;
2 - Does not contain;
3 - Does not equal;
4 - Exists;
5 - Does not exist.
Note: The number in the property name references tag order in the tag evaluation list.
Parameter behavior:
- required if configuring Problem tags
Tag value 1 tags.0.value Any string value.
Note: The number in the property name references tag order in the tag evaluation list.
Parameter behavior:
- required if configuring Problem tags
Time period 1 time_period._reference DASHBOARD._timeperiod - set the Time period selector as the data source;
ABCDE._timeperiod - set a compatible widget (with its Reference parameter set to ”ABCDE”) as the data source.
Default: DASHBOARD._timeperiod
Alternatively, you can set the time period only in the From and To parameters.
From 1 time_period.from Valid time string in absolute (YYYY-MM-DD hh:mm:ss) or relative
time syntax (now, now/d, now/w-1w, etc.).
Parameter behavior:
- supported if Time period is not set
To 1 time_period.to Valid time string in absolute (YYYY-MM-DD hh:mm:ss) or relative
time syntax (now, now/d, now/w-1w, etc.).
Parameter behavior:
- supported if Time period is not set
Trigger limit 0 show_lines Possible values range from 1-100.
Default: 10.
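As an illustrative sketch only (the period values are hypothetical), a widget that always covers the last seven days instead of following the dashboard Time period selector would set the From and To fields directly:
[
    {
        "type": 1,
        "name": "time_period.from",
        "value": "now-7d"
    },
    {
        "type": 1,
        "name": "time_period.to",
        "value": "now"
    }
]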
Examples
The following examples aim to only describe the configuration of the dashboard widget field objects for the Top triggers widget.
For more information on configuring a dashboard, see dashboard.create.
Configuring a Top triggers widget
Configure a Top triggers widget that displays the top 5 triggers for host group ”4” with the count of all problems for each trigger.
The widget displays only triggers that have severities ”Warning”, ”Average”, ”High”, or ”Disaster”, and problems that have a tag
with the name ”scope” that contains the value ”performance”, ”availability”, or ”capacity”.
Request:
{
"jsonrpc": "2.0",
"method": "dashboard.create",
"params": {
"name": "My dashboard",
"display_period": 30,
"auto_start": 1,
"pages": [
{
"widgets": [
{
"type": "toptriggers",
"name": "Top triggers",
"x": 0,
"y": 0,
"width": 36,
"height": 5,
"view_mode": 0,
"fields": [
{
"type": 2,
"name": "groupids.0",
"value": 4
},
{
"type": 0,
"name": "severities.0",
"value": 2
},
{
"type": 0,
"name": "severities.1",
"value": 3
},
{
"type": 0,
"name": "severities.2",
"value": 4
},
{
"type": 0,
"name": "severities.3",
"value": 5
},
{
"type": 1,
"name": "tags.0.tag",
"value": "scope"
},
{
"type": 0,
"name": "tags.0.operator",
"value": 0
},
{
"type": 1,
"name": "tags.0.value",
"value": "performance"
},
{
"type": 1,
"name": "tags.1.tag",
"value": "scope"
},
{
"type": 0,
"name": "tags.1.operator",
"value": 0
},
{
"type": 1,
"name": "tags.1.value",
"value": "availability"
},
{
"type": 1,
"name": "tags.2.tag",
"value": "scope"
},
{
"type": 0,
"name": "tags.2.operator",
"value": 0
},
{
"type": 1,
"name": "tags.2.value",
"value": "capacity"
},
{
"type": 0,
"name": "show_lines",
"value": 5
}
]
}
]
}
],
"userGroups": [
{
"usrgrpid": 7,
"permission": 2
}
],
"users": [
{
"userid": 1,
"permission": 3
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"dashboardids": [
"3"
]
},
"id": 1
}
See also
Description
These parameters and the possible property values for the respective dashboard widget field objects allow configuring the Trigger
overview widget in the dashboard.create and dashboard.update methods.
Attention:
Widget field properties are not validated during the creation or update of a dashboard. This allows users to modify
built-in widgets and create custom widgets, but also introduces the risk of creating or updating widgets incorrectly. To
ensure the successful creation or update of the Trigger overview widget, please refer to the parameter behavior outlined
in the tables below.
Parameters
The following parameters are supported for the Trigger overview widget.
Parameter type name value
Tag name 1 tags.0.tag Any string value.
Note: The number in the property name references tag order in the tag evaluation list.
Parameter behavior:
- required if configuring Problem tags
Operator 0 tags.0.operator 0 - Contains;
1 - Equals;
2 - Does not contain;
3 - Does not equal;
4 - Exists;
5 - Does not exist.
Note: The number in the property name references tag order in the tag evaluation list.
Parameter behavior:
- required if configuring Problem tags
Tag value 1 tags.0.value Any string value.
Note: The number in the property name references tag order in the tag evaluation list.
Parameter behavior:
- required if configuring Problem tags
Show suppressed problems 0 show_suppressed 0 - (default) Disabled;
1 - Enabled.
Hosts location 0 style 0 - (default) Left;
1 - Top.
Examples
The following examples aim to only describe the configuration of the dashboard widget field objects for the Trigger overview widget.
For more information on configuring a dashboard, see dashboard.create.
Configuring a Trigger overview widget
Configure a Trigger overview widget that displays trigger states for all host groups that have triggers with a tag that has the name
”scope” and contains value ”availability”.
Request:
{
"jsonrpc": "2.0",
"method": "dashboard.create",
"params": {
"name": "My dashboard",
"display_period": 30,
"auto_start": 1,
"pages": [
{
"widgets": [
{
"type": "trigover",
"name": "Trigger overview",
"x": 0,
"y": 0,
"width": 36,
"height": 5,
"view_mode": 0,
"fields": [
{
"type": 1,
"name": "tags.0.tag",
"value": "scope"
},
{
"type": 0,
"name": "tags.0.operator",
"value": 0
},
{
"type": 1,
"name": "tags.0.value",
"value": "availability"
}
]
}
]
}
],
"userGroups": [
{
"usrgrpid": 7,
"permission": 2
}
],
"users": [
{
"userid": 1,
"permission": 3
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"dashboardids": [
"3"
]
},
"id": 1
}
See also
Description
These parameters and the possible property values for the respective dashboard widget field objects allow configuring the URL
widget in the dashboard.create and dashboard.update methods.
Attention:
Widget field properties are not validated during the creation or update of a dashboard. This allows users to modify
built-in widgets and create custom widgets, but also introduces the risk of creating or updating widgets incorrectly. To
ensure the successful creation or update of the URL widget, please refer to the parameter behavior outlined in the tables
below.
Parameters
The following parameters are supported for the URL widget.
Parameter type name value
URL 1 url Any string value.
Parameter behavior:
- required
Override host 1 override_hostid._reference ABCDE._hostid - set a compatible widget (with its Reference
parameter set to ”ABCDE”) as the data source for hosts;
DASHBOARD._hostid - set the dashboard Host selector as the data
source for hosts.
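As an illustrative sketch only (the URL value is hypothetical), a widget that uses the dashboard Host selector as the data source for hosts would combine the required url field with the Override host field:
[
    {
        "type": 1,
        "name": "url",
        "value": "https://example.com/host-dashboard"
    },
    {
        "type": 1,
        "name": "override_hostid._reference",
        "value": "DASHBOARD._hostid"
    }
]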
Examples
The following examples aim to only describe the configuration of the dashboard widget field objects for the URL widget. For more
information on configuring a dashboard, see dashboard.create.
Configuring a URL widget
Configure a URL widget that displays the home page of the Zabbix manual.
Request:
{
"jsonrpc": "2.0",
"method": "dashboard.create",
"params": {
"name": "My dashboard",
"display_period": 30,
"auto_start": 1,
"pages": [
{
"widgets": [
{
"type": "url",
"name": "URL",
"x": 0,
"y": 0,
"width": 36,
"height": 5,
"view_mode": 0,
"fields": [
{
"type": 1,
"name": "url",
"value": "https://www.zabbix.com/documentation/7.0/en"
}
]
}
]
}
],
"userGroups": [
{
"usrgrpid": 7,
"permission": 2
}
],
"users": [
{
"userid": 1,
"permission": 3
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"dashboardids": [
"3"
]
},
"id": 1
}
See also
Description
These parameters and the possible property values for the respective dashboard widget field objects allow configuring the Web
monitoring widget in the dashboard.create and dashboard.update methods.
Attention:
Widget field properties are not validated during the creation or update of a dashboard. This allows users to modify
built-in widgets and create custom widgets, but also introduces the risk of creating or updating widgets incorrectly. To
ensure the successful creation or update of the Web monitoring widget, please refer to the parameter behavior outlined in
the tables below.
Parameters
The following parameters are supported for the Web monitoring widget.
Parameter type name value
Tag name 1 tags.0.tag Any string value.
Note: The number in the property name references tag order in the tag evaluation list.
Parameter behavior:
- required if configuring Scenario tags
Operator 0 tags.0.operator 0 - Contains;
1 - Equals;
2 - Does not contain;
3 - Does not equal;
4 - Exists;
5 - Does not exist.
Note: The number in the property name references tag order in the tag evaluation list.
Parameter behavior:
- required if configuring Scenario tags
Tag value 1 tags.0.value Any string value.
Note: The number in the property name references tag order in the tag evaluation list.
Parameter behavior:
- required if configuring Scenario tags
Show hosts in maintenance 0 maintenance 0 - Disabled;
1 - (default) Enabled.
Reference 1 reference Any string value consisting of 5 characters (e.g., ABCDE or JBPNL).
This value must be unique within the dashboard to which the widget
belongs.
Parameter behavior:
- required
Examples
The following examples aim to only describe the configuration of the dashboard widget field objects for the Web monitoring widget.
For more information on configuring a dashboard, see dashboard.create.
Configuring a Web monitoring widget
Configure a Web monitoring widget that displays a status summary of the active web monitoring scenarios for host group ”4”.
Request:
{
"jsonrpc": "2.0",
"method": "dashboard.create",
"params": {
"name": "My dashboard",
"display_period": 30,
"auto_start": 1,
"pages": [
{
"widgets": [
{
"type": "web",
"name": "Web monitoring",
"x": 0,
"y": 0,
"width": 18,
"height": 3,
"view_mode": 0,
"fields": [
{
"type": 2,
"name": "groupids.0",
"value": 4
}
]
}
]
}
],
"userGroups": [
{
"usrgrpid": 7,
"permission": 2
}
],
"users": [
{
"userid": 1,
"permission": 3
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"dashboardids": [
"3"
]
},
"id": 1
}
See also
Discovered host
Object references:
• Discovered host
Available methods:
Note:
Discovered hosts are created by the Zabbix server and cannot be modified via the API.
The discovered host object contains information about a host discovered by a network discovery rule. It has the following properties.
Property Type Description
status integer Whether the discovered host is up or down.
Possible values:
0 - host up;
1 - host down.
dhost.get
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
dhostids ID/array Return only discovered hosts with the given IDs.
druleids ID/array Return only discovered hosts that have been created by the given
discovery rules.
dserviceids ID/array Return only discovered hosts that are running the given services.
selectDRules query Return a drules property with an array of the discovery rules that
detected the host.
selectDServices query Return a dservices property with the discovered services running on
the host.
Supports count.
limitSelects integer Limits the number of records returned by subselects.
Return values
Examples
Retrieve all hosts and the discovered services they are running that have been detected by discovery rule ”4”.
Request:
{
"jsonrpc": "2.0",
"method": "dhost.get",
"params": {
"output": "extend",
"selectDServices": "extend",
"druleids": "4"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"dservices": [
{
"dserviceid": "1",
"dhostid": "1",
"type": "4",
"key_": "",
"value": "",
"port": "80",
"status": "0",
"lastup": "1337697227",
"lastdown": "0",
"dcheckid": "5",
"ip": "192.168.1.1",
"dns": "station.company.lan"
}
],
"dhostid": "1",
"druleid": "4",
"status": "0",
"lastup": "1337697227",
"lastdown": "0"
},
{
"dservices": [
{
"dserviceid": "2",
"dhostid": "2",
"type": "4",
"key_": "",
"value": "",
"port": "80",
"status": "0",
"lastup": "1337697234",
"lastdown": "0",
"dcheckid": "5",
"ip": "192.168.1.4",
"dns": "john.company.lan"
}
],
"dhostid": "2",
"druleid": "4",
"status": "0",
"lastup": "1337697234",
"lastdown": "0"
},
{
"dservices": [
{
"dserviceid": "3",
"dhostid": "3",
"type": "4",
"key_": "",
"value": "",
"port": "80",
"status": "0",
"lastup": "1337697234",
"lastdown": "0",
"dcheckid": "5",
"ip": "192.168.1.26",
"dns": "printer.company.lan"
}
],
"dhostid": "3",
"druleid": "4",
"status": "0",
"lastup": "1337697234",
"lastdown": "0"
},
{
"dservices": [
{
"dserviceid": "4",
"dhostid": "4",
"type": "4",
"key_": "",
"value": "",
"port": "80",
"status": "0",
"lastup": "1337697234",
"lastdown": "0",
"dcheckid": "5",
"ip": "192.168.1.7",
"dns": "mail.company.lan"
}
],
"dhostid": "4",
"druleid": "4",
"status": "0",
"lastup": "1337697234",
"lastdown": "0"
}
],
"id": 1
}
See also
• Discovered service
• Discovery rule
Source
CDHost::get() in ui/include/classes/api/services/CDHost.php.
Discovered service
Object references:
• Discovered service
Available methods:
Note:
Discovered services are created by the Zabbix server and cannot be modified via the API.
The discovered service object contains information about a service discovered by a network discovery rule on a host. It has the
following properties.
status integer Whether the discovered service is up or down.
Possible values:
0 - service up;
1 - service down.
value string Value returned by the service when performing a Zabbix agent,
SNMPv1, SNMPv2 or SNMPv3 discovery check.
dservice.get
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
Parameter Type Description
dserviceids ID/array Return only discovered services with the given IDs.
dhostids ID/array Return only discovered services that belong to the given discovered
hosts.
dcheckids ID/array Return only discovered services that have been detected by the given
discovery checks.
druleids ID/array Return only discovered services that have been detected by the given
discovery rules.
selectDRules query Return a drules property with an array of the discovery rules that
detected the service.
selectDHosts query Return a dhosts property with an array of the discovered hosts that the
service belongs to.
selectHosts query Return a hosts property with the hosts with the same IP address and
proxy as the service.
Supports count.
limitSelects integer Limits the number of records returned by subselects.
Return values
Request:
{
"jsonrpc": "2.0",
"method": "dservice.get",
"params": {
"output": "extend",
"dhostids": "11"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"dserviceid": "12",
"dhostid": "11",
"value": "",
"port": "80",
"status": "1",
"lastup": "0",
"lastdown": "1348650607",
"dcheckid": "5",
"ip": "192.168.1.134",
"dns": "john.local"
},
{
"dserviceid": "13",
"dhostid": "11",
"value": "",
"port": "21",
"status": "1",
"lastup": "0",
"lastdown": "1348650610",
"dcheckid": "6",
"ip": "192.168.1.134",
"dns": "john.local"
}
],
"id": 1
}
See also
• Discovered host
• Discovery check
• Host
Source
CDService::get() in ui/include/classes/api/services/CDService.php.
Discovery check
Object references:
• Discovery check
Available methods:
The discovery check object defines a specific check performed by a network discovery rule. It has the following properties.
Property Type Description
key_ string Item key (if type is set to ”Zabbix agent”) or SNMP OID (if type is set
to ”SNMPv1 agent”, ”SNMPv2 agent”, or ”SNMPv3 agent”).
Property behavior:
- required if type is set to ”Zabbix agent”, ”SNMPv1 agent”, ”SNMPv2
agent”, or ”SNMPv3 agent”
ports string One or several port ranges to check, separated by commas.
Default: 0.
Property behavior:
- supported if type is set to ”SSH” (0), ”LDAP” (1), ”SMTP” (2), ”FTP”
(3), ”HTTP” (4), ”POP” (5), ”NNTP” (6), ”IMAP” (7), ”TCP” (8), ”Zabbix
agent” (9), ”SNMPv1 agent” (10), ”SNMPv2 agent” (11), ”SNMPv3
agent” (13), ”HTTPS” (14), or ”Telnet” (15)
snmp_community string SNMP community.
Property behavior:
- required if type is set to ”SNMPv1 agent” or ”SNMPv2 agent”
snmpv3_authpassphrase string Authentication passphrase.
Property behavior:
- supported if type is set to ”SNMPv3 agent” and snmpv3_securitylevel is set to ”authNoPriv” or ”authPriv”
snmpv3_authprotocol integer Authentication protocol.
Possible values:
0 - (default) MD5;
1 - SHA1;
2 - SHA224;
3 - SHA256;
4 - SHA384;
5 - SHA512.
Property behavior:
- supported if type is set to ”SNMPv3 agent” and snmpv3_securitylevel is set to ”authNoPriv” or ”authPriv”
snmpv3_contextname string SNMPv3 context name.
Property behavior:
- supported if type is set to ”SNMPv3 agent”
snmpv3_privpassphrase string Privacy passphrase.
Property behavior:
- supported if type is set to ”SNMPv3 agent” and snmpv3_securitylevel is set to ”authPriv”
snmpv3_privprotocol integer Privacy protocol.
Possible values:
0 - (default) DES;
1 - AES128;
2 - AES192;
3 - AES256;
4 - AES192C;
5 - AES256C.
Property behavior:
- supported if type is set to ”SNMPv3 agent” and snmpv3_securitylevel is set to ”authPriv”
snmpv3_securitylevel integer Security level.
Possible values:
0 - noAuthNoPriv;
1 - authNoPriv;
2 - authPriv.
Property behavior:
- supported if type is set to ”SNMPv3 agent”
snmpv3_securityname string Security name.
Property behavior:
- supported if type is set to ”SNMPv3 agent”
type integer Type of check.
Possible values:
0 - SSH;
1 - LDAP;
2 - SMTP;
3 - FTP;
4 - HTTP;
5 - POP;
6 - NNTP;
7 - IMAP;
8 - TCP;
9 - Zabbix agent;
10 - SNMPv1 agent;
11 - SNMPv2 agent;
12 - ICMP ping;
13 - SNMPv3 agent;
14 - HTTPS;
15 - Telnet.
Property behavior:
- required
uniq integer Whether to use this check as a device uniqueness criteria. Only a
single unique check can be configured for a discovery rule.
Possible values:
0 - (default) do not use this check as a uniqueness criteria;
1 - use this check as a uniqueness criteria.
Property behavior:
- supported if type is set to ”Zabbix agent”, ”SNMPv1 agent”,
”SNMPv2 agent”, or ”SNMPv3 agent”
host_source integer Source for host name.
Possible values:
1 - (default) DNS;
2 - IP;
3 - discovery value of this check.
name_source integer Source for visible name.
Possible values:
0 - (default) not specified;
1 - DNS;
2 - IP;
3 - discovery value of this check.
allow_redirect integer Allow situation where the target being ICMP pinged responds from a
different IP address.
Possible values:
0 - (default) treat redirected responses as if the target host is down
(fail);
1 - treat redirected responses as if the target host is up (success).
Property behavior:
- supported if type is set to ”ICMP ping”
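As an illustrative sketch only (the OID, port, and credential values are hypothetical), a discovery check object for an ”SNMPv3 agent” check with the ”authPriv” security level, as it could be passed in the dchecks array of drule.create, combines the properties described above like this:
{
    "type": "13",
    "key_": "1.3.6.1.2.1.1.1.0",
    "ports": "161",
    "snmpv3_securityname": "discovery",
    "snmpv3_securitylevel": "2",
    "snmpv3_authprotocol": "1",
    "snmpv3_authpassphrase": "authpassphrase",
    "snmpv3_privprotocol": "1",
    "snmpv3_privpassphrase": "privpassphrase",
    "snmpv3_contextname": ""
}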
dcheck.get
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
dcheckids ID/array Return only discovery checks with the given IDs.
druleids ID/array Return only discovery checks that belong to the given discovery rules.
dserviceids ID/array Return only discovery checks that have detected the given discovered
services.
sortfield string/array Sort the result by the given properties.
Return values
Request:
{
"jsonrpc": "2.0",
"method": "dcheck.get",
"params": {
"output": "extend",
"dcheckids": "6"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"dcheckid": "6",
"druleid": "4",
"type": "3",
"key_": "",
"snmp_community": "",
"ports": "21",
"snmpv3_securityname": "",
"snmpv3_securitylevel": "0",
"snmpv3_authpassphrase": "",
"snmpv3_privpassphrase": "",
"uniq": "0",
"snmpv3_authprotocol": "0",
"snmpv3_privprotocol": "0",
"snmpv3_contextname": "",
"host_source": "1",
"name_source": "0",
"allow_redirect": "0"
}
],
"id": 1
}
Source
CDCheck::get() in ui/include/classes/api/services/CDCheck.php.
Discovery rule
Note:
This API is meant to work with network discovery rules. For low-level discovery rules, see LLD rule API.
Object references:
• Discovery rule
Available methods:
Discovery rule
The discovery rule object defines a network discovery rule. It has the following properties.
druleid ID ID of the discovery rule.
Property behavior:
- read-only
- required for update operations
iprange string One or several IP ranges to check, separated by commas.
Property behavior:
- required for create operations
name string Name of the discovery rule.
Property behavior:
- required for create operations
delay string Execution interval of the discovery rule.
Accepts seconds or time unit with suffix (e.g., 30s, 1m, 2h, 1d), or a
user macro.
Default: 1h.
proxyid ID ID of the proxy used for discovery.
status integer Whether the discovery rule is enabled.
Possible values:
0 - (default) enabled;
1 - disabled.
concurrency_max integer Maximum number of concurrent checks per discovery rule.
Possible values:
0 - (default) unlimited number of checks;
1 - one check;
2-999 - custom number of checks.
error string Error text if there have been any problems when executing the
discovery rule.
Property behavior:
- read-only
drule.create
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
Parameter Type Description
dchecks array Discovery checks to create for the discovery rule.
Parameter behavior:
- required
Return values
(object) Returns an object containing the IDs of the created discovery rules under the druleids property. The order of the
returned IDs matches the order of the passed discovery rules.
Examples
Create a discovery rule to find machines running the Zabbix agent in the local network. The rule must use a single Zabbix agent
check on port 10050.
Request:
{
"jsonrpc": "2.0",
"method": "drule.create",
"params": {
"name": "Zabbix agent discovery",
"iprange": "192.168.1.1-255",
"concurrency_max": "10",
"dchecks": [
{
"type": "9",
"key_": "system.uname",
"ports": "10050",
"uniq": "0"
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"druleids": [
"6"
]
},
"id": 1
}
See also
• Discovery check
Source
CDRule::create() in ui/include/classes/api/services/CDRule.php.
drule.delete
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
(array) IDs of the discovery rules to delete.
Return values
(object) Returns an object containing the IDs of the deleted discovery rules under the druleids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "drule.delete",
"params": [
"4",
"6"
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"druleids": [
"4",
"6"
]
},
"id": 1
}
Source
CDRule::delete() in ui/include/classes/api/services/CDRule.php.
drule.get
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
dhostids ID/array Return only discovery rules that created the given discovered hosts.
druleids ID/array Return only discovery rules with the given IDs.
dserviceids ID/array Return only discovery rules that created the given discovered services.
Parameter Type Description
selectDChecks query Return a dchecks property with the discovery checks used by the
discovery rule.
Supports count.
selectDHosts query Return a dhosts property with the discovered hosts created by the
discovery rule.
Supports count.
limitSelects integer Limits the number of records returned by subselects.
Return values
Examples
Retrieve all configured discovery rules and the discovery checks they use.
Request:
{
"jsonrpc": "2.0",
"method": "drule.get",
"params": {
"output": "extend",
"selectDChecks": "extend"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"druleid": "2",
"proxyid": "0",
"name": "Local network",
"iprange": "192.168.3.1-255",
"delay": "5s",
"status": "0",
"concurrency_max": "0",
"error": "",
"dchecks": [
{
"dcheckid": "7",
"druleid": "2",
"type": "3",
"key_": "",
"snmp_community": "",
"ports": "21",
"snmpv3_securityname": "",
"snmpv3_securitylevel": "0",
"snmpv3_authpassphrase": "",
"snmpv3_privpassphrase": "",
"uniq": "0",
"snmpv3_authprotocol": "0",
"snmpv3_privprotocol": "0",
"snmpv3_contextname": "",
"host_source": "1",
"name_source": "0",
"allow_redirect": "0"
},
{
"dcheckid": "8",
"druleid": "2",
"type": "4",
"key_": "",
"snmp_community": "",
"ports": "80",
"snmpv3_securityname": "",
"snmpv3_securitylevel": "0",
"snmpv3_authpassphrase": "",
"snmpv3_privpassphrase": "",
"uniq": "0",
"snmpv3_authprotocol": "0",
"snmpv3_privprotocol": "0",
"snmpv3_contextname": "",
"host_source": "1",
"name_source": "0",
"allow_redirect": "0"
}
]
},
{
"druleid": "6",
"proxyid": "0",
"name": "Zabbix agent discovery",
"iprange": "192.168.1.1-255",
"delay": "1h",
"status": "0",
"concurrency_max": "10",
"error": "",
"dchecks": [
{
"dcheckid": "10",
"druleid": "6",
"type": "9",
"key_": "system.uname",
"snmp_community": "",
"ports": "10050",
"snmpv3_securityname": "",
"snmpv3_securitylevel": "0",
"snmpv3_authpassphrase": "",
"snmpv3_privpassphrase": "",
"uniq": "0",
"snmpv3_authprotocol": "0",
"snmpv3_privprotocol": "0",
"snmpv3_contextname": "",
"host_source": "2",
"name_source": "3",
"allow_redirect": "0"
}
]
}
],
"id": 1
}
See also
• Discovered host
• Discovery check
Source
CDRule::get() in ui/include/classes/api/services/CDRule.php.
drule.update
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
Additionally to the standard discovery rule properties, the method accepts the following parameters.
Return values
(object) Returns an object containing the IDs of the updated discovery rules under the druleids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "drule.update",
"params": {
"druleid": "6",
"iprange": "192.168.2.1-255"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"druleids": [
"6"
]
},
"id": 1
}
See also
• Discovery check
Source
CDRule::update() in ui/include/classes/api/services/CDRule.php.
Event
Object references:
• Event
• Event tag
• Media type URL
Available methods:
Event object
Note:
Events are created by the Zabbix server and cannot be modified via the API.
source integer Type of the event.
Possible values:
0 - event created by a trigger;
1 - event created by a discovery rule;
2 - event created by active agent autoregistration;
3 - internal event;
4 - event created on service status update.
Property behavior:
- supported if source is set to ”event created by a trigger”, ”event
created by a discovery rule”, ”internal event”, or ”event created on
service status update”
severity integer Event current severity.
Possible values:
0 - not classified;
1 - information;
2 - warning;
3 - average;
4 - high;
5 - disaster.
r_eventid ID ID of the recovery event.
c_eventid ID ID of the event that was used to override (close) current event under
global correlation rule. See correlationid to identify exact
correlation rule.
This parameter is only defined when the event is closed by global
correlation rule.
suppressed integer Whether the event is suppressed.
Possible values:
0 - event is in normal state;
1 - event is suppressed.
opdata string Operational data with expanded macros.
urls array Active media type URLs.
Event tag
Media type URL
Results will contain entries only for active media types with an enabled event menu entry. Macros used in properties will be
expanded, but if one of the properties contains an unexpanded macro, both properties will be excluded from results. For
supported macros, see Supported macros.
event.acknowledge
Description
Attention:
Only trigger events can be updated.
Only problem events can be updated.
Read/Write rights for trigger are required to close the event or to change event’s severity.
To close an event, manual close should be allowed in the trigger.
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
(object/array) Parameters containing the IDs of the events and update operations that should be performed.
Parameter Type Description
eventids ID/array IDs of the events to update.
Parameter behavior:
- required
action integer Event update action(s).
This is a bitmask field, any combination of possible bitmap values is
acceptable.
Parameter behavior:
- required
cause_eventid ID Cause event ID.
Parameter behavior:
- required if action contains the ”change event rank to symptom” bit
message string Text of the message.
Parameter behavior:
- required if action contains the ”add message” bit
severity integer New severity for events.
Possible values:
0 - not classified;
1 - information;
2 - warning;
3 - average;
4 - high;
5 - disaster.
Parameter behavior:
- required if action contains the ”change severity” bit
suppress_until integer Unix timestamp until which event must be suppressed.
Parameter behavior:
- required if action contains the ”suppress event” bit
Return values
(object) Returns an object containing the IDs of the updated events under the eventids property.
Examples
Acknowledging an event
Request:
{
"jsonrpc": "2.0",
"method": "event.acknowledge",
"params": {
"eventids": "20427",
"action": 6,
"message": "Problem resolved."
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"eventids": [
"20427"
]
},
"id": 1
}
Request:
{
"jsonrpc": "2.0",
"method": "event.acknowledge",
"params": {
"eventids": ["20427", "20428"],
"action": 12,
"message": "Maintenance required to fix it.",
"severity": 4
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"eventids": [
"20427",
"20428"
]
},
"id": 1
}
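Because action is a bitmask, several update operations can be combined in a single request by adding the corresponding bit values together; in the requests above, for example, the value 12 covers both adding a message and changing the severity. The following request is an illustrative sketch only (the event ID and message are hypothetical, and the bit values 2 - acknowledge event, 4 - add message, and 8 - change severity are assumed, consistently with the requests above); it acknowledges an event, adds a message, and changes its severity in a single call.
Request:
{
    "jsonrpc": "2.0",
    "method": "event.acknowledge",
    "params": {
        "eventids": "20427",
        "action": 14,
        "message": "Acknowledged, fix scheduled.",
        "severity": 5
    },
    "id": 1
}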
Source
CEvent::acknowledge() in ui/include/classes/api/services/CEvent.php.
event.get
Description
Attention:
This method may return events of a deleted entity if these events have not been removed by the housekeeper yet.
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
source integer Return only events with the given type.
Refer to the event object page for a list of supported event types.
object integer Return only events created by objects of the given type.
Refer to the event object page for a list of supported object types.
Default: 0 - trigger.
acknowledged boolean If set to true return only acknowledged events.
action integer Return only events for which the given event update actions have been
performed. For multiple actions, use a combination of any acceptable
bitmap values as bitmask.
action_userids ID/array Return only events with the given IDs of users who performed the
event update actions.
suppressed boolean true - return only suppressed events;
false - return events in the normal state.
symptom boolean true - return only symptom events;
false - return only cause events.
severities integer/array Return only events with the given event severities. Applies only if
object is trigger.
trigger_severities integer/array Return only events with the given trigger severities. Applies only if
object is trigger.
evaltype integer Rules for tag searching.
Possible values:
0 - (default) And/Or;
2 - Or.
tags array Return only events with the given tags. Exact match by tag and
case-insensitive search by value and operator.
Format: [{"tag": "<tag>", "value": "<value>", "operator": "<operator>"}, ...].
An empty array returns all events.
eventid_till string Return only events with IDs less or equal to the given ID.
time_from timestamp Return only events that have been created after or at the given time.
time_till timestamp Return only events that have been created before or at the given time.
problem_time_from timestamp Returns only events that were in the problem state starting with
problem_time_from. Applies only if the source is trigger event and
object is trigger. Mandatory if problem_time_till is specified.
problem_time_till timestamp Returns only events that were in the problem state until
problem_time_till. Applies only if the source is trigger event and
object is trigger. Mandatory if problem_time_from is specified.
value integer/array Return only events with the given values.
selectAcknowledges query Return an acknowledges property with event updates. Event updates
are sorted in reverse chronological order.
Supports count.
selectAlerts query Return an alerts property with alerts generated by the event. Alerts
are sorted in reverse chronological order.
selectHosts query Return a hosts property with hosts containing the object that created
the event. Supported only for events generated by triggers, items or
LLD rules.
selectRelatedObject query Return a relatedObject property with the object that created the
event. The type of object returned depends on the event type.
selectSuppressionData query Return a suppression_data property with the list of active
maintenances and manual suppressions:
maintenanceid - (ID) ID of the maintenance;
userid - (ID) ID of user who suppressed the event;
suppress_until - (integer) time until the event is suppressed.
selectTags query Return a tags property with event tags.
filter object Return only those results that exactly match the given filter.
Accepts an object, where the keys are property names, and the values
are either a single value or an array of values to match against.
editable boolean
excludeSearch boolean
limit integer
output query
preservekeys boolean
search object
searchByAny boolean
searchWildcardsEnabled boolean
sortorder string/array
startSearch boolean
select_acknowledges (deprecated) query This parameter is deprecated, please use selectAcknowledges instead.
Return an acknowledges property with event updates.
Event updates are sorted in reverse chronological order.
Supports count.
select_alerts (deprecated) query This parameter is deprecated, please use selectAlerts instead.
Return an alerts property with alerts generated by the event.
Alerts are sorted in reverse chronological order.
Return values
Request:
{
"jsonrpc": "2.0",
"method": "event.get",
"params": {
"output": "extend",
"selectAcknowledges": "extend",
"selectSuppressionData": "extend",
"selectTags": "extend",
"objectids": "13926",
"sortfield": ["clock", "eventid"],
"sortorder": "DESC"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"eventid": "9695",
"source": "0",
"object": "0",
"objectid": "13926",
"clock": "1347970410",
"value": "1",
"acknowledged": "1",
"ns": "413316245",
"name": "MySQL is down",
"severity": "5",
"r_eventid": "0",
"c_eventid": "0",
"correlationid": "0",
"userid": "0",
"cause_eventid": "0",
"acknowledges": [
{
"acknowledgeid": "1",
"userid": "1",
"clock": "1350640590",
"message": "Problem resolved.\n\r----[BULK ACKNOWLEDGE]----",
"action": "6",
"old_severity": "0",
"new_severity": "0",
"suppress_until": "1472511600",
"taskid": "0",
"username": "Admin",
"name": "Zabbix",
"surname": "Administrator"
}
],
"opdata": "",
"suppression_data": [
{
"maintenanceid": "15",
"suppress_until": "1472511600",
"userid": "0"
}
],
"suppressed": "1",
"tags": [
{
"tag": "service",
"value": "mysqld"
},
{
"tag": "error",
"value": ""
}
],
"urls": []
},
{
"eventid": "9671",
"source": "0",
"object": "0",
"objectid": "13926",
"clock": "1347970347",
"value": "0",
"acknowledged": "0",
"ns": "0",
"name": "Unavailable by ICMP ping",
"severity": "4",
"r_eventid": "0",
"c_eventid": "0",
"correlationid": "0",
"userid": "0",
"cause_eventid": "0",
"acknowledges": [],
"opdata": "",
"suppression_data": [],
"suppressed": "0",
"tags": [],
"urls": []
}
],
"id": 1
}
Retrieve all events that have been created between October 9 and 10, 2012, in reverse chronological order.
Request:
{
"jsonrpc": "2.0",
"method": "event.get",
"params": {
"output": "extend",
"time_from": "1349797228",
"time_till": "1350661228",
"sortfield": ["clock", "eventid"],
"sortorder": "desc"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"eventid": "20616",
"source": "0",
"object": "0",
"objectid": "14282",
"clock": "1350477814",
"value": "1",
"acknowledged": "0",
"ns": "0",
"name": "Less than 25% free in the history cache",
"severity": "3",
"r_eventid": "0",
"c_eventid": "0",
"correlationid": "0",
"userid": "0",
"cause_eventid": "0",
"opdata": "",
"suppressed": "0",
"urls": []
},
{
"eventid": "20617",
"source": "0",
"object": "0",
"objectid": "14283",
"clock": "1350477814",
"value": "0",
"acknowledged": "0",
"ns": "0",
"name": "Zabbix trapper processes more than 75% busy",
"severity": "3",
"r_eventid": "0",
"c_eventid": "0",
"correlationid": "0",
"userid": "0",
"cause_eventid": "0",
"opdata": "",
"suppressed": "0",
"urls": []
},
{
"eventid": "20618",
"source": "0",
"object": "0",
"objectid": "14284",
"clock": "1350477815",
"value": "1",
"acknowledged": "0",
"ns": "0",
"name": "High ICMP ping loss",
"severity": "3",
"r_eventid": "0",
"c_eventid": "0",
"correlationid": "0",
"userid": "0",
"cause_eventid": "0",
"opdata": "",
"suppressed": "0",
"urls": []
}
],
"id": 1
}
Request:
{
"jsonrpc": "2.0",
"method": "event.get",
"params": {
"output": "extend",
"action": 2,
"action_userids": [10],
"selectAcknowledges": ["userid", "action"],
"sortfield": ["eventid"],
"sortorder": "DESC"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"eventid": "1248566",
"source": "0",
"object": "0",
"objectid": "15142",
"clock": "1472457242",
"ns": "209442442",
"r_eventid": "1245468",
"r_clock": "1472457285",
"r_ns": "125644870",
"correlationid": "0",
"userid": "10",
"name": "Zabbix agent on localhost is unreachable for 5 minutes",
"acknowledged": "1",
"severity": "3",
"cause_eventid": "0",
"acknowledges": [
{
"userid": "10",
"action": "2"
}
],
"opdata": "",
"suppressed": "0",
"urls": []
}
],
"id": 1
}
Retrieve the top 5 triggers that have severities ”Warning”, ”Average”, ”High”, or ”Disaster”, together with the number of problem
events within a specified time period.
Request:
{
"jsonrpc": "2.0",
"method": "event.get",
"params": {
"countOutput": true,
"groupBy": "objectid",
"source": 0,
"object": 0,
"value": 1,
"time_from": 1672531200,
"time_till": 1677628800,
"trigger_severities": [2, 3, 4, 5],
"sortfield": ["rowscount"],
"sortorder": "DESC",
"limit": 5
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"objectid": "232124",
"rowscount": "27"
},
{
"objectid": "29055",
"rowscount": "23"
},
{
"objectid": "253731",
"rowscount": "18"
},
{
"objectid": "254062",
"rowscount": "11"
},
{
"objectid": "23216",
"rowscount": "7"
}
],
"id": 1
}
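Events can also be filtered by problem tags. The following request is an illustrative sketch only (the tag name and value are hypothetical, and the operator value "0" is assumed to select a ”contains” match, consistently with the tag operators listed elsewhere in this section); it retrieves events that have a ”service” tag containing ”mysqld”, together with their tags.
Request:
{
    "jsonrpc": "2.0",
    "method": "event.get",
    "params": {
        "output": "extend",
        "selectTags": "extend",
        "evaltype": 0,
        "tags": [
            {
                "tag": "service",
                "value": "mysqld",
                "operator": "0"
            }
        ]
    },
    "id": 1
}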
See also
• Alert
• Item
• Host
• LLD rule
• Service
• Trigger
Source
CEvent::get() in ui/include/classes/api/services/CEvent.php.
Graph
Object references:
• Graph
Available methods:
Graph object
Property Type Description
graphid ID ID of the graph.
Property behavior:
- read-only
- required for update operations
height integer Height of the graph in pixels.
Property behavior:
- required for create operations
name string Name of the graph.
Property behavior:
- required for create operations
width integer Width of the graph in pixels.
Property behavior:
- required for create operations
flags integer Origin of the graph.
Possible values:
0 - (default) a plain graph;
4 - a discovered graph.
Property behavior:
- read-only
graphtype integer Graph’s layout type.
Possible values:
0 - (default) normal;
1 - stacked;
2 - pie;
3 - exploded.
percent_left float Left percentile.
Default: 0.
percent_right float Right percentile.
Default: 0.
show_3d integer Whether to show pie and exploded graphs in 3D.
Possible values:
0 - (default) show in 2D;
1 - show in 3D.
show_legend integer Whether to show the legend on the graph.
Possible values:
0 - hide;
1 - (default) show.
show_work_period integer Whether to show the working time on the graph.
Possible values:
0 - hide;
1 - (default) show.
show_triggers integer Whether to show the trigger line on the graph.
Possible values:
0 - hide;
1 - (default) show.
templateid ID ID of the parent template graph.
Property behavior:
- read-only
yaxismax float The fixed maximum value for the Y axis.
Default: 100.
yaxismin float The fixed minimum value for the Y axis.
Default: 0.
ymax_itemid ID ID of the item that is used as the maximum value for the Y axis.
ymax_type integer Maximum value calculation method for the Y axis.
Possible values:
0 - (default) calculated;
1 - fixed;
2 - item.
ymin_itemid ID ID of the item that is used as the minimum value for the Y axis.
ymin_type integer Minimum value calculation method for the Y axis.
Possible values:
0 - (default) calculated;
1 - fixed;
2 - item.
uuid string Universal unique identifier, used for linking imported graphs to already
existing ones. Auto-generated, if not given.
Property behavior:
- supported if the graph belongs to a template
graph.create
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
Parameter Type Description
gitems array Graph items to be created for the graph.
Parameter behavior:
- required
Return values
(object) Returns an object containing the IDs of the created graphs under the graphids property. The order of the returned IDs
matches the order of the passed graphs.
Examples
Creating a graph
Request:
{
"jsonrpc": "2.0",
"method": "graph.create",
"params": {
"name": "MySQL bandwidth",
"width": 900,
"height": 200,
"gitems": [
{
"itemid": "22828",
"color": "00AA00"
},
{
"itemid": "22829",
"color": "3333FF"
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"graphids": [
"652"
]
},
"id": 1
}
See also
• Graph item
Source
CGraph::create() in ui/include/classes/api/services/CGraph.php.
graph.delete
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
(array) IDs of the graphs to delete.
Return values
(object) Returns an object containing the IDs of the deleted graphs under the graphids property.
Examples
Delete two graphs.
Request:
{
"jsonrpc": "2.0",
"method": "graph.delete",
"params": [
"652",
"653"
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"graphids": [
"652",
"653"
]
},
"id": 1
}
Source
CGraph::delete() in ui/include/classes/api/services/CGraph.php.
graph.get
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
Parameter Type Description
selectItems query Return an items property with the items used in the graph.
selectGraphDiscovery query Return a graphDiscovery property with the graph discovery object.
The graph discovery object links the graph to a graph prototype from
which it was created.
filter object Return only those results that exactly match the given filter.
Accepts an object, where the keys are property names, and the values
are either a single value or an array of values to match against.
Return values
Examples
Retrieve all graphs from host ”10107” and sort them by name.
Request:
{
"jsonrpc": "2.0",
"method": "graph.get",
"params": {
"output": "extend",
"hostids": 10107,
"sortfield": "name"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"graphid": "612",
"name": "CPU jumps",
"width": "900",
"height": "200",
"yaxismin": "0",
"yaxismax": "100",
"templateid": "439",
"show_work_period": "1",
"show_triggers": "1",
"graphtype": "0",
"show_legend": "1",
"show_3d": "0",
"percent_left": "0",
"percent_right": "0",
"ymin_type": "0",
"ymax_type": "0",
"ymin_itemid": "0",
"ymax_itemid": "0",
"flags": "0"
},
{
"graphid": "613",
"name": "CPU load",
"width": "900",
"height": "200",
"yaxismin": "0",
"yaxismax": "100",
"templateid": "433",
"show_work_period": "1",
"show_triggers": "1",
"graphtype": "0",
"show_legend": "1",
"show_3d": "0",
"percent_left": "0",
"percent_right": "0",
"ymin_type": "1",
"ymax_type": "0",
"ymin_itemid": "0",
"ymax_itemid": "0",
"flags": "0"
},
{
"graphid": "614",
"name": "CPU utilization",
"width": "900",
"height": "200",
"yaxismin": "0",
"yaxismax": "100",
"templateid": "387",
"show_work_period": "1",
"show_triggers": "0",
"graphtype": "1",
"show_legend": "1",
"show_3d": "0",
"percent_left": "0",
"percent_right": "0",
"ymin_type": "1",
"ymax_type": "1",
"ymin_itemid": "0",
"ymax_itemid": "0",
"flags": "0"
},
{
"graphid": "645",
"name": "Disk space usage /",
"width": "600",
"height": "340",
"yaxismin": "0",
"yaxismax": "0",
"templateid": "0",
"show_work_period": "0",
"show_triggers": "0",
"graphtype": "2",
"show_legend": "1",
"show_3d": "1",
"percent_left": "0",
"percent_right": "0",
"ymin_type": "0",
"ymax_type": "0",
"ymin_itemid": "0",
"ymax_itemid": "0",
"flags": "4"
}
],
"id": 1
}
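Note that graph.get returns all numeric properties as strings. As an illustration only (not part of the API), the following sketch maps the graphtype codes documented above to labels for the objects returned in the example response; the sample list is a shortened copy of that response.
# Illustrative only: decode the numeric graphtype codes returned by graph.get.
GRAPH_TYPES = {"0": "normal", "1": "stacked", "2": "pie", "3": "exploded"}

graphs = [
    {"graphid": "612", "name": "CPU jumps", "graphtype": "0", "show_3d": "0"},
    {"graphid": "614", "name": "CPU utilization", "graphtype": "1", "show_3d": "0"},
    {"graphid": "645", "name": "Disk space usage /", "graphtype": "2", "show_3d": "1"},
]

for graph in graphs:
    layout = GRAPH_TYPES.get(graph["graphtype"], "unknown")
    in_3d = " (3D)" if graph["show_3d"] == "1" else ""
    print(f'{graph["name"]}: {layout}{in_3d}')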
See also
• Discovery rule
• Graph item
• Item
• Host
• Host group
• Template
• Template group
Source
CGraph::get() in ui/include/classes/api/services/CGraph.php.
graph.update
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
Additionally to the standard graph properties, the method accepts the following parameters.
gitems array Graph items to replace existing graph items. If a graph item has the
gitemid property defined it will be updated, otherwise a new graph
item will be created.
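For illustration (the IDs below are hypothetical), the following sketch builds a graph.update parameter object in which the first graph item carries a gitemid and is therefore updated in place, while the second has no gitemid and is created; because the list replaces the existing graph items, items left out of it are removed.
import json

# Hypothetical IDs for illustration only.
params = {
    "graphid": "652",
    "gitems": [
        # Has gitemid - this existing graph item is updated (new color).
        {"gitemid": "1242", "itemid": "22828", "color": "FF0000"},
        # No gitemid - a new graph item is created for this item.
        {"itemid": "22830", "color": "3333FF"},
    ],
}
print(json.dumps({"jsonrpc": "2.0", "method": "graph.update", "params": params, "id": 1}, indent=4))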
Return values
(object) Returns an object containing the IDs of the updated graphs under the graphids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "graph.update",
"params": {
"graphid": "439",
"ymax_type": 1,
"yaxismax": 100
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"graphids": [
"439"
]
},
"id": 1
}
Source
CGraph::update() in ui/include/classes/api/services/CGraph.php.
Graph item
Object references:
• Graph item
Available methods:
Graph item object
Note:
Graph items can only be modified via the graph API.
gitemid ID ID of the graph item.
Property behavior:
- read-only
color string Graph item’s draw color as a hexadecimal color code.
Property behavior:
- required for create operations
itemid ID ID of the item.
Property behavior:
- required for create operations
calc_fnc integer Value of the item that will be displayed.
Possible values:
1 - minimum value;
2 - (default) average value;
4 - maximum value;
7 - all values;
9 - last value, used only by pie and exploded graphs.
drawtype integer Draw style of the graph item.
Possible values:
0 - (default) line;
1 - filled region;
2 - bold line;
3 - dot;
4 - dashed line;
5 - gradient line.
graphid ID ID of the graph that the graph item belongs to.
sortorder integer Position of the item in the graph.
Default: starts with ”0” and increases by one with each entry.
type integer Type of graph item.
Possible values:
0 - (default) simple;
2 - graph sum, used only by pie and exploded graphs.
yaxisside integer Side of the graph where the graph item’s Y scale will be drawn.
Possible values:
0 - (default) left side;
1 - right side.
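To tie the codes above together, here is an illustrative fragment (values chosen for the example, not taken from a live setup) of graph.create parameters for a 3D exploded pie graph, where both items display their last value (calc_fnc 9) and the second item is used as the graph sum (type 2).
import json

# Illustrative parameters for a pie-type graph; itemids are placeholders.
params = {
    "name": "Disk space usage",
    "width": 600,
    "height": 340,
    "graphtype": 3,        # exploded
    "show_3d": 1,          # render in 3D
    "gitems": [
        # Used space: last value (calc_fnc 9), simple graph item (type 0).
        {"itemid": "22828", "color": "FF5555", "calc_fnc": 9, "type": 0},
        # Total space: last value, drawn as the graph sum (type 2).
        {"itemid": "22829", "color": "5555FF", "calc_fnc": 9, "type": 2},
    ],
}
print(json.dumps(params, indent=4))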
graphitem.get
Description
The method allows retrieving graph items according to the given parameters.
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
graphids ID/array Return only graph items that belong to the given graphs.
itemids ID/array Return only graph items with the given item IDs.
type integer Return only graph items with the given type.
Refer to the graph item object page for a list of supported graph item
types.
selectGraphs query Return a graphs property with an array of graphs that the item
belongs to.
sortfield string/array Sort the result by the given properties.
Return values
(integer/array) Returns either an array of objects or the count of retrieved objects, if the countOutput parameter has been used.
Examples
Retrieve all graph items used in a graph with additional information about the item and the host.
Request:
{
"jsonrpc": "2.0",
"method": "graphitem.get",
"params": {
"output": "extend",
"graphids": "387"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"gitemid": "1242",
"graphid": "387",
"itemid": "22665",
"drawtype": "1",
"sortorder": "1",
"color": "FF5555",
"yaxisside": "0",
"calc_fnc": "2",
"type": "0"
},
{
"gitemid": "1243",
"graphid": "387",
"itemid": "22668",
"drawtype": "1",
"sortorder": "2",
"color": "55FF55",
"yaxisside": "0",
"calc_fnc": "2",
"type": "0"
},
{
"gitemid": "1244",
"graphid": "387",
"itemid": "22671",
"drawtype": "1",
"sortorder": "3",
"color": "009999",
"yaxisside": "0",
"calc_fnc": "2",
"type": "0"
}
],
"id": 1
}
See also
• Graph
Source
CGraphItem::get() in ui/include/classes/api/services/CGraphItem.php.
Graph prototype
Object references:
• Graph prototype
Available methods:
Property Type Description
graphid ID ID of the graph prototype.
Property behavior:
- read-only
- required for update operations
height integer Height of the graph prototype in pixels.
Property behavior:
- required for create operations
name string Name of the graph prototype.
Property behavior:
- required for create operations
width integer Width of the graph prototype in pixels.
Property behavior:
- required for create operations
graphtype integer Graph prototype’s layout type.
Possible values:
0 - (default) normal;
1 - stacked;
2 - pie;
3 - exploded.
percent_left float Left percentile.
Default: 0.
percent_right float Right percentile.
Default: 0.
show_3d integer Whether to show discovered pie and exploded graphs in 3D.
Possible values:
0 - (default) show in 2D;
1 - show in 3D.
show_legend integer Whether to show the legend on the discovered graph.
Possible values:
0 - hide;
1 - (default) show.
show_work_period integer Whether to show the working time on the discovered graph.
Possible values:
0 - hide;
1 - (default) show.
templateid ID ID of the parent template graph prototype.
Property behavior:
- read-only
yaxismax float The fixed maximum value for the Y axis.
yaxismin float The fixed minimum value for the Y axis.
ymax_itemid ID ID of the item that is used as the maximum value for the Y axis.
Possible values:
0 - (default) calculated;
1 - fixed;
2 - item.
ymin_itemid ID ID of the item that is used as the minimum value for the Y axis.
Possible values:
0 - (default) calculated;
1 - fixed;
2 - item.
discover integer Graph prototype discovery status.
Possible values:
0 - (default) new graphs will be discovered;
1 - new graphs will not be discovered and existing graphs will be
marked as lost.
uuid string Universal unique identifier, used for linking imported graph prototypes
to already existing ones. Auto-generated, if not given.
Property behavior:
- supported if the graph prototype belongs to a template
graphprototype.create
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
gitems array Graph items to be created for the graph prototypes. Graph items can
reference both items and item prototypes, but at least one item
prototype must be present.
Parameter behavior:
- required
Return values
(object) Returns an object containing the IDs of the created graph prototypes under the graphids property. The order of the
returned IDs matches the order of the passed graph prototypes.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "graphprototype.create",
"params": {
"name": "Disk space usage {#FSNAME}",
"width": 900,
"height": 200,
"gitems": [
{
"itemid": "22828",
"color": "00AA00"
},
{
"itemid": "22829",
"color": "3333FF"
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"graphids": [
"652"
]
},
"id": 1
}
See also
• Graph item
Source
CGraphPrototype::create() in ui/include/classes/api/services/CGraphPrototype.php.
graphprototype.delete
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
(array) IDs of the graph prototypes to delete.
Return values
(object) Returns an object containing the IDs of the deleted graph prototypes under the graphids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "graphprototype.delete",
"params": [
"652",
"653"
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"graphids": [
"652",
"653"
]
},
"id": 1
}
Source
CGraphPrototype::delete() in ui/include/classes/api/services/CGraphPrototype.php.
graphprototype.get
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
discoveryids ID/array Return only graph prototypes that belong to the given discovery rules.
graphids ID/array Return only graph prototypes with the given IDs.
groupids ID/array Return only graph prototypes that belong to hosts or templates in the
given host groups or template groups.
hostids ID/array Return only graph prototypes that belong to the given hosts.
inherited boolean If set to true return only graph prototypes inherited from a template.
itemids ID/array Return only graph prototypes that contain the given item prototypes.
templated boolean If set to true return only graph prototypes that belong to templates.
templateids ID/array Return only graph prototypes that belong to the given templates.
selectDiscoveryRule query Return a discoveryRule property with the LLD rule that the graph
prototype belongs to.
selectGraphItems query Return a gitems property with the graph items used in the graph
prototype.
selectHostGroups query Return a hostgroups property with the host groups that the graph
prototype belongs to.
selectHosts query Return a hosts property with the hosts that the graph prototype
belongs to.
selectItems query Return an items property with the items and item prototypes used in
the graph prototype.
Parameter Type Description
selectTemplateGroups query Return a templategroups property with the template groups that the
graph prototype belongs to.
selectTemplates query Return a templates property with the templates that the graph
prototype belongs to.
filter object Return only those results that exactly match the given filter.
Accepts an object, where the keys are property names, and the values
are either a single value or an array of values to match against.
Return values
(integer/array) Returns either an array of objects or the count of retrieved objects, if the countOutput parameter has been used.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "graphprototype.get",
"params": {
"output": "extend",
"discoveryids": "27426"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"graphid": "1017",
"parent_itemid": "27426",
"name": "Disk space usage {#FSNAME}",
"width": "600",
"height": "340",
"yaxismin": "0.0000",
"yaxismax": "0.0000",
"templateid": "442",
"show_work_period": "0",
"show_triggers": "0",
"graphtype": "2",
"show_legend": "1",
"show_3d": "1",
"percent_left": "0.0000",
"percent_right": "0.0000",
"ymin_type": "0",
"ymax_type": "0",
"ymin_itemid": "0",
"ymax_itemid": "0",
"discover": "0"
}
],
"id": 1
}
See also
• Discovery rule
• Graph item
• Item
• Host
• Host group
• Template
• Template group
Source
CGraphPrototype::get() in ui/include/classes/api/services/CGraphPrototype.php.
graphprototype.update
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
Additionally to the standard graph prototype properties, the method accepts the following parameters.
gitems array Graph items to replace existing graph items. If a graph item has the
gitemid property defined it will be updated, otherwise a new graph
item will be created.
Return values
(object) Returns an object containing the IDs of the updated graph prototypes under the graphids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "graphprototype.update",
"params": {
"graphid": "439",
"width": 1100,
"height": 400
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"graphids": [
"439"
]
},
"id": 1
}
Source
CGraphPrototype::update() in ui/include/classes/api/services/CGraphPrototype.php.
This class is designed to work with server nodes that are part of a high availability cluster or a standalone server instance.
Object references:
Available methods:
The following object is related to operating a High availability cluster of Zabbix servers.
Note:
Nodes are created by the Zabbix server and cannot be modified via the API.
Property Type Description
status integer State of the node.
Possible values:
0 - standby;
1 - stopped manually;
2 - unavailable;
3 - active.
hanode.get
Description
Note:
This method is only available to Super admin user types. See User roles for more information.
Parameters
ha_nodeids ID/array Return only nodes with the given node IDs.
filter object Return only those results that exactly match the given filter.
Accepts an object, where the keys are property names, and the values
are either a single value or an array of values to match against.
Return values
(integer/array) Returns either an array of objects or the count of retrieved objects, if the countOutput parameter has been used.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "hanode.get",
"params": {
"preservekeys": true,
"sortfield": "status",
"sortorder": "DESC"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"ckuo7i1nw000h0sajj3l3hh8u": {
"ha_nodeid": "ckuo7i1nw000h0sajj3l3hh8u",
"name": "node-active",
"address": "192.168.1.13",
"port": "10051",
"lastaccess": "1635335704",
"status": "3"
},
"ckuo7i1nw000e0sajwfttc1mp": {
"ha_nodeid": "ckuo7i1nw000e0sajwfttc1mp",
"name": "node6",
"address": "192.168.1.10",
"port": "10053",
"lastaccess": "1635332902",
"status": "2"
},
"ckuo7i1nv000c0sajz85xcrtt": {
"ha_nodeid": "ckuo7i1nv000c0sajz85xcrtt",
"name": "node4",
"address": "192.168.1.8",
"port": "10052",
"lastaccess": "1635334214",
"status": "1"
},
"ckuo7i1nv000a0saj1fcdkeu4": {
"ha_nodeid": "ckuo7i1nv000a0saj1fcdkeu4",
"name": "node2",
"address": "192.168.1.6",
"port": "10051",
"lastaccess": "1635335705",
"status": "0"
}
},
"id": 1
}
Request:
{
"jsonrpc": "2.0",
"method": "hanode.get",
"params": {
"ha_nodeids": ["ckuo7i1nw000e0sajwfttc1mp", "ckuo7i1nv000c0sajz85xcrtt"]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"ha_nodeid": "ckuo7i1nv000c0sajz85xcrtt",
"name": "node4",
"address": "192.168.1.8",
"port": "10052",
"lastaccess": "1635334214",
"status": "1"
},
{
"ha_nodeid": "ckuo7i1nw000e0sajwfttc1mp",
"name": "node6",
"address": "192.168.1.10",
"port": "10053",
"lastaccess": "1635332902",
"status": "2"
}
],
"id": 1
}
Request:
{
"jsonrpc": "2.0",
"method": "hanode.get",
"params": {
"output": ["ha_nodeid", "address", "port"],
"filter": {
"status": 1
}
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"ha_nodeid": "ckuo7i1nw000g0sajjsjre7e3",
"address": "192.168.1.12",
"port": "10051"
},
{
"ha_nodeid": "ckuo7i1nv000c0sajz85xcrtt",
"address": "192.168.1.8",
"port": "10052"
},
{
"ha_nodeid": "ckuo7i1nv000d0sajd95y1b6x",
"address": "192.168.1.9",
"port": "10053"
}
],
"id": 1
}
Request:
{
"jsonrpc": "2.0",
"method": "hanode.get",
"params": {
"countOutput": true,
"filter": {
"status": 0
}
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": "3",
"id": 1
}
Request:
{
"jsonrpc": "2.0",
"method": "hanode.get",
"params": {
"output": ["name", "status"],
"filter": {
"address": ["192.168.1.7", "192.168.1.13"]
}
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"name": "node3",
"status": "0"
},
{
"name": "node-active",
"status": "3"
}
],
"id": 1
}
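As a small illustration (the node data is a shortened copy of the first example response), the sketch below maps the status codes listed above to labels and picks out the currently active node.
# Illustrative only: interpret hanode.get status codes.
STATUS_LABELS = {"0": "standby", "1": "stopped manually", "2": "unavailable", "3": "active"}

nodes = [
    {"name": "node-active", "address": "192.168.1.13", "status": "3"},
    {"name": "node6", "address": "192.168.1.10", "status": "2"},
    {"name": "node4", "address": "192.168.1.8", "status": "1"},
    {"name": "node2", "address": "192.168.1.6", "status": "0"},
]

for node in nodes:
    print(f'{node["name"]} ({node["address"]}): {STATUS_LABELS.get(node["status"], "unknown")}')

active = [node["name"] for node in nodes if node["status"] == "3"]
print("Active node:", active[0] if active else "none")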
Source
CHaNode::get() in ui/include/classes/api/services/CHaNode.php.
History
Object references:
• Float history
• Integer history
• String history
• Text history
• Log history
Available methods:
History object
Note:
History objects differ depending on the item’s type of information. They are created by Zabbix server and cannot be
modified via the API.
Float history
Integer history
String history
Text history
Log history
history.clear
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
(array) IDs of the items to clear history for.
Return values
(object) Returns an object containing the IDs of the cleared items under the itemids property.
Examples
Clear history
Request:
{
"jsonrpc": "2.0",
"method": "history.clear",
"params": [
"10325",
"13205"
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"itemids": [
"10325",
"13205"
]
},
"id": 1
}
Source
CHistory::clear() in ui/include/classes/api/services/CHistory.php.
history.get
Description
Attention:
This method may return historical data of a deleted entity if this data has not been removed by the housekeeper yet.
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
history integer History object types to return.
Possible values:
0 - numeric float;
1 - character;
2 - log;
3 - (default) numeric unsigned;
4 - text;
5 - binary.
hostids ID/array Return only history from the given hosts.
itemids ID/array Return only history from the given items.
time_from timestamp Return only values that have been received after or at the given time.
time_till timestamp Return only values that have been received before or at the given time.
sortfield string/array Sort the result by the given properties.
Return values
(integer/array) Returns either an array of objects or the count of retrieved objects, if the countOutput parameter has been used.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "history.get",
"params": {
"output": "extend",
"history": 0,
"itemids": "23296",
"sortfield": "clock",
"sortorder": "DESC",
"limit": 10
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"itemid": "23296",
"clock": "1351090996",
"value": "0.085",
"ns": "563157632"
},
{
"itemid": "23296",
"clock": "1351090936",
"value": "0.16",
"ns": "549216402"
},
{
"itemid": "23296",
"clock": "1351090876",
"value": "0.18",
"ns": "537418114"
},
{
"itemid": "23296",
"clock": "1351090816",
"value": "0.21",
"ns": "522659528"
},
{
"itemid": "23296",
"clock": "1351090756",
"value": "0.215",
"ns": "507809457"
},
{
"itemid": "23296",
"clock": "1351090696",
"value": "0.255",
"ns": "495509699"
},
{
"itemid": "23296",
"clock": "1351090636",
"value": "0.36",
"ns": "477708209"
},
{
"itemid": "23296",
"clock": "1351090576",
"value": "0.375",
"ns": "463251343"
},
{
"itemid": "23296",
"clock": "1351090516",
"value": "0.315",
"ns": "447947017"
},
{
"itemid": "23296",
"clock": "1351090456",
"value": "0.275",
"ns": "435307141"
}
],
"id": 1
}
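Since history.get returns clock, ns and value as strings, a client usually converts them before further processing. The sketch below is illustrative only; the sample rows are a shortened copy of the response above, and the snippet converts the values to floats, computes a simple average and formats the timestamps.
from datetime import datetime, timezone

# A shortened copy of the example response rows (float history, strings as returned).
rows = [
    {"itemid": "23296", "clock": "1351090996", "value": "0.085", "ns": "563157632"},
    {"itemid": "23296", "clock": "1351090936", "value": "0.16", "ns": "549216402"},
    {"itemid": "23296", "clock": "1351090876", "value": "0.18", "ns": "537418114"},
]

values = [float(row["value"]) for row in rows]
print("average:", sum(values) / len(values))

for row in rows:
    # clock is seconds, ns adds sub-second precision.
    ts = int(row["clock"]) + int(row["ns"]) / 1_000_000_000
    print(datetime.fromtimestamp(ts, tz=timezone.utc).isoformat(), row["value"])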
Source
CHistory::get() in ui/include/classes/api/services/CHistory.php.
history.push
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
(object/array) Item history data to push.
itemid ID ID of the item.
Parameter behavior:
- required if host and key are not set
host string Technical name of the host.
Parameter behavior:
- required if itemid is not set
key string Item key.
Parameter behavior:
- required if itemid is not set
value mixed Item value.
Parameter behavior:
- required
clock timestamp Time when the value was received.
ns integer Nanoseconds when the value was received.
Return values
Examples
Send item history data to Zabbix server for items ”10600”, ”10601”, and ”999999”.
Request:
{
"jsonrpc": "2.0",
"method": "history.push",
"params": [
{
"itemid": 10600,
"value": 0.5,
"clock": 1690891294,
"ns": 45440940
},
{
"itemid": 10600,
"value": 0.6,
"clock": 1690891295,
"ns": 312431
},
{
"itemid": 10601,
"value": "[Tue Aug 01 15:01:35 2023] [error] [client 1.2.3.4] File does not exist: /var/www/ht
},
{
"itemid": 999999,
"value": 123
}
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"response": "success",
"data": [
{
"itemid": "10600"
},
{
"itemid": "10600"
},
{
"itemid": "10601",
"error": "Item is disabled."
},
{
"error": "No permissions to referred object or it does not exist."
}
]
},
"id": 1
}
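The result of history.push mirrors the order of the pushed values, so errors have to be matched back to the input by position. The sketch below is illustrative only and reuses the request and response fragments from the example above (the log value is shortened).
# Values sent in the history.push request (shortened, log value abbreviated).
sent = [
    {"itemid": 10600, "value": 0.5},
    {"itemid": 10600, "value": 0.6},
    {"itemid": 10601, "value": "log line"},
    {"itemid": 999999, "value": 123},
]

# The "data" array from the example response, in the same order.
data = [
    {"itemid": "10600"},
    {"itemid": "10600"},
    {"itemid": "10601", "error": "Item is disabled."},
    {"error": "No permissions to referred object or it does not exist."},
]

for value, result in zip(sent, data):
    if "error" in result:
        print(f'itemid {value["itemid"]}: rejected - {result["error"]}')
    else:
        print(f'itemid {value["itemid"]}: accepted')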
See also
• Trapper items
• HTTP agent items
• Host
• Item
Source
CHistory::push() in ui/include/classes/api/services/CHistory.php.
Host
Object references:
• Host
• Host inventory
• Host tag
Available methods:
Host object
hostid ID ID of the host.
Property behavior:
- read-only
- required for update operations
host string Technical name of the host.
Property behavior:
- required for create operations
description text Description of the host.
flags integer Origin of the host.
Possible values:
0 - a plain host;
4 - a discovered host.
Property behavior:
- read-only
inventory_mode integer Host inventory population mode.
Possible values:
-1 - (default) disabled;
0 - manual;
1 - automatic.
Property Type Description
ipmi_authtype integer IPMI authentication algorithm.
Possible values:
-1 - (default) default;
0 - none;
1 - MD2;
2 - MD5;
4 - straight;
5 - OEM;
6 - RMCP+.
ipmi_password string IPMI password.
ipmi_privilege integer IPMI privilege level.
Possible values:
1 - callback;
2 - (default) user;
3 - operator;
4 - admin;
5 - OEM.
ipmi_username string IPMI username.
maintenance_from timestamp Starting time of the effective maintenance.
Property behavior:
- read-only
maintenance_status integer Effective maintenance status.
Possible values:
0 - (default) no maintenance;
1 - maintenance in effect.
Property behavior:
- read-only
maintenance_type integer Effective maintenance type.
Possible values:
0 - (default) maintenance with data collection;
1 - maintenance without data collection.
Property behavior:
- read-only
maintenanceid ID ID of the maintenance that is currently in effect on the host.
Property behavior:
- read-only
name string Visible name of the host.
Default: host property value.
monitored_by integer How the host is monitored.
Possible values:
0 - (default) Zabbix server;
1 - Proxy;
2 - Proxy group.
proxyid ID ID of the proxy that is used to monitor the host.
Property behavior:
- required if monitored_by is set to ”Proxy”
proxy_groupid ID ID of the proxy group that is used to monitor the host.
Property behavior:
- required if monitored_by is set to ”Proxy group”
Property Type Description
status integer Status and function of the host.
Possible values:
0 - (default) monitored host;
1 - unmonitored host.
tls_connect integer Connections to host.
Possible values:
1 - (default) No encryption;
2 - PSK;
4 - certificate.
tls_accept integer Connections from host.
This is a bitmask field; any combination of the possible bitmask values is acceptable (see the short sketch after this property list).
Possible bitmask values:
1 - (default) No encryption;
2 - PSK;
4 - certificate.
tls_psk_identity string PSK identity.
Property behavior:
- write-only
- required if tls_connect is set to ”PSK”, or tls_accept contains the
”PSK” bit
tls_psk string The preshared key, at least 32 hex digits.
Property behavior:
- write-only
- required if tls_connect is set to ”PSK”, or tls_accept contains the
”PSK” bit
active_available integer Host active interface availability status.
Possible values:
0 - interface status is unknown;
1 - interface is available;
2 - interface is not available.
Property behavior:
- read-only
assigned_proxyid ID ID of the proxy assigned by Zabbix server, if the host is monitored by a
proxy group.
Property behavior:
- read-only
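Because tls_accept is a bitmask, several connection types can be accepted at once. The following short sketch (illustrative only) shows how the bits documented above combine and how to test whether a given bit is set in a value returned by host.get.
# tls_accept bitmask values (same encoding as tls_connect).
NO_ENCRYPTION = 1
PSK = 2
CERTIFICATE = 4

# Accept both PSK and certificate-based connections from the host.
tls_accept = PSK | CERTIFICATE
print(tls_accept)                        # 6

# Test individual bits.
print(bool(tls_accept & PSK))            # True
print(bool(tls_accept & NO_ENCRYPTION))  # False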
Host inventory
Note:
Each property has its own unique ID number, which is used to associate host inventory fields with items.
Host tag
tag string Host tag name.
Property behavior:
- required
value string Host tag value.
automatic integer Type of host tag.
Possible values:
0 - (default) manual (tag created by user);
1 - automatic (tag created by low-level discovery).
host.create
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
Parameter Type Description
groups object/array Host groups to add the host to.
The host groups must have only the groupid property defined.
Parameter behavior:
- required
interfaces object/array Interfaces to be created for the host.
tags object/array Host tags.
templates object/array Templates to be linked to the host.
Return values
(object) Returns an object containing the IDs of the created hosts under the hostids property. The order of the returned IDs
matches the order of the passed hosts.
Examples
Creating a host
Create a host called ”Linux server” with an IP interface and tags, add it to a group, link a template to it and set the MAC addresses
in the host inventory.
Request:
{
"jsonrpc": "2.0",
"method": "host.create",
"params": {
"host": "Linux server",
"interfaces": [
{
"type": 1,
"main": 1,
"useip": 1,
"ip": "192.168.3.1",
"dns": "",
"port": "10050"
}
],
"groups": [
{
"groupid": "50"
}
],
"tags": [
{
"tag": "Host name",
"value": "Linux server"
}
],
"templates": [
{
"templateid": "20045"
}
],
"macros": [
{
"macro": "{$USER_ID}",
"value": "123321"
},
{
"macro": "{$USER_LOCATION}",
"value": "0:0:0",
"description": "latitude, longitude and altitude coordinates"
}
],
"inventory_mode": 0,
"inventory": {
"macaddress_a": "01234",
"macaddress_b": "56768"
}
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"hostids": [
"107819"
]
},
"id": 1
}
Create a host called ”SNMP host” with an SNMPv3 interface with details.
Request:
{
"jsonrpc": "2.0",
"method": "host.create",
"params": {
"host": "SNMP host",
"interfaces": [
{
"type": 2,
"main": 1,
"useip": 1,
"ip": "127.0.0.1",
"dns": "",
"port": "161",
"details": {
"version": 3,
"bulk": 0,
"securityname": "mysecurityname",
"contextname": "",
"securitylevel": 1
}
}
],
"groups": [
{
"groupid": "4"
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"hostids": [
"10658"
]
},
"id": 1
}
Create a host called ”PSK host” with PSK encryption configured. Note that the host has to be pre-configured to use PSK.
Request:
{
"jsonrpc": "2.0",
"method": "host.create",
"params": {
"host": "PSK host",
"interfaces": [
{
"type": 1,
"ip": "192.168.3.1",
"dns": "",
"port": "10050",
"useip": 1,
"main": 1
}
],
"groups": [
{
"groupid": "2"
}
],
"tls_accept": 2,
"tls_connect": 2,
"tls_psk_identity": "PSK 001",
"tls_psk": "1f87b595725ac58dd977beef14b97461a7c1045b9a1c963065002c5473194952"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"hostids": [
"10590"
]
},
"id": 1
}
Request:
{
"jsonrpc": "2.0",
"method": "host.create",
"params": {
"host": "Host monitored by proxy",
"groups": [
{
"groupid": "2"
}
],
"monitored_by": 1,
"proxyid": 1
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"hostids": [
"10591"
]
},
"id": 1
}
Request:
{
"jsonrpc": "2.0",
"method": "host.create",
"params": {
"host": "Host monitored by proxy group",
"groups": [
{
"groupid": "2"
}
],
"monitored_by": 2,
"proxy_groupid": 1
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"hostids": [
"10592"
]
},
"id": 1
}
See also
• Host group
• Template
• User macro
• Host interface
• Host inventory
• Host tag
• Proxy
• Proxy group
Source
CHost::create() in ui/include/classes/api/services/CHost.php.
host.delete
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
(array) IDs of the hosts to delete.
Return values
(object) Returns an object containing the IDs of the deleted hosts under the hostids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "host.delete",
"params": [
"13",
"32"
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"hostids": [
"13",
"32"
]
},
"id": 1
}
Source
CHost::delete() in ui/include/classes/api/services/CHost.php.
host.get
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
groupids ID/array Return only hosts that belong to the given groups.
dserviceids ID/array Return only hosts that are related to the given discovered services.
graphids ID/array Return only hosts that have the given graphs.
hostids ID/array Return only hosts with the given host IDs.
httptestids ID/array Return only hosts that have the given web checks.
interfaceids ID/array Return only hosts that use the given interfaces.
itemids ID/array Return only hosts that have the given items.
maintenanceids ID/array Return only hosts that are affected by the given maintenances.
monitored_hosts flag Return only monitored hosts.
proxyids ID/array Return only hosts that are monitored by the given proxies.
proxy_groupids ID/array Return only hosts that are monitored by the given proxy groups.
templated_hosts flag Return both hosts and templates.
templateids ID/array Return only hosts that are linked to the given templates.
triggerids ID/array Return only hosts that have the given triggers.
with_items flag Return only hosts that have items.
with_monitored_items flag Return only hosts that have enabled items.
Overrides the with_items and with_simple_graph_items parameters.
with_item_prototypes flag Return only hosts that have item prototypes.
withProblemsSuppressed boolean Return hosts with suppressed problems.
Possible values:
null - (default) all hosts;
true - only hosts with suppressed problems;
false - only hosts with unsuppressed problems.
evaltype integer Rules for tag searching.
Possible values:
0 - (default) And/Or;
2 - Or.
severities integer/array Return hosts that have only problems with given severities. Applies
only if problem object is trigger.
Parameter Type Description
tags object/array Return only hosts with given tags. Exact match by tag and
case-sensitive or case-insensitive search by tag value depending on
operator value.
[{"tag": "<tag>", "value": "<value>",
Format:
"operator": "<operator>"}, ...].
An empty array returns all hosts.
inheritedTags boolean Return hosts that have the given tags also in all of their linked templates.
Possible values:
true - linked templates must also have given tags;
false - (default) linked template tags are ignored.
selectDiscoveries query Return a discoveries property with host low-level discovery rules.
Supports count.
selectDiscoveryRule query Return a discoveryRule property with the low-level discovery rule
that created the host (from host prototype in VMware monitoring).
selectGraphs query Return a graphs property with host graphs.
Supports count.
selectHostDiscovery query Return a hostDiscovery property with host discovery object data.
Supports count.
selectInterfaces query Return an interfaces property with host interfaces.
Supports count.
selectInventory query Return an inventory property with host inventory data.
selectItems query Return an items property with host items.
Supports count.
Parameter Type Description
selectParentTemplates query Return a parentTemplates property with templates that the host is linked to.
In addition to Template object fields, it contains a link_type property - (integer) the way that the template is linked to host.
Possible values:
0 - (default) manually linked;
1 - automatically linked by LLD.
Supports count.
selectDashboards query Return a dashboards property.
Supports count.
selectTags query Return a tags property with host tags.
selectInheritedTags query Return an inheritedTags property with tags that are on all
templates which are linked to host.
selectTriggers query Return a triggers property with host triggers.
Supports count.
selectValueMaps query Return a valuemaps property with host value maps.
filter object Return only those results that exactly match the given filter.
Accepts an object, where the keys are property names, and the values
are either a single value or an array of values to match against.
search object Return results that match the given wildcard search (case-insensitive).
Accepts an object, where the keys are property names, and the values
are strings to search for. If no additional options are given, this will
perform a LIKE "%…%" search.
searchInventory object Return only hosts that have inventory data matching the given wildcard search.
Accepts an object, where the keys are property names, and the values
are strings to search for. If no additional options are given, this will
perform a LIKE "%…%" search.
Parameter Type Description
editable boolean
excludeSearch boolean
limit integer
output query
preservekeys boolean
searchByAny boolean
searchWildcardsEnabled boolean
sortorder string/array
startSearch boolean
selectGroups query This parameter is deprecated, please use selectHostGroups instead.
(deprecated)
Return a groups property with host groups data that the host belongs to.
Return values
(integer/array) Returns either an array of objects or the count of retrieved objects, if the countOutput parameter has been used.
Examples
Retrieve all data about two hosts named ”Zabbix server” and ”Linux server”.
Request:
{
"jsonrpc": "2.0",
"method": "host.get",
"params": {
"filter": {
"host": [
"Zabbix server",
"Linux server"
]
}
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"hostid": "10160",
"proxyid": "0",
"host": "Zabbix server",
"status": "0",
"ipmi_authtype": "-1",
"ipmi_privilege": "2",
"ipmi_username": "",
"ipmi_password": "",
"maintenanceid": "0",
"maintenance_status": "0",
"maintenance_type": "0",
"maintenance_from": "0",
"name": "Zabbix server",
"flags": "0",
"description": "The Zabbix monitoring server.",
"tls_connect": "1",
"tls_accept": "1",
"tls_issuer": "",
"tls_subject": "",
"proxy_groupid": "0",
"monitored_by": "0",
"inventory_mode": "1",
"active_available": "1",
"assigned_proxyid": "0"
},
{
"hostid": "10167",
"proxyid": "0",
"host": "Linux server",
"status": "0",
"ipmi_authtype": "-1",
"ipmi_privilege": "2",
"ipmi_username": "",
"ipmi_password": "",
"maintenanceid": "0",
"maintenance_status": "0",
"maintenance_type": "0",
"maintenance_from": "0",
"name": "Linux server",
"flags": "0",
"description": "",
"tls_connect": "1",
"tls_accept": "1",
"tls_issuer": "",
"tls_subject": "",
"proxy_groupid": "0",
"monitored_by": "0",
"inventory_mode": "1",
"active_available": "1",
"assigned_proxyid": "0"
}
],
"id": 1
}
Retrieve host groups that the host ”Zabbix server” is a member of.
Request:
{
"jsonrpc": "2.0",
"method": "host.get",
"params": {
"output": ["hostid"],
"selectHostGroups": "extend",
"filter": {
"host": [
"Zabbix server"
]
}
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"hostid": "10085",
"hostgroups": [
{
"groupid": "2",
"name": "Linux servers",
"flags": "0",
"uuid": "dc579cd7a1a34222933f24f52a68bcd8"
},
{
"groupid": "4",
"name": "Zabbix servers",
"flags": "0",
"uuid": "6f6799aa69e844b4b3918f779f2abf08"
}
]
}
],
"id": 1
}
Request:
{
"jsonrpc": "2.0",
"method": "host.get",
"params": {
"output": ["hostid"],
"selectParentTemplates": [
"templateid",
"name"
],
"hostids": "10084"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"hostid": "10084",
"parentTemplates": [
{
"name": "Linux",
"templateid": "10001"
},
{
"name": "Zabbix Server",
"templateid": "10047"
}
]
}
],
"id": 1
}
Retrieve hosts that have the ”10001” (Linux by Zabbix agent) template linked to them.
Request:
{
"jsonrpc": "2.0",
"method": "host.get",
"params": {
"output": ["hostid", "name"],
"templateids": "10001"
},
"id": 1
}
Response:
{
    "jsonrpc": "2.0",
    "result": [
        {
            "hostid": "10084",
            "name": "Zabbix server"
        },
        {
            "hostid": "10603",
            "name": "Host 1"
        },
        {
            "hostid": "10604",
            "name": "Host 2"
        }
    ],
    "id": 1
}
Retrieve hosts that contain ”Linux” in the host inventory ”OS” field.
Request:
{
"jsonrpc": "2.0",
"method": "host.get",
"params": {
"output": [
"host"
],
"selectInventory": [
"os"
],
"searchInventory": {
"os": "Linux"
}
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"hostid": "10084",
"host": "Zabbix server",
"inventory": {
"os": "Linux Ubuntu"
}
},
{
"hostid": "10107",
"host": "Linux server",
"inventory": {
"os": "Linux Mint"
}
}
],
"id": 1
}
Retrieve hosts that have tag ”Host name” equal to ”Linux server”.
Request:
{
"jsonrpc": "2.0",
"method": "host.get",
"params": {
"output": ["hostid"],
"selectTags": "extend",
"evaltype": 0,
"tags": [
{
"tag": "Host name",
"value": "Linux server",
"operator": 1
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"hostid": "10085",
"tags": [
{
"tag": "Host name",
"value": "Linux server"
},
{
"tag": "OS",
"value": "RHEL 7"
}
]
}
],
"id": 1
}
Retrieve hosts that have these tags not only on host level but also in their linked parent templates.
Request:
{
"jsonrpc": "2.0",
"method": "host.get",
"params": {
"output": ["name"],
"tags": [
{
"tag": "A",
"value": "1",
"operator": 1
}
],
"inheritedTags": true
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"hostid": "10623",
"name": "PC room 1"
},
{
"hostid": "10601",
"name": "Office"
}
],
"id": 1
}
Retrieve a host with tags and all tags that are linked to parent templates.
Request:
{
"jsonrpc": "2.0",
"method": "host.get",
"params": {
"output": ["name"],
"hostids": 10502,
"selectTags": ["tag", "value"],
"selectInheritedTags": ["tag", "value"]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"hostid": "10502",
"name": "Desktop",
"tags": [
{
"tag": "A",
"value": "1"
}
],
"inheritedTags": [
{
"tag": "B",
"value": "2"
}
]
}
],
"id": 1
}
Request:
{
"jsonrpc": "2.0",
"method": "host.get",
"params": {
"output": ["name"],
"severities": 5
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"hostid": "10160",
"name": "Zabbix server"
}
],
"id": 1
}
Request:
{
"jsonrpc": "2.0",
"method": "host.get",
"params": {
"output": ["name"],
"severities": [3, 4]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"hostid": "20170",
"name": "Database"
},
{
"hostid": "20183",
"name": "workstation"
}
],
"id": 1
}
See also
• Host group
• Template
• User macro
• Host interface
• Proxy
• Proxy group
Source
CHost::get() in ui/include/classes/api/services/CHost.php.
host.massadd
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
(object) Parameters containing the IDs of the hosts to update and the objects to add to all the hosts.
The method accepts the following parameters.
hosts object/array Hosts to be updated.
The hosts must have the hostid property defined.
Parameter behavior:
- required
groups object/array Host groups to add to the given hosts.
The host groups must have only the groupid property defined.
interfaces object/array Host interfaces to be created for the given hosts.
macros object/array User macros to be created for the given hosts.
templates object/array Templates to link to the given hosts.
Return values
(object) Returns an object containing the IDs of the updated hosts under the hostids property.
Examples
Adding macros
Request:
{
"jsonrpc": "2.0",
"method": "host.massadd",
"params": {
"hosts": [
{
"hostid": "10160"
},
{
"hostid": "10167"
}
],
"macros": [
{
"macro": "{$TEST1}",
"value": "MACROTEST1"
},
{
"macro": "{$TEST2}",
"value": "MACROTEST2",
"description": "Test description"
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"hostids": [
"10160",
"10167"
]
},
"id": 1
}
See also
• host.update
• Host group
• Template
• User macro
• Host interface
Source
CHost::massAdd() in ui/include/classes/api/services/CHost.php.
host.massremove
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
(object) Parameters containing the IDs of the hosts to update and the objects that should be removed.
Parameter Type Description
hostids ID/array IDs of the hosts to be updated.
Parameter behavior:
- required
groupids ID/array IDs of the host groups to remove the given hosts from.
interfaces object/array Host interfaces to remove from the given hosts.
The host interface object must have only the ip, dns and port
properties defined.
macros string/array User macros to delete from the given hosts.
templateids ID/array IDs of the templates to unlink from the given hosts.
templateids_clear ID/array IDs of the templates to unlink and clear from the given hosts.
Return values
(object) Returns an object containing the IDs of the updated hosts under the hostids property.
Examples
Unlinking templates
Unlink a template from two hosts and delete all of the templated entities.
Request:
{
"jsonrpc": "2.0",
"method": "host.massremove",
"params": {
"hostids": ["69665", "69666"],
"templateids_clear": "325"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"hostids": [
"69665",
"69666"
]
},
"id": 1
}
See also
• host.update
• User macro
• Host interface
Source
CHost::massRemove() in ui/include/classes/api/services/CHost.php.
host.massupdate
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
(object) Parameters containing the IDs of the hosts to update and the properties that should be updated.
Additionally to the standard host properties, the method accepts the following parameters.
hosts object/array Hosts to be updated.
The hosts must have the hostid property defined.
Parameter behavior:
- required
groups object/array Host groups to replace the current host groups the hosts belong to.
The host groups must have only the groupid property defined.
interfaces object/array Host interfaces to replace the current host interfaces on the given
hosts.
inventory object Host inventory properties.
Return values
(object) Returns an object containing the IDs of the updated hosts under the hostids property.
Examples
Enable monitoring of two hosts, that is, set their status to ”0”.
Request:
{
"jsonrpc": "2.0",
"method": "host.massupdate",
"params": {
"hosts": [
{
"hostid": "69665"
},
{
"hostid": "69666"
}
],
"status": 0
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"hostids": [
"69665",
"69666"
]
},
"id": 1
}
See also
• host.update
• host.massadd
• host.massremove
• Host group
• Template
• User macro
• Host interface
Source
CHost::massUpdate() in ui/include/classes/api/services/CHost.php.
host.update
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
Note, however, that updating the host’s technical name will also update the host’s visible name (if it is not given or is empty) with the
host’s technical name value.
Additionally to the standard host properties, the method accepts the following parameters.
groups object/array Host groups to replace the current host groups the host belongs to.
All host groups that are not listed in the request will be unlinked.
The host groups must have only the groupid property defined.
interfaces object/array Host interfaces to replace the current host interfaces.
All interfaces that are not listed in the request will be removed.
tags object/array Host tags to replace the current host tags.
All tags that are not listed in the request will be removed.
inventory object Host inventory properties.
macros object/array User macros to replace the current user macros.
All macros that are not listed in the request will be removed.
templates object/array Templates to replace the currently linked templates.
All templates that are not listed in the request will be only unlinked.
templates_clear object/array Templates to unlink and clear from the host.
The templates must have only the templateid property defined.
Note:
As opposed to the Zabbix frontend, when name (visible host name) is the same as host (technical host name), updating
host via API will not automatically update name. Both properties need to be updated explicitly.
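For illustration, the following sketch (the host ID and names are placeholders) builds a host.update request that changes the technical name and sets the visible name to the same value explicitly, as the note above requires.
import json

# Placeholders for illustration; update both "host" and "name" explicitly.
params = {
    "hostid": "10126",
    "host": "new-technical-name",
    "name": "new-technical-name",
}
print(json.dumps({"jsonrpc": "2.0", "method": "host.update", "params": params, "id": 1}, indent=4))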
Return values
(object) Returns an object containing the IDs of the updated hosts under the hostids property.
Examples
Enabling a host
Request:
{
"jsonrpc": "2.0",
"method": "host.update",
"params": {
"hostid": "10126",
"status": 0
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"hostids": [
"10126"
]
},
"id": 1
}
Unlinking templates
Request:
{
"jsonrpc": "2.0",
"method": "host.update",
"params": {
"hostid": "10126",
"templates_clear": [
{
"templateid": "10124"
},
{
"templateid": "10125"
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"hostids": [
"10126"
]
},
"id": 1
}
Request:
{
"jsonrpc": "2.0",
"method": "host.update",
"params": {
"hostid": "10126",
"macros": [
{
"macro": "{$PASS}",
"value": "password"
},
{
"macro": "{$DISC}",
"value": "sda",
"description": "Updated description"
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"hostids": [
"10126"
]
},
"id": 1
}
Request:
{
"jsonrpc": "2.0",
"method": "host.update",
"params": {
"hostid": "10387",
"inventory_mode": 0,
"inventory": {
"location": "Latvia, Riga"
}
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"hostids": [
"10387"
]
},
"id": 1
}
Request:
{
"jsonrpc": "2.0",
"method": "host.update",
"params": {
"hostid": "10387",
"tags": {
"tag": "OS",
"value": "RHEL 7"
}
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"hostids": [
"10387"
]
},
"id": 1
}
Convert a macro created by a discovery rule (”automatic”) to a manual macro and change its value to ”new-value”.
Request:
{
"jsonrpc": "2.0",
"method": "host.update",
"params": {
"hostid": "10387",
"macros": {
"hostmacroid": "5541",
"value": "new-value",
"automatic": "0"
}
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"hostids": [
"10387"
]
},
"id": 1
}
Update the host ”10590” to use PSK encryption only for connections from host to Zabbix server, and change the PSK identity and
PSK key. Note that the host has to be pre-configured to use PSK.
Request:
{
"jsonrpc": "2.0",
"method": "host.update",
"params": {
"hostid": "10590",
"tls_connect": 1,
"tls_accept": 2,
"tls_psk_identity": "PSK 002",
"tls_psk": "e560cb0d918d26d31b4f642181f5f570ad89a390931102e5391d08327ba434e9"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"hostids": [
"10590"
]
},
"id": 1
}
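A valid tls_psk value is a hexadecimal string of at least 32 digits. As a convenience sketch (not part of the API), the standard Python library can generate such a key before passing it to host.update; the host ID below is a placeholder.
import json
import secrets

# 32 random bytes = 64 hexadecimal digits, which satisfies the minimum of 32 hex digits.
psk = secrets.token_hex(32)

params = {
    "hostid": "10590",             # placeholder host ID
    "tls_connect": 2,              # connect to the host using PSK
    "tls_accept": 2,               # accept only PSK connections from the host
    "tls_psk_identity": "PSK 002",
    "tls_psk": psk,
}
print(json.dumps({"jsonrpc": "2.0", "method": "host.update", "params": params, "id": 1}, indent=4))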
See also
• host.massadd
• host.massupdate
• host.massremove
• Host group
• Template
• User macro
• Host interface
• Host inventory
• Host tag
• Proxy
• Proxy group
Source
CHost::update() in ui/include/classes/api/services/CHost.php.
Host group
Object references:
• Host group
Available methods:
• hostgroup.create - create new host groups
• hostgroup.delete - delete host groups
• hostgroup.get - retrieve host groups
• hostgroup.massadd - add related objects to host groups
• hostgroup.massremove - remove related objects from host groups
• hostgroup.massupdate - replace or remove related objects from host groups
• hostgroup.propagate - propagate permissions and tag filters to host groups’ subgroups
• hostgroup.update - update host groups
groupid ID ID of the host group.
Property behavior:
- read-only
- required for update operations
name string Name of the host group.
Property behavior:
- required for create operations
flags integer Origin of the host group.
Possible values:
0 - a plain host group;
4 - a discovered host group.
Property behavior:
- read-only
uuid string Universal unique identifier, used for linking imported host groups to
already existing ones. Auto-generated, if not given.
hostgroup.create
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
Return values
(object) Returns an object containing the IDs of the created host groups under the groupids property. The order of the returned
IDs matches the order of the passed host groups.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "hostgroup.create",
"params": {
"name": "Linux servers"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"groupids": [
"107819"
]
},
"id": 1
}
Source
CHostGroup::create() in ui/include/classes/api/services/CHostGroup.php.
hostgroup.delete
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
(array) IDs of the host groups to delete.
Return values
(object) Returns an object containing the IDs of the deleted host groups under the groupids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "hostgroup.delete",
"params": [
"107824",
"107825"
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"groupids": [
"107824",
"107825"
]
},
"id": 1
}
Source
CHostGroup::delete() in ui/include/classes/api/services/CHostGroup.php.
hostgroup.get
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
graphids ID/array Return only host groups that contain hosts with the given graphs.
groupids ID/array Return only host groups with the given host group IDs.
hostids ID/array Return only host groups that contain the given hosts.
maintenanceids ID/array Return only host groups that are affected by the given maintenances.
triggerids ID/array Return only host groups that contain hosts with the given triggers.
with_graphs flag Return only host groups that contain hosts with graphs.
with_graph_prototypes flag Return only host groups that contain hosts with graph prototypes.
with_hosts flag Return only host groups that contain hosts.
with_httptests flag Return only host groups that contain hosts with web checks.
with_monitored_items flag Return only host groups that contain hosts with enabled items.
Overrides the with_items and with_simple_graph_items parameters.
with_item_prototypes flag Return only host groups that contain hosts with item prototypes.
Parameter Type Description
with_monitored_triggers flag Return only host groups that contain hosts with enabled triggers. All of
the items used in the trigger must also be enabled.
with_simple_graph_items flag Return only host groups that contain hosts with numeric items.
with_triggers flag Return only host groups that contain hosts with triggers.
Supports count.
limitSelects integer Limits the number of records returned by subselects.
Return values
(integer/array) Returns either an array of objects or the count of retrieved objects, if the countOutput parameter has been used.
Examples
Retrieving data by name
Retrieve all data about two host groups named ”Zabbix servers” and ”Linux servers”.
Request:
{
"jsonrpc": "2.0",
"method": "hostgroup.get",
"params": {
"output": "extend",
"filter": {
"name": [
"Zabbix servers",
"Linux servers"
]
}
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"groupid": "2",
"name": "Linux servers",
"internal": "0"
},
{
"groupid": "4",
"name": "Zabbix servers",
"internal": "0"
}
],
"id": 1
}
See also
• Host
Source
CHostGroup::get() in ui/include/classes/api/services/CHostGroup.php.
hostgroup.massadd
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
(object) Parameters containing the IDs of the host groups to update and the objects to add to all the host groups.
The method accepts the following parameters.
Parameter Type Description
groups object/array Host groups to be updated.
The host groups must have only the groupid property defined.
Parameter behavior:
- required
hosts object/array Hosts to add to all host groups.
Return values
(object) Returns an object containing the IDs of the updated host groups under the groupids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "hostgroup.massadd",
"params": {
"groups": [
{
"groupid": "5"
},
{
"groupid": "6"
}
],
"hosts": [
{
"hostid": "30050"
},
{
"hostid": "30001"
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"groupids": [
"5",
"6"
]
},
"id": 1
}
See also
• Host
Source
CHostGroup::massAdd() in ui/include/classes/api/services/CHostGroup.php.
hostgroup.massremove
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
(object) Parameters containing the IDs of the host groups to update and the objects that should be removed.
groupids ID/array IDs of the host groups to be updated.
Parameter behavior:
- required
hostids ID/array IDs of the hosts to remove from all host groups.
Parameter behavior:
- required
Return values
(object) Returns an object containing the IDs of the updated host groups under the groupids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "hostgroup.massremove",
"params": {
"groupids": [
"5",
"6"
],
"hostids": [
"30050",
"30001"
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"groupids": [
"5",
"6"
]
},
"id": 1
}
Source
CHostGroup::massRemove() in ui/include/classes/api/services/CHostGroup.php.
hostgroup.massupdate
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
(object) Parameters containing the IDs of the host groups to update and the objects that should be updated.
groups object/array Host groups to be updated.
The host groups must have only the groupid property defined.
Parameter behavior:
- required
hosts object/array Hosts to replace the current hosts on the given host groups.
All other hosts, except the ones mentioned, will be excluded from host
groups.
Discovered hosts will not be affected.
Parameter behavior:
- required
Return values
(object) Returns an object containing the IDs of the updated host groups under the groupids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "hostgroup.massupdate",
"params": {
"groups": [
{
"groupid": "6"
}
],
"hosts": [
{
"hostid": "30050"
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"groupids": [
"6",
]
},
"id": 1
}
See also
• hostgroup.update
• hostgroup.massadd
• Host
Source
CHostGroup::massUpdate() in ui/include/classes/api/services/CHostGroup.php.
hostgroup.propagate
Description
Note:
This method is only available to Super admin user types. Permissions to call the method can be revoked in user role
settings. See User roles for more information.
Parameters
groups object/array Host groups to propagate.
The host groups must have only the groupid property defined.
Parameter behavior:
- required
permissions boolean Set to ”true” to propagate permissions.
Parameter behavior:
- required if tag_filters is not set
tag_filters boolean Set to ”true” to propagate tag filters.
Parameter behavior:
- required if permissions is not set
Return values
(object) Returns an object containing the IDs of the propagated host groups under the groupids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "hostgroup.propagate",
"params": {
"groups": [
{
"groupid": "6"
}
],
"permissions": true,
"tag_filters": true
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"groupids": [
"6",
]
},
"id": 1
}
See also
• hostgroup.update
• hostgroup.massadd
• Host
Source
CHostGroup::propagate() in ui/include/classes/api/services/CHostGroup.php.
hostgroup.update
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
Return values
(object) Returns an object containing the IDs of the updated host groups under the groupids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "hostgroup.update",
"params": {
"groupid": "7",
"name": "Linux hosts"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"groupids": [
"7"
]
},
"id": 1
}
Source
CHostGroup::update() in ui/include/classes/api/services/CHostGroup.php.
Host interface
Object references:
• Host interface
• Details
Available methods:
Attention:
Note that both ip and dns properties are required for create operations. If you do not want to use DNS, set it to an empty
string.
interfaceid ID ID of the interface.
Property behavior:
- read-only
- required for update operations
Property Type Description
available integer Availability of the host interface.
Possible values:
0 - (default) unknown;
1 - available;
2 - unavailable.
Property behavior:
- read-only
hostid ID ID of the host that the interface belongs to.
Property behavior:
- constant
- required for create operations
type integer Interface type.
Possible values:
1 - Agent;
2 - SNMP;
3 - IPMI;
4 - JMX.
Property behavior:
- required for create operations
ip string IP address used by the interface.
Property behavior:
- required for create operations
dns string DNS name used by the interface.
Property behavior:
- required for create operations
port string Port number used by the interface.
Can contain user macros.
Property behavior:
- required for create operations
useip integer Whether the connection should be made via IP.
Possible values:
0 - connect using host DNS name;
1 - connect using host IP address.
Property behavior:
- required for create operations
main integer Whether the interface is used as default on the host. Only one
interface of some type can be set as default on a host.
Possible values:
0 - not default;
1 - default.
Property behavior:
- required for create operations
details array Additional details object for interface.
Property behavior:
- required if type is set to ”SNMP”
Property Type Description
disable_until timestamp The next polling time of an unavailable host interface.
Property behavior:
- read-only
error string Error text if host interface is unavailable.
Property behavior:
- read-only
errors_from timestamp Time when host interface became unavailable.
Property behavior:
- read-only
Details
version integer SNMP interface version.
Possible values:
1 - SNMPv1;
2 - SNMPv2c;
3 - SNMPv3.
Property behavior:
- required
bulk integer Whether to use bulk SNMP requests.
Possible values:
0 - don’t use bulk requests;
1 - (default) - use bulk requests.
community string SNMP community. Used only by SNMPv1 and SNMPv2 interfaces.
Property behavior:
- required if version is set to ”SNMPv1” or ”SNMPv2c”
max_repetitions integer Max repetition value for native SNMP bulk requests
(GetBulkRequest-PDUs).
Used only for discovery[] and walk[] items in SNMPv2 and v3.
Default: 10.
securityname string SNMPv3 security name. Used only by SNMPv3 interfaces.
securitylevel integer SNMPv3 security level. Used only by SNMPv3 interfaces.
Possible values:
0 - (default) - noAuthNoPriv;
1 - authNoPriv;
2 - authPriv.
authpassphrase string SNMPv3 authentication passphrase. Used only by SNMPv3 interfaces.
privpassphrase string SNMPv3 privacy passphrase. Used only by SNMPv3 interfaces.
authprotocol integer SNMPv3 authentication protocol. Used only by SNMPv3 interfaces.
Possible values:
0 - (default) - MD5;
1 - SHA1;
2 - SHA224;
3 - SHA256;
4 - SHA384;
5 - SHA512.
Property Type Description
Possible values:
0 - (default) - DES;
1 - AES128;
2 - AES192;
3 - AES256;
4 - AES192C;
5 - AES256C.
contextname string SNMPv3 context name. Used only by SNMPv3 interfaces.
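The create examples below configure SNMPv2 details only. As an illustration of the SNMPv3-related properties described above, the following sketch creates an SNMPv3 interface with authPriv security; the host ID and the {$SNMPV3_*} macro names are assumed placeholder values, and call_api is the helper sketched at the beginning of this section.
# Illustrative only: create an SNMPv3 host interface with authPriv security.
# "10456" and the {$SNMPV3_*} macros are placeholder values.
result = call_api("hostinterface.create", {
    "hostid": "10456",
    "main": "1",
    "type": "2",        # SNMP
    "useip": "1",
    "ip": "127.0.0.1",
    "dns": "",
    "port": "161",
    "details": {
        "version": "3",                      # SNMPv3
        "bulk": "1",
        "contextname": "",
        "securityname": "{$SNMPV3_USER}",
        "securitylevel": "2",                # authPriv
        "authprotocol": "3",                 # SHA256
        "authpassphrase": "{$SNMPV3_AUTH_PASSPHRASE}",
        "privprotocol": "1",                 # AES128
        "privpassphrase": "{$SNMPV3_PRIV_PASSPHRASE}",
    },
})
print(result["interfaceids"])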
hostinterface.create
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
Return values
(object) Returns an object containing the IDs of the created host interfaces under the interfaceids property. The order of
the returned IDs matches the order of the passed host interfaces.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "hostinterface.create",
"params": {
"hostid": "30052",
"main": "0",
"type": "1",
"useip": "1",
"ip": "127.0.0.1",
"dns": "",
"port": "10050"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"interfaceids": [
"30062"
]
},
"id": 1
}
Request:
{
"jsonrpc": "2.0",
"method": "hostinterface.create",
"params": {
"hostid": "10456",
"main": "0",
"type": "2",
"useip": "1",
"ip": "127.0.0.1",
"dns": "",
"port": "1601",
"details": {
"version": "2",
"bulk": "1",
"community": "{$SNMP_COMMUNITY}"
}
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"interfaceids": [
"30063"
]
},
"id": 1
}
See also
• hostinterface.massadd
• host.massadd
Source
CHostInterface::create() in ui/include/classes/api/services/CHostInterface.php.
hostinterface.delete
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
Return values
(object) Returns an object containing the IDs of the deleted host interfaces under the interfaceids property.
Examples
Delete a host interface
Request:
{
"jsonrpc": "2.0",
"method": "hostinterface.delete",
"params": [
"30062"
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"interfaceids": [
"30062"
]
},
"id": 1
}
See also
• hostinterface.massremove
• host.massremove
Source
CHostInterface::delete() in ui/include/classes/api/services/CHostInterface.php.
hostinterface.get
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
hostids ID/array Return only host interfaces used by the given hosts.
interfaceids ID/array Return only host interfaces with the given IDs.
itemids ID/array Return only host interfaces used by the given items.
triggerids ID/array Return only host interfaces used by items in the given triggers.
selectItems query Return an items property with the items that use the interface.
Supports count.
selectHosts query Return a hosts property with an array of hosts that use the interface.
limitSelects integer Limits the number of records returned by subselects.
Parameter Type Description
Return values
Request:
{
"jsonrpc": "2.0",
"method": "hostinterface.get",
"params": {
"output": "extend",
"hostids": "30057"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"interfaceid": "50039",
"hostid": "30057",
"main": "1",
"type": "1",
"useip": "1",
"ip": "::1",
"dns": "",
"port": "10050",
"available": "0",
"error": "",
"errors_from": "0",
"disable_until": "0",
"details": []
},
{
"interfaceid": "55082",
"hostid": "30057",
"main": "0",
"type": "1",
"useip": "1",
"ip": "127.0.0.1",
"dns": "",
"port": "10051",
"available": "0",
"error": "",
"errors_from": "0",
"disable_until": "0",
"details": {
"version": "2",
"bulk": "0",
"community": "{$SNMP_COMMUNITY}"
}
}
],
"id": 1
}
See also
• Host
• Item
Source
CHostInterface::get() in ui/include/classes/api/services/CHostInterface.php.
hostinterface.massadd
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
(object) Parameters containing the host interfaces to be created on the given hosts.
The method accepts the following parameters.
Parameter behavior:
- required
hosts object/array Hosts to be updated.
Parameter behavior:
- required
Return values
(object) Returns an object containing the IDs of the created host interfaces under the interfaceids property.
Examples
Creating interfaces
Create an interface on two hosts.
Request:
{
"jsonrpc": "2.0",
"method": "hostinterface.massadd",
"params": {
"hosts": [
{
"hostid": "30050"
},
{
"hostid": "30052"
}
],
"interfaces": {
"dns": "",
"ip": "127.0.0.1",
"main": 0,
"port": "10050",
"type": 1,
"useip": 1
}
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"interfaceids": [
"30069",
"30070"
]
},
"id": 1
}
See also
• hostinterface.create
• host.massadd
• Host
Source
CHostInterface::massAdd() in ui/include/classes/api/services/CHostInterface.php.
hostinterface.massremove
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
(object) Parameters containing the IDs of the hosts to be updated and the interfaces to be removed.
Parameter Type Description
The host interface object must have only the ip, dns and port
properties defined.
Parameter behavior:
- required
hostids ID/array IDs of the hosts to be updated.
Parameter behavior:
- required
Return values
(object) Returns an object containing the IDs of the deleted host interfaces under the interfaceids property.
Examples
Removing interfaces
Request:
{
"jsonrpc": "2.0",
"method": "hostinterface.massremove",
"params": {
"hostids": [
"30050",
"30052"
],
"interfaces": {
"dns": "",
"ip": "127.0.0.1",
"port": "161"
}
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"interfaceids": [
"30069",
"30070"
]
},
"id": 1
}
See also
• hostinterface.delete
• host.massremove
Source
CHostInterface::massRemove() in ui/include/classes/api/services/CHostInterface.php.
hostinterface.replacehostinterfaces
Description
object hostinterface.replacehostinterfaces(object parameters)
This method allows replacing all host interfaces on a given host.
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
(object) Parameters containing the ID of the host to be updated and the new host interfaces.
interfaces object/array Host interfaces to replace the current host interfaces with.
Parameter behavior:
- required
hostid ID ID of the host to be updated.
Parameter behavior:
- required
Return values
(object) Returns an object containing the IDs of the created host interfaces under the interfaceids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "hostinterface.replacehostinterfaces",
"params": {
"hostid": "30052",
"interfaces": {
"dns": "",
"ip": "127.0.0.1",
"main": 1,
"port": "10050",
"type": 1,
"useip": 1
}
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"interfaceids": [
"30081"
]
},
"id": 1
}
See also
• host.update
• host.massupdate
Source
CHostInterface::replaceHostInterfaces() in ui/include/classes/api/services/CHostInterface.php.
hostinterface.update
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
Return values
(object) Returns an object containing the IDs of the updated host interfaces under the interfaceids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "hostinterface.update",
"params": {
"interfaceid": "30048",
"port": "30050"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"interfaceids": [
"30048"
]
},
"id": 1
}
Source
CHostInterface::update() in ui/include/classes/api/services/CHostInterface.php.
Host prototype
Object references:
• Host prototype
• Group link
• Group prototype
• Host prototype tag
• Custom interface
• Custom interface details
Available methods:
Property behavior:
- read-only
- required for update operations
host string Technical name of the host prototype.
Property behavior:
- required for create operations
- read-only for inherited objects
name string Visible name of the host prototype.
Property behavior:
- read-only for inherited objects
status integer Status of the host prototype.
Possible values:
0 - (default) monitored host;
1 - unmonitored host.
inventory_mode integer Host inventory population mode.
Possible values:
-1 - (default) disabled;
0 - manual;
1 - automatic.
templateid ID ID of the parent template host prototype.
Property behavior:
- read-only
discover integer Host prototype discovery status.
Possible values:
0 - (default) new hosts will be discovered;
1 - new hosts will not be discovered and existing hosts will be marked
as lost.
Property Type Description
custom_interfaces integer Source of custom interfaces for hosts created by the host prototype.
Possible values:
0 - (default) inherit interfaces from parent host;
1 - use host prototypes custom interfaces.
Property behavior:
- read-only for inherited objects
uuid string Universal unique identifier, used for linking imported host prototypes
to already existing ones. Auto-generated, if not given.
Property behavior:
- supported if the host prototype belongs to a template
Group link
The group link object links a host prototype with a host group. It has the following properties.
Group prototype
The group prototype object defines a group that will be created for a discovered host. It has the following properties.
Property behavior:
- required
Property behavior:
- required
value string Host prototype tag value.
Custom interface
Custom interfaces are supported if custom_interfaces of Host prototype object is set to ”use host prototypes custom interfaces”.
The custom interface object has the following properties.
Property Type Description
Possible values:
1 - Agent;
2 - SNMP;
3 - IPMI;
4 - JMX.
Property behavior:
- required
useip integer Whether the connection should be made via IP.
Possible values:
0 - connect using host DNS name;
1 - connect using host IP address.
Property behavior:
- required
ip string IP address used by the interface.
Can contain macros.
Property behavior:
- required if useip is set to ”connect using host IP address”
dns string DNS name used by the interface.
Can contain macros.
Property behavior:
- required if useip is set to ”connect using host DNS name”
port string Port number used by the interface.
Can contain user and LLD macros.
Property behavior:
- required
main integer Whether the interface is used as default on the host.
Only one interface of some type can be set as default on a host.
Possible values:
0 - not default;
1 - default.
Property behavior:
- required
details array Additional object for interface.
Property behavior:
- required if type is set to ”SNMP”
Property Type Description
Possible values:
1 - SNMPv1;
2 - SNMPv2c;
3 - SNMPv3.
Property behavior:
- required
bulk integer Whether to use bulk SNMP requests.
Possible values:
0 - don’t use bulk requests;
1 - (default) - use bulk requests.
community string SNMP community.
Property behavior:
- required if version is set to ”SNMPv1” or ”SNMPv2c”
max_repetitions integer Max repetition value for native SNMP bulk requests
(GetBulkRequest-PDUs).
Used only for discovery[] and walk[] items in SNMPv2 and v3.
Default: 10.
securityname string SNMPv3 security name.
Property behavior:
- supported if version is set to ”SNMPv3”
securitylevel integer SNMPv3 security level.
Possible values:
0 - (default) - noAuthNoPriv;
1 - authNoPriv;
2 - authPriv.
Property behavior:
- supported if version is set to ”SNMPv3”
authpassphrase string SNMPv3 authentication passphrase.
Property behavior:
- supported if version is set to ”SNMPv3” and securitylevel is set
to ”authNoPriv” or ”authPriv”
privpassphrase string SNMPv3 privacy passphrase.
Property behavior:
- supported if version is set to ”SNMPv3” and securitylevel is set
to ”authPriv”
authprotocol integer SNMPv3 authentication protocol.
Possible values:
0 - (default) - MD5;
1 - SHA1;
2 - SHA224;
3 - SHA256;
4 - SHA384;
5 - SHA512.
Property behavior:
- supported if version is set to ”SNMPv3” and securitylevel is set
to ”authNoPriv” or ”authPriv”
Property Type Description
Possible values:
0 - (default) - DES;
1 - AES128;
2 - AES192;
3 - AES256;
4 - AES192C;
5 - AES256C.
Property behavior:
- supported if version is set to ”SNMPv3” and securitylevel is set
to ”authPriv”
contextname string SNMPv3 context name.
Property behavior:
- supported if version is set to ”SNMPv3”
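Because custom interface ip, dns and port values can contain macros, host prototype interfaces are typically pointed at low-level discovery macros. The sketch below is illustrative only: it switches host prototype ”10092” (the ID used in the update examples further below) to custom interfaces and uses an assumed {#HV.IP} LLD macro as the address; call_api is the helper sketched at the beginning of this section.
# Illustrative only: a custom Agent interface whose address comes from an
# LLD macro. {#HV.IP} is an assumed macro name provided by the LLD rule.
result = call_api("hostprototype.update", {
    "hostid": "10092",
    "custom_interfaces": "1",   # use host prototypes custom interfaces
    "interfaces": [
        {
            "main": "1",
            "type": "1",        # Agent
            "useip": "1",
            "ip": "{#HV.IP}",
            "dns": "",
            "port": "10050",
        }
    ],
})
print(result["hostids"])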
hostprototype.create
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
Parameter behavior:
- required
ruleid ID ID of the LLD rule that the host prototype belongs to.
Parameter behavior:
- required
groupPrototypes array Group prototypes to be created for the host prototype.
macros object/array User macros to be created for the host prototype.
tags object/array Host prototype tags.
interfaces object/array Host prototype custom interfaces.
templates object/array Templates to be linked to the host prototype.
Return values
(object) Returns an object containing the IDs of the created host prototypes under the hostids property. The order of the
returned IDs matches the order of the passed host prototypes.
Examples
Create a host prototype ”{#VM.NAME}” on LLD rule ”23542” with a group prototype ”{#HV.NAME}”, tag pair ”Datacenter”:
”{#DATACENTER.NAME}” and custom SNMPv2 interface 127.0.0.1:161 with community {$SNMP_COMMUNITY}. Link it to host
group ”2”.
Request:
{
"jsonrpc": "2.0",
"method": "hostprototype.create",
"params": {
"host": "{#VM.NAME}",
"ruleid": "23542",
"custom_interfaces": "1",
"groupLinks": [
{
"groupid": "2"
}
],
"groupPrototypes": [
{
"name": "{#HV.NAME}"
}
],
"tags": [
{
"tag": "Datacenter",
"value": "{#DATACENTER.NAME}"
}
],
"interfaces": [
{
"main": "1",
"type": "2",
"useip": "1",
"ip": "127.0.0.1",
"dns": "",
"port": "161",
"details": {
"version": "2",
"bulk": "1",
"community": "{$SNMP_COMMUNITY}"
}
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"hostids": [
"10103"
]
},
"id": 1
}
See also
• Group link
• Group prototype
• Host prototype tag
• Custom interface
• User macro
Source
CHostPrototype::create() in ui/include/classes/api/services/CHostPrototype.php.
hostprototype.delete
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
Return values
(object) Returns an object containing the IDs of the deleted host prototypes under the hostids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "hostprototype.delete",
"params": [
"10103",
"10105"
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"hostids": [
"10103",
"10105"
]
},
"id": 1
}
Source
CHostPrototype::delete() in ui/include/classes/api/services/CHostPrototype.php.
hostprototype.get
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
hostids ID/array Return only host prototypes with the given IDs.
discoveryids ID/array Return only host prototypes that belong to the given LLD rules.
inherited boolean If set to true, return only host prototypes inherited from a template.
selectDiscoveryRule query Return a discoveryRule property with the LLD rule that the host
prototype belongs to.
selectInterfaces query Return an interfaces property with host prototype custom
interfaces.
selectGroupLinks query Return a groupLinks property with the group links of the host
prototype.
selectGroupPrototypes query Return a groupPrototypes property with the group prototypes of the
host prototype.
selectMacros query Return a macros property with host prototype macros.
selectParentHost query Return a parentHost property with the host that the host prototype
belongs to.
selectTags query Return a tags property with host prototype tags.
selectTemplates query Return a templates property with the templates linked to the host
prototype.
Supports count.
sortfield string/array Sort the result by the given properties.
Return values
Retrieve all host prototypes, their group links, group prototypes and tags from an LLD rule.
Request:
{
"jsonrpc": "2.0",
"method": "hostprototype.get",
"params": {
"output": "extend",
"selectInterfaces": "extend",
"selectGroupLinks": "extend",
"selectGroupPrototypes": "extend",
"selectTags": "extend",
"discoveryids": "23554"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"hostid": "10092",
"host": "{#HV.UUID}",
"name": "{#HV.UUID}",
"status": "0",
"templateid": "0",
"discover": "0",
"custom_interfaces": "1",
"inventory_mode": "-1",
"groupLinks": [
{
"group_prototypeid": "4",
"hostid": "10092",
"groupid": "7",
"templateid": "0"
}
],
"groupPrototypes": [
{
"group_prototypeid": "7",
"hostid": "10092",
"name": "{#CLUSTER.NAME}",
"templateid": "0"
}
],
"tags": [
{
"tag": "Datacenter",
"value": "{#DATACENTER.NAME}"
},
{
"tag": "Instance type",
"value": "{#INSTANCE_TYPE}"
}
],
"interfaces": [
{
"main": "1",
"type": "2",
"useip": "1",
"ip": "127.0.0.1",
"dns": "",
"port": "161",
"details": {
"version": "2",
"bulk": "1",
"community": "{$SNMP_COMMUNITY}"
}
}
]
}
],
"id": 1
}
See also
• Group link
• Group prototype
• User macro
Source
CHostPrototype::get() in ui/include/classes/api/services/CHostPrototype.php.
hostprototype.update
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
In addition to the standard host prototype properties, the method accepts the following parameters.
groupLinks array Group links to replace the current group links on the host prototype.
Parameter behavior:
- read-only for inherited objects
groupPrototypes array Group prototypes to replace the existing group prototypes on the host
prototype.
Parameter behavior:
- read-only for inherited objects
macros object/array User macros to replace the current user macros.
All macros that are not listed in the request will be removed.
tags object/array Host prototype tags to replace the current tags.
All tags that are not listed in the request will be removed.
Parameter behavior:
- read-only for inherited objects
interfaces object/array Host prototype custom interfaces to replace the current interfaces.
Custom interface object should contain all its parameters.
All interfaces that are not listed in the request will be removed.
Parameter behavior:
- supported if custom_interfaces of Host prototype object is set to
”use host prototypes custom interfaces”
- read-only for inherited objects
Parameter Type Description
Return values
(object) Returns an object containing the IDs of the updated host prototypes under the hostids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "hostprototype.update",
"params": {
"hostid": "10092",
"status": 1
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"hostids": [
"10092"
]
},
"id": 1
}
Request:
{
"jsonrpc": "2.0",
"method": "hostprototype.update",
"params": {
"hostid": "10092",
"tags": [
{
"tag": "Datacenter",
"value": "{#DATACENTER.NAME}"
},
{
"tag": "Instance type",
"value": "{#INSTANCE_TYPE}"
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"hostids": [
"10092"
]
},
"id": 1
}
Request:
{
"jsonrpc": "2.0",
"method": "hostprototype.update",
"params": {
"hostid": "10092",
"custom_interfaces": "1",
"interfaces": [
{
"main": "1",
"type": "2",
"useip": "1",
"ip": "127.0.0.1",
"dns": "",
"port": "161",
"details": {
"version": "2",
"bulk": "1",
"community": "{$SNMP_COMMUNITY}"
}
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"hostids": [
"10092"
]
},
"id": 1
}
See also
• Group link
• Group prototype
• Host prototype tag
• Custom interface
• User macro
Source
CHostPrototype::update() in ui/include/classes/api/services/CHostPrototype.php.
Housekeeping
Object references:
• Housekeeping
Available methods:
Housekeeping object
Possible values:
0 - Disable;
1 - (default) Enable.
hk_events_trigger string Trigger data storage period. Accepts seconds and time unit with suffix.
Default: 365d.
hk_events_service string Service data storage period. Accepts seconds and time unit with suffix.
Default: 1d.
hk_events_internal string Internal data storage period. Accepts seconds and time unit with suffix.
Default: 1d.
hk_events_discovery string Network discovery data storage period. Accepts seconds and time unit
with suffix.
Default: 1d.
hk_events_autoreg string Autoregistration data storage period. Accepts seconds and time unit
with suffix.
Default: 1d.
hk_services_mode integer Enable internal housekeeping for services.
Possible values:
0 - Disable;
1 - (default) Enable.
hk_services string Services data storage period. Accepts seconds and time unit with
suffix.
Default: 365d.
hk_audit_mode integer Enable internal housekeeping for audit.
Possible values:
0 - Disable;
1 - (default) Enable.
hk_audit string Audit data storage period. Accepts seconds and time unit with suffix.
Default: 31d.
hk_sessions_mode integer Enable internal housekeeping for sessions.
Possible values:
0 - Disable;
1 - (default) Enable.
Property Type Description
hk_sessions string Sessions data storage period. Accepts seconds and time unit with
suffix.
Default: 365d.
hk_history_mode integer Enable internal housekeeping for history.
Possible values:
0 - Disable;
1 - (default) Enable.
hk_history_global integer Override item history period.
Possible values:
0 - Do not override;
1 - (default) Override.
hk_history string History data storage period. Accepts seconds and time unit with suffix.
Default: 31d.
hk_trends_mode integer Enable internal housekeeping for trends.
Possible values:
0 - Disable;
1 - (default) Enable.
hk_trends_global integer Override item trend period.
Possible values:
0 - Do not override;
1 - (default) Override.
hk_trends string Trends data storage period. Accepts seconds and time unit with suffix.
Default: 365d.
db_extension string Configuration flag DB extension. If this flag is set to ”timescaledb” then
the server changes its behavior for housekeeping and item deletion.
Property behavior:
- read-only
compression_availability integer Whether data compression is supported by the database (or its
extension).
Possible values:
0 - Unavailable;
1 - Available.
Property behavior:
- read-only
compression_status integer Enable TimescaleDB compression for history and trends.
Possible values:
0 - (default) Off;
1 - On.
compress_older string Compress history and trends records older than specified period.
Accepts seconds and time unit with suffix.
Default: 7d.
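As an illustration of the compression-related properties above, the sketch below enables TimescaleDB compression and compresses records older than 14 days. It is only meaningful when db_extension is ”timescaledb” and compression is reported as available; the 14d period is an arbitrary example value, and call_api is the helper sketched at the beginning of this section.
# Illustrative only: enable compression of history and trends older than 14 days.
# Effective only on TimescaleDB with compression available (see db_extension
# and compression_availability above).
result = call_api("housekeeping.update", {
    "compression_status": "1",   # 1 - On
    "compress_older": "14d",     # accepts seconds or a time unit with suffix
})
# housekeeping.update returns the names of the updated fields.
print(result)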
housekeeping.get
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
output query This parameter, being common for all get methods, is described in detail in the reference commentary.
Return values
Request:
{
"jsonrpc": "2.0",
"method": "housekeeping.get",
"params": {
"output": "extend"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"hk_events_mode": "1",
"hk_events_trigger": "365d",
"hk_events_service": "1d",
"hk_events_internal": "1d",
"hk_events_discovery": "1d",
"hk_events_autoreg": "1d",
"hk_services_mode": "1",
"hk_services": "365d",
"hk_audit_mode": "1",
"hk_audit": "31d",
"hk_sessions_mode": "1",
"hk_sessions": "365d",
"hk_history_mode": "1",
"hk_history_global": "0",
"hk_history": "31d",
"hk_trends_mode": "1",
"hk_trends_global": "0",
"hk_trends": "365d",
"db_extension": "",
"compression_status": "0",
"compress_older": "7d"
},
"id": 1
}
Source
CHousekeeping::get() in ui/include/classes/api/services/CHousekeeping.php.
housekeeping.update
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
Request:
{
"jsonrpc": "2.0",
"method": "housekeeping.update",
"params": {
"hk_events_mode": "1",
"hk_events_trigger": "200d",
"hk_events_internal": "2d",
"hk_events_discovery": "2d"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
"hk_events_mode",
"hk_events_trigger",
"hk_events_internal",
"hk_events_discovery"
],
"id": 1
}
Source
CHousekeeping::update() in ui/include/classes/api/services/CHousekeeping.php.
Icon map
Object references:
• Icon map
• Icon mapping
Available methods:
Icon map object
Property behavior:
- read-only
- required for update operations
default_iconid ID ID of the default icon.
Property behavior:
- required for create operations
name string Name of the icon map.
Property behavior:
- required for create operations
Icon mapping
The icon mapping object defines a specific icon to be used for hosts with a certain inventory field value. It has the following
properties.
Property behavior:
- required
expression string Expression to match the inventory field against.
Property behavior:
- required
inventory_link integer ID of the host inventory field.
Property behavior:
- required
sortorder integer Position of the icon mapping in the icon map.
Property behavior:
- read-only
iconmap.create
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
(object/array) Icon maps to create.
In addition to the standard icon map properties, the method accepts the following parameters.
Parameter behavior:
- required
Return values
(object) Returns an object containing the IDs of the created icon maps under the iconmapids property. The order of the
returned IDs matches the order of the passed icon maps.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "iconmap.create",
"params": {
"name": "Type icons",
"default_iconid": "2",
"mappings": [
{
"inventory_link": 1,
"expression": "server",
"iconid": "3"
},
{
"inventory_link": 1,
"expression": "switch",
"iconid": "4"
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"iconmapids": [
"2"
]
},
"id": 1
}
See also
• Icon mapping
Source
CIconMap::create() in ui/include/classes/api/services/CIconMap.php.
iconmap.delete
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
Return values
(object) Returns an object containing the IDs of the deleted icon maps under the iconmapids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "iconmap.delete",
"params": [
"2",
"5"
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"iconmapids": [
"2",
"5"
]
},
"id": 1
}
Source
CIconMap::delete() in ui/include/classes/api/services/CIconMap.php.
iconmap.get
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
Parameter Type Description
iconmapids ID/array Return only icon maps with the given IDs.
sysmapids ID/array Return only icon maps that are used in the given maps.
selectMappings query Return a mappings property with the icon mappings used.
sortfield string/array Sort the result by the given properties.
Return values
Request:
{
"jsonrpc": "2.0",
"method": "iconmap.get",
"params": {
"iconmapids": "3",
"output": "extend",
"selectMappings": "extend"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"mappings": [
{
"iconmappingid": "3",
"iconmapid": "3",
"iconid": "6",
"inventory_link": "1",
"expression": "server",
"sortorder": "0"
},
{
"iconmappingid": "4",
"iconmapid": "3",
"iconid": "10",
"inventory_link": "1",
"expression": "switch",
"sortorder": "1"
}
],
"iconmapid": "3",
"name": "Host type icons",
"default_iconid": "2"
}
],
"id": 1
}
See also
• Icon mapping
Source
CIconMap::get() in ui/include/classes/api/services/CIconMap.php.
iconmap.update
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
In addition to the standard icon map properties, the method accepts the following parameters.
Return values
(object) Returns an object containing the IDs of the updated icon maps under the iconmapids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "iconmap.update",
"params": {
"iconmapid": "1",
"name": "OS icons"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"iconmapids": [
"1"
]
},
"id": 1
}
See also
• Icon mapping
Source
CIconMap::update() in ui/include/classes/api/services/CIconMap.php.
Image
Object references:
• Image
Available methods:
Image object
Property behavior:
- read-only
- required for update operations
name string Name of the image.
Property behavior:
- required for create operations
imagetype integer Type of image.
Possible values:
1 - (default) icon;
2 - background image.
Property behavior:
- constant
- required for create operations
Property Type Description
Property behavior:
- required for create operations
image.create
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
Return values
(object) Returns an object containing the IDs of the created images under the imageids property. The order of the returned
IDs matches the order of the passed images.
Examples
Create an image
Request:
{
"jsonrpc": "2.0",
"method": "image.create",
"params": {
"imagetype": 1,
"name": "Cloud_(24)",
"image": "iVBORw0KGgoAAAANSUhEUgAAABgAAAANCAYAAACzbK7QAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAACmAAAApgB
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"imageids": [
"188"
]
},
"id": 1
}
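The image property of the request above holds the Base64-encoded contents of an image file. For illustration, the sketch below encodes a local file before calling image.create; ”cloud.png” is a placeholder file name and call_api is the helper sketched at the beginning of this section.
import base64

# Illustrative only: read a local PNG and pass its Base64 encoding as the
# "image" property. "cloud.png" is a placeholder path.
with open("cloud.png", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("ascii")

result = call_api("image.create", {
    "imagetype": 1,        # icon
    "name": "Cloud_(24)",
    "image": encoded,
})
print(result["imageids"])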
Source
CImage::create() in ui/include/classes/api/services/CImage.php.
image.delete
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
Return values
(object) Returns an object containing the IDs of the deleted images under the imageids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "image.delete",
"params": [
"188",
"192"
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"imageids": [
"188",
"192"
]
},
"id": 1
}
Source
CImage::delete() in ui/include/classes/api/services/CImage.php.
image.get
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
The method supports the following parameters.
Return values
Retrieve an image
Request:
{
"jsonrpc": "2.0",
"method": "image.get",
"params": {
"output": "extend",
"select_image": true,
"imageids": "2"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"imageid": "2",
"imagetype": "1",
"name": "Cloud_(24)",
"image": "iVBORw0KGgoAAAANSUhEUgAAABgAAAANCAYAAACzbK7QAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAACmAAA
}
],
"id": 1
}
Source
CImage::get() in ui/include/classes/api/services/CImage.php.
image.update
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
Return values
(object) Returns an object containing the IDs of the updated images under the imageids property.
Examples
Rename image
Request:
{
"jsonrpc": "2.0",
"method": "image.update",
"params": {
"imageid": "2",
"name": "Cloud icon"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"imageids": [
"2"
]
},
"id": 1
}
Source
CImage::update() in ui/include/classes/api/services/CImage.php.
Item
Object references:
• Item
• Item tag
• Item preprocessing
Available methods:
• item.create - create new items
• item.delete - delete items
• item.get - retrieve items
• item.update - update items
Item object
Note:
Web items cannot be directly created, updated or deleted via the Zabbix API.
Property behavior:
- read-only
- required for update operations
delay string Update interval of the item.
Accepts seconds or time unit with suffix (e.g., 30s, 1m, 2h, 1d) and,
optionally, one or more custom intervals, all separated by semicolons.
Custom intervals can be a mix of flexible and scheduling intervals.
Example:
1h;wd1-5h9-18;{$Macro1}/1-7,00:00-24:00;0/6-7,12:00-24:00;{$Macr
Property behavior:
- required if type is set to ”Zabbix agent” (0), ”Simple check” (3),
”Zabbix internal” (5), ”External check” (10), ”Database monitor” (11),
”IPMI agent” (12), ”SSH agent” (13), ”TELNET agent” (14),
”Calculated” (15), ”JMX agent” (16), ”HTTP agent” (19), ”SNMP agent”
(20), ”Script” (21), ”Browser” (22), or if type is set to ”Zabbix agent
(active)” (7) and key_ does not contain ”mqtt.get”
hostid ID ID of the host or template that the item belongs to.
Property behavior:
- constant
- required for create operations
interfaceid ID ID of the item’s host interface.
Property behavior:
- required if item belongs to host and type is set to ”Zabbix agent”,
”IPMI agent”, ”JMX agent”, ”SNMP trap”, or ”SNMP agent”
- supported if item belongs to host and type is set to ”Simple check”,
”External check”, ”SSH agent”, ”TELNET agent”, or ”HTTP agent”
- read-only for discovered objects
key_ string Item key.
Property behavior:
- required for create operations
- read-only for inherited objects or discovered objects
Property Type Description
Property behavior:
- required for create operations
- read-only for inherited objects or discovered objects
name_resolved string Name of the item with resolved user macros.
Property behavior:
- read-only
type integer Type of the item.
Possible values:
0 - Zabbix agent;
2 - Zabbix trapper;
3 - Simple check;
5 - Zabbix internal;
7 - Zabbix agent (active);
9 - Web item;
10 - External check;
11 - Database monitor;
12 - IPMI agent;
13 - SSH agent;
14 - TELNET agent;
15 - Calculated;
16 - JMX agent;
17 - SNMP trap;
18 - Dependent item;
19 - HTTP agent;
20 - SNMP agent;
21 - Script;
22 - Browser.
Property behavior:
- required for create operations
- read-only for inherited objects or discovered objects
url string URL string.
Supports user macros, {HOST.IP}, {HOST.CONN}, {HOST.DNS},
{HOST.HOST}, {HOST.NAME}, {ITEM.ID}, {ITEM.KEY}.
Property behavior:
- required if type is set to ”HTTP agent”
- read-only for inherited objects or discovered objects
value_type integer Type of information of the item.
Possible values:
0 - numeric float;
1 - character;
2 - log;
3 - numeric unsigned;
4 - text;
5 - binary.
Property behavior:
- required for create operations
- read-only for inherited objects or discovered objects
Property Type Description
Property behavior:
- supported if type is set to ”HTTP agent”
- read-only for discovered objects
authtype integer Authentication method.
Property behavior:
- supported if type is set to ”SSH agent” or ”HTTP agent”
- read-only for inherited objects (if type is set to ”HTTP agent”) or
discovered objects
description string Description of the item.
Property behavior:
- read-only for discovered objects
error string Error text if there are problems updating the item value.
Property behavior:
- read-only
flags integer Origin of the item.
Possible values:
0 - a plain item;
4 - a discovered item.
Property behavior:
- read-only
follow_redirects integer Follow response redirects while polling data.
Possible values:
0 - Do not follow redirects;
1 - (default) Follow redirects.
Property behavior:
- supported if type is set to ”HTTP agent”
- read-only for inherited objects or discovered objects
headers array Array of headers that will be sent when performing an HTTP request.
Property behavior:
- supported if type is set to ”HTTP agent”
- read-only for inherited objects or discovered objects
history string A time unit of how long the history data should be stored.
Also accepts user macro.
Default: 31d.
Property behavior:
- read-only for discovered objects
Property Type Description
Property behavior:
- supported if type is set to ”HTTP agent”
- read-only for inherited objects or discovered objects
inventory_link integer ID of the host inventory field that is populated by the item.
Refer to the host inventory page for a list of supported host inventory
fields and their IDs.
Default: 0.
Property behavior:
- supported if value_type is set to ”numeric float”, ”character”,
”numeric unsigned”, or ”text”
- read-only for discovered objects
ipmi_sensor string IPMI sensor.
Property behavior:
- required if type is set to ”IPMI agent” and key_ is not set to
”ipmi.get”
- supported if type is set to ”IPMI agent”
- read-only for inherited objects or discovered objects
jmx_endpoint string JMX agent custom connection string.
Default value:
service:jmx:rmi:///jndi/rmi://{HOST.CONN}:{HOST.PORT}/jmxrmi
Property behavior:
- supported if type is set to ”JMX agent”
- read-only for discovered objects
lastclock timestamp Time when the item value was last updated.
By default, only values that fall within the last 24 hours are displayed.
You can extend this time period by changing the value of Max history
display period parameter in the Administration → General menu
section.
Property behavior:
- read-only
lastns integer Nanoseconds when the item value was last updated.
By default, only values that fall within the last 24 hours are displayed.
You can extend this time period by changing the value of Max history
display period parameter in the Administration → General menu
section.
Property behavior:
- read-only
lastvalue string Last value of the item.
By default, only values that fall within the last 24 hours are displayed.
You can extend this time period by changing the value of Max history
display period parameter in the Administration → General menu
section.
Property behavior:
- read-only
Property Type Description
Property behavior:
- supported if value_type is set to ”log”
- read-only for inherited objects or discovered objects
master_itemid ID ID of the master item.
Up to 3 levels of dependent item recursion and a maximum of 29999 dependent items are allowed.
Property behavior:
- required if type is set to ”Dependent item”
- read-only for inherited objects or discovered objects
output_format integer Should the response be converted to JSON.
Property behavior:
- supported if type is set to ”HTTP agent”
- read-only for inherited objects or discovered objects
params string Additional parameters depending on the type of the item:
- executed script for SSH agent and TELNET agent items;
- SQL query for database monitor items;
- formula for calculated items;
- the script for script and browser items.
Property behavior:
- required if type is set to ”Database monitor”, ”SSH agent”, ”TELNET
agent”, ”Calculated”, ”Script”, or ”Browser”
- read-only for inherited objects (if type is set to ”Script” or ”Browser”)
or discovered objects
parameters object/array Additional parameters if type is set to ”Script” or ”Browser”. Array of
objects with name and value properties, where name must be unique.
Property behavior:
- supported if type is set to ”Script” or ”Browser”
- read-only for inherited objects or discovered objects
password string Password for authentication.
Property behavior:
- required if type is set to ”JMX agent” and username is set
- supported if type is set to ”Simple check”, ”SSH agent”, ”TELNET
agent”, ”Database monitor”, or ”HTTP agent”
- read-only for inherited objects (if type is set to ”HTTP agent”) or
discovered objects
post_type integer Type of post data body stored in posts property.
Possible values:
0 - (default) Raw data;
2 - JSON data;
3 - XML data.
Property behavior:
- supported if type is set to ”HTTP agent”
- read-only for inherited objects or discovered objects
Property Type Description
Property behavior:
- required if type is set to ”HTTP agent” and post_type is set to
”JSON data” or ”XML data”
- supported if type is set to ”HTTP agent” and post_type is set to
”Raw data”
- read-only for inherited objects or discovered objects
prevvalue string Previous value of the item.
By default, only values that fall within the last 24 hours are displayed.
You can extend this time period by changing the value of Max history
display period parameter in the Administration → General menu
section.
Property behavior:
- read-only
privatekey string Name of the private key file.
Property behavior:
- required if type is set to ”SSH agent” and authtype is set to ”public
key”
- read-only for discovered objects
publickey string Name of the public key file.
Property behavior:
- required if type is set to ”SSH agent” and authtype is set to ”public
key”
- read-only for discovered objects
query_fields array Array of query fields that will be sent when performing an HTTP
request.
Property behavior:
- supported if type is set to ”HTTP agent”
- read-only for inherited objects or discovered objects
request_method integer Type of request method.
Possible values:
0 - (default) GET;
1 - POST;
2 - PUT;
3 - HEAD.
Property behavior:
- supported if type is set to ”HTTP agent”
- read-only for inherited objects or discovered objects
retrieve_mode integer What part of response should be stored.
Property behavior:
- supported if type is set to ”HTTP agent”
- read-only for inherited objects or discovered objects
Property Type Description
Property behavior:
- required if type is set to ”SNMP agent”
- read-only for inherited objects or discovered objects
ssl_cert_file string Public SSL Key file path.
Property behavior:
- supported if type is set to ”HTTP agent”
- read-only for inherited objects or discovered objects
ssl_key_file string Private SSL Key file path.
Property behavior:
- supported if type is set to ”HTTP agent”
- read-only for inherited objects or discovered objects
ssl_key_password string Password for SSL Key file.
Property behavior:
- supported if type is set to ”HTTP agent”
- read-only for inherited objects or discovered objects
state integer State of the item.
Possible values:
0 - (default) normal;
1 - not supported.
Property behavior:
- read-only
status integer Status of the item.
Possible values:
0 - (default) enabled item;
1 - disabled item.
status_codes string Ranges of required HTTP status codes, separated by commas.
Also supports user macros as part of comma separated list.
Example: 200,200-{$M},{$M},200-400
Property behavior:
- supported if type is set to ”HTTP agent”
- read-only for inherited objects or discovered objects
templateid ID ID of the parent template item.
Hint: Use the hostid property to specify the template that the item
belongs to.
Property behavior:
- read-only
Property Type Description
Property behavior:
- supported if type is set to ”Zabbix agent” (0), ”Simple check” (3) and key_ does not start with ”vmware.” and ”icmpping”, ”Zabbix agent (active)” (7), ”External check” (10), ”Database monitor” (11), ”SSH agent” (13), ”TELNET agent” (14), ”HTTP agent” (19), ”SNMP agent” (20) and snmp_oid starts with ”walk[” or ”get[”, ”Script” (21), ”Browser” (22)
- read-only for inherited and discovered objects
trapper_hosts string Allowed hosts.
Property behavior:
- read-only for discovered objects
- supported if type is set to ”Zabbix trapper”, or if type is set to ”HTTP
agent” and allow_traps is set to ”Allow to accept incoming data”
trends string A time unit of how long the trends data should be stored.
Also accepts user macro.
Default: 365d.
Property behavior:
- supported if value_type is set to ”numeric float” or ”numeric
unsigned”
- read-only for discovered objects
units string Value units.
Property behavior:
- supported if value_type is set to ”numeric float” or ”numeric
unsigned”
- read-only for inherited objects or discovered objects
username string Username for authentication.
Property behavior:
- required if type is set to ”SSH agent”, ”TELNET agent”, or if type is set to ”JMX agent” and password is set
- supported if type is set to ”Simple check”, ”Database monitor”, or
”HTTP agent”
- read-only for inherited objects (if type is set to ”HTTP agent”) or
discovered objects
uuid string Universal unique identifier, used for linking imported item to already
existing ones. Auto-generated, if not given.
Property behavior:
- supported if the item belongs to a template
valuemapid ID ID of the associated value map.
Property behavior:
- supported if value_type is set to ”numeric float”, ”character”, or
”numeric unsigned”
- read-only for inherited objects or discovered objects
Property Type Description
verify_host integer Whether to validate that the host name for the connection matches the
one in the host’s certificate.
Possible values:
0 - (default) Do not validate;
1 - Validate.
Property behavior:
- supported if type is set to ”HTTP agent”
- read-only for inherited objects or discovered objects
verify_peer integer Whether to validate that the host’s certificate is authentic.
Possible values:
0 - (default) Do not validate;
1 - Validate.
Property behavior:
- supported if type is set to ”HTTP agent”
- read-only for inherited objects or discovered objects
HTTP header
The query field object defines a name and value that is used to specify a URL parameter. It has the following properties:
Item tag
Item preprocessing
Property Type Description
type integer The preprocessing option type.
Possible values:
1 - Custom multiplier;
2 - Right trim;
3 - Left trim;
4 - Trim;
5 - Regular expression;
6 - Boolean to decimal;
7 - Octal to decimal;
8 - Hexadecimal to decimal;
9 - Simple change;
10 - Change per second;
11 - XML XPath;
12 - JSONPath;
13 - In range;
14 - Matches regular expression;
15 - Does not match regular expression;
16 - Check for error in JSON;
17 - Check for error in XML;
18 - Check for error using regular expression;
19 - Discard unchanged;
20 - Discard unchanged with heartbeat;
21 - JavaScript;
22 - Prometheus pattern;
23 - Prometheus to JSON;
24 - CSV to JSON;
25 - Replace;
26 - Check unsupported;
27 - XML to JSON;
28 - SNMP walk value;
29 - SNMP walk to JSON;
30 - SNMP get value.
Property behavior:
- required
params string Additional parameters used by preprocessing option.
Multiple parameters are separated by the newline (\n) character.
Property behavior:
- required if type is set to ”Custom multiplier” (1), ”Right trim” (2),
”Left trim” (3), ”Trim” (4), ”Regular expression” (5), ”XML XPath” (11),
”JSONPath” (12), ”In range” (13), ”Matches regular expression” (14),
”Does not match regular expression” (15), ”Check for error in JSON”
(16), ”Check for error in XML” (17), ”Check for error using regular
expression” (18), ”Discard unchanged with heartbeat” (20),
”JavaScript” (21), ”Prometheus pattern” (22), ”Prometheus to JSON”
(23), ”CSV to JSON” (24), ”Replace” (25), Check unsupported (26),
”SNMP walk value” (28), ”SNMP walk to JSON” (29), or ”SNMP get
value” (30)
Property Type Description
error_handler integer Action type used in case of preprocessing step failure.
Possible values:
0 - Error message is set by Zabbix server;
1 - Discard value;
2 - Set custom value;
3 - Set custom error message.
Property behavior:
- required if type is set to ”Custom multiplier” (1), ”Regular
expression” (5), ”Boolean to decimal” (6), ”Octal to decimal” (7),
”Hexadecimal to decimal” (8), ”Simple change” (9), ”Change per
second” (10), ”XML XPath” (11), ”JSONPath” (12), ”In range” (13),
”Matches regular expression” (14), ”Does not match regular
expression” (15), ”Check for error in JSON” (16), ”Check for error in
XML” (17), ”Check for error using regular expression” (18),
”Prometheus pattern” (22), ”Prometheus to JSON” (23), ”CSV to JSON”
(24), ”Check unsupported” (26), ”XML to JSON” (27), ”SNMP walk
value” (28), ”SNMP walk to JSON” (29), or ”SNMP get value” (30)
error_handler_params string Error handler parameters.
Property behavior:
- required if error_handler is set to ”Set custom value” or ”Set
custom error message”
The following parameters and error handlers are supported for each preprocessing type.
Preprocessing type - name; parameters (numbers in parentheses refer to the value notes below); supported error handlers:
9 - Simple change; no parameters; supported error handlers: 0, 1, 2, 3
10 - Change per second; no parameters; supported error handlers: 0, 1, 2, 3
11 - XML XPath; Parameter 1: path (4); supported error handlers: 0, 1, 2, 3
12 - JSONPath; Parameter 1: path (4); supported error handlers: 0, 1, 2, 3
13 - In range; Parameter 1: min (1, 6); Parameter 2: max (1, 6); supported error handlers: 0, 1, 2, 3
14 - Matches regular expression; Parameter 1: pattern (3); supported error handlers: 0, 1, 2, 3
15 - Does not match regular expression; Parameter 1: pattern (3); supported error handlers: 0, 1, 2, 3
16 - Check for error in JSON; Parameter 1: path (4); supported error handlers: 0, 1, 2, 3
17 - Check for error in XML; Parameter 1: path (4); supported error handlers: 0, 1, 2, 3
18 - Check for error using regular expression; Parameter 1: pattern (3); Parameter 2: output (2); supported error handlers: 0, 1, 2, 3
19 - Discard unchanged; no parameters; supported error handlers: none
20 - Discard unchanged with heartbeat; Parameter 1: seconds (5, 6); supported error handlers: none
21 - JavaScript; Parameter 1: script (2); supported error handlers: none
22 - Prometheus pattern; Parameter 1: pattern (6, 7); Parameter 2: value, label, function; Parameter 3: output (8, 9); supported error handlers: 0, 1, 2, 3
23 - Prometheus to JSON; Parameter 1: pattern (6, 7); supported error handlers: 0, 1, 2, 3
24 - CSV to JSON; Parameter 1: character (2); Parameter 2: character (2); Parameter 3: 0,1; supported error handlers: 0, 1, 2, 3
25 - Replace; Parameter 1: search string (2); Parameter 2: replacement (2); supported error handlers: none
26 - Check unsupported; Parameter 1: scope (1); Parameter 2: pattern (3, 6); supported error handlers: 1, 2, 3
27 - XML to JSON; no parameters; supported error handlers: 0, 1, 2, 3
28 - SNMP walk value; Parameter 1: OID (2); Parameter 2: Format (0 - Unchanged, 1 - UTF-8 from Hex-STRING, 2 - MAC from Hex-STRING, 3 - Integer from BITS); supported error handlers: 0, 1, 2, 3
29 - SNMP walk to JSON (10); Parameter 1: Field name (2); Parameter 2: OID prefix (2); Parameter 3: Format (0 - Unchanged, 1 - UTF-8 from Hex-STRING, 2 - MAC from Hex-STRING, 3 - Integer from BITS); supported error handlers: 0, 1, 2, 3
30 - SNMP get value; Parameter 1: Format (1 - UTF-8 from Hex-STRING, 2 - MAC from Hex-STRING, 3 - Integer from BITS); supported error handlers: 0, 1, 2, 3
1 - integer or floating-point number
2 - string
3 - regular expression
4 - JSONPath or XML XPath
5 - positive integer (with support of time suffixes, e.g. 30s, 1m, 2h, 1d)
6 - user macro
7 - Prometheus pattern following the syntax: <metric name>{<label name>="<label value>", ...} == <value>. Each Prometheus pattern component (metric, label name, label value and metric value) can be a user macro.
8 - Prometheus output following the syntax: <label name> (can be a user macro) if label is selected as the second parameter.
9 - One of the aggregation functions: sum, min, max, avg, count if function is selected as the second parameter.
10 - Supports multiple ”Field name,OID prefix,Format” records delimited by a new line character.
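As an illustration of how these options translate into an item's preprocessing property, the sketch below combines an ”In range” step with a custom error message and a ”Discard unchanged with heartbeat” step. The 0-100 range and the 1h heartbeat are placeholder values, the item ID is taken from the first item.create example below, and call_api is the helper sketched at the beginning of this section.
# Illustrative only: two preprocessing steps built from the options above.
# Multiple parameters of one step are separated by newline characters.
preprocessing = [
    {
        "type": 13,                  # In range
        "params": "0\n100",          # Parameter 1: min, Parameter 2: max
        "error_handler": 3,          # Set custom error message
        "error_handler_params": "Value outside the 0-100 range",
    },
    {
        "type": 20,                  # Discard unchanged with heartbeat
        "params": "1h",              # Parameter 1: heartbeat period
    },
]

# The list can be passed as the preprocessing property of item.create or
# item.update; "24758" is the item created in the first item.create example below.
result = call_api("item.update", {"itemid": "24758", "preprocessing": preprocessing})
print(result["itemids"])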
item.create
Description
Note:
Web items cannot be created via the Zabbix API.
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
Return values
(object) Returns an object containing the IDs of the created items under the itemids property. The order of the returned IDs
matches the order of the passed items.
Examples
Creating an item
Create a numeric Zabbix agent item with 2 item tags to monitor free disk space on host with ID ”30074”.
Request:
{
"jsonrpc": "2.0",
"method": "item.create",
"params": {
"name": "Free disk space on /home/joe/",
"key_": "vfs.fs.size[/home/joe/,free]",
"hostid": "30074",
"type": 0,
"value_type": 3,
"interfaceid": "30084",
"tags": [
{
"tag": "Disk usage"
},
{
"tag": "Equipment",
"value": "Workstation"
}
],
"delay": "30s"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"itemids": [
"24758"
]
},
"id": 1
}
Create a Zabbix agent item to populate the host’s ”OS” inventory field.
Request:
{
"jsonrpc": "2.0",
"method": "item.create",
"params": {
"name": "uname",
"key_": "system.uname",
"hostid": "30021",
"type": 0,
"interfaceid": "30007",
"value_type": 1,
"delay": "10s",
"inventory_link": 5
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"itemids": [
"24759"
]
},
"id": 1
}
Request:
{
"jsonrpc": "2.0",
"method": "item.create",
"params": {
"name": "Device uptime",
"key_": "sysUpTime",
"hostid": "11312",
"type": 20,
"snmp_oid": "SNMPv2-MIB::sysUpTime.0",
"value_type": 1,
"delay": "60s",
"units": "uptime",
"interfaceid": "1156",
"preprocessing": [
{
"type": 1,
"params": "0.01",
"error_handler": 1,
"error_handler_params": ""
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"itemids": [
"44210"
]
},
"id": 1
}
Create a dependent item for the master item with ID 24759. Only dependencies on the same host are allowed, therefore master
and the dependent item should have the same hostid.
Request:
{
"jsonrpc": "2.0",
"method": "item.create",
"params": {
"hostid": "30074",
"name": "Dependent test item",
"key_": "dependent.item",
"type": 18,
"master_itemid": "24759",
"value_type": 2
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"itemids": [
"44211"
]
},
"id": 1
}
Request:
{
"jsonrpc": "2.0",
"method": "item.create",
"params": {
"url":"http://127.0.0.1/http.php",
"query_fields": [
{
"name": "mode",
"value": "json"
},
{
"name": "min",
"value": "10"
},
{
"name": "max",
"value": "100"
}
],
"interfaceid": "1",
"type": 19,
"hostid": "10254",
"delay": "5s",
"key_": "json",
"name": "HTTP agent example JSON",
"value_type": 0,
"output_format": 1,
"preprocessing": [
{
"type": 12,
"params": "$.random",
"error_handler": 0,
"error_handler_params": ""
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"itemids": [
"23865"
]
},
"id": 1
}
Request:
{
"jsonrpc": "2.0",
"method": "item.create",
"params": {
"name": "Script example",
"key_": "custom.script.item",
"hostid": "12345",
"type": 21,
"value_type": 4,
"params": "var request = new HttpRequest();\nreturn request.post(\"https://postman-echo.com/post\"
"parameters": [
{
"name": "host",
"value": "{HOST.CONN}"
}
],
"timeout": "6s",
"delay": "30s"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"itemids": [
"23865"
]
},
"id": 1
}
Source
CItem::create() in ui/include/classes/api/services/CItem.php.
item.delete
Description
Note:
Web items cannot be deleted via the Zabbix API.
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
Return values
(object) Returns an object containing the IDs of the deleted items under the itemids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "item.delete",
"params": [
"22982",
"22986"
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"itemids": [
"22982",
"22986"
]
},
"id": 1
}
Source
CItem::delete() in ui/include/classes/api/services/CItem.php.
item.get
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
evaltype integer Rules for tag searching.
Possible values:
0 - (default) And/Or;
2 - Or.
tags array Return only items with given tags. Exact match by tag and
case-sensitive or case-insensitive search by tag value depending on
operator value.
Format: [{"tag": "<tag>", "value": "<value>", "operator": "<operator>"}, ...] (see the usage sketch after this parameter list).
An empty array returns all items.
Supports count.
selectGraphs query Return a graphs property with the graphs that contain the item.
Supports count.
Parameter Type Description
selectDiscoveryRule query Return a discoveryRule property with the LLD rule that created the
item.
selectItemDiscovery query Return an itemDiscovery property with the item discovery object.
The item discovery object links the item to an item prototype from
which it was created.
Accepts an object, where the keys are property names, and the values
are either a single value or an array of values to match against.
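For illustration, the usage sketch below retrieves enabled items on host ”10084” (the host ID used in the example that follows) filtered by a tag, using the tags format described above. The ”component”/”cpu” tag pair is a placeholder, and call_api is the helper sketched at the beginning of this section.
# Illustrative only: item.get with a tag filter; operator 0 (Contains) is
# assumed here, and the tag name/value are placeholders.
items = call_api("item.get", {
    "output": ["itemid", "name", "key_"],
    "hostids": "10084",
    "filter": {"status": "0"},   # enabled items only
    "evaltype": 0,               # And/Or
    "tags": [
        {"tag": "component", "value": "cpu", "operator": 0},
    ],
})
for item in items:
    print(item["itemid"], item["key_"])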
Return values
• an array of objects;
• the count of retrieved objects, if the countOutput parameter has been used.
Examples
Retrieve all items used in triggers for a specific host ID that contain the word ”system.cpu” in the item key, and sort the results by name.
Request:
{
"jsonrpc": "2.0",
"method": "item.get",
"params": {
"output": "extend",
"hostids": "10084",
"with_triggers": true,
"search": {
"key_": "system.cpu"
},
"sortfield": "name"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"itemid": "42269",
"type": "18",
"snmp_oid": "",
"hostid": "10084",
"name": "CPU utilization",
"key_": "system.cpu.util",
"delay": "0",
"history": "7d",
"trends": "365d",
"status": "0",
"value_type": "0",
"trapper_hosts": "",
"units": "%",
"logtimefmt": "",
"templateid": "42267",
"valuemapid": "0",
"params": "",
"ipmi_sensor": "",
"authtype": "0",
"username": "",
"password": "",
"publickey": "",
"privatekey": "",
"flags": "0",
"interfaceid": "0",
"description": "CPU utilization in %.",
"inventory_link": "0",
"evaltype": "0",
"jmx_endpoint": "",
"master_itemid": "42264",
"timeout": "",
"url": "",
"query_fields": [],
"posts": "",
"status_codes": "200",
"follow_redirects": "1",
"post_type": "0",
"http_proxy": "",
"headers": [],
"retrieve_mode": "0",
"request_method": "0",
"output_format": "0",
"ssl_cert_file": "",
"ssl_key_file": "",
"ssl_key_password": "",
"verify_peer": "0",
"verify_host": "0",
"allow_traps": "0",
"uuid": "",
"state": "0",
"error": "",
"parameters": [],
"lastclock": "0",
"lastns": "0",
"lastvalue": "0",
"prevvalue": "0",
"name_resolved": "CPU utilization"
},
{
"itemid": "42259",
"type": "0",
"snmp_oid": "",
"hostid": "10084",
"name": "Load average (15m avg)",
"key_": "system.cpu.load[all,avg15]",
"delay": "1m",
"history": "7d",
"trends": "365d",
"status": "0",
"value_type": "0",
"trapper_hosts": "",
"units": "",
"logtimefmt": "",
"templateid": "42219",
"valuemapid": "0",
"params": "",
"ipmi_sensor": "",
"authtype": "0",
"username": "",
"password": "",
"publickey": "",
"privatekey": "",
"flags": "0",
"interfaceid": "1",
"description": "",
"inventory_link": "0",
"evaltype": "0",
"jmx_endpoint": "",
"master_itemid": "0",
"timeout": "",
"url": "",
"query_fields": [],
"posts": "",
"status_codes": "200",
"follow_redirects": "1",
"post_type": "0",
"http_proxy": "",
"headers": [],
"retrieve_mode": "0",
"request_method": "0",
"output_format": "0",
"ssl_cert_file": "",
"ssl_key_file": "",
"ssl_key_password": "",
"verify_peer": "0",
"verify_host": "0",
"allow_traps": "0",
"uuid": "",
"state": "0",
"error": "",
"parameters": [],
"lastclock": "0",
"lastns": "0",
"lastvalue": "0",
"prevvalue": "0",
"name_resolved": "Load average (15m avg)"
},
{
"itemid": "42249",
"type": "0",
"snmp_oid": "",
"hostid": "10084",
"name": "Load average (1m avg)",
"key_": "system.cpu.load[all,avg1]",
"delay": "1m",
"history": "7d",
"trends": "365d",
"status": "0",
"value_type": "0",
"trapper_hosts": "",
"units": "",
"logtimefmt": "",
"templateid": "42209",
"valuemapid": "0",
"params": "",
"ipmi_sensor": "",
"authtype": "0",
"username": "",
"password": "",
"publickey": "",
"privatekey": "",
"flags": "0",
"interfaceid": "1",
"description": "",
"inventory_link": "0",
"evaltype": "0",
"jmx_endpoint": "",
"master_itemid": "0",
"timeout": "",
"url": "",
"query_fields": [],
"posts": "",
"status_codes": "200",
"follow_redirects": "1",
"post_type": "0",
"http_proxy": "",
"headers": [],
"retrieve_mode": "0",
"request_method": "0",
"output_format": "0",
"ssl_cert_file": "",
"ssl_key_file": "",
"ssl_key_password": "",
"verify_peer": "0",
"verify_host": "0",
"allow_traps": "0",
"uuid": "",
"state": "0",
"error": "",
"parameters": [],
"lastclock": "0",
"lastns": "0",
"lastvalue": "0",
"prevvalue": "0",
"name_resolved": "Load average (1m avg)"
},
{
"itemid": "42257",
"type": "0",
"snmp_oid": "",
"hostid": "10084",
"name": "Load average (5m avg)",
"key_": "system.cpu.load[all,avg5]",
"delay": "1m",
"history": "7d",
"trends": "365d",
"status": "0",
"value_type": "0",
"trapper_hosts": "",
"units": "",
"logtimefmt": "",
"templateid": "42217",
"valuemapid": "0",
"params": "",
"ipmi_sensor": "",
"authtype": "0",
"username": "",
"password": "",
"publickey": "",
"privatekey": "",
"flags": "0",
"interfaceid": "1",
"description": "",
"inventory_link": "0",
"evaltype": "0",
"jmx_endpoint": "",
"master_itemid": "0",
"timeout": "",
"url": "",
"query_fields": [],
"posts": "",
"status_codes": "200",
"follow_redirects": "1",
"post_type": "0",
"http_proxy": "",
"headers": [],
"retrieve_mode": "0",
"request_method": "0",
"output_format": "0",
"ssl_cert_file": "",
"ssl_key_file": "",
"ssl_key_password": "",
"verify_peer": "0",
"verify_host": "0",
"allow_traps": "0",
"uuid": "",
"state": "0",
"error": "",
"parameters": [],
"lastclock": "0",
"lastns": "0",
"lastvalue": "0",
"prevvalue": "0",
"name_resolved": "Load average (5m avg)"
},
{
"itemid": "42260",
"type": "0",
"snmp_oid": "",
"hostid": "10084",
"name": "Number of CPUs",
"key_": "system.cpu.num",
"delay": "1m",
"history": "7d",
"trends": "365d",
"status": "0",
"value_type": "3",
"trapper_hosts": "",
"units": "",
"logtimefmt": "",
"templateid": "42220",
"valuemapid": "0",
"params": "",
"ipmi_sensor": "",
"authtype": "0",
"username": "",
"password": "",
"publickey": "",
"privatekey": "",
"flags": "0",
"interfaceid": "1",
"description": "",
"inventory_link": "0",
"evaltype": "0",
"jmx_endpoint": "",
"master_itemid": "0",
"timeout": "",
"url": "",
"query_fields": [],
"posts": "",
"status_codes": "200",
"follow_redirects": "1",
"post_type": "0",
"http_proxy": "",
"headers": [],
"retrieve_mode": "0",
"request_method": "0",
"output_format": "0",
"ssl_cert_file": "",
"ssl_key_file": "",
"ssl_key_password": "",
"verify_peer": "0",
"verify_host": "0",
"allow_traps": "0",
"uuid": "",
"state": "0",
"error": "",
"parameters": [],
"lastclock": "0",
"lastns": "0",
"lastvalue": "0",
"prevvalue": "0",
"name_resolved": "Number of CPUs"
}
],
"id": 1
}
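The same query can also be scripted. The sketch below assumes a frontend URL and an API token; api_call() is a throwaway helper written for this example only, not part of any Zabbix client library.

# Sketch: run the item.get query above and print one line per item.
# api_call() is a local helper for this example; the URL and token are placeholders.
import json
import urllib.request

API_URL = "http://zabbix.example.com/zabbix/api_jsonrpc.php"
API_TOKEN = "replace-with-api-token"

def api_call(method, params):
    body = json.dumps({"jsonrpc": "2.0", "method": method, "params": params, "id": 1})
    request = urllib.request.Request(
        API_URL,
        data=body.encode("utf-8"),
        headers={
            "Content-Type": "application/json-rpc",
            "Authorization": "Bearer " + API_TOKEN,
        },
    )
    with urllib.request.urlopen(request) as response:
        reply = json.load(response)
    if "error" in reply:
        raise RuntimeError(reply["error"])
    return reply["result"]

items = api_call("item.get", {
    "output": ["itemid", "name", "key_", "lastvalue"],
    "hostids": "10084",
    "with_triggers": True,
    "search": {"key_": "system.cpu"},
    "sortfield": "name",
})
for item in items:
    print(item["itemid"], item["key_"], item["lastvalue"])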
Retrieve all dependent items from host with ID ”10116” that have the word ”apache” in the key.
Request:
{
"jsonrpc": "2.0",
"method": "item.get",
"params": {
"output": "extend",
"hostids": "10116",
"search": {
"key_": "apache"
},
"filter": {
"type": 18
}
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"itemid": "25550",
"type": "18",
"snmp_oid": "",
"hostid": "10116",
"name": "Days",
"key_": "apache.status.uptime.days",
"delay": "0",
"history": "90d",
"trends": "365d",
"status": "0",
"value_type": "3",
"trapper_hosts": "",
"units": "",
"logtimefmt": "",
"templateid": "0",
"valuemapid": "0",
"params": "",
"ipmi_sensor": "",
"authtype": "0",
"username": "",
"password": "",
"publickey": "",
"privatekey": "",
"flags": "0",
"interfaceid": "0",
"description": "",
"inventory_link": "0",
"evaltype": "0",
"jmx_endpoint": "",
"master_itemid": "25545",
"timeout": "",
"url": "",
"query_fields": [],
"posts": "",
"status_codes": "200",
"follow_redirects": "1",
"post_type": "0",
"http_proxy": "",
"headers": [],
"retrieve_mode": "0",
"request_method": "0",
"output_format": "0",
"ssl_cert_file": "",
"ssl_key_file": "",
"ssl_key_password": "",
"verify_peer": "0",
"verify_host": "0",
"allow_traps": "0",
"uuid": "",
"state": "0",
"error": "",
"parameters": [],
"lastclock": "0",
"lastns": "0",
"lastvalue": "0",
"prevvalue": "0",
"name_resolved": "Days"
},
{
"itemid": "25555",
"type": "18",
"snmp_oid": "",
"hostid": "10116",
"name": "Hours",
"key_": "apache.status.uptime.hours",
"delay": "0",
"history": "90d",
"trends": "365d",
"status": "0",
"value_type": "3",
"trapper_hosts": "",
"units": "",
"logtimefmt": "",
"templateid": "0",
"valuemapid": "0",
"params": "",
"ipmi_sensor": "",
"authtype": "0",
"username": "",
"password": "",
"publickey": "",
"privatekey": "",
"flags": "0",
"interfaceid": "0",
"description": "",
"inventory_link": "0",
"evaltype": "0",
"jmx_endpoint": "",
"master_itemid": "25545",
"timeout": "",
"url": "",
"query_fields": [],
"posts": "",
"status_codes": "200",
"follow_redirects": "1",
"post_type": "0",
"http_proxy": "",
"headers": [],
"retrieve_mode": "0",
"request_method": "0",
"output_format": "0",
"ssl_cert_file": "",
"ssl_key_file": "",
"ssl_key_password": "",
"verify_peer": "0",
"verify_host": "0",
"allow_traps": "0",
"uuid": "",
"state": "0",
"error": "",
"parameters": [],
"lastclock": "0",
"lastns": "0",
"lastvalue": "0",
"prevvalue": "0",
"name_resolved": "Hours"
}
],
"id": 1
}
Find an HTTP agent item with XML post body for a specific host ID.
Request:
{
"jsonrpc": "2.0",
"method": "item.get",
"params": {
"hostids": "10255",
"filter": {
"type": 19,
"post_type": 3
}
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"itemid": "28252",
"type": "19",
"snmp_oid": "",
"hostid": "10255",
"name": "template item",
"key_": "ti",
"delay": "30s",
"history": "90d",
"trends": "365d",
"status": "0",
"value_type": "3",
"trapper_hosts": "",
"units": "",
"logtimefmt": "",
"templateid": "0",
"valuemapid": "0",
"params": "",
"ipmi_sensor": "",
"authtype": "0",
"username": "",
"password": "",
"publickey": "",
"privatekey": "",
"flags": "0",
"interfaceid": "0",
"description": "",
"inventory_link": "0",
"evaltype": "0",
"jmx_endpoint": "",
"master_itemid": "0",
"timeout": "",
"url": "localhost",
"query_fields": [
{
"name": "mode",
"value": "xml"
}
],
"posts": "<body>\r\n<![CDATA[{$MACRO}<foo></bar>]]>\r\n</body>",
"status_codes": "200",
"follow_redirects": "0",
"post_type": "3",
"http_proxy": "",
"headers": [],
"retrieve_mode": "1",
"request_method": "3",
"output_format": "0",
"ssl_cert_file": "",
"ssl_key_file": "",
"ssl_key_password": "",
"verify_peer": "0",
"verify_host": "0",
"allow_traps": "0",
"uuid": "",
"state": "0",
"error": "",
"parameters": [],
"lastclock": "0",
"lastns": "0",
"lastvalue": "",
"prevvalue": "",
"name_resolved": "template item"
}
],
"id": 1
}
Retrieve all items and their preprocessing rules for a specific host ID.
Request:
{
"jsonrpc": "2.0",
"method": "item.get",
"params": {
"output": ["itemid", "name", "key_"],
"selectPreprocessing": "extend",
"hostids": "10254"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"itemid": "23865",
"name": "HTTP agent example JSON",
"key_": "json",
"preprocessing": [
{
"type": "12",
"params": "$.random",
"error_handler": "1",
"error_handler_params": ""
}
]
}
],
"id": 1
}
See also
• Discovery rule
• Graph
• Host
• Host interface
• Trigger
Source
CItem::get() in ui/include/classes/api/services/CItem.php.
item.update
Description
Note:
Web items cannot be updated via the Zabbix API.
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
In addition to the standard item properties, the method accepts the following parameters.
preprocessing array Item preprocessing options.
Parameter behavior:
- read-only for inherited objects or discovered objects
tags array Item tags.
Parameter behavior:
- read-only for discovered objects
Return values
(object) Returns an object containing the IDs of the updated items under the itemids property.
Examples
Enabling an item
Request:
{
"jsonrpc": "2.0",
"method": "item.update",
"params": {
"itemid": "10092",
"status": 0
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"itemids": [
"10092"
]
},
"id": 1
}
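The params value of item.update may also be an array of item objects, so several items can be enabled or disabled in one request. The sketch below only builds such an array (the item IDs are placeholders); it is then sent exactly like the request above.

# Sketch: build a "params" array that enables several items in one item.update call.
# The item IDs are placeholders.
items_to_enable = ["10092", "10093", "10094"]

params = [{"itemid": itemid, "status": 0} for itemid in items_to_enable]  # status 0 - enabled
# "params" is then sent as the params member of an item.update JSON-RPC request,
# and the response lists all updated IDs under "itemids".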
Update the name and master item ID of a dependent item. Only dependencies on the same host are allowed, therefore the master and the dependent
item must have the same hostid.
Request:
{
"jsonrpc": "2.0",
"method": "item.update",
"params": {
"name": "Dependent item updated name",
"master_itemid": "25562",
"itemid": "189019"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"itemids": [
"189019"
]
},
"id": 1
}
Request:
{
"jsonrpc": "2.0",
"method": "item.update",
"params": {
"itemid": "23856",
"allow_traps": 1
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"itemids": [
"23856"
]
},
"id": 1
}
Request:
{
"jsonrpc": "2.0",
"method": "item.update",
"params": {
"itemid": "23856",
"preprocessing": [
{
"type": 13,
"params": "\n100",
"error_handler": 1,
"error_handler_params": ""
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"itemids": [
"23856"
]
},
"id": 1
}
Update a script item with a different script and remove unnecessary parameters that were used by the previous script.
Request:
{
"jsonrpc": "2.0",
"method": "item.update",
"params": {
"itemid": "23865",
"parameters": [],
"script": "Zabbix.log(3, 'Log test');\nreturn 1;"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"itemids": [
"23865"
]
},
"id": 1
}
Source
CItem::update() in ui/include/classes/api/services/CItem.php.
Item prototype
Object references:
• Item prototype
• Item prototype tag
• Item prototype preprocessing
Available methods:
• itemprototype.create - create new item prototypes
• itemprototype.delete - delete item prototypes
• itemprototype.get - retrieve item prototypes
• itemprototype.update - update item prototypes
Property Type Description
itemid ID ID of the item prototype.
Property behavior:
- read-only
- required for update operations
delay string Update interval of the item prototype.
Accepts seconds or time unit with suffix (e.g., 30s, 1m, 2h, 1d) and,
optionally, one or more custom intervals, all separated by semicolons.
Custom intervals can be a mix of flexible and scheduling intervals.
Accepts user macros and LLD macros. If used, the value must be a
single macro. Multiple macros or macros mixed with text are not
supported. Flexible intervals may be written as two macros separated
by a forward slash (e.g., {$FLEX_INTERVAL}/{$FLEX_PERIOD}).
Example:
1h;wd1-5h9-18;{$Macro1}/1-7,00:00-24:00;0/6-7,12:00-24:00;{$Macr
Property behavior:
- required if type is set to ”Zabbix agent” (0), ”Simple check” (3),
”Zabbix internal” (5), ”External check” (10), ”Database monitor” (11),
”IPMI agent” (12), ”SSH agent” (13), ”TELNET agent” (14),
”Calculated” (15), ”JMX agent” (16), ”HTTP agent” (19), ”SNMP agent”
(20), ”Script” (21), ”Browser” (22), or if type is set to ”Zabbix agent
(active)” (7) and key_ does not contain ”mqtt.get”
hostid ID ID of the host that the item prototype belongs to.
Property behavior:
- constant
- required for create operations
interfaceid ID ID of the item prototype’s host interface.
Property behavior:
- required if item prototype belongs to host and type is set to ”Zabbix
agent”, ”IPMI agent”, ”JMX agent”, ”SNMP trap”, or ”SNMP agent”
- supported if item prototype belongs to host and type is set to
”Simple check”, ”External check”, ”SSH agent”, ”TELNET agent”, or
”HTTP agent”
key_ string Item prototype key.
Property behavior:
- required for create operations
- read-only for inherited objects
name string Name of the item prototype.
Supports user macros.
Property behavior:
- required for create operations
- read-only for inherited objects
Property Type Description
type integer Type of the item prototype.
Possible values:
0 - Zabbix agent;
2 - Zabbix trapper;
3 - Simple check;
5 - Zabbix internal;
7 - Zabbix agent (active);
10 - External check;
11 - Database monitor;
12 - IPMI agent;
13 - SSH agent;
14 - TELNET agent;
15 - Calculated;
16 - JMX agent;
17 - SNMP trap;
18 - Dependent item;
19 - HTTP agent;
20 - SNMP agent;
21 - Script;
22 - Browser.
Property behavior:
- required for create operations
- read-only for inherited objects
url string URL string.
Supports LLD macros, user macros, {HOST.IP}, {HOST.CONN},
{HOST.DNS}, {HOST.HOST}, {HOST.NAME}, {ITEM.ID}, {ITEM.KEY}.
Property behavior:
- required if type is set to ”HTTP agent”
- read-only for inherited objects
value_type integer Type of information of the item prototype.
Possible values:
0 - numeric float;
1 - character;
2 - log;
3 - numeric unsigned;
4 - text;
5 - binary.
Property behavior:
- required for create operations
- read-only for inherited objects
allow_traps integer Allow to populate value similarly to the trapper item.
Property behavior:
- supported if type is set to ”HTTP agent”
Property Type Description
authtype integer Authentication method.
Property behavior:
- supported if type is set to ”SSH agent” or ”HTTP agent”
- read-only for inherited objects (if type is set to ”HTTP agent”)
description string Description of the item prototype.
follow_redirects integer Follow response redirects while polling data.
Possible values:
0 - Do not follow redirects;
1 - (default) Follow redirects.
Property behavior:
- supported if type is set to ”HTTP agent”
- read-only for inherited objects
headers array Array of headers that will be sent when performing an HTTP request.
Property behavior:
- supported if type is set to ”HTTP agent”
- read-only for inherited objects
history string A time unit of how long the history data should be stored.
Also accepts user macro and LLD macro.
Default: 31d.
http_proxy string HTTP(S) proxy connection string.
Property behavior:
- supported if type is set to ”HTTP agent”
- read-only for inherited objects
ipmi_sensor string IPMI sensor.
Property behavior:
- required if type is set to ”IPMI agent” and key_ is not set to
”ipmi.get”
- supported if type is set to ”IPMI agent”
- read-only for inherited objects
jmx_endpoint string JMX agent custom connection string.
Default:
service:jmx:rmi:///jndi/rmi://{HOST.CONN}:{HOST.PORT}/jmxrmi
Property behavior:
- supported if type is set to ”JMX agent”
logtimefmt string Format of the time in log entries.
Property behavior:
- supported if value_type is set to ”log”
- read-only for inherited objects
Property Type Description
master_itemid ID Master item ID.
Property behavior:
- required if type is set to ”Dependent item”
- read-only for inherited objects
output_format integer Should the response be converted to JSON.
Possible values:
0 - (default) Store raw;
1 - Convert to JSON.
Property behavior:
- supported if type is set to ”HTTP agent”
- read-only for inherited objects
params string Additional parameters depending on the type of the item prototype:
- executed script for SSH agent and TELNET agent item prototypes;
- SQL query for database monitor item prototypes;
- formula for calculated item prototypes;
- the script for script and browser item prototypes.
Property behavior:
- required if type is set to ”Database monitor”, ”SSH agent”, ”TELNET
agent”, ”Calculated”, ”Script”, or ”Browser”
- read-only for inherited objects (if type is set to ”Script” or ”Browser”)
parameters object/array Additional parameters if type is set to ”Script” or ”Browser”. Array of
objects with name and value properties, where name must be unique.
Property behavior:
- supported if type is set to ”Script” or ”Browser”
- read-only for inherited objects
password string Password for authentication.
Property behavior:
- required if type is set to ”JMX agent” and username is set
- supported if type is set to ”Simple check”, ”SSH agent”, ”TELNET
agent”, ”Database monitor”, or ”HTTP agent”
- read-only for inherited objects (if type is set to ”HTTP agent”)
post_type integer Type of post data body stored in posts property.
Possible values:
0 - (default) Raw data.
2 - JSON data.
3 - XML data.
Property behavior:
- supported if type is set to ”HTTP agent”
- read-only for inherited objects
posts string HTTP(S) request body data.
Property behavior:
- required if type is set to ”HTTP agent” and post_type is set to
”JSON data” or ”XML data”
- supported if type is set to ”HTTP agent” and post_type is set to
”Raw data”
- read-only for inherited objects
Property Type Description
privatekey string Name of the private key file.
Property behavior:
- required if type is set to ”SSH agent” and authtype is set to ”public
key”
publickey string Name of the public key file.
Property behavior:
- required if type is set to ”SSH agent” and authtype is set to ”public
key”
query_fields array Array of query fields that will be sent when performing an HTTP
request.
Property behavior:
- supported if type is set to ”HTTP agent”
- read-only for inherited objects
request_method integer Type of request method.
Possible values:
0 - (default) GET;
1 - POST;
2 - PUT;
3 - HEAD.
Property behavior:
- supported if type is set to ”HTTP agent”
- read-only for inherited objects
retrieve_mode integer What part of response should be stored.
Property behavior:
- supported if type is set to ”HTTP agent”
- read-only for inherited objects
snmp_oid string SNMP OID.
Property behavior:
- required if type is set to ”SNMP agent”
- read-only for inherited objects
ssl_cert_file string Public SSL Key file path.
Property behavior:
- supported if type is set to ”HTTP agent”
- read-only for inherited objects
ssl_key_file string Private SSL Key file path.
Property behavior:
- supported if type is set to ”HTTP agent”
- read-only for inherited objects
ssl_key_password string Password for SSL Key file.
Property behavior:
- supported if type is set to ”HTTP agent”
- read-only for inherited objects
Property Type Description
status integer Status of the item prototype.
Possible values:
0 - (default) enabled item prototype;
1 - disabled item prototype;
3 - unsupported item prototype.
status_codes string Ranges of required HTTP status codes, separated by commas.
Also supports user macros or LLD macros as part of comma separated
list.
Example: 200,200-{$M},{$M},200-400
Property behavior:
- supported if type is set to ”HTTP agent”
- read-only for inherited objects
templateid ID ID of the parent template item prototype.
Property behavior:
- read-only
timeout string Item data polling request timeout.
Accepts seconds or time unit with suffix (e.g., 30s, 1m). Also accepts
user macros and LLD macros.
Property behavior:
- supported if type is set to ”Zabbix agent” (0), ”Simple check” (3) and
key_ does not start with ”vmware.” and ”icmpping”, ”Zabbix agent
(active)” (7), ”External check” (10), ”Database monitor” (11), ”SSH
agent” (13), ”TELNET agent” (14), ”HTTP agent” (19), ”SNMP agent”
(20) and snmp_oid starts with ”walk[” or ”get[”, ”Script” (21),
”Browser” (22)
- read-only for inherited objects
trapper_hosts string Allowed hosts.
Property behavior:
- supported if type is set to ”Zabbix trapper”, or if type is set to ”HTTP
agent” and allow_traps is set to ”Allow to accept incoming data”
trends string A time unit of how long the trends data should be stored.
Also accepts user macro and LLD macro.
Default: 365d.
Property behavior:
- supported if value_type is set to ”numeric float” or ”numeric
unsigned”
units string Value units.
Property behavior:
- supported if value_type is set to ”numeric float” or ”numeric
unsigned”
- read-only for inherited objects
username string Username for authentication.
Property behavior:
- required if type is set to ”SSH agent” or ”TELNET agent”, or if type is
set to ”JMX agent” and password is set
- supported if type is set to ”Simple check”, ”Database monitor”, or
”HTTP agent”
- read-only for inherited objects (if type is set to ”HTTP agent”)
Property Type Description
uuid string Universal unique identifier, used for linking imported item prototypes
to already existing ones. Auto-generated, if not given.
Property behavior:
- supported if the item prototype belongs to a template
valuemapid ID ID of the associated value map.
Property behavior:
- supported if value_type is set to ”numeric float”, ”character”, or
”numeric unsigned”
- read-only for inherited objects
verify_host integer Whether to validate that the host name for the connection matches the
one in the host’s certificate.
Possible values:
0 - (default) Do not validate;
1 - Validate.
Property behavior:
- supported if type is set to ”HTTP agent”
- read-only for inherited objects
verify_peer integer Whether to validate that the host’s certificate is authentic.
Possible values:
0 - (default) Do not validate;
1 - Validate.
Property behavior:
- supported if type is set to ”HTTP agent”
- read-only for inherited objects
discover integer Item prototype discovery status.
Possible values:
0 - (default) new items will be discovered;
1 - new items will not be discovered and existing items will be marked
as lost.
HTTP header
The HTTP header object defines a header that will be sent when performing an HTTP request. It has name and value string properties.
Query field
The query field object defines a name and value that is used to specify a URL parameter. It has name and value string properties.
Item prototype tag
The item prototype tag object has the following properties:
Property Type Description
tag string Item prototype tag name.
Property behavior:
- required
value string Item prototype tag value.
Item prototype preprocessing
The item prototype preprocessing object has the following properties:
Property Type Description
type integer The preprocessing option type.
Possible values:
1 - Custom multiplier;
2 - Right trim;
3 - Left trim;
4 - Trim;
5 - Regular expression;
6 - Boolean to decimal;
7 - Octal to decimal;
8 - Hexadecimal to decimal;
9 - Simple change;
10 - Change per second;
11 - XML XPath;
12 - JSONPath;
13 - In range;
14 - Matches regular expression;
15 - Does not match regular expression;
16 - Check for error in JSON;
17 - Check for error in XML;
18 - Check for error using regular expression;
19 - Discard unchanged;
20 - Discard unchanged with heartbeat;
21 - JavaScript;
22 - Prometheus pattern;
23 - Prometheus to JSON;
24 - CSV to JSON;
25 - Replace;
26 - Check unsupported;
27 - XML to JSON;
28 - SNMP walk value;
29 - SNMP walk to JSON;
30 - SNMP get value.
Property behavior:
- required
Property Type Description
params string Additional parameters used by preprocessing option. Multiple
parameters are separated by the newline (\n) character.
Property behavior:
- required if type is set to ”Custom multiplier” (1), ”Right trim” (2),
”Left trim” (3), ”Trim” (4), ”Regular expression” (5), ”XML XPath” (11),
”JSONPath” (12), ”In range” (13), ”Matches regular expression” (14),
”Does not match regular expression” (15), ”Check for error in JSON”
(16), ”Check for error in XML” (17), ”Check for error using regular
expression” (18), ”Discard unchanged with heartbeat” (20),
”JavaScript” (21), ”Prometheus pattern” (22), ”Prometheus to JSON”
(23), ”CSV to JSON” (24), ”Replace” (25), ”Check unsupported” (26),
”SNMP walk value” (28), ”SNMP walk to JSON” (29), or ”SNMP get
value” (30)
error_handler integer Action type used in case of preprocessing step failure.
Possible values:
0 - Error message is set by Zabbix server;
1 - Discard value;
2 - Set custom value;
3 - Set custom error message.
Property behavior:
- required if type is set to ”Custom multiplier” (1), ”Regular
expression” (5), ”Boolean to decimal” (6), ”Octal to decimal” (7),
”Hexadecimal to decimal” (8), ”Simple change” (9), ”Change per
second” (10), ”XML XPath” (11), ”JSONPath” (12), ”In range” (13),
”Matches regular expression” (14), ”Does not match regular
expression” (15), ”Check for error in JSON” (16), ”Check for error in
XML” (17), ”Check for error using regular expression” (18),
”Prometheus pattern” (22), ”Prometheus to JSON” (23), ”CSV to JSON”
(24), ”Check unsupported” (26), ”XML to JSON” (27), ”SNMP walk
value” (28), ”SNMP walk to JSON” (29), or ”SNMP get value” (30)
error_handler_params string Error handler parameters.
Property behavior:
- required if error_handler is set to ”Set custom value” or ”Set
custom error message”
The following parameters and error handlers are supported for each preprocessing type.
(Bracketed numbers refer to the parameter footnotes listed after the table.)
3 (Left trim): Parameter 1 - list of characters [2]
4 (Trim): Parameter 1 - list of characters [2]
5 (Regular expression): Parameter 1 - pattern [3]; Parameter 2 - output [2]; Supported error handlers - 0, 1, 2, 3
6 (Boolean to decimal): Supported error handlers - 0, 1, 2, 3
7 (Octal to decimal): Supported error handlers - 0, 1, 2, 3
8 (Hexadecimal to decimal): Supported error handlers - 0, 1, 2, 3
9 (Simple change): Supported error handlers - 0, 1, 2, 3
10 (Change per second): Supported error handlers - 0, 1, 2, 3
11 (XML XPath): Parameter 1 - path [4]; Supported error handlers - 0, 1, 2, 3
12 (JSONPath): Parameter 1 - path [4]; Supported error handlers - 0, 1, 2, 3
13 (In range): Parameter 1 - min [1, 6]; Parameter 2 - max [1, 6]; Supported error handlers - 0, 1, 2, 3
14 (Matches regular expression): Parameter 1 - pattern [3]; Supported error handlers - 0, 1, 2, 3
15 (Does not match regular expression): Parameter 1 - pattern [3]; Supported error handlers - 0, 1, 2, 3
16 (Check for error in JSON): Parameter 1 - path [4]; Supported error handlers - 0, 1, 2, 3
17 (Check for error in XML): Parameter 1 - path [4]; Supported error handlers - 0, 1, 2, 3
18 (Check for error using regular expression): Parameter 1 - pattern [3]; Parameter 2 - output [2]; Supported error handlers - 0, 1, 2, 3
19 (Discard unchanged): no parameters
20 (Discard unchanged with heartbeat): Parameter 1 - seconds [5, 6]
21 (JavaScript): Parameter 1 - script [2]
22 (Prometheus pattern): Parameter 1 - pattern [6, 7]; Parameter 2 - value, label, function; Parameter 3 - output [8, 9]; Supported error handlers - 0, 1, 2, 3
23 (Prometheus to JSON): Parameter 1 - pattern [6, 7]; Supported error handlers - 0, 1, 2, 3
24 (CSV to JSON): Parameter 1 - character [2]; Parameter 2 - character [2]; Parameter 3 - 0,1; Supported error handlers - 0, 1, 2, 3
25 (Replace): Parameter 1 - search string [2]; Parameter 2 - replacement [2]
26 (Check unsupported): Parameter 1 - scope [1]; Parameter 2 - pattern [3, 6]; Supported error handlers - 1, 2, 3
27 (XML to JSON): Supported error handlers - 0, 1, 2, 3
28 (SNMP walk value): Parameter 1 - OID [2]; Parameter 2 - Format (0 - Unchanged, 1 - UTF-8 from Hex-STRING, 2 - MAC from Hex-STRING, 3 - Integer from BITS); Supported error handlers - 0, 1, 2, 3
29 (SNMP walk to JSON [10]): Parameter 1 - Field name [2]; Parameter 2 - OID prefix [2]; Parameter 3 - Format (0 - Unchanged, 1 - UTF-8 from Hex-STRING, 2 - MAC from Hex-STRING, 3 - Integer from BITS); Supported error handlers - 0, 1, 2, 3
30 (SNMP get value): Parameter 1 - Format (1 - UTF-8 from Hex-STRING, 2 - MAC from Hex-STRING, 3 - Integer from BITS); Supported error handlers - 0, 1, 2, 3
Parameter footnotes:
[1] integer or floating-point number
[2] string
[3] regular expression
[4] JSONPath or XML XPath
[5] positive integer (with support of time suffixes, e.g. 30s, 1m, 2h, 1d)
[6] user macro, LLD macro
[7] Prometheus pattern following the syntax: <metric name>{<label name>="<label value>", ...} == <value>. Each Prometheus pattern component (metric, label name, label value and metric value) can be user macro or LLD macro.
[8] Prometheus output following the syntax: <label name> (can be a user macro or an LLD macro) if label is selected as the second parameter.
[9] One of the aggregation functions: sum, min, max, avg, count if function is selected as the second parameter.
[10] Supports multiple ”Field name,OID prefix,Format” records delimited by a new line character.
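When a preprocessing type takes several parameters, the Parameter 1..3 values are joined into the step's single params string with newline characters (the ”In range” step in the item.update example above uses "\n100", i.e. an empty minimum and a maximum of 100). A small sketch with purely illustrative values:

# Sketch: building preprocessing steps for item/item prototype create or update calls.
# Parameters 1..3 from the table are packed into "params", separated by "\n";
# all values below are illustrative only.
in_range = {
    "type": 13,              # In range
    "params": "10\n100",     # parameter 1 - min, parameter 2 - max
    "error_handler": 1,      # discard the value on failure
    "error_handler_params": "",
}

csv_to_json = {
    "type": 24,              # CSV to JSON
    "params": ",\n\"\n1",    # delimiter, quotation character, "with header row" flag (0/1)
    "error_handler": 0,
    "error_handler_params": "",
}

preprocessing = [in_range, csv_to_json]
# "preprocessing" is then passed as the preprocessing property of
# item.create/item.update or itemprototype.create/itemprototype.update.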
itemprototype.create
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
(object/array) Item prototypes to create.
In addition to the standard item prototype properties, the method accepts the following parameters.
ruleid ID ID of the LLD rule that the item prototype belongs to.
Parameter behavior:
- required
preprocessing array Item prototype preprocessing options.
tags array Item prototype tags.
Return values
(object) Returns an object containing the IDs of the created item prototypes under the itemids property. The order of the
returned IDs matches the order of the passed item prototypes.
Examples
Create an item prototype to monitor free disk space on a discovered file system. Discovered items should be numeric Zabbix agent
items updated every 30 seconds.
Request:
{
"jsonrpc": "2.0",
"method": "itemprototype.create",
"params": {
"name": "Free disk space on {#FSNAME}",
"key_": "vfs.fs.size[{#FSNAME},free]",
"hostid": "10197",
"ruleid": "27665",
"type": 0,
"value_type": 3,
"interfaceid": "112",
"delay": "30s"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"itemids": [
"27666"
]
},
"id": 1
}
Create an item prototype using change per second and a custom multiplier as a second preprocessing step.
Request:
{
"jsonrpc": "2.0",
"method": "itemprototype.create",
"params": {
"name": "Incoming network traffic on {#IFNAME}",
"key_": "net.if.in[{#IFNAME}]",
"hostid": "10001",
"ruleid": "27665",
"type": 0,
"value_type": 3,
"delay": "60s",
"units": "bps",
"interfaceid": "1155",
"preprocessing": [
{
"type": 10,
"params": "",
"error_handler": 0,
"error_handler_params": ""
},
{
"type": 1,
"params": "8",
"error_handler": 2,
"error_handler_params": "10"
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"itemids": [
"44211"
]
},
"id": 1
}
Create a dependent item prototype for the master item prototype with ID 44211. Only dependencies on the same host (template/discovery
rule) are allowed, therefore the master and the dependent item prototype must have the same hostid and ruleid.
Request:
{
"jsonrpc": "2.0",
"method": "itemprototype.create",
"params": {
"hostid": "10001",
"ruleid": "27665",
"name": "Dependent test item prototype",
"key_": "dependent.prototype",
"type": 18,
"master_itemid": "44211",
"value_type": 3
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"itemids": [
"44212"
]
},
"id": 1
}
Create an item prototype with a URL using a user macro, query fields and custom headers.
Request:
{
"jsonrpc": "2.0",
"method": "itemprototype.create",
"params": {
"type": "19",
"hostid": "10254",
"ruleid": "28256",
"interfaceid": "2",
"name": "api item prototype example",
"key_": "api_http_item",
"value_type": 3,
"url": "{$URL_PROTOTYPE}",
"query_fields": [
{
"name": "min",
"value": "10"
},
{
"name": "max",
"value" "100"
}
],
"headers": [
{
"name": "X-Source",
"value": "api"
}
],
"delay": "35"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"itemids": [
"28305"
]
},
"id": 1
}
Request:
{
"jsonrpc": "2.0",
"method": "itemprototype.create",
"params": {
"name": "Script example",
"key_": "custom.script.itemprototype",
"hostid": "12345",
"type": 21,
"value_type": 4,
"params": "var request = new HttpRequest();\nreturn request.post(\"https://postman-echo.com/post\"
"parameters": [
{
"name": "host",
"value": "{HOST.CONN}"
}
],
"timeout": "6s",
"delay": "30s"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"itemids": [
"23865"
]
},
"id": 1
}
Source
CItemPrototype::create() in ui/include/classes/api/services/CItemPrototype.php.
itemprototype.delete
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
(array) IDs of the item prototypes to delete.
Return values
(object) Returns an object containing the IDs of the deleted item prototypes under the prototypeids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "itemprototype.delete",
"params": [
"27352",
"27356"
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"prototypeids": [
"27352",
"27356"
]
},
"id": 1
}
Source
CItemPrototype::delete() in ui/include/classes/api/services/CItemPrototype.php.
itemprototype.get
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
discoveryids ID/array Return only item prototypes that belong to the given LLD rules.
Parameter Type Description
graphids ID/array Return only item prototypes that are used in the given graph
prototypes.
hostids ID/array Return only item prototypes that belong to the given hosts.
inherited boolean If set to true return only item prototypes inherited from a template.
itemids ID/array Return only item prototypes with the given IDs.
monitored boolean If set to true return only enabled item prototypes that belong to
monitored hosts.
templated boolean If set to true return only item prototypes that belong to templates.
templateids ID/array Return only item prototypes that belong to the given templates.
triggerids ID/array Return only item prototypes that are used in the given trigger
prototypes.
selectDiscoveryRule query Return a discoveryRule property with the low-level discovery rule
that the item prototype belongs to.
selectGraphs query Return a graphs property with graph prototypes that the item
prototype is used in.
Supports count.
selectHosts query Return a hosts property with an array of hosts that the item prototype
belongs to.
selectTags query Return the item prototype tags in tags property.
selectTriggers query Return a triggers property with trigger prototypes that the item
prototype is used in.
Supports count.
selectPreprocessing query Return a preprocessing property with item prototype preprocessing
options.
selectValueMap query Return a valuemap property with item prototype value map.
filter object Return only those results that exactly match the given filter.
Accepts an object, where the keys are property names, and the values
are either a single value or an array of values to match against.
Return values
(integer/array) Returns either:
• an array of objects;
• the count of retrieved objects, if the countOutput parameter has been used.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "itemprototype.get",
"params": {
"output": "extend",
"discoveryids": "27426"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"itemid": "23077",
"type": "0",
"snmp_oid": "",
"hostid": "10079",
"name": "Incoming network traffic on en0",
"key_": "net.if.in[en0]",
"delay": "1m",
"history": "1w",
"trends": "365d",
"status": "0",
"value_type": "3",
"trapper_hosts": "",
"units": "bps",
"logtimefmt": "",
"templateid": "0",
"valuemapid": "0",
"params": "",
"ipmi_sensor": "",
"authtype": "0",
"username": "",
"password": "",
"publickey": "",
"privatekey": "",
"interfaceid": "0",
"description": "",
"evaltype": "0",
"jmx_endpoint": "",
"master_itemid": "0",
"timeout": "",
"url": "",
"query_fields": [],
"posts": "",
"status_codes": "200",
"follow_redirects": "1",
"post_type": "0",
"http_proxy": "",
"headers": [],
"retrieve_mode": "0",
"request_method": "0",
"output_format": "0",
"ssl_cert_file": "",
"ssl_key_file": "",
"ssl_key_password": "",
"verify_peer": "0",
"verify_host": "0",
"allow_traps": "0",
"discover": "0",
"uuid": "",
"parameters": []
},
{
"itemid": "10010",
"type": "0",
"snmp_oid": "",
"hostid": "10001",
"name": "Processor load (1 min average per core)",
"key_": "system.cpu.load[percpu,avg1]",
"delay": "1m",
"history": "1w",
"trends": "365d",
"status": "0",
"value_type": "0",
"trapper_hosts": "",
"units": "",
"logtimefmt": "",
"templateid": "0",
"valuemapid": "0",
"params": "",
"ipmi_sensor": "",
"authtype": "0",
"username": "",
"password": "",
"publickey": "",
"privatekey": "",
"interfaceid": "0",
"description": "The processor load is calculated as system CPU load divided by number of CPU c
"evaltype": "0",
"jmx_endpoint": "",
"master_itemid": "0",
"timeout": "",
"url": "",
"query_fields": [],
"posts": "",
"status_codes": "200",
"follow_redirects": "1",
"post_type": "0",
"http_proxy": "",
"headers": [],
"retrieve_mode": "0",
"request_method": "0",
"output_format": "0",
"ssl_cert_file": "",
"ssl_key_file": "",
"ssl_key_password": "",
"verify_peer": "0",
"verify_host": "0",
"allow_traps": "0",
"discover": "0",
"uuid": "",
"parameters": []
}
],
"id": 1
}
Request:
{
"jsonrpc": "2.0",
"method": "item.get",
"params": {
"output": "extend",
"filter": {
"type": 18,
"master_itemid": "25545"
},
"limit": "1"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"itemid": "25547",
"type": "18",
"snmp_oid": "",
"hostid": "10116",
"name": "Seconds",
"key_": "apache.status.uptime.seconds",
"delay": "0",
"history": "90d",
"trends": "365d",
"status": "0",
"value_type": "3",
"trapper_hosts": "",
"units": "",
"logtimefmt": "",
"templateid": "0",
"valuemapid": "0",
"params": "",
"ipmi_sensor": "",
"authtype": "0",
"username": "",
"password": "",
"publickey": "",
"privatekey": "",
"interfaceid": "0",
"description": "",
"evaltype": "0",
"master_itemid": "25545",
"jmx_endpoint": "",
"timeout": "",
"url": "",
"query_fields": [],
"posts": "",
"status_codes": "200",
"follow_redirects": "1",
"post_type": "0",
"http_proxy": "",
"headers": [],
"retrieve_mode": "0",
"request_method": "0",
"output_format": "0",
"ssl_cert_file": "",
"ssl_key_file": "",
"ssl_key_password": "",
"verify_peer": "0",
"verify_host": "0",
"allow_traps": "0",
"discover": "0",
"uuid": "",
"parameters": []
}
],
"id": 1
}
Find an HTTP agent item prototype with request method HEAD for a specific host ID.
Request:
{
"jsonrpc": "2.0",
"method": "itemprototype.get",
"params": {
"hostids": "10254",
"filter": {
"type": 19,
"request_method": 3
}
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"itemid": "28257",
"type": "19",
"snmp_oid": "",
"hostid": "10254",
"name": "discovered",
"key_": "item[{#INAME}]",
"delay": "{#IUPDATE}",
"history": "90d",
"trends": "30d",
"status": "0",
"value_type": "3",
"trapper_hosts": "",
"units": "",
"logtimefmt": "",
"templateid": "28255",
"valuemapid": "0",
"params": "",
"ipmi_sensor": "",
"authtype": "0",
"username": "",
"password": "",
"publickey": "",
"privatekey": "",
"flags": "2",
"interfaceid": "2",
"description": "",
"evaltype": "0",
"jmx_endpoint": "",
"master_itemid": "0",
"timeout": "",
"url": "{#IURL}",
"query_fields": [],
"posts": "",
"status_codes": "",
"follow_redirects": "0",
"post_type": "0",
"http_proxy": "",
"headers": [],
"retrieve_mode": "0",
"request_method": "3",
"output_format": "0",
"ssl_cert_file": "",
"ssl_key_file": "",
"ssl_key_password": "",
"verify_peer": "0",
"verify_host": "0",
"allow_traps": "0",
"discover": "0",
"uuid": "",
"parameters": []
}
],
"id": 1
}
See also
• Host
• Graph prototype
• Trigger prototype
Source
CItemPrototype::get() in ui/include/classes/api/services/CItemPrototype.php.
itemprototype.update
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
In addition to the standard item prototype properties, the method accepts the following parameters.
Parameter Type Description
preprocessing array Item prototype preprocessing options.
Parameter behavior:
- read-only for inherited objects
tags array Item prototype tags.
Return values
(object) Returns an object containing the IDs of the updated item prototypes under the itemids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "itemprototype.update",
"params": {
"itemid": "27428",
"interfaceid": "132"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"itemids": [
"27428"
]
},
"id": 1
}
Update a dependent item prototype with a new master item prototype ID. Only dependencies on the same host (template/discovery rule)
are allowed, therefore the master and the dependent item prototype must have the same hostid and ruleid.
Request:
{
"jsonrpc": "2.0",
"method": "itemprototype.update",
"params": {
"master_itemid": "25570",
"itemid": "189030"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"itemids": [
"189030"
]
},
"id": 1
}
Request:
{
"jsonrpc": "2.0",
"method": "itemprototype.update",
"params": {
"itemid":"28305",
"query_fields": [
{
"name": "random",
"value": "qwertyuiopasdfghjklzxcvbnm"
}
],
"headers": []
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"itemids": [
"28305"
]
},
"id": 1
}
Request:
{
"jsonrpc": "2.0",
"method": "itemprototype.update",
"params": {
"itemid": "44211",
"preprocessing": [
{
"type": 1,
"params": "4",
"error_handler": 2,
"error_handler_params": "5"
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"itemids": [
"44211"
]
},
"id": 1
}
Update a script item prototype with a different script and remove unnecessary parameters that were used by the previous script.
Request:
{
"jsonrpc": "2.0",
"method": "itemprototype.update",
"params": {
"itemid": "23865",
"parameters": [],
"script": "Zabbix.log(3, 'Log test');\nreturn 1;"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"itemids": [
"23865"
]
},
"id": 1
}
Source
CItemPrototype::update() in ui/include/classes/api/services/CItemPrototype.php.
LLD rule
Object references:
• LLD rule
• LLD rule filter
• LLD rule filter condition
• LLD macro path
• LLD rule preprocessing
• LLD rule overrides
• LLD rule override filter
• LLD rule override filter condition
• LLD rule override operation
• LLD rule override operation status
• LLD rule override operation discover
• LLD rule override operation period
• LLD rule override operation history
• LLD rule override operation trends
• LLD rule override operation severity
• LLD rule override operation tag
• LLD rule override operation template
• LLD rule override operation inventory
Available methods:
• discoveryrule.get - retrieve LLD rules
• discoveryrule.update - update LLD rules
Property Type Description
itemid ID ID of the LLD rule.
Property behavior:
- read-only
- required for update operations
delay string Update interval of the LLD rule.
Accepts seconds or time unit with suffix (e.g., 30s, 1m, 2h, 1d) and,
optionally, one or more custom intervals, all separated by semicolons.
Custom intervals can be a mix of flexible and scheduling intervals.
Example:
1h;wd1-5h9-18;{$Macro1}/1-7,00:00-24:00;0/6-7,12:00-24:00;{$Macr
Property behavior:
- required if type is set to ”Zabbix agent” (0), ”Simple check” (3),
”Zabbix internal” (5), ”External check” (10), ”Database monitor” (11),
”IPMI agent” (12), ”SSH agent” (13), ”TELNET agent” (14), ”JMX agent”
(16), ”HTTP agent” (19), ”SNMP agent” (20), ”Script” (21), ”Browser”
(22), or if type is set to ”Zabbix agent (active)” (7) and key_ does not
contain ”mqtt.get”
hostid ID ID of the host that the LLD rule belongs to.
Property behavior:
- constant
- required for create operations
interfaceid ID ID of the LLD rule’s host interface.
Property behavior:
- required if LLD rule belongs to host and type is set to ”Zabbix agent”,
”IPMI agent”, ”JMX agent”, or ”SNMP agent”
- supported if LLD rule belongs to host and type is set to ”Simple
check”, ”External check”, ”SSH agent”, ”TELNET agent”, or ”HTTP
agent”
key_ string LLD rule key.
Property behavior:
- required for create operations
- read-only for inherited objects
name string Name of the LLD rule.
Property behavior:
- required for create operations
- read-only for inherited objects
Property Type Description
type integer Type of the LLD rule.
Possible values:
0 - Zabbix agent;
2 - Zabbix trapper;
3 - Simple check;
5 - Zabbix internal;
7 - Zabbix agent (active);
10 - External check;
11 - Database monitor;
12 - IPMI agent;
13 - SSH agent;
14 - TELNET agent;
16 - JMX agent;
18 - Dependent item;
19 - HTTP agent;
20 - SNMP agent;
21 - Script;
22 - Browser.
Property behavior:
- required for create operations
- read-only for inherited objects
url string URL string.
Supports user macros, {HOST.IP}, {HOST.CONN}, {HOST.DNS},
{HOST.HOST}, {HOST.NAME}, {ITEM.ID}, {ITEM.KEY}.
Property behavior:
- required if type is set to ”HTTP agent”
- read-only for inherited objects
allow_traps integer Allow to populate value similarly to the trapper item.
Possible values:
0 - (default) Do not allow to accept incoming data;
1 - Allow to accept incoming data.
Property behavior:
- supported if type is set to ”HTTP agent”
authtype integer Authentication method.
Property behavior:
- supported if type is set to ”SSH agent” or ”HTTP agent”
- read-only for inherited objects (if type is set to ”HTTP agent”)
description string Description of the LLD rule.
error string Error text if there are problems updating the LLD rule value.
Property behavior:
- read-only
Property Type Description
follow_redirects integer Follow response redirects while polling data.
Possible values:
0 - Do not follow redirects;
1 - (default) Follow redirects.
Property behavior:
- supported if type is set to ”HTTP agent”
- read-only for inherited objects
headers array Array of headers that will be sent when performing an HTTP request.
Property behavior:
- supported if type is set to ”HTTP agent”
- read-only for inherited objects
http_proxy string HTTP(S) proxy connection string.
Property behavior:
- supported if type is set to ”HTTP agent”
- read-only for inherited objects
ipmi_sensor string IPMI sensor.
Property behavior:
- required if type is set to ”IPMI agent” and key_ is not set to
”ipmi.get”
- supported if type is set to ”IPMI agent”
- read-only for inherited objects
jmx_endpoint string JMX agent custom connection string.
Default:
service:jmx:rmi:///jndi/rmi://{HOST.CONN}:{HOST.PORT}/jmxrmi
Property behavior:
- supported if type is set to ”JMX agent”
lifetime string Time period after which items that are no longer discovered will be
deleted. Accepts seconds, time unit with suffix, or a user macro.
Default: 7d.
lifetime_type integer Scenario to delete lost LLD resources.
Possible values:
0 - (default) Delete after lifetime threshold is reached;
1 - Do not delete;
2 - Delete immediately.
enabled_lifetime string Time period after which items that are no longer discovered will be
disabled. Accepts seconds, time unit with suffix, or a user macro.
Default: 0.
enabled_lifetime_type integer Scenario to disable lost LLD resources.
Possible values:
0 - Disable after lifetime threshold is reached;
1 - Do not disable;
2 - (default) Disable immediately.
master_itemid ID ID of the master item.
Recursion up to 3 dependent items and maximum count of dependent
items equal to 999 are allowed.
Discovery rule cannot be master item for another discovery rule.
Property behavior:
- required if type is set to ”Dependent item”
- read-only for inherited objects
Property Type Description
output_format integer Should the response be converted to JSON.
Possible values:
0 - (default) Store raw;
1 - Convert to JSON.
Property behavior:
- supported if type is set to ”HTTP agent”
- read-only for inherited objects
params string Additional parameters depending on the type of the LLD rule:
- executed script for SSH and Telnet LLD rules;
- SQL query for database monitor LLD rules;
- formula for calculated LLD rules;
- the script for script and browser LLD rules.
Property behavior:
- required if type is set to ”Database monitor”, ”SSH agent”, ”TELNET
agent”, ”Script” or ”Browser”
- read-only for inherited objects (if type is set to ”Script” or ”Browser”)
parameters object/array Additional parameters if type is set to ”Script” or ”Browser”.
Array of objects with name and value properties, where name must be
unique.
Property behavior:
- supported if type is set to ”Script” or ”Browser”
- read-only for inherited objects
password string Password for authentication.
Property behavior:
- required if type is set to ”JMX agent” and username is set
- supported if type is set to ”Simple check”, ”Database monitor”, ”SSH
agent”, ”TELNET agent”, or ”HTTP agent”
- read-only for inherited objects (if type is set to ”HTTP agent”)
post_type integer Type of post data body stored in posts property.
Possible values:
0 - (default) Raw data;
2 - JSON data;
3 - XML data.
Property behavior:
- supported if type is set to ”HTTP agent”
- read-only for inherited objects
posts string HTTP(S) request body data.
Property behavior:
- required if type is set to ”HTTP agent” and post_type is set to
”JSON data” or ”XML data”
- supported if type is set to ”HTTP agent” and post_type is set to
”Raw data”
- read-only for inherited objects
privatekey string Name of the private key file.
Property behavior:
- required if type is set to ”SSH agent” and authtype is set to ”public
key”
publickey string Name of the public key file.
Property behavior:
- required if type is set to ”SSH agent” and authtype is set to ”public
key”
Property Type Description
query_fields array Array of query fields that will be sent when performing an HTTP
request.
Property behavior:
- supported if type is set to ”HTTP agent”
- read-only for inherited objects
request_method integer Type of request method.
Possible values:
0 - (default) GET;
1 - POST;
2 - PUT;
3 - HEAD.
Property behavior:
- supported if type is set to ”HTTP agent”
- read-only for inherited objects
retrieve_mode integer What part of response should be stored.
Property behavior:
- supported if type is set to ”HTTP agent”
- read-only for inherited objects
snmp_oid string SNMP OID.
Property behavior:
- required if type is set to ”SNMP agent”
- read-only for inherited objects
ssl_cert_file string Public SSL Key file path.
Property behavior:
- supported if type is set to ”HTTP agent”
- read-only for inherited objects
ssl_key_file string Private SSL Key file path.
Property behavior:
- supported if type is set to ”HTTP agent”
- read-only for inherited objects
ssl_key_password string Password for SSL Key file.
Property behavior:
- supported if type is set to ”HTTP agent”
- read-only for inherited objects
state integer State of the LLD rule.
Possible values:
0 - (default) normal;
1 - not supported.
Property behavior:
- read-only
Property Type Description
status integer Status of the LLD rule.
Possible values:
0 - (default) enabled LLD rule;
1 - disabled LLD rule.
status_codes string Ranges of required HTTP status codes, separated by commas. Also
supports user macros as part of comma separated list.
Example: 200,200-{$M},{$M},200-400
Property behavior:
- supported if type is set to ”HTTP agent”
- read-only for inherited objects
templateid ID ID of the parent template LLD rule.
Property behavior:
- read-only
timeout string Item data polling request timeout.
Accepts seconds or time unit with suffix (e.g., 30s, 1m). Also accepts
user macros.
Property behavior:
- supported if type is set to ”Zabbix agent” (0), ”Simple check” (3) and
key_ does not start with ”vmware.” and ”icmpping”, ”Zabbix agent
(active)” (7), ”External check” (10), ”Database monitor” (11), ”SSH
agent” (13), ”TELNET agent” (14), ”HTTP agent” (19), ”SNMP agent”
(20) and snmp_oid starts with ”walk[” or ”get[”, ”Script” (21),
”Browser” (22)
- read-only for inherited objects
trapper_hosts string Allowed hosts.
Property behavior:
- supported if type is set to ”Zabbix trapper”, or if type is set to ”HTTP
agent” and allow_traps is set to ”Allow to accept incoming data”
username string Username for authentication.
Property behavior:
- required if type is set to ”SSH agent”, ”TELNET agent”, or if type is
set to ”JMX agent” and password is set
- supported if type is set to ”Simple check”, ”Database monitor”, or
”HTTP agent”
- read-only for inherited objects (if type is set to ”HTTP agent”)
uuid string Universal unique identifier, used for linking imported LLD rules to
already existing ones. Auto-generated, if not given.
Property behavior:
- supported if the LLD rule belongs to a template
verify_host integer Whether to validate that the host name for the connection matches the
one in the host’s certificate.
Possible values:
0 - (default) Do not validate;
1 - Validate.
Property behavior:
- supported if type is set to ”HTTP agent”
- read-only for inherited objects
Property Type Description
verify_peer integer Whether to validate that the host’s certificate is authentic.
Possible values:
0 - (default) Do not validate;
1 - Validate.
Property behavior:
- supported if type is set to ”HTTP agent”
- read-only for inherited objects
HTTP header
The HTTP header object defines a header that will be sent when performing an HTTP request. It has name and value string properties.
Query field
The query field object defines a name and value that is used to specify a URL parameter. It has name and value string properties.
LLD rule filter
The LLD rule filter object defines a set of conditions that can be used to filter discovered objects. It has the following properties:
conditions object/array Set of filter conditions to use for filtering results. The conditions will be
sorted in the order of their placement in the formula.
Property behavior:
- required
evaltype integer Filter condition evaluation method.
Possible values:
0 - and/or;
1 - and;
2 - or;
3 - custom expression.
Property behavior:
- required
Property Type Description
eval_formula string Generated expression that will be used for evaluating filter conditions.
The expression contains IDs that reference specific filter conditions by
its formulaid. The value of eval_formula is equal to the value of
formula for filters with a custom expression.
Property behavior:
- read-only
formula string User-defined expression to be used for evaluating conditions of filters
with a custom expression. The expression must contain IDs that
reference specific filter conditions by its formulaid. The IDs used in
the expression must exactly match the ones defined in the filter
conditions: no condition can remain unused or omitted.
Property behavior:
- required if evaltype is set to ”custom expression”
The LLD rule filter condition object defines a separate check to perform on the value of an LLD macro. It has the following properties:
Property Type Description
macro string LLD macro of the condition.
Property behavior:
- required
value string Value to compare with.
Property behavior:
- required if operator is set to ”matches regular expression” or ”does
not match regular expression”
formulaid string Arbitrary unique ID that is used to reference the condition from a
custom expression. Can only contain capital-case letters. The ID must
be defined by the user when modifying filter conditions, but will be
generated anew when requesting them afterward.
Property behavior:
- required if evaltype of LLD rule filter object is set to ”custom
expression”
operator integer Condition operator.
Possible values:
8 - (default) matches regular expression;
9 - does not match regular expression;
12 - exists;
13 - does not exist.
Note:
To better understand how to use filters with various types of expressions, see examples on the discoveryrule.get and
discoveryrule.create method pages.
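As a concrete illustration of formula and formulaid, the sketch below builds a filter object with a custom expression; the macros, patterns and the expression itself are only examples.

# Sketch: LLD rule filter with a custom expression. The formula references
# conditions by their formulaid; macros and patterns are examples only.
lld_filter = {
    "evaltype": 3,                   # custom expression
    "formula": "(A or B) and C",
    "conditions": [
        {"macro": "{#FSTYPE}", "value": "^ext4$", "operator": 8, "formulaid": "A"},
        {"macro": "{#FSTYPE}", "value": "^xfs$",  "operator": 8, "formulaid": "B"},
        {"macro": "{#FSNAME}", "value": "^/tmp$", "operator": 9, "formulaid": "C"},
    ],
}
# "lld_filter" is then passed as the filter property of discoveryrule.create
# or discoveryrule.update; eval_formula is generated by the server and is read-only.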
LLD macro path
The LLD macro path object has the following properties:
Property Type Description
lld_macro string LLD macro.
Property behavior:
- required
path string Selector for value which will be assigned to corresponding macro.
Property behavior:
- required
LLD rule preprocessing
The LLD rule preprocessing object has the following properties:
Property Type Description
type integer The preprocessing option type.
Possible values:
5 - Regular expression;
11 - XML XPath;
12 - JSONPath;
14 - Matches regular expression;
15 - Does not match regular expression;
16 - Check for error in JSON;
17 - Check for error in XML;
20 - Discard unchanged with heartbeat;
21 - JavaScript;
23 - Prometheus to JSON;
24 - CSV to JSON;
25 - Replace;
27 - XML to JSON;
28 - SNMP walk value;
29 - SNMP walk to JSON;
30 - SNMP get value.
Property behavior:
- required
params string Additional parameters used by preprocessing option. Multiple
parameters are separated by the newline (\n) character.
Property behavior:
- required if type is set to ”Regular expression” (5), ”XML XPath” (11),
”JSONPath” (12), ”Matches regular expression” (14), ”Does not match
regular expression” (15), ”Check for error in JSON” (16), ”Check for
error in XML” (17), ”Discard unchanged with heartbeat” (20),
”JavaScript” (21), ”Prometheus to JSON” (23), ”CSV to JSON” (24),
”Replace” (25), ”SNMP walk value” (28), ”SNMP walk to JSON” (29), or
”SNMP get value” (30)
Property Type Description
error_handler integer Action type used in case of preprocessing step failure.
Possible values:
0 - Error message is set by Zabbix server;
1 - Discard value;
2 - Set custom value;
3 - Set custom error message.
Property behavior:
- required if type is set to ”Regular expression” (5), ”XML XPath” (11),
”JSONPath” (12), ”Matches regular expression” (14), ”Does not match
regular expression” (15), ”Check for error in JSON” (16), ”Check for
error in XML” (17), ”Prometheus to JSON” (23), ”CSV to JSON” (24),
”XML to JSON” (27), ”SNMP walk value” (28), ”SNMP walk to JSON”
(29), or ”SNMP get value” (30)
error_handler_params string Error handler parameters.
Property behavior:
- required if error_handler is set to ”Set custom value” or ”Set
custom error message”
The following parameters and error handlers are supported for each preprocessing type.
(Bracketed numbers refer to the parameter footnotes listed after the table.)
20 (Discard unchanged with heartbeat): Parameter 1 - seconds [4, 5]
21 (JavaScript): Parameter 1 - script [2]
23 (Prometheus to JSON): Parameter 1 - pattern [5, 6]; Supported error handlers - 0, 1, 2, 3
24 (CSV to JSON): Parameter 1 - character [2]; Parameter 2 - character [2]; Parameter 3 - 0,1; Supported error handlers - 0, 1, 2, 3
25 (Replace): Parameter 1 - search string [2]; Parameter 2 - replacement [2]
27 (XML to JSON): Supported error handlers - 0, 1, 2, 3
28 (SNMP walk value): Parameter 1 - OID [2]; Parameter 2 - Format (0 - Unchanged, 1 - UTF-8 from Hex-STRING, 2 - MAC from Hex-STRING, 3 - Integer from BITS); Supported error handlers - 0, 1, 2, 3
29 (SNMP walk to JSON [7]): Parameter 1 - Field name [2]; Parameter 2 - OID prefix [2]; Parameter 3 - Format (0 - Unchanged, 1 - UTF-8 from Hex-STRING, 2 - MAC from Hex-STRING, 3 - Integer from BITS); Supported error handlers - 0, 1, 2, 3
30 (SNMP get value): Parameter 1 - Format (1 - UTF-8 from Hex-STRING, 2 - MAC from Hex-STRING, 3 - Integer from BITS); Supported error handlers - 0, 1, 2, 3
Parameter footnotes:
[1] regular expression
[2] string
[3] JSONPath or XML XPath
[4] positive integer (with support of time suffixes, e.g. 30s, 1m, 2h, 1d)
[5] user macro
[6] Prometheus pattern following the syntax: <metric name>{<label name>="<label value>", ...} == <value>. Each Prometheus pattern component (metric, label name, label value and metric value) can be user macro.
[7] Supports multiple ”Field name,OID prefix,Format” records delimited by a new line character.
The LLD rule overrides object defines a set of rules (filters, conditions and operations) that are used to override properties of
different prototype objects. It has the following properties:
Property Type Description
name string Unique override name.
Property behavior:
- required
step integer Unique order number of the override.
Property behavior:
- required
stop integer Stop processing next overrides if matches.
Possible values:
0 - (default) don’t stop processing overrides;
1 - stop processing overrides if filter matches.
filter object Override filter.
operations object/array Override operations.
The LLD rule override filter object defines a set of conditions that if they match the discovered object the override is applied. It has
the following properties:
conditions object/array Set of override filter conditions to use for matching the discovered
objects. The conditions will be sorted in the order of their placement in
the formula.
Property behavior:
- required
evaltype integer Override filter condition evaluation method.
Possible values:
0 - and/or;
1 - and;
2 - or;
3 - custom expression.
Property behavior:
- required
eval_formula string Generated expression that will be used for evaluating override filter
conditions. The expression contains IDs that reference specific override
filter conditions by its formulaid. The value of eval_formula is
equal to the value of formula for filters with a custom expression.
Property behavior:
- read-only
formula string User-defined expression to be used for evaluating conditions of
override filters with a custom expression. The expression must contain
IDs that reference specific override filter conditions by its formulaid.
The IDs used in the expression must exactly match the ones defined in
the override filter conditions: no condition can remain unused or
omitted.
Property behavior:
- required if evaltype is set to ”custom expression”
The LLD rule override filter condition object defines a separate check to perform on the value of an LLD macro. It has the following
properties:
Property Type Description
macro string LLD macro to perform the check on.
Property behavior:
- required
value string Value to compare with.
Property behavior:
- required if operator is set to ”matches regular expression” or ”does
not match regular expression”
formulaid string Arbitrary unique ID that is used to reference the condition from a
custom expression. Can only contain capital-case letters. The ID must
be defined by the user when modifying filter conditions, but will be
generated anew when requesting them afterward.
Property behavior:
- required if evaltype of LLD rule override filter object is set to
”custom expression”
operator integer Condition operator.
Possible values:
8 - (default) matches regular expression;
9 - does not match regular expression;
12 - exists;
13 - does not exist.
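As an illustration of the properties above, the following fragment (a minimal sketch; the macro names and patterns are arbitrary) defines an override filter with a custom expression that requires ”{#UNIT.NAME}” to match one regular expression and ”{#UNIT.TYPE}” not to match another:
"filter": {
    "evaltype": 3,
    "formula": "A and B",
    "conditions": [
        {
            "macro": "{#UNIT.NAME}",
            "operator": "8",
            "value": "^nginx\\.service$",
            "formulaid": "A"
        },
        {
            "macro": "{#UNIT.TYPE}",
            "operator": "9",
            "value": "^slice$",
            "formulaid": "B"
        }
    ]
}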
The LLD rule override operation is a combination of conditions and actions to perform on the prototype object. It has the following
properties:
operationobject integer Type of discovered object to perform the operation on.
Possible values:
0 - Item prototype;
1 - Trigger prototype;
2 - Graph prototype;
3 - Host prototype.
Property behavior:
- required
operator integer Override condition operator.
Possible values:
0 - (default) equals;
1 - does not equal;
2 - contains;
3 - does not contain;
8 - matches;
9 - does not match.
value string Pattern to match item, trigger, graph or host prototype name
depending on selected object.
opstatus object Override operation status object for item, trigger and host prototype
objects.
opdiscover object Override operation discover status object (all object types).
opperiod object Override operation period (update interval) object for item prototype
object.
ophistory object Override operation history object for item prototype object.
optrends object Override operation trends object for item prototype object.
opseverity object Override operation severity object for trigger prototype object.
optag object/array Override operation tag object for trigger and host prototype objects.
optemplate object/array Override operation template object for host prototype object.
opinventory object Override operation inventory object for host prototype object.
LLD rule override operation status that is set to discovered object. It has the following properties:
status integer Override the status of the discovered object.
Possible values:
0 - Create enabled;
1 - Create disabled.
Property behavior:
- required
LLD rule override operation discover status that is set to discovered object. It has the following properties:
discover integer Override the discover status of the discovered object.
Possible values:
0 - Yes, continue discovering the objects;
1 - No, new objects will not be discovered and existing ones will be
marked as lost.
Property behavior:
- required
LLD rule override operation period is an update interval value that is set to discovered item. It has the following properties:
delay string Override the update interval of the item prototype.
Accepts seconds or time unit with suffix (e.g., 30s, 1m, 2h, 1d) and,
optionally, one or more custom intervals, all separated by semicolons.
Custom intervals can be a mix of flexible and scheduling intervals.
Accepts user macros or LLD macros. If used, the value must be a single
macro. Multiple macros or macros mixed with text are not supported.
Flexible intervals may be written as two macros separated by a
forward slash (e.g., {$FLEX_INTERVAL}/{$FLEX_PERIOD}).
Example:
1h;wd1-5h9-18;{$Macro1}/1-7,00:00-24:00;0/6-7,12:00-24:00;{$Macr
Property behavior:
- required
LLD rule override operation history value that is set to discovered item. It has the following properties:
Property Type Description
history string Override the history of item prototype which is a time unit of how long
the history data should be stored. Also accepts user macro and LLD
macro.
Property behavior:
- required
LLD rule override operation trends value that is set to discovered item. It has the following properties:
trends string Override the trends of item prototype which is a time unit of how long
the trends data should be stored. Also accepts user macro and LLD
macro.
Property behavior:
- required
LLD rule override operation severity value that is set to discovered trigger. It has the following properties:
severity integer Override the severity of the trigger prototype.
Possible values:
0 - (default) not classified;
1 - information;
2 - warning;
3 - average;
4 - high;
5 - disaster.
Property behavior:
- required
LLD rule override operation tag object contains tag name and value that are set to discovered object. It has the following properties:
tag string New tag name.
Property behavior:
- required
value string New tag value.
LLD rule override operation template object that is linked to discovered host. It has the following properties:
templateid ID ID of the template.
Property behavior:
- required
LLD rule override operation inventory
LLD rule override operation inventory mode value that is set to discovered host. It has the following properties:
inventory_mode integer Override the host prototype inventory mode.
Possible values:
-1 - disabled;
0 - (default) manual;
1 - automatic.
Property behavior:
- required
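For example, an override operation that targets item prototypes whose names contain ”temp” and adjusts their status, update interval and data retention might look like this (a minimal sketch; the values are arbitrary, and the opperiod property name delay is assumed from the update-interval description above):
"operations": [
    {
        "operationobject": "0",
        "operator": "2",
        "value": "temp",
        "opstatus": {
            "status": "0"
        },
        "opperiod": {
            "delay": "5m"
        },
        "ophistory": {
            "history": "7d"
        },
        "optrends": {
            "trends": "90d"
        }
    }
]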
discoveryrule.copy
Attention:
This method is deprecated and will be removed in the future. Instead, you can configure LLD rules on templates and apply
these templates to other templates or hosts, effectively copying the LLD rules to the specified targets.
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
(object) Parameters defining the LLD rules to copy and the target hosts.
Return values
(boolean) Returns true if the copying was successful.
Examples
Copy an LLD rule to two hosts.
Request:
{
"jsonrpc": "2.0",
"method": "discoveryrule.copy",
"params": {
"discoveryids": [
"27426"
],
"hostids": [
"10196",
"10197"
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": true,
"id": 1
}
Source
CDiscoveryRule::copy() in ui/include/classes/api/services/CDiscoveryRule.php.
discoveryrule.create
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
(object/array) LLD rules to create.
Return values
(object) Returns an object containing the IDs of the created LLD rules under the itemids property. The order of the returned
IDs matches the order of the passed LLD rules.
Examples
Create a Zabbix agent LLD rule to discover mounted file systems. Discovered items will be updated every 30 seconds.
Request:
{
"jsonrpc": "2.0",
"method": "discoveryrule.create",
"params": {
"name": "Mounted filesystem discovery",
"key_": "vfs.fs.discovery",
"hostid": "10197",
"type": 0,
"interfaceid": "112",
"delay": "30s"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"itemids": [
"27665"
]
},
"id": 1
}
Using a filter
Create an LLD rule with a set of conditions to filter the results by. The conditions will be grouped together using the logical ”and”
operator.
Request:
{
"jsonrpc": "2.0",
"method": "discoveryrule.create",
"params": {
"name": "Filtered LLD rule",
"key_": "lld",
"hostid": "10116",
"type": 0,
"interfaceid": "13",
"delay": "30s",
"filter": {
"evaltype": 1,
"conditions": [
{
"macro": "{#MACRO1}",
"value": "@regex1"
},
{
"macro": "{#MACRO2}",
"value": "@regex2",
"operator": "9"
},
{
"macro": "{#MACRO3}",
"value": "",
"operator": "12"
},
{
"macro": "{#MACRO4}",
"value": "",
"operator": "13"
}
]
}
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"itemids": [
"27665"
]
},
"id": 1
}
Create an LLD rule with LLD macro paths that map JSONPath values to the ”{#MACRO1}” and ”{#MACRO2}” macros.
Request:
{
"jsonrpc": "2.0",
"method": "discoveryrule.create",
"params": {
"name": "LLD rule with LLD macro paths",
"key_": "lld",
"hostid": "10116",
"type": 0,
"interfaceid": "13",
"delay": "30s",
"lld_macro_paths": [
{
"lld_macro": "{#MACRO1}",
"path": "$.path.1"
},
{
"lld_macro": "{#MACRO2}",
"path": "$.path.2"
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"itemids": [
"27665"
]
},
"id": 1
}
Create an LLD rule with a filter that will use a custom expression to evaluate the conditions. The LLD rule must only discover objects
whose ”{#MACRO1}” macro value matches both regular expressions ”regex1” and ”regex2”, and whose ”{#MACRO2}” value matches
either ”regex3” or ”regex4”. The formula IDs ”A”, ”B”, ”C” and ”D” have been chosen arbitrarily.
Request:
{
"jsonrpc": "2.0",
"method": "discoveryrule.create",
"params": {
"name": "Filtered LLD rule",
"key_": "lld",
"hostid": "10116",
"type": 0,
"interfaceid": "13",
"delay": "30s",
"filter": {
"evaltype": 3,
"formula": "(A and B) and (C or D)",
"conditions": [
{
"macro": "{#MACRO1}",
"value": "@regex1",
"formulaid": "A"
},
{
"macro": "{#MACRO1}",
"value": "@regex2",
"formulaid": "B"
},
{
"macro": "{#MACRO2}",
"value": "@regex3",
"formulaid": "C"
},
{
"macro": "{#MACRO2}",
"value": "@regex4",
"formulaid": "D"
}
]
}
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"itemids": [
"27665"
]
},
"id": 1
}
Create an LLD rule using the HTTP agent item type, with custom query fields and headers, and with trapping allowed from 127.0.0.1.
Request:
{
"jsonrpc": "2.0",
"method": "discoveryrule.create",
"params": {
"hostid": "10257",
"interfaceid": "5",
"type": 19,
"name": "API HTTP agent",
"key_": "api_discovery_rule",
"value_type": 3,
"delay": "5s",
"url": "http://127.0.0.1?discoverer.php",
"query_fields": [
{
"name": "mode",
"value": "json"
},
{
"name": "elements",
"value": "2"
}
],
"headers": [
{
"name": "X-Type",
"value": "api"
},
{
"name": "Authorization",
"value": "Bearer mF_A.B5f-2.1JcM"
}
],
"allow_traps": 1,
"trapper_hosts": "127.0.0.1"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"itemids": [
"28336"
]
},
"id": 1
}
Create an LLD rule with a preprocessing step (discard unchanged with heartbeat).
Request:
{
"jsonrpc": "2.0",
"method": "discoveryrule.create",
"params": {
"name": "Discovery rule with preprocessing",
"key_": "lld.with.preprocessing",
"hostid": "10001",
"ruleid": "27665",
"type": 0,
"value_type": 3,
"delay": "60s",
"interfaceid": "1155",
"preprocessing": [
{
"type": 20,
"params": "20",
"error_handler": 0,
"error_handler_params": ""
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"itemids": [
"44211"
]
},
"id": 1
}
Create an LLD rule with overrides that link different templates to hosts discovered as MySQL/MariaDB or PostgreSQL database servers.
Request:
{
"jsonrpc": "2.0",
"method": "discoveryrule.create",
"params": {
"name": "Discover database host",
"key_": "lld.with.overrides",
"hostid": "10001",
"type": 0,
"value_type": 3,
"delay": "60s",
"interfaceid": "1155",
"overrides": [
{
"name": "Discover MySQL host",
"step": "1",
"stop": "1",
"filter": {
"evaltype": "2",
"conditions": [
{
"macro": "{#UNIT.NAME}",
"operator": "8",
"value": "^mysqld\\.service$"
},
{
"macro": "{#UNIT.NAME}",
"operator": "8",
"value": "^mariadb\\.service$"
}
]
},
"operations": [
{
"operationobject": "3",
"operator": "2",
"value": "Database host",
"opstatus": {
"status": "0"
},
"optemplate": [
{
"templateid": "10170"
}
],
"optag": [
{
"tag": "Database",
"value": "MySQL"
}
]
}
]
},
{
"name": "Discover PostgreSQL host",
"step": "2",
"stop": "1",
"filter": {
"evaltype": "0",
"conditions": [
{
"macro": "{#UNIT.NAME}",
"operator": "8",
"value": "^postgresql\\.service$"
}
]
},
"operations": [
{
"operationobject": "3",
"operator": "2",
"value": "Database host",
"opstatus": {
"status": "0"
},
"optemplate": [
{
"templateid": "10263"
}
],
"optag": [
{
"tag": "Database",
"value": "PostgreSQL"
}
]
}
]
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"itemids": [
"30980"
]
},
"id": 1
}
Create a script LLD rule that collects data by executing a JavaScript script with a parameter.
Request:
{
"jsonrpc": "2.0",
"method": "discoveryrule.create",
"params": {
"name": "Script example",
"key_": "custom.script.lldrule",
"hostid": "12345",
"type": 21,
"value_type": 4,
"params": "var request = new HttpRequest();\nreturn request.post(\"https://postman-echo.com/post\"
"parameters": [{
"name": "host",
"value": "{HOST.CONN}"
}],
"timeout": "6s",
"delay": "30s"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"itemids": [
"23865"
]
},
"id": 1
}
Create LLD rule with a specified time period for disabling and no deletion
Create an LLD rule with a custom time period for disabling entities after they are no longer discovered, and with the setting that they
will never be deleted.
Request:
{
"jsonrpc": "2.0",
"method": "discoveryrule.create",
"params": {
"name": "lld disable after 1h",
"key_": "lld.disable",
"hostid": "10001",
"type": 2,
"lifetime_type": 1,
"enabled_lifetime_type": 0,
"enabled_lifetime": "1h"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"itemids": [
"46864"
]
},
"id": 1
}
See also
Source
CDiscoveryRule::create() in ui/include/classes/api/services/CDiscoveryRule.php.
discoveryrule.delete
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
(array) IDs of the LLD rules to delete.
Return values
(object) Returns an object containing the IDs of the deleted LLD rules under the ruleids property.
Examples
Delete multiple LLD rules.
Request:
{
"jsonrpc": "2.0",
"method": "discoveryrule.delete",
"params": [
"27665",
"27668"
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"ruleids": [
"27665",
"27668"
]
},
"id": 1
}
Source
CDiscoveryRule::delete() in ui/include/classes/api/services/CDiscoveryRule.php.
discoveryrule.get
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
Parameter Type Description
itemids ID/array Return only LLD rules with the given IDs.
groupids ID/array Return only LLD rules that belong to the hosts from the given groups.
hostids ID/array Return only LLD rules that belong to the given hosts.
inherited boolean If set to true return only LLD rules inherited from a template.
interfaceids ID/array Return only LLD rules that use the given host interfaces.
monitored boolean If set to true return only enabled LLD rules that belong to monitored
hosts.
templated boolean If set to true return only LLD rules that belong to templates.
templateids ID/array Return only LLD rules that belong to the given templates.
selectFilter query Return a filter property with data of the filter used by the LLD rule.
selectGraphs query Returns a graphs property with graph prototypes that belong to the
LLD rule.
Supports count.
selectHostPrototypes query Return a hostPrototypes property with host prototypes that belong
to the LLD rule.
Supports count.
selectHosts query Return a hosts property with an array of hosts that the LLD rule
belongs to.
selectItems query Return an items property with item prototypes that belong to the LLD
rule.
Supports count.
selectTriggers query Return a triggers property with trigger prototypes that belong to the
LLD rule.
Supports count.
selectLLDMacroPaths query Return an lld_macro_paths property with a list of LLD macros and
paths to values assigned to each corresponding macro.
selectPreprocessing query Return a preprocessing property with LLD rule preprocessing
options.
selectOverrides query Return an lld_rule_overrides property with a list of override
filters, conditions and operations that are performed on prototype
objects.
filter object Return only those results that exactly match the given filter.
Accepts an object, where the keys are property names, and the values
are either a single value or an array of values to match against.
sortorder string/array
startSearch boolean
Return values
(integer/array) Returns either an array of objects or the count of retrieved objects, if the countOutput parameter has been used.
Examples
Retrieve all LLD rules from the given host.
Request:
{
"jsonrpc": "2.0",
"method": "discoveryrule.get",
"params": {
"output": "extend",
"hostids": "10202"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"itemid": "27425",
"type": "0",
"snmp_oid": "",
"hostid": "10202",
"name": "Network interface discovery",
"key_": "net.if.discovery",
"delay": "1h",
"status": "0",
"trapper_hosts": "",
"templateid": "22444",
"valuemapid": "0",
"params": "",
"ipmi_sensor": "",
"authtype": "0",
"username": "",
"password": "",
"publickey": "",
"privatekey": "",
"interfaceid": "119",
"description": "Discovery of network interfaces as defined in global regular expression \"Netw
"lifetime": "30d",
"jmx_endpoint": "",
"master_itemid": "0",
"timeout": "",
"url": "",
"query_fields": [],
"posts": "",
"status_codes": "200",
"follow_redirects": "1",
"post_type": "0",
"http_proxy": "",
"headers": [],
"retrieve_mode": "0",
"request_method": "0",
"ssl_cert_file": "",
"ssl_key_file": "",
"ssl_key_password": "",
"verify_peer": "0",
"verify_host": "0",
"allow_traps": "0",
"uuid": "",
"lifetime_type": "0",
"enabled_lifetime_type": "2",
"enabled_lifetime": "0",
"state": "0",
"error": "",
"parameters": []
},
{
"itemid": "27426",
"type": "0",
"snmp_oid": "",
"hostid": "10202",
"name": "Mounted filesystem discovery",
"key_": "vfs.fs.discovery",
"delay": "1h",
"status": "0",
"trapper_hosts": "",
"templateid": "22450",
"valuemapid": "0",
"params": "",
"ipmi_sensor": "",
"authtype": "0",
"username": "",
"password": "",
"publickey": "",
"privatekey": "",
"interfaceid": "119",
"description": "Discovery of file systems of different types as defined in global regular expr
"lifetime": "30d",
"jmx_endpoint": "",
"master_itemid": "0",
"timeout": "",
"url": "",
"query_fields": [],
"posts": "",
"status_codes": "200",
"follow_redirects": "1",
"post_type": "0",
"http_proxy": "",
"headers": [],
"retrieve_mode": "0",
"request_method": "0",
"ssl_cert_file": "",
"ssl_key_file": "",
"ssl_key_password": "",
"verify_peer": "0",
"verify_host": "0",
"allow_traps": "0",
"uuid": "",
"lifetime_type": "0",
"enabled_lifetime_type": "2",
"enabled_lifetime": "0",
"state": "0",
"error": "",
"parameters": []
}
],
"id": 1
}
Retrieve the name of the LLD rule ”24681” and its filter conditions. The filter uses the ”and” evaluation type, so the formula
property is empty and eval_formula is generated automatically.
Request:
{
"jsonrpc": "2.0",
"method": "discoveryrule.get",
"params": {
"output": ["name"],
"selectFilter": "extend",
"itemids": ["24681"]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"itemid": "24681",
"name": "Filtered LLD rule",
"filter": {
"evaltype": "1",
"formula": "",
"conditions": [
{
"macro": "{#MACRO1}",
"value": "@regex1",
"operator": "8",
"formulaid": "A"
},
{
"macro": "{#MACRO2}",
"value": "@regex2",
"operator": "9",
"formulaid": "B"
},
{
"macro": "{#MACRO3}",
"value": "",
"operator": "12",
"formulaid": "C"
},
{
"macro": "{#MACRO4}",
"value": "",
"operator": "13",
"formulaid": "D"
}
],
"eval_formula": "A and B and C and D"
}
}
],
"id": 1
}
Retrieve the LLD rule for a host by the rule URL field value. Only an exact match of the URL string defined for the LLD rule is supported.
Request:
{
"jsonrpc": "2.0",
"method": "discoveryrule.get",
"params": {
"hostids": "10257",
"filter": {
"type": 19,
"url": "http://127.0.0.1/discoverer.php"
}
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"itemid": "28336",
"type": "19",
"snmp_oid": "",
"hostid": "10257",
"name": "API HTTP agent",
"key_": "api_discovery_rule",
"delay": "5s",
"status": "0",
"trapper_hosts": "",
"templateid": "0",
"valuemapid": "0",
"params": "",
"ipmi_sensor": "",
"authtype": "0",
"username": "",
"password": "",
"publickey": "",
"privatekey": "",
"interfaceid": "5",
"description": "",
"lifetime": "30d",
"jmx_endpoint": "",
"master_itemid": "0",
"timeout": "",
"url": "http://127.0.0.1/discoverer.php",
"query_fields": [
{
"name": "mode",
"value": "json"
},
{
"name": "elements",
"value": "2"
}
],
"posts": "",
"status_codes": "200",
"follow_redirects": "1",
"post_type": "0",
"http_proxy": "",
"headers": [
{
"name" : "X-Type",
"value": "api"
},
{
"name": "Authorization",
"value": "Bearer mF_A.B5f-2.1JcM"
}
],
"retrieve_mode": "0",
"request_method": "1",
"ssl_cert_file": "",
"ssl_key_file": "",
"ssl_key_password": "",
"verify_peer": "0",
"verify_host": "0",
"allow_traps": "0",
"uuid": "",
"lifetime_type": "0",
"enabled_lifetime_type": "2",
"enabled_lifetime": "0",
"state": "0",
"error": "",
"parameters": []
}
],
"id": 1
}
Retrieve the LLD rule overrides.
Request:
{
"jsonrpc": "2.0",
"method": "discoveryrule.get",
"params": {
"output": ["name"],
"itemids": "30980",
"selectOverrides": ["name", "step", "stop", "filter", "operations"]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"name": "Discover database host",
"overrides": [
{
"name": "Discover MySQL host",
"step": "1",
"stop": "1",
"filter": {
"evaltype": "2",
"formula": "",
"conditions": [
{
"macro": "{#UNIT.NAME}",
"operator": "8",
"value": "^mysqld\\.service$",
"formulaid": "A"
},
{
"macro": "{#UNIT.NAME}",
"operator": "8",
"value": "^mariadb\\.service$",
"formulaid": "B"
}
],
"eval_formula": "A or B"
},
"operations": [
{
"operationobject": "3",
"operator": "2",
"value": "Database host",
"opstatus": {
"status": "0"
},
"optag": [
{
"tag": "Database",
"value": "MySQL"
}
],
"optemplate": [
{
"templateid": "10170"
}
]
}
]
},
{
"name": "Discover PostgreSQL host",
"step": "2",
"stop": "1",
"filter": {
"evaltype": "0",
"formula": "",
"conditions": [
{
"macro": "{#UNIT.NAME}",
"operator": "8",
"value": "^postgresql\\.service$",
"formulaid": "A"
}
],
"eval_formula": "A"
},
"operations": [
{
"operationobject": "3",
"operator": "2",
"value": "Database host",
"opstatus": {
"status": "0"
},
"optag": [
{
"tag": "Database",
"value": "PostgreSQL"
}
],
"optemplate": [
{
"templateid": "10263"
}
]
}
]
}
]
}
],
"id": 1
}
See also
• Graph prototype
• Host
• Item prototype
• LLD rule filter
• Trigger prototype
Source
CDiscoveryRule::get() in ui/include/classes/api/services/CDiscoveryRule.php.
discoveryrule.update
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
(object/array) LLD rule properties to be updated.
The itemid property must be defined for each LLD rule, all other properties are optional. Only the passed properties will be updated,
all others will remain unchanged.
Additionally to the standard LLD rule properties, the method accepts the following parameters.
Parameter Type Description
preprocessing object/array LLD rule preprocessing options to replace the current preprocessing options.
Parameter behavior:
- read-only for inherited objects
lld_macro_paths object/array LLD rule lld_macro_path options to replace the existing lld_macro_path
options.
Parameter behavior:
- read-only for inherited objects
overrides object/array LLD rule overrides options to replace the existing overrides options.
Parameter behavior:
- read-only for inherited objects
Return values
(object) Returns an object containing the IDs of the updated LLD rules under the itemids property.
Examples
Add a filter so that the contents of the {#FSTYPE} macro would match the @File systems for discovery regexp.
Request:
{
"jsonrpc": "2.0",
"method": "discoveryrule.update",
"params": {
"itemid": "22450",
"filter": {
"evaltype": 1,
"conditions": [
{
"macro": "{#FSTYPE}",
"value": "@File systems for discovery"
}
]
}
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"itemids": [
"22450"
]
},
"id": 1
}
Add LLD macro paths to the LLD rule.
Request:
{
"jsonrpc": "2.0",
"method": "discoveryrule.update",
"params": {
"itemid": "22450",
"lld_macro_paths": [
{
"lld_macro": "{#MACRO1}",
"path": "$.json.path"
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"itemids": [
"22450"
]
},
"id": 1
}
Disable trapping
Request:
{
"jsonrpc": "2.0",
"method": "discoveryrule.update",
"params": {
"itemid": "28336",
"allow_traps": 0
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"itemids": [
"28336"
]
},
"id": 1
}
Update the LLD rule preprocessing options with a JSONPath step that sets a custom value on error.
Request:
{
"jsonrpc": "2.0",
"method": "discoveryrule.update",
"params": {
"itemid": "44211",
"preprocessing": [
{
"type": 12,
"params": "$.path.to.json",
"error_handler": 2,
"error_handler_params": "5"
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"itemids": [
"44211"
]
},
"id": 1
}
Update an LLD rule script with a different script and remove unnecessary parameters that were used by the previous script.
Request:
{
"jsonrpc": "2.0",
"method": "discoveryrule.update",
"params": {
"itemid": "23865",
"parameters": [],
"script": "Zabbix.log(3, 'Log test');\nreturn 1;"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"itemids": [
"23865"
]
},
"id": 1
}
Update the LLD rule to disable entities that are no longer discovered after 12 hours, and to delete them after 7 days.
Request:
{
"jsonrpc": "2.0",
"method": "discoveryrule.update",
"params": {
"itemid": "46864",
"lifetime_type": 0,
"lifetime": "7d",
"enabled_lifetime_type": 0,
"enabled_lifetime": "12h"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"itemids": [
"46864"
]
},
"id": 1
}
Source
CDiscoveryRule::update() in ui/include/classes/api/services/CDiscoveryRule.php.
Maintenance
Object references:
• Maintenance
• Time period
• Problem tag
Available methods:
maintenance.create - creating new maintenances
maintenance.delete - deleting maintenances
maintenance.get - retrieving maintenances
maintenance.update - updating maintenances
Maintenance object
The maintenance object has the following properties.
maintenanceid ID ID of the maintenance.
Property behavior:
- read-only
- required for update operations
name string Name of the maintenance.
Property behavior:
- required for create operations
active_since timestamp Time when the maintenance becomes active.
Property behavior:
- required for create operations
active_till timestamp Time when the maintenance stops being active.
Property behavior:
- required for create operations
description string Description of the maintenance.
maintenance_type integer Type of maintenance.
Possible values:
0 - (default) with data collection;
1 - without data collection.
tags_evaltype integer Problem tag evaluation method.
Possible values:
0 - (default) And/Or;
2 - Or.
Time period
The time period object is used to define periods when the maintenance must come into effect. It has the following properties.
period integer Duration of the maintenance period in seconds.
Default: 3600.
timeperiod_type integer Type of time period.
Possible values:
0 - (default) one time only;
2 - daily;
3 - weekly;
4 - monthly.
start_date timestamp Date when the maintenance period must come into effect.
The given value will be rounded down to minutes.
Property behavior:
- supported if timeperiod_type is set to ”one time only”
start_time integer Time of day when the maintenance starts in seconds.
The given value will be rounded down to minutes.
Default: 0.
Property behavior:
- supported if timeperiod_type is set to ”daily”, ”weekly”, or
”monthly”
every integer For daily and weekly periods every defines the day or week intervals
at which the maintenance must come into effect.
Default value if timeperiod_type is set to ”daily” or ”weekly”: 1.
For monthly periods when day is set, the every property defines the
day of the month when the maintenance must come into effect.
Default value if timeperiod_type is set to ”monthly” and day is set:
1.
Property behavior:
- supported if timeperiod_type is set to ”daily”, ”weekly”, or
”monthly”
dayofweek integer Days of the week when the maintenance must come into effect.
Days are stored in binary form with each bit representing the
corresponding day. For example, 4 equals 100 in binary and means
that maintenance will be enabled on Wednesday.
Property behavior:
- required if timeperiod_type is set to ”weekly”, or if timeperiod_type is set to ”monthly” and day is not set
- supported if timeperiod_type is set to ”monthly”
day integer Day of the month when the maintenance must come into effect.
Property behavior:
- required if timeperiod_type is set to ”monthly” and dayofweek is
not set
- supported if timeperiod_type is set to ”monthly”
month integer Months when the maintenance must come into effect.
Months are stored in binary form with each bit representing the
corresponding month. For example, 5 equals 101 in binary and means
that maintenance will be enabled in January and March.
Property behavior:
- required if timeperiod_type is set to ”monthly”
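For instance, using the bit values described above (Monday is bit 1, Sunday is bit 64; January is bit 1, December is bit 2048), a weekly period covering Saturday and Sunday would use dayofweek 32 + 64 = 96, and a monthly period limited to January, June and July would use month 1 + 32 + 64 = 97. A minimal sketch with arbitrary start times and durations:
"timeperiods": [
    {
        "timeperiod_type": 3,
        "every": 1,
        "dayofweek": 96,
        "start_time": 3600,
        "period": 7200
    },
    {
        "timeperiod_type": 4,
        "month": 97,
        "day": 1,
        "start_time": 0,
        "period": 3600
    }
]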
Problem tag
The problem tag object is used to define which problems must be suppressed when the maintenance comes into effect. Tags can
only be specified if maintenance_type of Maintenance object is set to ”with data collection”. It has the following properties.
tag string Problem tag name.
Property behavior:
- required
operator integer Condition operator.
Possible values:
0 - Equals;
2 - (default) Contains.
value string Problem tag value.
maintenance.create
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
(object/array) Maintenances to create.
Additionally to the standard maintenance properties, the method accepts the following parameters.
groups object/array Host groups that will undergo maintenance.
The host groups must have only the groupid property defined.
Parameter behavior:
- required if hosts is not set
hosts object/array Hosts that will undergo maintenance.
Parameter behavior:
- required if groups is not set
timeperiods object/array Maintenance time periods.
Parameter behavior:
- required
tags object/array Problem tags.
Parameter behavior:
- supported if maintenance_type of Maintenance object is set to
”with data collection”
groupids array This parameter is deprecated, please use groups instead.
(deprecated) IDs of the host groups that will undergo maintenance.
hostids array This parameter is deprecated, please use hosts instead.
(deprecated) IDs of the hosts that will undergo maintenance.
Return values
(object) Returns an object containing the IDs of the created maintenances under the maintenanceids property. The order of
the returned IDs matches the order of the passed maintenances.
Examples
Creating a maintenance
Create a maintenance with data collection for host group with ID ”2” and with problem tags service:mysqld and error. It must
be active from 22.01.2013 till 22.01.2014, come into effect each Sunday at 18:00, and last for one hour.
Request:
{
"jsonrpc": "2.0",
"method": "maintenance.create",
"params": {
"name": "Sunday maintenance",
"active_since": 1358844540,
"active_till": 1390466940,
"tags_evaltype": 0,
"groups": [
{"groupid": "2"}
],
"timeperiods": [
{
"period": 3600,
"timeperiod_type": 3,
"start_time": 64800,
"every": 1,
"dayofweek": 64
}
],
"tags": [
{
"tag": "service",
"operator": "0",
"value": "mysqld"
},
{
"tag": "error",
"operator": "2",
"value": ""
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"maintenanceids": [
"3"
]
},
"id": 1
}
See also
• Time period
Source
CMaintenance::create() in ui/include/classes/api/services/CMaintenance.php.
maintenance.delete
Description
object maintenance.delete(array maintenanceIds)
This method allows to delete maintenance periods.
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
(array) IDs of the maintenance periods to delete.
Return values
(object) Returns an object containing the IDs of the deleted maintenance periods under the maintenanceids property.
Examples
Delete multiple maintenance periods.
Request:
{
"jsonrpc": "2.0",
"method": "maintenance.delete",
"params": [
"3",
"1"
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"maintenanceids": [
"3",
"1"
]
},
"id": 1
}
Source
CMaintenance::delete() in ui/include/classes/api/services/CMaintenance.php.
maintenance.get
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
Parameter Type Description
groupids ID/array Return only maintenances that are assigned to the given host groups.
hostids ID/array Return only maintenances that are assigned to the given hosts.
maintenanceids ID/array Return only maintenances with the given IDs.
selectHostGroups query Return a hostgroups property with host groups assigned to the
maintenance.
selectHosts query Return a hosts property with hosts assigned to the maintenance.
selectTags query Return a tags property with problem tags of the maintenance.
selectTimeperiods query Return a timeperiods property with time periods of the maintenance.
sortfield string/array Sort the result by the given properties.
Return values
(integer/array) Returns either an array of objects or the count of retrieved objects, if the countOutput parameter has been used.
Examples
Retrieving maintenances
Retrieve all configured maintenances, and the data about the assigned host groups, defined time periods and problem tags.
Request:
{
"jsonrpc": "2.0",
"method": "maintenance.get",
"params": {
"output": "extend",
"selectHostGroups": "extend",
"selectTimeperiods": "extend",
"selectTags": "extend"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"maintenanceid": "3",
"name": "Sunday maintenance",
"maintenance_type": "0",
"description": "",
"active_since": "1358844540",
"active_till": "1390466940",
"tags_evaltype": "0",
"hostgroups": [
{
"groupid": "4",
"name": "Zabbix servers",
"flags": "0",
"uuid": "6f6799aa69e844b4b3918f779f2abf08"
}
],
"timeperiods": [
{
"timeperiod_type": "3",
"every": "1",
"month": "0",
"dayofweek": "1",
"day": "0",
"start_time": "64800",
"period": "3600",
"start_date": "2147483647"
}
],
"tags": [
{
"tag": "service",
"operator": "0",
"value": "mysqld",
},
{
"tag": "error",
"operator": "2",
"value": ""
}
]
}
],
"id": 1
}
See also
• Host
• Host group
• Time period
Source
CMaintenance::get() in ui/include/classes/api/services/CMaintenance.php.
maintenance.update
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
(object/array) Maintenance properties to be updated.
The maintenanceid property must be defined for each maintenance, all other properties are optional. Only the passed properties
will be updated, all others will remain unchanged.
Additionally to the standard maintenance properties, the method accepts the following parameters.
groups object/array Host groups to replace the current host groups.
The host groups must have only the groupid property defined.
Parameter behavior:
- required if hosts is not set
hosts object/array Hosts to replace the current hosts.
Parameter behavior:
- required if groups is not set
timeperiods object/array Maintenance time periods to replace the current periods.
tags object/array Problem tags to replace the current tags.
Parameter behavior:
- supported if maintenance_type of Maintenance object is set to
”with data collection”
groupids array This parameter is deprecated, please use groups instead.
(deprecated) IDs of the host groups that will undergo maintenance.
hostids array This parameter is deprecated, please use hosts instead.
(deprecated) IDs of the hosts that will undergo maintenance.
Return values
(object) Returns an object containing the IDs of the updated maintenances under the maintenanceids property.
Examples
Replace the hosts currently assigned to maintenance with two different ones.
Request:
{
"jsonrpc": "2.0",
"method": "maintenance.update",
"params": {
"maintenanceid": "3",
"hosts": [
{"hostid": "10085"},
{"hostid": "10084"}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"maintenanceids": [
"3"
]
},
"id": 1
}
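Similarly, the time periods of a maintenance can be replaced in a single call. The sketch below (hypothetical ID and values) swaps the current periods for one daily one-hour window starting at 06:00:
Request:
{
    "jsonrpc": "2.0",
    "method": "maintenance.update",
    "params": {
        "maintenanceid": "3",
        "timeperiods": [
            {
                "timeperiod_type": 2,
                "every": 1,
                "start_time": 21600,
                "period": 3600
            }
        ]
    },
    "id": 1
}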
See also
• Time period
Source
CMaintenance::update() in ui/include/classes/api/services/CMaintenance.php.
Map
Object references:
• Map
• Map element
• Map element Host
• Map element Host group
• Map element Map
• Map element Trigger
• Map element tag
• Map element URL
• Map link
• Map link trigger
• Map URL
• Map user
• Map user group
• Map shapes
• Map lines
Available methods:
map.create - creating new maps
map.delete - deleting maps
map.get - retrieving maps
map.update - updating maps
Map object
The map object has the following properties.
sysmapid ID ID of the map.
Property behavior:
- read-only
- required for update operations
height integer Height of the map in pixels.
Property behavior:
- required for create operations
name string Name of the map.
Property behavior:
- required for create operations
width integer Width of the map in pixels.
Property behavior:
- required for create operations
backgroundid ID ID of the image used as the background for the map.
expand_macros integer Whether to expand macros in labels when configuring the map.
Possible values:
0 - (default) do not expand macros;
1 - expand macros.
expandproblem integer Whether the problem trigger will be displayed for elements with a
single problem.
Possible values:
0 - always display the number of problems;
1 - (default) display the problem trigger if there’s only one problem.
grid_align integer Whether to enable grid aligning.
Possible values:
0 - disable grid aligning;
1 - (default) enable grid aligning.
grid_show integer Whether to show the grid on the map.
Possible values:
0 - do not show the grid;
1 - (default) show the grid.
grid_size integer Size of the map grid in pixels.
Default: 50.
highlight integer Whether icon highlighting is enabled.
Possible values:
0 - highlighting disabled;
1 - (default) highlighting enabled.
iconmapid ID ID of the icon map used on the map.
label_format integer Whether to enable advanced labels.
Possible values:
0 - (default) disable advanced labels;
1 - enable advanced labels.
label_location integer Location of the map element label.
Possible values:
0 - (default) bottom;
1 - left;
2 - right;
3 - top.
label_string_host string Custom label for host elements.
Property behavior:
- required if label_type_host is set to ”custom”
label_string_hostgroup string Custom label for host group elements.
Property behavior:
- required if label_type_hostgroup is set to ”custom”
label_string_image string Custom label for image elements.
Property behavior:
- required if label_type_image is set to ”custom”
label_string_map string Custom label for map elements.
Property behavior:
- required if label_type_map is set to ”custom”
label_string_trigger string Custom label for trigger elements.
Property behavior:
- required if label_type_trigger is set to ”custom”
label_type integer Map element label type.
Possible values:
0 - label;
1 - IP address;
2 - (default) element name;
3 - status only;
4 - nothing.
label_type_host integer Label type for host elements.
Possible values:
0 - label;
1 - IP address;
2 - (default) element name;
3 - status only;
4 - nothing;
5 - custom.
label_type_hostgroup integer Label type for host group elements.
Possible values:
0 - label;
2 - (default) element name;
3 - status only;
4 - nothing;
5 - custom.
label_type_image integer Label type for image elements.
Possible values:
0 - label;
2 - (default) element name;
4 - nothing;
5 - custom.
label_type_map integer Label type for map elements.
Possible values:
0 - label;
2 - (default) element name;
3 - status only;
4 - nothing;
5 - custom.
label_type_trigger integer Label type for trigger elements.
Possible values:
0 - label;
2 - (default) element name;
3 - status only;
4 - nothing;
5 - custom.
markelements integer Whether to highlight map elements that have recently changed their
status.
Possible values:
0 - (default) do not highlight elements;
1 - highlight elements.
severity_min integer Minimum severity of the triggers that will be displayed on the map.
show_unack integer How problems should be displayed.
Possible values:
0 - (default) display the count of all problems;
1 - display only the count of unacknowledged problems;
2 - display the count of acknowledged and unacknowledged problems
separately.
userid ID ID of the user that is the owner of the map.
private integer Type of map sharing.
Possible values:
0 - public map;
1 - (default) private map.
show_suppressed integer Whether suppressed problems are shown.
Possible values:
0 - (default) hide suppressed problems;
1 - show suppressed problems.
Map element
The map element object defines an object displayed on a map. It has the following properties.
selementid ID ID of the map element.
Property behavior:
- read-only
elements array Element data object.
Property behavior:
- required if elementtype is set to ”host”, ”map”, ”trigger” or ”host
group”
elementtype integer Type of map element.
Possible values:
0 - host;
1 - map;
2 - trigger;
3 - host group;
4 - image.
Property behavior:
- required
iconid_off ID ID of the image used to display the element in default state.
Property behavior:
- required
areatype integer How separate host group hosts should be displayed.
Possible values:
0 - (default) the host group element will take up the whole map;
1 - the host group element will have a fixed size.
elementsubtype integer How a host group element should be displayed on a map.
Possible values:
0 - (default) display the host group as a single element;
1 - display each host in the group separately.
evaltype integer Map element tag evaluation method.
Possible values:
0 - (default) AND / OR;
2 - OR.
height integer Height of the fixed size host group element in pixels.
Default: 200.
iconid_disabled ID ID of the image used to display disabled map elements.
Property behavior:
- supported if elementtype is set to ”host”, ”map”, ”trigger”, or ”host
group”
iconid_maintenance ID ID of the image used to display map elements in maintenance.
Property behavior:
- supported if elementtype is set to ”host”, ”map”, ”trigger”, or ”host
group”
iconid_on ID ID of the image used to display map elements with problems.
Property behavior:
- supported if elementtype is set to ”host”, ”map”, ”trigger”, or ”host
group”
label string Label of the element.
label_location integer Location of the map element label.
Possible values:
-1 - (default) default location;
0 - bottom;
1 - left;
2 - right;
3 - top.
permission integer Type of permission level.
Possible values:
-1 - none;
2 - read only;
3 - read-write.
sysmapid ID ID of the map that the element belongs to.
Property behavior:
- read-only
urls array Map element URLs.
use_iconmap integer Whether icon mapping must be used for host elements.
Possible values:
0 - do not use icon mapping;
1 - (default) use icon mapping.
viewtype integer Host group element placing algorithm.
Possible values:
0 - (default) grid.
width integer Width of the fixed size host group element in pixels.
Default: 200.
x integer X-coordinates of the element in pixels.
Default: 0.
y integer Y-coordinates of the element in pixels.
Default: 0.
The map element Host group object defines one host group element.
The map element Trigger object defines one or more trigger elements.
Map element tag
The map element tag object has the following properties:
tag string Map element tag name.
Property behavior:
- required
operator integer Map element tag condition operator.
Possible values:
0 - (default) Contains;
1 - Equals;
2 - Does not contain;
3 - Does not equal;
4 - Exists;
5 - Does not exist.
value string Map element tag value.
The map element URL object defines a clickable link that will be available for a specific map element. It has the following properties:
Property Type Description
sysmapelementurlid ID ID of the map element URL.
Property behavior:
- read-only
name string Link caption.
Property behavior:
- required
url string Link URL.
Property behavior:
- required
selementid ID ID of the map element that the URL belongs to.
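For example, a map element could carry a clickable link such as the following (a minimal sketch; the caption and URL are arbitrary):
"urls": [
    {
        "name": "Host documentation",
        "url": "https://wiki.example.com/monitored-hosts"
    }
]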
Map link
The map link object defines a link between two map elements. It has the following properties.
linkid ID ID of the map link.
Property behavior:
- read-only
selementid1 ID ID of the first map element linked on one end.
Property behavior:
- required
selementid2 ID ID of the second map element linked on the other end.
Property behavior:
- required
color string Line color as a hexadecimal color code.
Default: 000000.
drawtype integer Link line draw style.
Possible values:
0 - (default) line;
2 - bold line;
3 - dotted line;
4 - dashed line.
label string Link label.
linktriggers array Map link triggers to use as link status indicators.
permission integer Type of permission level.
Possible values:
-1 - none;
2 - read only;
3 - read-write.
sysmapid ID ID of the map the link belongs to.
The map link trigger object defines a map link status indicator based on the state of a trigger. It has the following properties:
Property Type Description
linktriggerid ID ID of the map link trigger.
Property behavior:
- read-only
triggerid ID ID of the trigger used as a link indicator.
Property behavior:
- required
color string Indicator color as a hexadecimal color code.
Default: DD0000.
drawtype integer Indicator draw style.
Possible values:
0 - (default) line;
2 - bold line;
3 - dotted line;
4 - dashed line.
linkid ID ID of the map link that the link trigger belongs to.
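A link that changes its style while a trigger is in problem state could be defined roughly like this inside a map.create or map.update call (a minimal sketch; the element and trigger IDs are placeholders):
"links": [
    {
        "selementid1": "1",
        "selementid2": "2",
        "color": "00CC00",
        "linktriggers": [
            {
                "triggerid": "13367",
                "color": "DD0000",
                "drawtype": 4
            }
        ]
    }
]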
Map URL
The map URL object defines a clickable link that will be available for all elements of a specific type on the map. It has the following
properties:
sysmapurlid ID ID of the map URL.
Property behavior:
- read-only
name string Link caption.
Property behavior:
- required
url string Link URL.
Property behavior:
- required
elementtype integer Type of map element for which the URL will be available.
Refer to the map element type property for a list of supported types.
Default: 0.
sysmapid ID ID of the map that the URL belongs to.
Map user
List of map permissions based on users. It has the following properties:
sysmapuserid ID ID of the map user.
Property behavior:
- read-only
userid ID ID of the user.
Property behavior:
- required
permission integer Type of permission level.
Possible values:
2 - read only;
3 - read-write.
Property behavior:
- required
Map user group
List of map permissions based on user groups. It has the following properties:
sysmapusrgrpid ID ID of the map user group.
Property behavior:
- read-only
usrgrpid ID ID of the user group.
Property behavior:
- required
permission integer Type of permission level.
Possible values:
2 - read only;
3 - read-write.
Property behavior:
- required
Map shapes
The map shape object defines a geometric shape (with or without text) displayed on a map. It has the following properties:
sysmap_shapeid ID ID of the map shape element.
Property behavior:
- read-only
type integer Type of map shape element.
Possible values:
0 - rectangle;
1 - ellipse.
Property behavior:
- required
x integer X-coordinates of the shape in pixels.
Default: 0.
y integer Y-coordinates of the shape in pixels.
Default: 0.
width integer Width of the shape in pixels.
Default: 200.
height integer Height of the shape in pixels.
Default: 200.
text string Text of the shape.
font integer Font of the text within shape.
Possible values:
0 - Georgia, serif
1 - “Palatino Linotype”, “Book Antiqua”, Palatino, serif
2 - “Times New Roman”, Times, serif
3 - Arial, Helvetica, sans-serif
4 - “Arial Black”, Gadget, sans-serif
5 - “Comic Sans MS”, cursive, sans-serif
6 - Impact, Charcoal, sans-serif
7 - “Lucida Sans Unicode”, “Lucida Grande”, sans-serif
8 - Tahoma, Geneva, sans-serif
9 - “Trebuchet MS”, Helvetica, sans-serif
10 - Verdana, Geneva, sans-serif
11 - “Courier New”, Courier, monospace
12 - “Lucida Console”, Monaco, monospace
Default: 9.
font_size integer Font size in pixels.
Default: 11.
font_color string Font color.
Default: 000000.
text_halign integer Horizontal alignment of text.
Possible values:
0 - center;
1 - left;
2 - right.
Default: 0.
text_valign integer Vertical alignment of text.
Possible values:
0 - middle;
1 - top;
2 - bottom.
Default: 0.
border_type integer Type of the border.
Possible values:
0 - none;
1 - —————;
2 - ·····;
3 - - - -.
Default: 0.
border_width integer Width of the border in pixels.
Default: 0.
border_color string Border color.
Default: 000000.
background_color string Background color (fill color).
Default: (empty).
zindex integer Value used to order all shapes and lines (z-index).
Default: 0.
Map lines
The map line object defines a line displayed on a map. It has the following properties:
sysmap_shapeid ID ID of the map line element.
Property behavior:
- read-only
x1 integer X-coordinates of the line point 1 in pixels.
Default: 0.
y1 integer Y-coordinates of the line point 1 in pixels.
Default: 0.
x2 integer X-coordinates of the line point 2 in pixels.
Default: 200.
y2 integer Y-coordinates of the line point 2 in pixels.
Default: 200.
line_type integer Type of the lines.
Possible values:
0 - none;
1 - —————;
2 - ·····;
3 - - - -.
Default: 0.
line_width integer Width of the lines in pixels.
Default: 0.
line_color string Line color.
Default: 000000.
zindex integer Value used to order all shapes and lines (z-index).
Default: 0.
map.create
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
(object/array) Maps to create.
Additionally to the standard map properties, the method accepts the following parameters.
Note:
To create map links you’ll need to set a map element selementid to an arbitrary value and then use this value to reference
this element in the links selementid1 or selementid2 properties. When the element is created, this value will be
replaced with the correct ID generated by Zabbix. See example.
Return values
(object) Returns an object containing the IDs of the created maps under the sysmapids property. The order of the returned IDs
matches the order of the passed maps.
Examples
Create an empty map with a size of 600x600 pixels.
Request:
{
"jsonrpc": "2.0",
"method": "map.create",
"params": {
"name": "Map",
"width": 600,
"height": 600
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"sysmapids": [
"8"
]
},
"id": 1
}
Create a map with two host elements and a link between them. Note the use of temporary ”selementid1” and ”selementid2”
values in the map link object to refer to map elements.
Request:
{
"jsonrpc": "2.0",
"method": "map.create",
"params": {
"name": "Host map",
"width": 600,
"height": 600,
"selements": [
{
"selementid": "1",
"elements": [
{"hostid": "1033"}
],
"elementtype": 0,
"iconid_off": "2"
},
{
"selementid": "2",
"elements": [
{"hostid": "1037"}
],
"elementtype": 0,
"iconid_off": "2"
}
],
"links": [
{
"selementid1": "1",
"selementid2": "2"
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"sysmapids": [
"9"
]
},
"id": 1
}
Create a map with one map element that contains two triggers.
Request:
{
"jsonrpc": "2.0",
"method": "map.create",
"params": {
"name": "Trigger map",
"width": 600,
"height": 600,
"selements": [
{
"elements": [
{"triggerid": "12345"},
{"triggerid": "67890"}
],
"elementtype": 2,
"iconid_off": "2"
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"sysmapids": [
"10"
]
},
"id": 1
}
Map sharing
Create a map with two types of sharing (user and user group).
Request:
{
"jsonrpc": "2.0",
"method": "map.create",
"params": {
"name": "Map sharing",
"width": 600,
"height": 600,
"users": [
{
"userid": "4",
"permission": "3"
}
],
"userGroups": [
{
"usrgrpid": "7",
"permission": "2"
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"sysmapids": [
"9"
]
},
"id": 1
}
Map shapes
Create a map with a rectangle shape that displays the map name.
Request:
{
"jsonrpc": "2.0",
"method": "map.create",
"params": {
"name": "Host map",
"width": 600,
"height": 600,
"shapes": [
{
"type": 0,
"x": 0,
"y": 0,
"width": 600,
"height": 11,
"text": "{MAP.NAME}"
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"sysmapids": [
"10"
]
},
"id": 1
}
Map lines
Add a line to a map.
Request:
{
"jsonrpc": "2.0",
"method": "map.create",
"params": {
"name": "Map API lines",
"width": 500,
"height": 500,
"lines": [
{
"x1": 30,
"y1": 10,
"x2": 100,
"y2": 50,
"line_type": 1,
"line_width": 10,
"line_color": "009900"
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"sysmapids": [
"11"
]
},
"id": 1
}
See also
• Map element
• Map link
• Map URL
• Map user
• Map user group
• Map shape
• Map line
Source
CMap::create() in ui/include/classes/api/services/CMap.php.
map.delete
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
(array) IDs of the maps to delete.
Return values
(object) Returns an object containing the IDs of the deleted maps under the sysmapids property.
Examples
Delete multiple maps.
Request:
{
"jsonrpc": "2.0",
"method": "map.delete",
"params": [
"12",
"34"
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"sysmapids": [
"12",
"34"
]
},
"id": 1
}
Source
CMap::delete() in ui/include/classes/api/services/CMap.php.
map.get
Description
integer/array map.get(object parameters)
The method allows to retrieve maps according to the given parameters.
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
Return values
(integer/array) Returns either an array of objects or the count of retrieved objects, if the countOutput parameter has been used.
Examples
Retrieve a map
Request:
{
"jsonrpc": "2.0",
"method": "map.get",
"params": {
"output": "extend",
"selectSelements": "extend",
"selectLinks": "extend",
"selectUsers": "extend",
"selectUserGroups": "extend",
"selectShapes": "extend",
"selectLines": "extend",
"sysmapids": "3"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"selements": [
{
"selementid": "10",
"sysmapid": "3",
"elementtype": "4",
"evaltype": "0",
"iconid_off": "1",
"iconid_on": "0",
"label": "Zabbix server",
"label_location": "3",
"x": "11",
"y": "141",
"iconid_disabled": "0",
"iconid_maintenance": "0",
"elementsubtype": "0",
"areatype": "0",
"width": "200",
"height": "200",
"tags": [
{
"tag": "service",
"value": "mysqld",
"operator": "0"
}
],
"viewtype": "0",
"use_iconmap": "1",
"urls": [],
"elements": []
},
{
"selementid": "11",
"sysmapid": "3",
"elementtype": "4",
"evaltype": "0",
"iconid_off": "1",
"iconid_on": "0",
"label": "Web server",
"label_location": "3",
"x": "211",
"y": "191",
"iconid_disabled": "0",
"iconid_maintenance": "0",
"elementsubtype": "0",
"areatype": "0",
"width": "200",
"height": "200",
"viewtype": "0",
"use_iconmap": "1",
"tags": [],
"urls": [],
"elements": []
},
{
"selementid": "12",
"sysmapid": "3",
"elementtype": "0",
"evaltype": "0",
"iconid_off": "185",
"iconid_on": "0",
"label": "{HOST.NAME}\r\n{HOST.CONN}",
"label_location": "0",
"x": "111",
"y": "61",
"iconid_disabled": "0",
"iconid_maintenance": "0",
"elementsubtype": "0",
"areatype": "0",
"width": "200",
"height": "200",
"viewtype": "0",
"use_iconmap": "0",
"tags": [],
"urls": [],
"elements": [
{
"hostid": "10084"
}
]
}
],
"links": [
{
"linkid": "23",
"sysmapid": "3",
"selementid1": "10",
"selementid2": "11",
"drawtype": "0",
"color": "00CC00",
"label": "",
"linktriggers": []
}
],
"users": [
{
"sysmapuserid": "1",
"userid": "2",
"permission": "2"
}
],
"userGroups": [
{
"sysmapusrgrpid": "1",
"usrgrpid": "7",
"permission": "2"
}
],
"shapes":[
{
"sysmap_shapeid":"1",
"type":"0",
"x":"0",
"y":"0",
"width":"680",
"height":"15",
"text":"{MAP.NAME}",
"font":"9",
"font_size":"11",
"font_color":"000000",
"text_halign":"0",
"text_valign":"0",
"border_type":"0",
"border_width":"0",
"border_color":"000000",
"background_color":"",
"zindex":"0"
}
],
"lines":[
{
"sysmap_shapeid":"2",
"x1": 30,
"y1": 10,
"x2": 100,
"y2": 50,
"line_type": 1,
"line_width": 10,
"line_color": "009900",
"zindex":"1"
}
],
"sysmapid": "3",
"name": "Local network",
"width": "400",
"height": "400",
"backgroundid": "0",
"label_type": "2",
"label_location": "3",
"highlight": "1",
"expandproblem": "1",
"markelements": "0",
"show_unack": "0",
"grid_size": "50",
"grid_show": "1",
"grid_align": "1",
"label_format": "0",
"label_type_host": "2",
"label_type_hostgroup": "2",
"label_type_trigger": "2",
"label_type_map": "2",
"label_type_image": "2",
"label_string_host": "",
"label_string_hostgroup": "",
"label_string_trigger": "",
"label_string_map": "",
"label_string_image": "",
"iconmapid": "0",
"expand_macros": "0",
"severity_min": "0",
"userid": "1",
"private": "1",
"show_suppressed": "1"
}
],
"id": 1
}
See also
• Icon map
• Map element
• Map link
• Map URL
• Map user
• Map user group
• Map shapes
• Map lines
Source
CMap::get() in ui/include/classes/api/services/CMap.php.
map.update
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
(object/array) Map properties to be updated.
The sysmapid property must be defined for each map, all other properties are optional. Only the passed properties will be updated, all
others will remain unchanged.
Additionally to the standard map properties, the method accepts the following parameters.
Note:
To create map links between new map elements you’ll need to set an element’s selementid to an arbitrary value and
then use this value to reference this element in the links selementid1 or selementid2 properties. When the element
is created, this value will be replaced with the correct ID generated by Zabbix. See example for map.create.
Return values
(object) Returns an object containing the IDs of the updated maps under the sysmapids property.
Examples
Resize a map
Request:
{
"jsonrpc": "2.0",
"method": "map.update",
"params": {
"sysmapid": "8",
"width": 1200,
"height": 1200
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"sysmapids": [
"8"
]
},
"id": 1
}
Change the map owner to another user.
Request:
{
"jsonrpc": "2.0",
"method": "map.update",
"params": {
"sysmapid": "9",
"userid": "1"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"sysmapids": [
"9"
]
},
"id": 1
}
See also
• Map element
• Map link
• Map URL
• Map user
• Map user group
• Map shapes
• Map lines
Source
CMap::update() in ui/include/classes/api/services/CMap.php.
Media type
This class is designed to work with media types.
Object references:
• Media type
• Webhook parameters
• Script parameters
• Message template
Available methods:
mediatype.create - creating new media types
mediatype.delete - deleting media types
mediatype.get - retrieving media types
mediatype.update - updating media types
Media type object
The media type object has the following properties.
mediatypeid ID ID of the media type.
Property behavior:
- read-only
- required for update operations
name string Name of the media type.
Property behavior:
- required for create operations
type integer Transport used by the media type.
Possible values:
0 - Email;
1 - Script;
2 - SMS;
4 - Webhook.
Property behavior:
- required for create operations
exec_path string For script media types exec_path contains the name of the executed
script.
Property behavior:
- required if type is set to ”Script”
gsm_modem string Serial device name of the GSM modem.
Property behavior:
- required if type is set to ”SMS”
passwd string Authentication password.
Property behavior:
- supported if type is set to ”Email”
smtp_email string Email address from which notifications will be sent.
Property behavior:
- required if type is set to ”Email”
smtp_helo string SMTP HELO.
Property behavior:
- required if type is set to ”Email”
smtp_server string SMTP server.
Property behavior:
- required if type is set to ”Email”
smtp_port integer SMTP server port to connect to.
smtp_security integer SMTP connection security level to use.
Possible values:
0 - None;
1 - STARTTLS;
2 - SSL/TLS.
smtp_verify_host integer SSL verify host for SMTP.
Possible values:
0 - No;
1 - Yes.
smtp_verify_peer integer SSL verify peer for SMTP.
Possible values:
0 - No;
1 - Yes.
smtp_authentication integer SMTP authentication method to use.
Possible values:
0 - None;
1 - Normal password.
status integer Whether the media type is enabled.
Possible values:
0 - (default) Enabled;
1 - Disabled.
username string User name.
Property behavior:
- supported if type is set to ”Email”
maxsessions integer The maximum number of alerts that can be processed in parallel.
Default value: 1.
maxattempts integer The maximum number of attempts to send an alert.
Default value: 3.
attempt_interval string The interval between retry attempts. Accepts seconds and time unit
with suffix.
Default: 10s.
message_format integer Message format.
Possible values:
0 - plain text;
1 - (default) html.
script string Media type webhook script javascript body.
timeout string Media type webhook script timeout.
Accepts seconds and time unit with suffix.
Default: 30s.
process_tags integer Defines whether the webhook script response should be interpreted as tags
and whether these tags should be added to the associated event.
Possible values:
0 - (default) Ignore webhook script response;
1 - Process webhook script response as tags.
show_event_menu integer Show media type entry in problem.get and event.get property
urls.
Possible values:
0 - (default) Do not add urls entry;
1 - Add media type to urls property.
event_menu_url string Define url property of media type entry in urls property of
problem.get and event.get.
event_menu_name string Define name property of media type entry in urls property of
problem.get and event.get.
parameters array Array of webhook or script input parameters.
description string Media type description.
Webhook parameters
Parameters passed to a webhook script when it is being called have the following properties.
Property behavior:
- required
value string Parameter value, supports macros.
Supported macros are described on the Supported macros page.
Script parameters
Parameters passed to a script when it is being called have the following properties.
sortorder integer The order in which the parameters will be passed to the script as
command-line arguments, starting with 0 as the first one.
Property behavior:
- required
value string Parameter value, supports macros.
Supported macros are described on the Supported macros page.
Message template
The message template object defines a template that will be used as a default message for action operations to send a notification.
It has the following properties.
eventsource integer Event source.
Possible values:
0 - triggers;
1 - discovery;
2 - autoregistration;
3 - internal;
4 - services.
Property behavior:
- required
recovery integer Operation mode.
Possible values:
0 - operations;
1 - recovery operations;
2 - update operations.
Property behavior:
- required
subject string Message subject.
message string Message text.
mediatype.create
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
parameters array Script or webhook parameters to be created for the media type.
message_templates array Message templates to be created for the media type.
Return values
(object) Returns an object containing the IDs of the created media types under the mediatypeids property. The order of the
returned IDs matches the order of the passed media types.
Examples
Create a new email media type with a custom SMTP port and message templates.
Request:
{
"jsonrpc": "2.0",
"method": "mediatype.create",
"params": {
"type": "0",
"name": "Email",
"smtp_server": "mail.example.com",
"smtp_helo": "example.com",
"smtp_email": "[email protected]",
"smtp_port": "587",
"message_format": "1",
"message_templates": [
{
"eventsource": "0",
"recovery": "0",
"subject": "Problem: {EVENT.NAME}",
"message": "Problem \"{EVENT.NAME}\" on host \"{HOST.NAME}\" started at {EVENT.TIME}."
},
{
"eventsource": "0",
"recovery": "1",
"subject": "Resolved in {EVENT.DURATION}: {EVENT.NAME}",
"message": "Problem \"{EVENT.NAME}\" on host \"{HOST.NAME}\" has been resolved at {EVENT.R
},
{
"eventsource": "0",
"recovery": "2",
"subject": "Updated problem in {EVENT.AGE}: {EVENT.NAME}",
"message": "{USER.FULLNAME} {EVENT.UPDATE.ACTION} problem \"{EVENT.NAME}\" on host \"{HOST
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"mediatypeids": [
"7"
]
},
"id": 1
}
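An SMS media type only needs the serial device of the GSM modem (see the gsm_modem property above). A minimal sketch, with the device path borrowed from the mediatype.get example further below:
Request:
{
    "jsonrpc": "2.0",
    "method": "mediatype.create",
    "params": {
        "type": "2",
        "name": "SMS via modem",
        "gsm_modem": "/dev/ttyS0"
    },
    "id": 1
}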
Create a new script media type with a custom value for the number of attempts and the interval between them.
Request:
{
"jsonrpc": "2.0",
"method": "mediatype.create",
"params": {
"type": "1",
"name": "Push notifications",
"exec_path": "push-notification.sh",
"maxattempts": "5",
"attempt_interval": "11s",
"parameters": [
{
"sortorder": "0",
"value": "{ALERT.SENDTO}"
},
{
"sortorder": "1",
"value": "{ALERT.SUBJECT}"
},
{
"sortorder": "2",
"value": "{ALERT.MESSAGE}"
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"mediatypeids": [
"8"
]
},
"id": 1
}
Create a webhook media type.
Request:
{
"jsonrpc": "2.0",
"method": "mediatype.create",
"params": {
"type": "4",
"name": "Webhook",
"script": "var Webhook = {\r\n token: null,\r\n to: null,\r\n subject: null,\r\n messa
"parameters": [
{
"name": "Message",
"value": "{ALERT.MESSAGE}"
},
{
"name": "Subject",
"value": "{ALERT.SUBJECT}"
},
{
"name": "To",
"value": "{ALERT.SENDTO}"
},
{
"name": "Token",
"value": "<Token>"
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"mediatypeids": [
"9"
]
},
"id": 1
}
Source
CMediaType::create() in ui/include/classes/api/services/CMediaType.php.
mediatype.delete
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
(array) IDs of the media types to delete.
Return values
(object) Returns an object containing the IDs of the deleted media types under the mediatypeids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "mediatype.delete",
"params": [
"3",
"5"
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"mediatypeids": [
"3",
"5"
]
},
"id": 1
}
Source
CMediaType::delete() in ui/include/classes/api/services/CMediaType.php.
mediatype.get
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
mediatypeids ID/array Return only media types with the given IDs.
mediaids ID/array Return only media types used by the given media.
userids ID/array Return only media types used by the given users.
selectActions query Return an actions property with the actions that use the media type.
selectMessageTemplates query Return a message_templates property with an array of media type
messages.
selectUsers query Return a users property with the users that use the media type.
sortfield string/array Sort the result by the given properties.
Return values
(integer/array) Returns either:
• an array of objects;
• the count of retrieved objects, if the countOutput parameter has been used.
Examples
Retrieve all configured media types. The following example returns two media types:
Request:
{
"jsonrpc": "2.0",
"method": "mediatype.get",
"params": {
"output": "extend",
"selectMessageTemplates": "extend"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"mediatypeid": "1",
"type": "0",
"name": "Email",
"smtp_server": "mail.example.com",
"smtp_helo": "example.com",
"smtp_email": "[email protected]",
"exec_path": "",
"gsm_modem": "",
"username": "",
"passwd": "",
"status": "0",
"smtp_port": "25",
"smtp_security": "0",
"smtp_verify_peer": "0",
"smtp_verify_host": "0",
"smtp_authentication": "0",
"maxsessions": "1",
"maxattempts": "3",
"attempt_interval": "10s",
"message_format": "0",
"script": "",
"timeout": "30s",
"process_tags": "0",
"show_event_menu": "1",
"event_menu_url": "",
"event_menu_name": "",
"description": "",
"message_templates": [
{
"eventsource": "0",
"recovery": "0",
"subject": "Problem: {EVENT.NAME}",
"message": "Problem started at {EVENT.TIME} on {EVENT.DATE}\r\nProblem name: {EVENT.NA
},
{
"eventsource": "0",
"recovery": "1",
"subject": "Resolved: {EVENT.NAME}",
"message": "Problem has been resolved at {EVENT.RECOVERY.TIME} on {EVENT.RECOVERY.DATE
},
{
"eventsource": "0",
"recovery": "2",
"subject": "Updated problem: {EVENT.NAME}",
"message": "{USER.FULLNAME} {EVENT.UPDATE.ACTION} problem at {EVENT.UPDATE.DATE} {EVEN
},
{
"eventsource": "1",
"recovery": "0",
"subject": "Discovery: {DISCOVERY.DEVICE.STATUS} {DISCOVERY.DEVICE.IPADDRESS}",
"message": "Discovery rule: {DISCOVERY.RULE.NAME}\r\n\r\nDevice IP: {DISCOVERY.DEVICE.
},
{
"eventsource": "2",
"recovery": "0",
"subject": "Autoregistration: {HOST.HOST}",
"message": "Host name: {HOST.HOST}\r\nHost IP: {HOST.IP}\r\nAgent port: {HOST.PORT}"
}
],
"parameters": []
},
{
"mediatypeid": "3",
"type": "2",
"name": "SMS",
"smtp_server": "",
"smtp_helo": "",
"smtp_email": "",
"exec_path": "",
"gsm_modem": "/dev/ttyS0",
"username": "",
"passwd": "",
"status": "0",
"smtp_port": "25",
"smtp_security": "0",
"smtp_verify_peer": "0",
"smtp_verify_host": "0",
"smtp_authentication": "0",
"maxsessions": "1",
"maxattempts": "3",
"attempt_interval": "10s",
"message_format": "1",
"script": "",
"timeout": "30s",
"process_tags": "0",
"show_event_menu": "1",
"event_menu_url": "",
"event_menu_name": "",
"description": "",
"message_templates": [
{
"eventsource": "0",
"recovery": "0",
"subject": "",
"message": "{EVENT.SEVERITY}: {EVENT.NAME}\r\nHost: {HOST.NAME}\r\n{EVENT.DATE} {EVENT
},
{
"eventsource": "0",
"recovery": "1",
"subject": "",
"message": "RESOLVED: {EVENT.NAME}\r\nHost: {HOST.NAME}\r\n{EVENT.DATE} {EVENT.TIME}"
},
{
"eventsource": "0",
"recovery": "2",
"subject": "",
"message": "{USER.FULLNAME} {EVENT.UPDATE.ACTION} problem at {EVENT.UPDATE.DATE} {EVEN
},
{
"eventsource": "1",
"recovery": "0",
"subject": "",
"message": "Discovery: {DISCOVERY.DEVICE.STATUS} {DISCOVERY.DEVICE.IPADDRESS}"
},
{
"eventsource": "2",
"recovery": "0",
"subject": "",
"message": "Autoregistration: {HOST.HOST}\r\nHost IP: {HOST.IP}\r\nAgent port: {HOST.P
}
],
"parameters": []
}
],
"id": 1
}
Retrieve script and webhook media types with their parameters.
Request:
{
"jsonrpc": "2.0",
"method": "mediatype.get",
"params": {
"output": ["mediatypeid", "name", "parameters"],
"filter": {
"type": [1, 4]
}
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"mediatypeid": "10",
"name": "Script with parameters",
"parameters": [
{
"sortorder": "0",
"value": "{ALERT.SENDTO}"
},
{
"sortorder": "1",
"value": "{EVENT.NAME}"
},
{
"sortorder": "2",
"value": "{ALERT.MESSAGE}"
},
{
"sortorder": "3",
"value": "Zabbix alert"
}
]
},
{
"mediatypeid": "13",
"name": "Script without parameters",
"parameters": []
},
{
"mediatypeid": "11",
"name": "Webhook",
"parameters": [
{
"name": "alert_message",
"value": "{ALERT.MESSAGE}"
},
{
"name": "event_update_message",
"value": "{EVENT.UPDATE.MESSAGE}"
},
{
"name": "host_name",
"value": "{HOST.NAME}"
},
{
"name": "trigger_description",
"value": "{TRIGGER.DESCRIPTION}"
},
{
"name": "trigger_id",
"value": "{TRIGGER.ID}"
},
{
"name": "alert_source",
"value": "Zabbix"
}
]
}
],
"id": 1
}
See also
• User
Source
CMediaType::get() in ui/include/classes/api/services/CMediaType.php.
mediatype.update
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
Additionally to the standard media type properties, the method accepts the following parameters.
parameters array Script or webhook parameters to replace the current parameters.
message_templates array Message templates to replace the current message templates.
Return values
(object) Returns an object containing the IDs of the updated media types under the mediatypeids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "mediatype.update",
"params": {
"mediatypeid": "6",
"status": "0"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"mediatypeids": [
"6"
]
},
"id": 1
}
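When the parameters property is passed to mediatype.update, it replaces the full set of webhook or script parameters, so any parameter that should remain must be passed again. A minimal sketch for the webhook media type created earlier; the ID and values are placeholders:
Request:
{
    "jsonrpc": "2.0",
    "method": "mediatype.update",
    "params": {
        "mediatypeid": "9",
        "parameters": [
            {
                "name": "Token",
                "value": "<Token>"
            },
            {
                "name": "Subject",
                "value": "{ALERT.SUBJECT}"
            }
        ]
    },
    "id": 1
}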
Source
CMediaType::update() in ui/include/classes/api/services/CMediaType.php.
MFA
Object references:
• MFA
Available methods:
MFA object
mfaid ID ID of the MFA method.
Property behavior:
- read-only
- required for update operations
type integer Type of the MFA method.
Possible values:
1 - TOTP (Time-based One-Time Passwords);
2 - Duo Universal Prompt.
name string Name of the MFA method.
Property behavior:
- required for create operations
hash_function integer Type of the hash function for generating TOTP codes.
Possible values:
1 - SHA-1;
2 - SHA-256;
3 - SHA-512.
Property behavior:
- required if type is set to ”TOTP”
code_length integer Verification code length.
Possible values:
6 - 6-digit long;
8 - 8-digit long.
Property behavior:
- required if type is set to ”TOTP”
api_hostname string API hostname provided by the Duo authentication service.
Property behavior:
- required if type is set to ”Duo Universal Prompt”
clientid string Client ID provided by the Duo authentication service.
Property behavior:
- required if type is set to ”Duo Universal Prompt”
client_secret string Client secret provided by the Duo authentication service.
Property behavior:
- write-only
- required if type is set to ”Duo Universal Prompt”
mfa.create
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
Return values
(object) Returns an object containing the IDs of the created MFA methods under the mfaids property. The order of the returned
IDs matches the order of the passed items.
Examples
Create a ”Zabbix TOTP” MFA method utilizing time-based one-time passwords (TOTP), with the hash function for generating TOTP
codes set to SHA-1 and the verification code length set to 6 digits.
Request:
{
"jsonrpc": "2.0",
"method": "mfa.create",
"params": {
"type": 1,
"name": "Zabbix TOTP",
"hash_function": 1,
"code_length": 6
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"mfaids": [
"1"
]
},
"id": 1
}
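A Duo Universal Prompt method is created the same way, using the Duo-specific properties from the MFA object table above; the hostname, client ID and client secret below are placeholders:
Request:
{
    "jsonrpc": "2.0",
    "method": "mfa.create",
    "params": {
        "type": 2,
        "name": "Duo MFA",
        "api_hostname": "api-xxxxxxxx.duosecurity.com",
        "clientid": "<Client ID>",
        "client_secret": "<Client secret>"
    },
    "id": 1
}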
Source
CMfa::create() in ui/include/classes/api/services/CMfa.php.
mfa.delete
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
(array) IDs of the MFA methods to delete.
Return values
(object) Returns an object containing the IDs of the deleted MFA methods under the mfaids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "mfa.delete",
"params": [
"2"
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"mfaids": [
"2"
]
},
"id": 1
}
Source
CMfa::delete() in ui/include/classes/api/services/CMfa.php.
mfa.get
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
mfaids ID/array Return only MFA methods with the given IDs.
selectUsrgrps query Return a usrgrps property with user groups associated with MFA
methods.
Supports count.
filter object Return only those results that exactly match the given filter.
Accepts an object, where the keys are property names, and the values
are either a single value or an array of values to match against.
Supports properties:
mfaid - ID of the MFA method;
type - Type of the MFA method.
sortfield string/array Sort the result by the given properties.
Return values
(integer/array) Returns either:
• an array of objects;
• the count of retrieved objects, if the countOutput parameter has been used.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "mfa.get",
"params": {
"output": "extend",
"search": {
"name": "Zabbix"
}
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"mfaid": "1",
"type": "1",
"name": "Zabbix TOTP 1",
"hash_function": "1",
"code_length": "6",
"api_hostname": "",
"clientid": ""
},
{
"mfaid": "2",
"type": "1",
"name": "Zabbix TOTP 2",
"hash_function": "3",
"code_length": "8",
"api_hostname": "",
"clientid": ""
}
],
"id": 1
}
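Since the filter parameter supports the type property, MFA methods of a single type can be retrieved. A minimal sketch (type 2 - Duo Universal Prompt):
Request:
{
    "jsonrpc": "2.0",
    "method": "mfa.get",
    "params": {
        "output": ["mfaid", "name"],
        "filter": {
            "type": 2
        }
    },
    "id": 1
}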
Source
CMfa::get() in ui/include/classes/api/services/CMfa.php.
mfa.update
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
The method accepts MFA methods with the standard MFA method properties.
Return values
(object) Returns an object containing the IDs of the updated MFA methods under the mfaids property.
Examples
Update the hash function for generating TOTP codes and the verification code length for the ”Zabbix TOTP” MFA method utilizing
time-based one-time passwords (TOTP).
Request:
{
"jsonrpc": "2.0",
"method": "mfa.update",
"params": {
"mfaid": "1",
"hash_function": 3,
"code_length": 8
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"mfaids": [
"1"
]
},
"id": 1
}
Source
CMfa::update() in ui/include/classes/api/services/CMfa.php.
Module
Object references:
• Module
Available methods:
Module object
moduleid ID ID of the module.
Property behavior:
- read-only
- required for update operations
id string Unique module ID as defined by a developer in the manifest.json file of the module.
Property behavior:
- required for create operations
relative_path string Path to the directory of the module relative to the directory of the Zabbix frontend.
Possible values:
widgets/* - for built-in widget modules;
modules/* - for third-party modules.
Property behavior:
- required for create operations
status integer Whether the module is enabled or disabled.
Possible values:
0 - (default) Disabled;
1 - Enabled.
config object Module configuration.
module.create
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Attention:
Module files must be unpacked manually in the correct subdirectories, matching the relative_path property of the
modules.
Parameters
Return values
(object) Returns an object containing the IDs of the installed modules under the moduleids property. The order of the returned
IDs matches the order of the passed modules.
Examples
Installing a module
Request:
{
"jsonrpc": "2.0",
"method": "module.create",
"params": {
"id": "example_module",
"relative_path": "modules/example_module",
"status": 1
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"moduleids": [
"25"
]
},
"id": 1
}
See also
• Module
• Frontend modules
Source
CModule::create() in ui/include/classes/api/services/CModule.php.
module.delete
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Attention:
Module files must be removed manually.
Parameters
(array) IDs of the modules to delete.
Return values
(object) Returns an object containing the IDs of the uninstalled modules under the moduleids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "module.delete",
"params": [
"27",
"28"
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"moduleids": [
"27",
"28"
]
},
"id": 1
}
Source
CModule::delete() in ui/include/classes/api/services/CModule.php.
module.get
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
Return values
(integer/array) Returns either:
• an array of objects;
• the count of retrieved objects, if the countOutput parameter has been used.
Examples
Retrieving a module by ID
Request:
{
"jsonrpc": "2.0",
"method": "module.get",
"params": {
"output": "extend",
"moduleids": [
"1",
"2",
"25"
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"moduleid": "1",
"id": "actionlog",
"relative_path": "widgets/actionlog",
"status": "1",
"config": []
},
{
"moduleid": "2",
"id": "clock",
"relative_path": "widgets/clock",
"status": "1",
"config": []
},
{
"moduleid": "25",
"id": "example",
"relative_path": "modules/example_module",
"status": "1",
"config": []
}
],
"id": 1
}
See also
• Module
• Dashboard widget
• Frontend modules
Source
CModule::get() in ui/include/classes/api/services/CModule.php.
module.update
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
Return values
(object) Returns an object containing the IDs of the updated modules under the moduleids property.
Examples
Disabling a module
Request:
{
"jsonrpc": "2.0",
"method": "module.update",
"params": {
"moduleid": "25",
"status": 0
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"moduleids": [
"25"
]
},
"id": 1
}
See also
• Module
• Frontend modules
Source
CModule::update() in ui/include/classes/api/services/CModule.php.
Problem
Object references:
• Problem
• Problem tag
• Media type URL
Available methods:
Problem object
Note:
Problems are created by the Zabbix server and cannot be modified via the API.
The problem object has the following properties.
eventid ID ID of the problem event.
source integer Type of the problem event.
Possible values:
0 - event created by a trigger;
3 - internal event;
4 - event created on service status update.
object integer Type of object that is related to the problem event.
Refer to the problem event object page for a list of supported object
types.
acknowledged integer Whether the problem event is acknowledged.
Possible values:
0 - not acknowledged;
1 - acknowledged.
severity integer Problem current severity.
Possible values:
0 - not classified;
1 - information;
2 - warning;
3 - average;
4 - high;
5 - disaster.
suppressed integer Whether the problem is suppressed.
Possible values:
0 - problem is in normal state;
1 - problem is suppressed.
opdata string Operational data with expanded macros.
urls array Active media type URLs.
Problem tag
The problem tag object has the following properties.
tag string Problem tag name.
value string Problem tag value.
Media type URL
Results will contain entries only for active media types with an enabled event menu entry. Macros used in properties will be expanded,
but if one of the properties contains an unexpanded macro, both properties will be excluded from results. For supported macros,
see Supported macros.
problem.get
Description
This method is for retrieving unresolved problems. It is also possible, if specified, to additionally retrieve recently resolved problems.
The period that determines what counts as ”recently” is defined in Administration → General. Problems that were resolved prior to that
period are not kept in the problem table. To retrieve problems that were resolved further back in the past, use the event.get
method.
Attention:
This method may return problems of a deleted entity if these problems have not been removed by the housekeeper yet.
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
source integer Return only problems with the given event source type.
Refer to the problem event object page for a list of supported event
types.
Default: 0 - problem created by a trigger.
object integer Return only problems created by objects of the given type.
Refer to the problem event object page for a list of supported object
types.
Default: 0 - trigger.
acknowledged boolean true - return acknowledged problems only;
false - unacknowledged only.
action integer Return only problems for which the given event update actions have
been performed. For multiple actions, use a combination of any
acceptable bitmap values as bitmask.
action_userids ID/array Return only problems with the given IDs of users who performed the
problem event update actions.
suppressed boolean true - return only suppressed problems;
false - return problems in the normal state.
symptom boolean true - return only symptom problem events;
false - return only cause problem events.
severities integer/array Return only problems with given event severities. Applies only if object
is trigger.
evaltype integer Rules for tag searching.
Possible values:
0 - (default) And/Or;
2 - Or.
tags array Return only problems with given tags. Exact match by tag and
case-insensitive search by value and operator.
[{"tag": "<tag>", "value": "<value>",
Format:
"operator": "<operator>"}, ...].
An empty array returns all problems.
selectAcknowledges query Return an acknowledges property with the problem updates. Problem
updates are sorted in reverse chronological order.
Supports count.
selectTags query Return a tags property with the problem tags. Output format:
[{"tag": "<tag>", "value": "<value>"}, ...].
selectSuppressionData query Return a suppression_data property with the list of active
maintenances and manual suppressions:
maintenanceid - (ID) ID of the maintenance;
userid - (ID) ID of user who suppressed the problem;
suppress_until - (integer) time until the problem is suppressed.
filter object Return only those results that exactly match the given filter.
Accepts an object, where the keys are property names, and the values
are either a single value or an array of values to match against.
Return values
(integer/array) Returns either:
• an array of objects;
• the count of retrieved objects, if the countOutput parameter has been used.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "problem.get",
"params": {
"output": "extend",
"selectAcknowledges": "extend",
"selectTags": "extend",
"selectSuppressionData": "extend",
"objectids": "15112",
"recent": "true",
"sortfield": ["eventid"],
"sortorder": "DESC"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"eventid": "1245463",
"source": "0",
"object": "0",
"objectid": "15112",
"clock": "1472457242",
"ns": "209442442",
"r_eventid": "1245468",
"r_clock": "1472457285",
"r_ns": "125644870",
"correlationid": "0",
"userid": "1",
"name": "Zabbix agent on localhost is unreachable for 5 minutes",
"acknowledged": "1",
"severity": "3",
"cause_eventid": "0",
"opdata": "",
"acknowledges": [
{
"acknowledgeid": "14443",
"userid": "1",
"eventid": "1245463",
"clock": "1472457281",
"message": "problem solved",
"action": "6",
"old_severity": "0",
"new_severity": "0",
"suppress_until": "1472511600",
"taskid": "0"
}
],
"suppression_data": [
{
"maintenanceid": "15",
"suppress_until": "1472511600",
"userid": "0"
}
],
"suppressed": "1",
"tags": [
{
"tag": "test tag",
"value": "test value"
}
]
}
],
"id": 1
}
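Problems can also be retrieved by tags, using the tags parameter format described above. A minimal sketch that matches the tag shown in the previous response:
Request:
{
    "jsonrpc": "2.0",
    "method": "problem.get",
    "params": {
        "output": ["eventid", "name"],
        "selectTags": "extend",
        "tags": [
            {
                "tag": "test tag",
                "value": "test value"
            }
        ]
    },
    "id": 1
}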
Retrieve problems that have been acknowledged by the specified user.
Request:
{
"jsonrpc": "2.0",
"method": "problem.get",
"params": {
"output": "extend",
"action": 2,
"action_userids": [10],
"selectAcknowledges": ["userid", "action"],
"sortfield": ["eventid"],
"sortorder": "DESC"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"eventid": "1248566",
"source": "0",
"object": "0",
"objectid": "15142",
"clock": "1472457242",
"ns": "209442442",
"r_eventid": "1245468",
"r_clock": "1472457285",
"r_ns": "125644870",
"correlationid": "0",
"userid": "10",
"name": "Zabbix agent on localhost is unreachable for 5 minutes",
"acknowledged": "1",
"severity": "3",
"cause_eventid": "0",
"opdata": "",
"acknowledges": [
{
"userid": "10",
"action": "2"
}
],
"suppressed": "0"
}
],
"id": 1
}
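Because the action parameter is a bitmask, several event update actions can be matched in one request. A minimal sketch, assuming the value 6 combines the acknowledge (2) and message (4) bits, consistent with the "action": "6" entry in the first response above:
Request:
{
    "jsonrpc": "2.0",
    "method": "problem.get",
    "params": {
        "output": ["eventid", "name", "acknowledged"],
        "action": 6
    },
    "id": 1
}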
See also
• Alert
• Item
• Host
• LLD rule
• Trigger
Source
CProblem::get() in ui/include/classes/api/services/CProblem.php.
Proxy
Object references:
• Proxy
Available methods:
Proxy object
proxyid ID ID of the proxy.
Property behavior:
- read-only
- required for update operations
name string Name of the proxy.
Property behavior:
- required for create operations
proxy_groupid ID ID of the proxy group.
local_address string Address for active agents. IP address or DNS name to connect to.
Property behavior:
- required if proxy_groupid is not 0
local_port string Local proxy port number to connect to.
Default: 10051.
Property behavior:
- supported if proxy_groupid is not 0
operating_mode integer Type of proxy.
Possible values:
0 - active proxy;
1 - passive proxy.
Property behavior:
- required for create operations
description text Description of the proxy.
lastaccess timestamp Time when the proxy last connected to the server.
Property behavior:
- read-only
address string IP address or DNS name to connect to.
Property behavior:
- required if the Zabbix proxy operating mode is passive
port string Port number to connect to.
Default: 10051.
Property behavior:
- supported if the Zabbix proxy operating mode is passive
allowed_addresses string Comma-delimited IP addresses or DNS names of active Zabbix proxy.
tls_connect integer Connections to host.
Possible values:
1 - (default) No encryption;
2 - PSK;
4 - certificate.
tls_accept integer Connections from host.
This is a bitmask field, any combination of the following bitmap values is
acceptable:
1 - (default) No encryption;
2 - PSK;
4 - certificate.
tls_psk_identity string PSK identity string.
Property behavior:
- write-only
- required if tls_connect is set to ”PSK”, or tls_accept contains the
”PSK” bit
tls_psk string The preshared key, at least 32 hex digits.
Property behavior:
- write-only
- required if tls_connect is set to ”PSK”, or tls_accept contains the
”PSK” bit
custom_timeouts integer Whether to override global item timeouts on the proxy level.
Possible values:
0 - (default) use global settings;
1 - override timeouts.
timeout_zabbix_agent string Zabbix agent item check timeout. Accepts seconds or a time unit with suffix.
Default: ””.
Property behavior:
- required if custom_timeouts is set to 1.
The following properties have the same type, default, and behavior, and define the item check timeout for the
corresponding item type:
timeout_simple_check
timeout_snmp_agent
timeout_external_check
timeout_db_monitor
timeout_http_agent
timeout_ssh_agent
timeout_telnet_agent
timeout_script
timeout_browser
version integer Version of proxy.
Three-part Zabbix version number, where two decimal digits are used
for each part, e.g., 60401 for version 6.4.1, 70002 for version 7.0.2,
etc.
0 - Unknown proxy version.
Property behavior:
- read-only
compatibility integer Version of proxy compared to Zabbix server version.
Possible values:
0 - Undefined;
1 - Current version (proxy and server have the same major version);
2 - Outdated version (proxy version is older than server version, but is
partially supported);
3 - Unsupported version (proxy version is older than server previous
LTS release version or server major version is older than proxy major
version).
Property behavior:
- read-only
state integer State of the proxy.
Possible values:
0 - Unknown;
1 - Offline;
2 - Online.
Property behavior:
- read-only
proxy.create
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
Return values
(object) Returns an object containing the IDs of the created proxies under the proxyids property. The order of the returned
IDs matches the order of the passed proxies.
Examples
Create an active proxy ”Active proxy” and assign a host to be monitored by it.
Request:
{
"jsonrpc": "2.0",
"method": "proxy.create",
"params": {
"name": "Active proxy",
"operating_mode": "0",
"hosts": [
{
"hostid": "10279"
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"proxyids": [
"10280"
]
},
"id": 1
}
Create a passive proxy ”Passive proxy” and assign two hosts to be monitored by it.
Request:
{
"jsonrpc": "2.0",
"method": "proxy.create",
"params": {
"name": "Passive proxy",
"operating_mode": "1",
"address": "127.0.0.1",
"port": "10051",
"hosts": [
{
"hostid": "10192"
},
{
"hostid": "10139"
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"proxyids": [
"10284"
]
},
"id": 1
}
Create an active proxy ”Active proxy” and add it to proxy group with ID ”1”.
Request:
{
"jsonrpc": "2.0",
"method": "proxy.create",
"params": {
"name": "Active proxy",
"proxy_groupid": "1",
"operating_mode": "0"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"proxyids": [
"5"
]
},
"id": 1
}
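Encryption properties can be set at creation time as well. A minimal sketch of an active proxy that is only allowed to connect using PSK (assuming the PSK bit value of 2 listed above; the identity and the 32-hex-digit key are placeholders):
Request:
{
    "jsonrpc": "2.0",
    "method": "proxy.create",
    "params": {
        "name": "PSK-secured proxy",
        "operating_mode": "0",
        "tls_accept": 2,
        "tls_psk_identity": "proxy-psk-01",
        "tls_psk": "677e386b67a4b1a4dca6efbbdba0f91d"
    },
    "id": 1
}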
See also
• Host
• Proxy group
Source
CProxy::create() in ui/include/classes/api/services/CProxy.php.
proxy.delete
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
(array) IDs of the proxies to delete.
Return values
(object) Returns an object containing the IDs of the deleted proxies under the proxyids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "proxy.delete",
"params": [
"10286",
"10285"
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"proxyids": [
"10286",
"10285"
]
},
"id": 1
}
Source
CProxy::delete() in ui/include/classes/api/services/CProxy.php.
proxy.get
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
proxyids ID/array Return only proxies with the given IDs.
selectHosts query Return a hosts property with the hosts monitored by the proxy.
Supports count.
selectProxyGroup query Return a proxyGroup property with the proxy group object.
sortfield string/array Sort the result by the given properties.
Return values
(integer/array) Returns either:
• an array of objects;
• the count of retrieved objects, if the countOutput parameter has been used.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "proxy.get",
"params": {
"output": "extend"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"proxyid": "11",
"name": "Active proxy",
"proxy_groupid": "0",
"local_address": "",
"local_port": "10051",
"operating_mode": "0",
"description": "",
"allowed_addresses": "",
"address": "127.0.0.1",
"port": "10051",
"tls_connect": "1",
"tls_accept": "1",
"tls_issuer": "",
"tls_subject": "",
"custom_timeouts": "0",
"timeout_zabbix_agent": "",
"timeout_simple_check": "",
"timeout_snmp_agent": "",
"timeout_external_check": "",
"timeout_db_monitor": "",
"timeout_http_agent": "",
"timeout_ssh_agent": "",
"timeout_telnet_agent": "",
"timeout_script": "",
"last_access": "1693391880",
"version": "70000",
"compatibility": "1",
"state": "1"
},
{
"proxyid": "12",
"name": "Passive proxy",
"proxy_groupid": "1",
"local_address": "127.0.0.1",
"local_port": "10051",
"operating_mode": "1",
"description": "",
"allowed_addresses": "",
"address": "127.0.0.1",
"port": "10051",
"tls_connect": "1",
"tls_accept": "1",
"tls_issuer": "",
"tls_subject": "",
"custom_timeouts": "1",
"timeout_zabbix_agent": "5s",
"timeout_simple_check": "5s",
"timeout_snmp_agent": "5s",
"timeout_external_check": "5s",
"timeout_db_monitor": "5s",
"timeout_http_agent": "5s",
"timeout_ssh_agent": "5s",
"timeout_telnet_agent": "5s",
"timeout_script": "5s",
"lastaccess": "1693391875",
"version": "60400",
"compatibility": "2",
"state": "2"
}
],
"id": 1
}
See also
• Host
• Proxy group
Source
CProxy::get() in ui/include/classes/api/services/CProxy.php.
proxy.update
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
Additionally to the standard proxy properties, the method accepts the following parameters.
Return values
(object) Returns an object containing the IDs of the updated proxies under the proxyids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "proxy.update",
"params": {
"proxyid": "10293",
"hosts": [
{
"hostid": "10294"
},
{
"hostid": "10295"
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"proxyids": [
"10293"
]
},
"id": 1
}
Change proxy status
Request:
{
"jsonrpc": "2.0",
"method": "proxy.update",
"params": {
"proxyid": "10293",
"name": "Active proxy",
"operating_mode": "0"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"proxyids": [
"10293"
]
},
"id": 1
}
Update proxy with ID ”5” and add it to proxy group with ID ”1”.
Request:
{
"jsonrpc": "2.0",
"method": "proxy.create",
"params": {
"proxyid": "5",
"proxy_groupid": "1",
"local_address": "127.0.0.1"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"proxyids": [
"5"
]
},
"id": 1
}
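When custom_timeouts is set to 1, the proxy-level item check timeouts listed in the proxy object table must be supplied as well. A minimal sketch for the passive proxy from the proxy.get example above; all timeout values are placeholders:
Request:
{
    "jsonrpc": "2.0",
    "method": "proxy.update",
    "params": {
        "proxyid": "12",
        "custom_timeouts": 1,
        "timeout_zabbix_agent": "10s",
        "timeout_simple_check": "10s",
        "timeout_snmp_agent": "10s",
        "timeout_external_check": "10s",
        "timeout_db_monitor": "10s",
        "timeout_http_agent": "10s",
        "timeout_ssh_agent": "10s",
        "timeout_telnet_agent": "10s",
        "timeout_script": "10s",
        "timeout_browser": "10s"
    },
    "id": 1
}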
See also
• Host
• Proxy group
Source
CProxy::update() in ui/include/classes/api/services/CProxy.php.
Proxy group
This class is designed to work with proxy groups.
Object references:
• Proxy group
Available methods:
Proxy group object
proxy_groupid ID ID of the proxy group.
Property behavior:
- read-only
- required for update operations
name string Name of the proxy group.
Property behavior:
- required for create operations
description text Description of the proxy group.
failover_delay string Failover period for each proxy in the group to have online/offline state.
Default: 1m.
min_online string Minimum number of online proxies required for the group to be online.
Default: 1.
state integer State of the proxy group.
Possible values:
0 - Unknown - on server startup;
1 - Offline - less than the minimum number of proxies are online;
2 - Recovering - transition from offline to online state;
3 - Online - at least the minimum number of proxies are online;
4 - Degrading - transition from online to offline state.
Property behavior:
- read-only
proxygroup.create
Description
This method allows to create new proxy groups.
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
Return values
(object) Returns an object containing the IDs of the created proxy groups under the proxy_groupids property. The order of
the returned IDs matches the order of the passed proxy groups.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "proxygroup.create",
"params": {
"name": "Proxy group",
"failover_delay": "5m",
"min_online": "10"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"proxy_groupids": [
"5"
]
},
"id": 1
}
Source
CProxyGroup::create() in ui/include/classes/api/services/CProxyGroup.php.
proxygroup.delete
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
(array) IDs of the proxy groups to delete.
Return values
(object) Returns an object containing the IDs of the deleted proxy groups under the proxy_groupids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "proxygroup.delete",
"params": [
"5",
"10"
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"proxy_groupids": [
"5",
"10"
]
},
"id": 1
}
Source
CProxyGroup::delete() in ui/include/classes/api/services/CProxyGroup.php.
proxygroup.get
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
proxy_groupids ID/array Return only proxy groups with the given IDs.
proxyids ID/array Return only proxy groups that contain the given proxies.
selectProxies query Return a proxies property with the proxies that belong to the proxy
group.
Supports count.
sortfield string/array Sort the result by the given properties.
The method also supports the following common get parameters:
filter object
limit integer
output query
preservekeys boolean
search object
searchByAny boolean
searchWildcardsEnabled boolean
sortorder string/array
startSearch boolean
Return values
(integer/array) Returns either:
• an array of objects;
• the count of retrieved objects, if the countOutput parameter has been used.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "proxygroup.get",
"params": {
"output": "extend",
"selectProxies": ["proxyid", "name"]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"proxy_groupid": "1",
"name": "Proxy group 1",
"failover_delay": "1m",
"min_online": "3",
"description": "",
"state": "1",
"proxies": [
{
"proxyid": "1",
"name": "proxy 1"
},
{
"proxyid": "2",
"name": "proxy 2"
}
]
},
{
"proxy_groupid": "2",
"name": "Proxy group 2",
"failover_delay": "10m",
"min_online": "3",
"description": "",
"state": "3",
"proxies": [
{
"proxyid": "3",
"name": "proxy 3"
},
{
"proxyid": "4",
"name": "proxy 4"
},
{
"proxyid": "5",
"name": "proxy 5"
}
]
}
],
"id": 1
}
See also
• Proxy
Source
CProxyGroup::get() in ui/include/classes/api/services/CProxyGroup.php.
proxygroup.update
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
The method accepts proxy groups with the standard proxy group properties.
Return values
(object) Returns an object containing the IDs of the updated proxy groups under the proxy_groupids property.
Examples
Change minimum number of online proxies required for the group to be online.
Request:
{
"jsonrpc": "2.0",
"method": "proxygroup.update",
"params": {
"proxy_groupid": "5",
"min_online": "3"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"proxy_groupids": [
"5"
]
},
"id": 1
}
Source
CProxyGroup::update() in ui/include/classes/api/services/CProxyGroup.php.
Regular expression
Object references:
• Regular expression
• Expressions
Available methods:
Regular expression object
regexpid ID ID of the regular expression.
Property behavior:
- read-only
- required for update operations
name string Name of the regular expression.
Property behavior:
- required for create operations
test_string string Test string.
Expressions
expression string Regular expression.
Property behavior:
- required
expression_type integer Type of the expression match.
Possible values:
0 - Character string included;
1 - Any character string included;
2 - Character string not included;
3 - Result is TRUE;
4 - Result is FALSE.
Property behavior:
- required
exp_delimiter string Expression delimiter.
Property behavior:
- supported if expression_type is set to ”Any character string
included”
case_sensitive integer Case sensitivity.
Default value: 0.
Possible values:
0 - Case insensitive;
1 - Case sensitive.
regexp.create
Description
Note:
This method is only available to Super admin user types. Permissions to call the method can be revoked in user role
settings. See User roles for more information.
Parameters
expressions array Expression objects for the regular expression.
Parameter behavior:
- required
Return values
(object) Returns an object containing the IDs of the created regular expressions under the regexpids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "regexp.create",
"params": {
"name": "Storage devices for SNMP discovery",
"test_string": "/boot",
"expressions": [
{
"expression": "^(Physical memory|Virtual memory|Memory buffers|Cached memory|Swap space)$",
"expression_type": "4",
"case_sensitive": "1"
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"regexpids": [
"16"
]
},
"id": 1
}
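The exp_delimiter property is only used with the ”Any character string included” expression type. A minimal sketch of such a condition; the name and strings are placeholders:
Request:
{
    "jsonrpc": "2.0",
    "method": "regexp.create",
    "params": {
        "name": "Allowed file systems",
        "test_string": "ext4",
        "expressions": [
            {
                "expression": "ext4,xfs,btrfs",
                "expression_type": "1",
                "exp_delimiter": ",",
                "case_sensitive": "0"
            }
        ]
    },
    "id": 1
}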
Source
CRegexp::create() in ui/include/classes/api/services/CRegexp.php.
regexp.delete
Description
Note:
This method is only available to Super admin user types. Permissions to call the method can be revoked in user role
settings. See User roles for more information.
Parameters
(array) IDs of the regular expressions to delete.
Return values
(object) Returns an object containing the IDs of the deleted regular expressions under the regexpids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "regexp.delete",
"params": [
"16",
"17"
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"regexpids": [
"16",
"17"
]
},
"id": 1
}
Source
CRegexp::delete() in ui/include/classes/api/services/CRegexp.php.
regexp.get
Description
Note:
This method is available only to Super Admin. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
regexpids ID/array Return only regular expressions with the given IDs.
selectExpressions query Return an expressions property.
sortfield string/array Sort the result by the given properties.
Return values
(integer/array) Returns either:
• an array of objects;
• the count of retrieved objects, if the countOutput parameter has been used.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "regexp.get",
"params": {
"output": ["regexpid", "name"],
"selectExpressions": ["expression", "expression_type"],
"regexpids": [1, 2],
"preservekeys": true
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"1": {
"regexpid": "1",
"name": "File systems for discovery",
"expressions": [
{
"expression": "^(btrfs|ext2|ext3|ext4|reiser|xfs|ffs|ufs|jfs|jfs2|vxfs|hfs|apfs|refs|ntfs|fat32|
"expression_type": "3"
}
]
},
"2": {
"regexpid": "2",
"name": "Network interfaces for discovery",
"expressions": [
{
"expression": "^Software Loopback Interface",
"expression_type": "4"
},
{
"expression": "^(In)?[Ll]oop[Bb]ack[0-9._]*$",
"expression_type": "4"
},
{
"expression": "^NULL[0-9.]*$",
"expression_type": "4"
},
{
"expression": "^[Ll]o[0-9.]*$",
"expression_type": "4"
},
{
"expression": "^[Ss]ystem$",
"expression_type": "4"
},
{
"expression": "^Nu[0-9.]*$",
"expression_type": "4"
}
]
}
},
"id": 1
}
Source
CRegexp::get() in ui/include/classes/api/services/CRegexp.php.
regexp.update
Description
Note:
This method is only available to Super admin user types. Permissions to call the method can be revoked in user role
settings. See User roles for more information.
Parameters
Additionally to the standard properties, the method accepts the following parameters.
Return values
(object) Returns an object containing the IDs of the updated regular expressions under the regexpids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "regexp.update",
"params": {
"regexpid": "1",
"name": "File systems for discovery",
"test_string": "",
"expressions": [
{
"expression": "^(btrfs|ext2|ext3|ext4|reiser|xfs|ffs|ufs|jfs|jfs2|vxfs|hfs|apfs|refs|zfs)$",
"expression_type": "3",
"exp_delimiter": ",",
"case_sensitive": "0"
},
{
"expression": "^(ntfs|fat32|fat16)$",
"expression_type": "3",
"exp_delimiter": ",",
"case_sensitive": "0"
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"regexpids": [
"1"
]
},
"id": 1
}
Source
CRegexp::update() in ui/include/classes/api/services/CRegexp.php.
Report
Object references:
• Report
• Users
• User groups
Available methods:
Report object
reportid ID ID of the report.
Property behavior:
- read-only
- required for update operations
userid ID ID of the user who created the report.
Property behavior:
- required for create operations
name string Unique name of the report.
Property behavior:
- required for create operations
dashboardid ID ID of the dashboard that the report is based on.
Property behavior:
- required for create operations
period integer Period for which the report will be prepared.
Possible values:
0 - (default) previous day;
1 - previous week;
2 - previous month;
3 - previous year.
cycle integer Period repeating schedule.
Possible values:
0 - (default) daily;
1 - weekly;
2 - monthly;
3 - yearly.
start_time integer Time of the day, in seconds, when the report will be prepared for
sending.
Default: 0.
weekdays integer Days of the week for sending the report.
Days of the week are stored in binary form with each bit representing
the corresponding week day. For example, 12 equals 1100 in binary
and means that reports will be sent every Wednesday and Thursday.
Default: 0.
Property behavior:
- required if cycle is set to ”weekly”.
active_since string On which date to start.
Possible values:
empty string - (default) not specified (stored as 0);
specific date in YYYY-MM-DD format (stored as a timestamp of the
beginning of a day (00:00:00)).
active_till string On which date to end.
Possible values:
empty string - (default) not specified (stored as 0);
specific date in YYYY-MM-DD format (stored as a timestamp of the end
of a day (23:59:59)).
subject string Report message subject.
message string Report message text.
status integer Whether the report is enabled or disabled.
Possible values:
0 - Disabled;
1 - (default) Enabled.
description text Description of the report.
state integer State of the report.
Possible values:
0 - (default) report was not yet processed;
1 - report was generated and successfully sent to all recipients;
2 - report generating failed; ”info” contains error information;
3 - report was generated, but sending to some (or all) recipients failed;
”info” contains error information.
Property behavior:
- read-only
lastsent timestamp Unix timestamp of the last successfully sent report.
Property behavior:
- read-only
info string Error description or additional information.
Property behavior:
- read-only
Users
userid ID ID of the user to send the report to.
Property behavior:
- required
access_userid ID ID of user on whose behalf the report will be generated.
exclude integer Whether to exclude the user from the report mailing.
Possible values:
0 - (default) Include;
1 - Exclude.
User groups
usrgrpid ID ID of the user group to send the report to.
Property behavior:
- required
access_userid ID ID of user on whose behalf the report will be generated.
report.create
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
users object/array Users to send the report to.
Parameter behavior:
- required if user_groups is not set
user_groups object/array User groups to send the report to.
Parameter behavior:
- required if users is not set
Return values
(object) Returns an object containing the IDs of the created scheduled reports under the reportids property. The order of the
returned IDs matches the order of the passed scheduled reports.
Examples
Creating a scheduled report
Create a weekly report that will be prepared for the previous week every Monday-Friday at 12:00 from 2021-04-01 to 2021-08-31.
Request:
{
"jsonrpc": "2.0",
"method": "report.create",
"params": {
"userid": "1",
"name": "Weekly report",
"dashboardid": "1",
"period": "1",
"cycle": "1",
"start_time": "43200",
"weekdays": "31",
"active_since": "2021-04-01",
"active_till": "2021-08-31",
"subject": "Weekly report",
"message": "Report accompanying text",
"status": "1",
"description": "Report description",
"users": [
{
"userid": "1",
"access_userid": "1",
"exclude": "0"
},
{
"userid": "2",
"access_userid": "0",
"exclude": "1"
}
],
"user_groups": [
{
"usrgrpid": "7",
"access_userid": "0"
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"reportids": [
"1"
]
},
"id": 1
}
See also
• Users
• User groups
Source
CReport::create() in ui/include/classes/api/services/CReport.php.
report.delete
Description
Note:
This method is only available to Admin and Super admin user type. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
(array) IDs of the scheduled reports to delete.
Return values
(object) Returns an object containing the IDs of the deleted scheduled reports under the reportids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "report.delete",
"params": [
"1",
"2"
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"reportids": [
"1",
"2"
]
},
"id": 1
}
Source
CReport::delete() in ui/include/classes/api/services/CReport.php.
report.get
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
The method supports the following parameters.
reportids ID/array Return only scheduled reports with the given report IDs.
expired boolean If set to true returns only expired scheduled reports, if false - only
active scheduled reports.
selectUsers query Return a users property the report is configured to be sent to.
selectUserGroups query Return a user_groups property the report is configured to be sent to.
sortfield string/array Sort the result by the given properties.
Return values
(integer/array) Returns either:
• an array of objects;
• the count of retrieved objects, if the countOutput parameter has been used.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "report.get",
"params": [
"output": "extend",
"selectUsers": "extend",
"selectUserGroups": "extend",
"reportids": ["1", "2"]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"reportid": "1",
"userid": "1",
"name": "Weekly report",
"dashboardid": "1",
"period": "1",
"cycle": "1",
"start_time": "43200",
"weekdays": "31",
"active_since": "2021-04-01",
"active_till": "2021-08-31",
"subject": "Weekly report",
"message": "Report accompanying text",
"status": "1",
"description": "Report description",
"state": "1",
"lastsent": "1613563219",
"info": "",
"users": [
{
"userid": "1",
"access_userid": "1",
"exclude": "0"
},
{
"userid": "2",
"access_userid": "0",
"exclude": "1"
}
],
"user_groups": [
{
"usrgrpid": "7",
"access_userid": "0"
}
]
},
{
"reportid": "2",
"userid": "1",
"name": "Monthly report",
"dashboardid": "2",
"period": "2",
"cycle": "2",
"start_time": "0",
"weekdays": "0",
"active_since": "2021-05-01",
"active_till": "",
"subject": "Monthly report",
"message": "Report accompanying text",
"status": "1",
"description": "",
"state": "0",
"lastsent": "0",
"info": "",
"users": [
{
"userid": "1",
"access_userid": "1",
"exclude": "0"
}
],
"user_groups": []
}
],
"id": 1
}
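The expired parameter can be used to list only expired (true) or only active (false) scheduled reports. A minimal sketch:
Request:
{
    "jsonrpc": "2.0",
    "method": "report.get",
    "params": {
        "output": ["reportid", "name", "status"],
        "expired": false
    },
    "id": 1
}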
See also
• Users
• User groups
Source
CReport::get() in ui/include/classes/api/services/CReport.php.
report.update
Description
Note:
This method is only available to Admin and Super admin user type. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
Additionally to the standard scheduled report properties the method accepts the following parameters.
users object/array Users to replace the current users assigned to the scheduled report.
Parameter behavior:
- required if user_groups is not set
user_groups object/array User groups to replace the current user groups assigned to the
scheduled report.
Parameter behavior:
- required if users is not set
Return values
(object) Returns an object containing the IDs of the updated scheduled reports under the reportids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "report.update",
"params": {
"reportid": "1",
"status": "0"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"reportids": [
"1"
]
},
"id": 1
}
See also
• Users
• User groups
Source
CReport::update() in ui/include/classes/api/services/CReport.php.
Role
Object references:
• Role
• Role rules
• UI element
• Service
• Service tag
• Module
• Action
Available methods:
Role object
roleid ID ID of the role.
Property behavior:
- read-only
- required for update operations
name string Name of the role.
Property behavior:
- required for create operations
type integer User type.
Possible values:
1 - (default) User;
2 - Admin;
3 - Super admin.
Property behavior:
- required for create operations
readonly integer Whether the role is readonly.
Possible values:
0 - (default) No;
1 - Yes.
Property behavior:
- read-only
Role rules
The role rules object has the following properties:
ui array Array of the UI element objects.
ui.default_access integer Whether access to new UI elements is enabled.
Possible values:
0 - Disabled;
1 - (default) Enabled.
services.read.mode integer Read-only access to services.
Possible values:
0 - Read-only access to the services, specified by the
services.read.list or matched by the services.read.tag
properties;
1 - (default) Read-only access to all services.
services.read.list array Array of Service objects.
Property behavior:
- supported if services.read.mode is set to ”0”
services.read.tag object Array of Service tag objects.
Property behavior:
- supported if services.read.mode is set to ”0”
services.write.mode integer Read-write access to services.
Possible values:
0 - (default) Read-write access to the services, specified by the
services.write.list or matched by the services.write.tag
properties;
1 - Read-write access to all services.
services.write.list array Array of Service objects.
Property behavior:
- supported if services.write.mode is set to ”0”
services.write.tag object Array of Service tag objects.
Property behavior:
- supported if services.write.mode is set to ”0”
modules array Array of the module objects.
modules.default_access integer Whether access to new modules is enabled.
Possible values:
0 - Disabled;
1 - (default) Enabled.
api.access integer Whether access to API is enabled.
Possible values:
0 - Disabled;
1 - (default) Enabled.
api.mode integer Mode for treating API methods listed in the api property.
Possible values:
0 - (default) Deny list;
1 - Allow list.
api array Array of API methods.
actions array Array of the action objects.
actions.default_access integer Whether access to new actions is enabled.
Possible values:
0 - Disabled;
1 - (default) Enabled.
UI element
The UI element object has the following properties.
name string Name of the UI element.
Property behavior:
- required
status integer Whether access to the UI element is enabled.
Possible values:
0 - Disabled;
1 - (default) Enabled.
Service
serviceid ID ID of the service.
Property behavior:
- required
Service tag
tag string Tag name.
If empty string is specified, the service tag will not be used for service
matching.
Property behavior:
- required
value string Tag value.
If no value or empty string is specified, only the tag name will be used
for service matching.
Module
moduleid ID ID of the module.
Property behavior:
- required
status integer Whether access to the module is enabled.
Possible values:
0 - Disabled;
1 - (default) Enabled.
Action
name string Name of the action.
Property behavior:
- required
status integer Whether access to perform the action is enabled.
Possible values:
0 - Disabled;
1 - (default) Enabled.
role.create
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
Return values
(object) Returns an object containing the IDs of the created roles under the roleids property. The order of the returned IDs
matches the order of the passed roles.
Examples
Creating a role
Create a role with type ”User” and denied access to two UI elements.
Request:
{
"jsonrpc": "2.0",
"method": "role.create",
"params": {
"name": "Operator",
"type": "1",
"rules": {
"ui": [
{
"name": "monitoring.hosts",
"status": "0"
},
{
"name": "monitoring.maps",
"status": "0"
}
]
}
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"roleids": [
"5"
]
},
"id": 1
}
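The api.mode and api rules can restrict which API methods a role may call. A minimal sketch of a role limited to an allow list of read methods; the method names are illustrative only:
Request:
{
    "jsonrpc": "2.0",
    "method": "role.create",
    "params": {
        "name": "API reader",
        "type": "1",
        "rules": {
            "api.access": "1",
            "api.mode": "1",
            "api": [
                "host.get",
                "problem.get",
                "event.get"
            ]
        }
    },
    "id": 1
}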
See also
• Role rules
• UI element
• Module
• Action
Source
CRole::create() in ui/include/classes/api/services/CRole.php.
role.delete
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
(array) IDs of the roles to delete.
Return values
(object) Returns an object containing the IDs of the deleted roles under the roleids property.
Examples
Deleting multiple user roles
Request:
{
"jsonrpc": "2.0",
"method": "role.delete",
"params": [
"4",
"5"
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"roleids": [
"4",
"5"
]
},
"id": 1
}
Source
CRole::delete() in ui/include/classes/api/services/CRole.php.
role.get
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
roleids ID/array Return only roles with the given IDs.
selectRules query Return a rules property with the role rules.
The method also supports the following common get parameters:
searchByAny boolean
searchWildcardsEnabled boolean
sortorder string/array
startSearch boolean
Return values
Examples
Retrieve ”Super admin role” role data and its access rules.
Request:
{
"jsonrpc": "2.0",
"method": "role.get",
"params": {
"output": "extend",
"selectRules": "extend",
"roleids": "3"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"roleid": "3",
"name": "Super admin role",
"type": "3",
"readonly": "1",
"rules": {
"ui": [
{
"name": "monitoring.dashboard",
"status": "1"
},
{
"name": "monitoring.problems",
"status": "1"
},
{
"name": "monitoring.hosts",
"status": "1"
},
{
"name": "monitoring.latest_data",
"status": "1"
},
{
"name": "monitoring.maps",
"status": "1"
},
{
"name": "services.services",
"status": "1"
},
{
"name": "services.sla_report",
"status": "1"
},
{
"name": "inventory.overview",
"status": "1"
},
{
"name": "inventory.hosts",
"status": "1"
},
{
"name": "reports.availability_report",
"status": "1"
},
{
"name": "reports.top_triggers",
"status": "1"
},
{
"name": "monitoring.discovery",
"status": "1"
},
{
"name": "services.sla",
"status": "1"
},
{
"name": "reports.scheduled_reports",
"status": "1"
},
{
"name": "reports.notifications",
"status": "1"
},
{
"name": "configuration.template_groups",
"status": "1"
},
{
"name": "configuration.host_groups",
"status": "1"
},
{
"name": "configuration.templates",
"status": "1"
},
{
"name": "configuration.hosts",
"status": "1"
},
{
"name": "configuration.maintenance",
"status": "1"
},
{
"name": "configuration.discovery",
"status": "1"
},
{
"name": "configuration.trigger_actions",
"status": "1"
},
{
"name": "configuration.service_actions",
"status": "1"
},
{
"name": "configuration.discovery_actions",
"status": "1"
},
{
"name": "configuration.autoregistration_actions",
"status": "1"
},
{
"name": "configuration.internal_actions",
"status": "1"
},
{
"name": "reports.system_info",
"status": "1"
},
{
"name": "reports.audit",
"status": "1"
},
{
"name": "reports.action_log",
"status": "1"
},
{
"name": "configuration.event_correlation",
"status": "1"
},
{
"name": "administration.media_types",
"status": "1"
},
{
"name": "administration.scripts",
"status": "1"
},
{
"name": "administration.user_groups",
"status": "1"
},
{
"name": "administration.user_roles",
"status": "1"
},
{
"name": "administration.users",
"status": "1"
},
{
"name": "administration.api_tokens",
"status": "1"
},
{
"name": "administration.authentication",
"status": "1"
},
{
"name": "administration.general",
"status": "1"
},
{
"name": "administration.audit_log",
"status": "1"
},
{
"name": "administration.housekeeping",
"status": "1"
},
{
"name": "administration.proxies",
"status": "1"
},
{
"name": "administration.macros",
"status": "1"
},
{
"name": "administration.queue",
"status": "1"
}
],
"ui.default_access": "1",
"services.read.mode": "1",
"services.read.list": [],
"services.read.tag": {
"tag": "",
"value": ""
},
"services.write.mode": "1",
"services.write.list": [],
"services.write.tag": {
"tag": "",
"value": ""
},
"modules": [],
"modules.default_access": "1",
"api.access": "1",
"api.mode": "0",
"api": [],
"actions": [
{
"name": "edit_dashboards",
"status": "1"
},
{
"name": "edit_maps",
"status": "1"
},
{
"name": "acknowledge_problems",
"status": "1"
},
{
"name": "suppress_problems",
"status": "1"
},
{
"name": "close_problems",
"status": "1"
},
{
"name": "change_severity",
"status": "1"
},
{
"name": "add_problem_comments",
"status": "1"
},
{
"name": "execute_scripts",
"status": "1"
},
{
"name": "manage_api_tokens",
"status": "1"
},
{
"name": "edit_maintenance",
"status": "1"
},
{
"name": "manage_scheduled_reports",
"status": "1"
},
{
"name": "manage_sla",
"status": "1"
},
{
"name": "invoke_execute_now",
"status": "1"
}
],
"actions.default_access": "1"
}
}
],
"id": 1
}
See also
• Role rules
• User
Source
CRole::get() in ui/include/classes/api/services/CRole.php.
role.update
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
In addition to the standard role properties, the method accepts the following parameters.
Return values
(object) Returns an object containing the IDs of the updated roles under the roleids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "role.update",
"params": [
{
"roleid": "5",
"rules": {
"actions": [
{
"name": "execute_scripts",
"status": "0"
}
]
}
}
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"roleids": [
"5"
]
},
"id": 1
}
Update the role with ID ”5”, denying it the ability to call any ”create”, ”update”, or ”delete” methods.
Request:
{
"jsonrpc": "2.0",
"method": "role.update",
"params": [
{
"roleid": "5",
"rules": {
"api.access": "1",
"api.mode": "0",
"api": ["*.create", "*.update", "*.delete"]
}
}
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"roleids": [
"5"
]
},
"id": 1
}
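Conversely, with api.mode set to ”1” the api property becomes an allow list; a minimal sketch that would restrict the same role to read-only API calls, using the same wildcard mask syntax as in the example above:
{
    "jsonrpc": "2.0",
    "method": "role.update",
    "params": [
        {
            "roleid": "5",
            "rules": {
                "api.access": "1",
                "api.mode": "1",
                "api": ["*.get"]
            }
        }
    ],
    "id": 1
}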
Source
CRole::update() in ui/include/classes/api/services/CRole.php.
Script
Object references:
• Script
• Webhook parameters
• Debug
• Log entry
Available methods:
• script.create - create new scripts
• script.delete - delete scripts
• script.execute - run scripts
• script.get - retrieve scripts
• script.getscriptsbyhosts - retrieve scripts by hosts
• script.getscriptsbyevents - retrieve scripts by events
• script.update - update scripts
Script object
scriptid ID ID of the script.
Property behavior:
- read-only
- required for update operations
name string Name of the script.
Property behavior:
- required for create operations
type integer Script type.
Possible values:
0 - Script;
1 - IPMI;
2 - SSH;
3 - TELNET;
5 - Webhook;
6 - URL.
Property behavior:
- required for create operations
command string Command to run.
Property behavior:
- required if type is set to ”Script”, ”IPMI”, ”SSH”, ”TELNET”, or
”Webhook”
scope integer Script scope.
Possible values:
1 - action operation;
2 - manual host action;
4 - manual event action.
Property behavior:
- required for create operations
execute_on integer Where to run the script.
Possible values:
0 - run on Zabbix agent;
1 - run on Zabbix server. It is supported only if execution of global
scripts is enabled on Zabbix server;
2 - (default) run on Zabbix server or proxy.
Property behavior:
- supported if type is set to ”Script”
menu_path string Folders separated by slashes that form menu-like navigation in the frontend when clicking on a host or event.
Property behavior:
- supported if scope is set to ”manual host action” or ”manual event action”
authtype integer Authentication method used for SSH script type.
Possible values:
0 - password;
1 - public key.
Property behavior:
- supported if type is set to ”SSH”
username string User name used for authentication.
Property behavior:
- required if type is set to ”SSH” or ”TELNET”
password string Password used for SSH scripts with password authentication and
TELNET scripts.
Property behavior:
- supported if type is set to ”SSH” and authtype is set to ”password”,
or type is set to ”TELNET”
publickey string Name of the public key file used for SSH scripts with public key
authentication.
Property behavior:
- required if type is set to ”SSH” and authtype is set to ”public key”
privatekey string Name of the private key file used for SSH scripts with public key
authentication.
Property behavior:
- required if type is set to ”SSH” and authtype is set to ”public key”
port string Port number used for SSH and TELNET scripts.
Property behavior:
- supported if type is set to ”SSH” or ”TELNET”
groupid ID ID of the host group that the script can be run on.
Default: 0.
usrgrpid ID ID of the user group that will be allowed to run the script.
If set to ”0”, the script will be available for all user groups.
Default: 0.
Property behavior:
- supported if scope is set to ”manual host action” or ”manual event
action”
host_access integer Host permissions needed to run the script.
Possible values:
2 - (default) read;
3 - write.
Property behavior:
- supported if scope is set to ”manual host action” or ”manual event
action”
confirmation string Confirmation pop up text.
The pop up will appear when trying to run the script from the Zabbix
frontend.
Property behavior:
- supported if scope is set to ”manual host action” or ”manual event
action”
timeout string Webhook script execution timeout in seconds. Time suffixes are
supported (e.g., 30s, 1m).
Default: 30s.
Property behavior:
- required if type is set to ”Webhook”
parameters array Array of webhook input parameters.
Property behavior:
- supported if type is set to ”Webhook”
description string Description of the script.
url string User defined URL.
Property behavior:
- required if type is set to ”URL”
new_window integer Open URL in a new window.
Possible values:
0 - No;
1 - (default) Yes.
Property behavior:
- supported if type is set to ”URL”
manualinput integer Indicates whether the script accepts user-provided input.
Possible values:
0 - (default) Disabled;
1 - Enabled;
Property behavior:
- supported if scope is set to ”manual host action” or ”manual event
action”
manualinput_prompt string Manual input prompt text.
Property behavior:
- required if manualinput is set to ”Enabled”
manualinput_validator string A character string field used to validate the user provided input. The
string consists of either a regular expression or a set of values
separated by commas.
Property behavior:
- required if manualinput is set to ”Enabled”
manualinput_validator_type integer Determines the type of user input expected.
Possible values:
0 - (default) String. Indicates that manualinput_validator is to be
treated as a regular expression;
1 - List. Indicates that manualinput_validator is to be treated as a
comma-separated list of possible input values.
Property behavior:
- supported if manualinput is set to ”Enabled”
manualinput_default_value string Default value for auto-filling user input.
Property behavior:
- supported if manualinput_validator_type is set to ”String”
Webhook parameters
Parameters passed to the webhook script when it is called have the following properties.
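Each entry is a name/value pair, and the value may contain supported macros, as shown in the script.create webhook example later in this section; a representative fragment:
"parameters": [
    {
        "name": "token",
        "value": "{$WEBHOOK.TOKEN}"
    },
    {
        "name": "host",
        "value": "{HOST.HOST}"
    }
]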
Debug
Debug information of executed webhook script. The debug object has the following properties.
Log entry
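Both the debug object and its log entries can be seen in the script.execute response later in this section; a representative fragment, where logs is an array of log entry objects and ms holds the execution duration in milliseconds:
"debug": {
    "logs": [
        {
            "level": 3,
            "ms": 480,
            "message": "[Webhook Script] HTTP status: 200."
        }
    ],
    "ms": 495
}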
script.create
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
Return values
(object) Returns an object containing the IDs of the created scripts under the scriptids property. The order of the returned
IDs matches the order of the passed scripts.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "script.create",
"params": {
"name": "Webhook script",
"command": "try {\n var request = new HttpRequest(),\n response,\n data;\n\n request.addHeader('Co
"type": 5,
"timeout": "40s",
"parameters": [
{
"name": "token",
"value": "{$WEBHOOK.TOKEN}"
},
{
"name": "host",
"value": "{HOST.HOST}"
},
{
"name": "v",
"value": "2.2"
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"scriptids": [
"3"
]
},
"id": 1
}
Create an SSH script with public key authentication that can be executed on a host and has a context menu.
Request:
{
"jsonrpc": "2.0",
"method": "script.create",
"params": {
"name": "SSH script",
"command": "my script command",
"type": 2,
"authtype": 1,
"username": "John",
"publickey": "pub.key",
"privatekey": "priv.key",
"password": "secret",
"port": "12345",
"scope": 2,
"menu_path": "All scripts/SSH",
"usrgrpid": "7",
"groupid": "4"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"scriptids": [
"5"
]
},
"id": 1
}
Create a custom script that will reboot a server. The script will require write access to the host and will prompt the user for manual input. Upon successful input submission, the script will display a confirmation message in the frontend.
Request:
{
"jsonrpc": "2.0",
"method": "script.create",
"params": {
"name": "Reboot server",
"command": "reboot server {MANUALINPUT}",
"type": 0,
"scope": 2,
"confirmation": "Are you sure you would like to reboot the server {MANUALINPUT}?",
"manualinput": 1,
"manualinput_prompt": "Which server you want to reboot?",
"manualinput_validator": "[1-9]",
"manualinput_validator_type": 0,
"manualinput_default_value": "1"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"scriptids": [
"4"
]
},
"id": 1
}
Create a URL type script for host scope that remains in the same window and has confirmation text.
Request:
{
"jsonrpc": "2.0",
"method": "script.create",
"params": {
"name": "URL script",
"type": 6,
"scope": 2,
"url": "http://zabbix/ui/zabbix.php?action=host.edit&hostid={HOST.ID}",
"confirmation": "Edit host {HOST.NAME}?",
"new_window": 0
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"scriptids": [
"56"
]
},
"id": 1
}
Create a URL type script for event scope that opens in a new window and has manual input.
Request:
{
"jsonrpc": "2.0",
"method": "script.create",
"params": {
"name": "URL script with manual input",
"type": 6,
"scope": 4,
"url": "http://zabbix/ui/zabbix.php?action={MANUALINPUT}",
"new_window": 1,
"manualinput": 1,
"manualinput_prompt": "Select a page to open:",
"manualinput_validator": "dashboard.view,script.list,actionlog.list",
"manualinput_validator_type": 1
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"scriptids": [
"57"
]
},
"id": 1
}
Source
CScript::create() in ui/include/classes/api/services/CScript.php.
script.delete
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
(array) IDs of the scripts to delete.
Return values
(object) Returns an object containing the IDs of the deleted scripts under the scriptids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "script.delete",
"params": [
"3",
"4"
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"scriptids": [
"3",
"4"
]
},
"id": 1
}
Source
CScript::delete() in ui/include/classes/api/services/CScript.php.
script.execute
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
(object) Parameters containing the ID of the script to run, and either the ID of the host or the ID of the event, plus an optional manualinput value.
scriptid ID ID of the script to run.
Parameter behavior:
- required
hostid ID ID of the host to run the script on.
Parameter behavior:
- required if eventid is not set
eventid ID ID of the event to run the script on.
Parameter behavior:
- required if hostid is not set
manualinput string User-provided value to run the script with, substituting
the {MANUALINPUT} macro.
Return values
Examples
Request:
{
"jsonrpc": "2.0",
"method": "script.execute",
"params": {
"scriptid": "4",
"hostid": "30079"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"response": "success",
"value": "{\"status\":\"sent\",\"timestamp\":\"1611235391\"}",
"debug": {
"logs": [
{
"level": 3,
"ms": 480,
"message": "[Webhook Script] HTTP status: 200."
}
],
"ms": 495
}
},
"id": 1
}
Request:
{
"jsonrpc": "2.0",
"method": "script.execute",
"params": {
"scriptid": "1",
"hostid": "30079"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"response": "success",
"value": "PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.\n64 bytes from 127.0.0.1: icmp_req=1 tt
"debug": []
},
"id": 1
}
Run a ”ping” script with command ”ping -c {MANUALINPUT} {HOST.CONN}; case $? in [01]) true;; *) false;; esac” on a host.
Request:
{
"jsonrpc": "2.0",
"method": "script.execute",
"params": {
"scriptid": "7",
"hostid": "30079",
"manualinput": "2"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"response": "success",
"value": "PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.\n64 bytes from 127.0.0.1: icmp_seq=1 tt
"debug": []
},
"id": 1
}
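The same method also accepts eventid in place of hostid (the two are mutually exclusive, as described above); a minimal sketch that would run a manual event action script against event ”632” (IDs purely illustrative):
{
    "jsonrpc": "2.0",
    "method": "script.execute",
    "params": {
        "scriptid": "3",
        "eventid": "632"
    },
    "id": 1
}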
Source
CScript::execute() in ui/include/classes/api/services/CScript.php.
script.get
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
groupids ID/array Return only scripts that can be run on the given host groups.
hostids ID/array Return only scripts that can be run on the given hosts.
scriptids ID/array Return only scripts with the given IDs.
usrgrpids ID/array Return only scripts that can be run by users in the given user groups.
selectHostGroups query Return a hostgroups property with host groups that the script can be
run on.
selectHosts query Return a hosts property with hosts that the script can be run on.
selectActions query Return an actions property with actions that the script is associated
with.
sortfield string/array Sort the result by the given properties.
filter object
limit integer
output query
preservekeys boolean
search object
searchByAny boolean
searchWildcardsEnabled boolean
sortorder string/array
startSearch boolean
selectGroups (deprecated) query This parameter is deprecated, please use selectHostGroups instead.
Return a groups property with host groups that the script can be run on.
Return values
Examples
Request:
{
"jsonrpc": "2.0",
"method": "script.get",
"params": {
"output": "extend"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"scriptid": "1",
"name": "Ping",
"command": "/bin/ping -c 3 {HOST.CONN} 2>&1",
"host_access": "2",
"usrgrpid": "0",
"groupid": "0",
"description": "",
"confirmation": "",
"type": "0",
"execute_on": "1",
"timeout": "30s",
"scope": "2",
"port": "",
"authtype": "0",
"username": "",
"password": "",
"publickey": "",
"privatekey": "",
"menu_path": "",
"url": "",
"new_window": "1",
"manualinput": "0",
"manualinput_prompt": "",
"manualinput_validator": "",
"manualinput_validator_type": "0",
"manualinput_default_value": "",
"parameters": []
},
{
"scriptid": "2",
"name": "Traceroute",
"command": "/usr/bin/traceroute {HOST.CONN} 2>&1",
"host_access": "2",
"usrgrpid": "0",
"groupid": "0",
"description": "",
"confirmation": "",
"type": "0",
"execute_on": "1",
"timeout": "30s",
"scope": "2",
"port": "",
"authtype": "0",
"username": "",
"password": "",
"publickey": "",
"privatekey": "",
"menu_path": "",
"url": "",
"new_window": "1",
"manualinput": "0",
"manualinput_prompt": "",
"manualinput_validator": "",
"manualinput_validator_type": "0",
"manualinput_default_value": "",
"parameters": []
},
{
"scriptid": "3",
"name": "Detect operating system",
"command": "sudo /usr/bin/nmap -O {HOST.CONN} 2>&1",
"host_access": "2",
"usrgrpid": "7",
"groupid": "0",
"description": "",
"confirmation": "",
"type": "0",
"execute_on": "1",
"timeout": "30s",
"scope": "2",
"port": "",
"authtype": "0",
"username": "",
"password": "",
"publickey": "",
"privatekey": "",
"menu_path": "",
"url": "",
"new_window": "1",
"manualinput": "0",
"manualinput_prompt": "",
"manualinput_validator": "",
"manualinput_validator_type": "0",
"manualinput_default_value": "",
"parameters": []
},
{
"scriptid": "4",
"name": "Webhook",
"command": "try {\n var request = new HttpRequest(),\n response,\n data;\n\n request.addHeader
"host_access": "2",
"usrgrpid": "7",
"groupid": "0",
"description": "",
"confirmation": "",
"type": "5",
"execute_on": "1",
"timeout": "30s",
"scope": "2",
"port": "",
"authtype": "0",
"username": "",
"password": "",
"publickey": "",
"privatekey": "",
"menu_path": "",
"url": "",
"new_window": "1",
"manualinput": "0",
"manualinput_prompt": "",
"manualinput_validator": "",
"manualinput_validator_type": "0",
"manualinput_default_value": "",
"parameters": [
{
"name": "token",
"value": "{$WEBHOOK.TOKEN}"
},
{
"name": "host",
"value": "{HOST.HOST}"
},
{
"name": "v",
"value": "2.2"
}
]
},
{
"scriptid": "5",
"name": "URL",
"command": "",
"host_access": "2",
"usrgrpid": "0",
"groupid": "0",
"description": "",
"confirmation": "Go to {HOST.NAME}?",
"type": "6",
"execute_on": "1",
"timeout": "30s",
"scope": "4",
"port": "",
"authtype": "0",
"username": "",
"password": "",
"publickey": "",
"privatekey": "",
"menu_path": "",
"url": "http://zabbix/ui/zabbix.php?action=latest.view&hostids[]={HOST.ID}",
"new_window": "0",
"manualinput": "0",
"manualinput_prompt": "",
"manualinput_validator": "",
"manualinput_validator_type": "0",
"manualinput_default_value": "",
"parameters": []
},
{
"scriptid": "6",
"name": "URL with user input",
"command": "",
"host_access": "2",
"usrgrpid": "0",
"groupid": "0",
"description": "",
"confirmation": "Open zabbix page {MANUALINPUT}?",
"type": "6",
"execute_on": "1",
"timeout": "30s",
"scope": "2",
"port": "",
"authtype": "0",
"username": "",
"password": "",
"publickey": "",
"privatekey": "",
"menu_path": "",
"url": "http://zabbix/ui/zabbix.php?action={MANUALINPUT}",
"new_window": "0",
"manualinput": "1",
"manualinput_prompt": "Select a page to open:",
"manualinput_validator": "dashboard.view,script.list,actionlog.list",
"manualinput_validator_type": "1",
"parameters": []
}
],
"id": 1
}
See also
• Host
• Host group
Source
CScript::get() in ui/include/classes/api/services/CScript.php.
script.getscriptsbyevents
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
(object/array) The method accepts an object or an array of objects with the following parameters.
eventid ID ID of the event.
Parameter behavior:
- required
scriptid ID ID of the script to return.
manualinput string Value of the user-provided {MANUALINPUT} macro.
Return values
(object) Returns an object with event IDs as properties and arrays of available scripts as values. If script ID is provided, the
associated value is an array containing the specific script.
Note:
The method will automatically expand macros in the confirmation text, manualinput prompt text and url.
If the manualinput parameter is provided, the {MANUALINPUT} macro will be resolved to the specified value.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "script.getscriptsbyevents",
"params": [
{
"eventid": "632"
},
{
"eventid": "614"
}
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"632": [
{
"scriptid": "3",
"name": "Detect operating system",
"command": "sudo /usr/bin/nmap -O {HOST.CONN} 2>&1",
"host_access": "2",
"usrgrpid": "7",
"groupid": "0",
"description": "",
"confirmation": "",
"type": "0",
"execute_on": "1",
"timeout": "30s",
"scope": "4",
"port": "",
"authtype": "0",
"username": "",
"password": "",
"publickey": "",
"privatekey": "",
"menu_path": "",
"url": "",
"new_window": "1",
"manualinput": "0",
"manualinput_prompt": "",
"manualinput_validator_type": "0",
"manualinput_validator": "",
"manualinput_default_value": "",
"parameters": []
},
{
"scriptid": "1",
"name": "Ping",
"command": "/bin/ping -c 3 {HOST.CONN} 2>&1",
"host_access": "2",
"usrgrpid": "0",
"groupid": "0",
"description": "",
"confirmation": "",
"type": "0",
"execute_on": "1",
"timeout": "30s",
"scope": "4",
"port": "",
"authtype": "0",
"username": "",
"password": "",
"publickey": "",
"privatekey": "",
"menu_path": "",
"url": "",
"new_window": "1",
"manualinput": "0",
"manualinput_prompt": "",
"manualinput_validator_type": "0",
"manualinput_validator": "",
"manualinput_default_value": "",
"parameters": []
},
{
"scriptid": "4",
"name": "Open Zabbix page",
"command": "",
"host_access": "2",
"usrgrpid": "0",
"groupid": "0",
"description": "",
"confirmation": "Are you sure you want to open page *UNKNOWN*?",
"type": "6",
"execute_on": "2",
"timeout": "30s",
"scope": "4",
"port": "",
"authtype": "0",
"username": "",
"password": "",
"publickey": "",
"privatekey": "",
"menu_path": "",
"url": "http://localhost/ui/zabbix.php?action=*UNKNOWN*",
"new_window": "1",
"manualinput": "1",
"manualinput_prompt": "Zabbix page to open:",
"manualinput_validator_type": "1",
"manualinput_validator": "dashboard.view,discovery.view",
"manualinput_default_value": "",
"parameters": []
},
{
"scriptid": "2",
"name": "Traceroute",
"command": "/usr/bin/traceroute {HOST.CONN} 2>&1",
"host_access": "2",
"usrgrpid": "0",
"groupid": "0",
"description": "",
"confirmation": "",
"type": "0",
"execute_on": "1",
"timeout": "30s",
"scope": "4",
"port": "",
"authtype": "0",
"username": "",
"password": "",
"publickey": "",
"privatekey": "",
"menu_path": "",
"url": "",
"new_window": "1",
"manualinput": "0",
"manualinput_prompt": "",
"manualinput_validator_type": "0",
"manualinput_validator": "",
"manualinput_default_value": "",
"parameters": []
}
],
"614": [
{
"scriptid": "3",
"name": "Detect operating system",
"command": "sudo /usr/bin/nmap -O {HOST.CONN} 2>&1",
"host_access": "2",
"usrgrpid": "7",
"groupid": "0",
"description": "",
"confirmation": "",
"type": "0",
"execute_on": "1",
"timeout": "30s",
"scope": "4",
"port": "",
"authtype": "0",
"username": "",
"password": "",
"publickey": "",
"privatekey": "",
"menu_path": "",
"url": "",
"new_window": "1",
"manualinput": "0",
"manualinput_prompt": "",
"manualinput_validator_type": "1",
"manualinput_validator": "",
"manualinput_default_value": "",
"parameters": []
},
{
"scriptid": "1",
"name": "Ping",
"command": "/bin/ping -c 3 {HOST.CONN} 2>&1",
"host_access": "2",
"usrgrpid": "0",
"groupid": "0",
"description": "",
"confirmation": "",
"type": "0",
"execute_on": "1",
"timeout": "30s",
"scope": "4",
"port": "",
"authtype": "0",
"username": "",
"password": "",
"publickey": "",
"privatekey": "",
"menu_path": "",
"url": "",
"new_window": "1",
"manualinput": "0",
"manualinput_prompt": "",
"manualinput_validator_type": "0",
"manualinput_validator": "",
"manualinput_default_value": "",
"parameters": []
},
{
"scriptid": "4",
"name": "Open Zabbix page",
"command": "",
"host_access": "2",
"usrgrpid": "0",
"groupid": "0",
"description": "",
"confirmation": "Are you sure you want to open page *UNKNOWN*?",
"type": "6",
"execute_on": "2",
"timeout": "30s",
"scope": "4",
"port": "",
"authtype": "0",
"username": "",
"password": "",
"publickey": "",
"privatekey": "",
"menu_path": "",
"url": "http://localhost/ui/zabbix.php?action=*UNKNOWN*",
"new_window": "1",
"manualinput": "1",
"manualinput_prompt": "Zabbix page to open:",
"manualinput_validator_type": "1",
"manualinput_validator": "dashboard.view,discovery.view",
"manualinput_default_value": "",
"parameters": []
},
{
"scriptid": "2",
"name": "Traceroute",
"command": "/usr/bin/traceroute {HOST.CONN} 2>&1",
"host_access": "2",
"usrgrpid": "0",
"groupid": "0",
"description": "",
"confirmation": "",
"type": "0",
"execute_on": "1",
"timeout": "30s",
"scope": "4",
"port": "",
"authtype": "0",
"username": "",
"password": "",
"publickey": "",
"privatekey": "",
"menu_path": "",
"url": "",
"new_window": "1",
"manualinput": "0",
"manualinput_prompt": "",
"manualinput_validator_type": "0",
"manualinput_validator": "",
"manualinput_default_value": "",
"parameters": []
}
]
},
"id": 1
}
Retrieve script with ID ”4” on event ”632” with manualinput value ”dashboard.view”.
Request:
{
"jsonrpc": "2.0",
"method": "script.getscriptsbyevents",
"params": [
{
"eventid": "632",
"scriptid": "4",
"manualinput": "dashboard.view"
}
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"632": [
{
"scriptid": "4",
"name": "Open Zabbix page",
"command": "",
"host_access": "2",
"usrgrpid": "0",
"groupid": "0",
"description": "",
"confirmation": "Are you sure you want to open page dashboard.view?",
"type": "6",
"execute_on": "2",
"timeout": "30s",
"scope": "4",
"port": "",
"authtype": "0",
"username": "",
"password": "",
"publickey": "",
"privatekey": "",
"menu_path": "",
"url": "http://localhost/ui/zabbix.php?action=dashboard.view",
"new_window": "1",
"manualinput": "1",
"manualinput_prompt": "Zabbix page to open:",
"manualinput_validator_type": "1",
"manualinput_validator": "dashboard.view,discovery.view",
"manualinput_default_value": "",
"parameters": []
}
]
},
"id": 1
}
Source
CScript::getScriptsByEvents() in ui/include/classes/api/services/CScript.php.
script.getscriptsbyhosts
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
(object/array) The method accepts an object or an array of objects with the following parameters.
hostid ID ID of the host.
Parameter behavior:
- required
scriptid ID ID of the script to return.
manualinput string Value of the user-provided {MANUALINPUT} macro.
Return values
(object) Returns an object with host IDs as properties and arrays of available scripts as values. If script ID is provided, the
associated value is an array containing the specific script.
Note:
The method will automatically expand macros in the confirmation text, manualinput prompt text and url.
If the manualinput parameter is provided, the {MANUALINPUT} macro will be resolved to the specified value.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "script.getscriptsbyhosts",
"params": [
{
"hostid": "30079"
},
{
"hostid": "30073"
}
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"30079": [
{
"scriptid": "3",
"name": "Detect operating system",
"command": "sudo /usr/bin/nmap -O {HOST.CONN} 2>&1",
"host_access": "2",
"usrgrpid": "7",
"groupid": "0",
"description": "",
"confirmation": "",
"type": "0",
"execute_on": "1",
"timeout": "30s",
"scope": "2",
"port": "",
"authtype": "0",
"username": "",
"password": "",
"publickey": "",
"privatekey": "",
"menu_path": "",
"url": "",
"new_window": "1",
"manualinput": "0",
"manualinput_prompt": "",
"manualinput_validator_type": "0",
"manualinput_validator": "",
"manualinput_default_value": "",
"parameters": []
},
{
"scriptid": "1",
"name": "Ping",
"command": "/bin/ping -c 3 {HOST.CONN} 2>&1",
"host_access": "2",
"usrgrpid": "0",
"groupid": "0",
"description": "",
"confirmation": "",
"type": "0",
"execute_on": "1",
"timeout": "30s",
"scope": "2",
"port": "",
"authtype": "0",
"username": "",
"password": "",
"publickey": "",
"privatekey": "",
"menu_path": "",
"url": "",
"new_window": "1",
"manualinput": "0",
"manualinput_prompt": "",
"manualinput_validator_type": "0",
"manualinput_validator": "",
"manualinput_default_value": "",
"parameters": []
},
{
"scriptid": "4",
"name": "Open Zabbix page",
"command": "",
"host_access": "2",
"usrgrpid": "0",
"groupid": "0",
"description": "",
"confirmation": "Are you sure you want to open page *UNKNOWN*?",
"type": "6",
"execute_on": "2",
"timeout": "30s",
"scope": "2",
"port": "",
"authtype": "0",
"username": "",
"password": "",
"publickey": "",
"privatekey": "",
"menu_path": "",
"url": "http://localhost/ui/zabbix.php?action=*UNKNOWN*",
"new_window": "1",
"manualinput": "0",
"manualinput_prompt": "Zabbix page to open:",
"manualinput_validator_type": "0",
"manualinput_validator": "dashboard.view,discovery.view",
"manualinput_default_value": "",
"parameters": []
},
{
"scriptid": "2",
"name": "Traceroute",
"command": "/usr/bin/traceroute {HOST.CONN} 2>&1",
"host_access": "2",
"usrgrpid": "0",
"groupid": "0",
"description": "",
"confirmation": "",
"type": "0",
"execute_on": "1",
"timeout": "30s",
"scope": "2",
"port": "",
"authtype": "0",
"username": "",
"password": "",
"publickey": "",
"privatekey": "",
"menu_path": "",
"url": "",
"new_window": "1",
"manualinput": "0",
"manualinput_prompt": "",
"manualinput_validator_type": "0",
"manualinput_validator": "",
"manualinput_default_value": "",
"parameters": []
}
],
"30073": [
{
"scriptid": "3",
"name": "Detect operating system",
"command": "sudo /usr/bin/nmap -O {HOST.CONN} 2>&1",
"host_access": "2",
"usrgrpid": "7",
"groupid": "0",
"description": "",
"confirmation": "",
"type": "0",
"execute_on": "1",
"timeout": "30s",
"scope": "2",
"port": "",
"authtype": "0",
"username": "",
"password": "",
"publickey": "",
"privatekey": "",
"menu_path": "",
"url": "",
"new_window": "1",
"manualinput": "0",
"manualinput_prompt": "",
"manualinput_validator_type": "0",
"manualinput_validator": "",
"manualinput_default_value": "",
"parameters": []
},
{
"scriptid": "1",
"name": "Ping",
"command": "/bin/ping -c 3 {HOST.CONN} 2>&1",
"host_access": "2",
"usrgrpid": "0",
"groupid": "0",
"description": "",
"confirmation": "",
"type": "0",
"execute_on": "1",
"timeout": "30s",
"scope": "2",
"port": "",
"authtype": "0",
"username": "",
"password": "",
"publickey": "",
"privatekey": "",
"menu_path": "",
"url": "",
"new_window": "1",
"manualinput": "0",
"manualinput_prompt": "",
"manualinput_validator_type": "0",
"manualinput_validator": "",
"manualinput_default_value": "",
"parameters": []
},
{
"scriptid": "4",
"name": "Open Zabbix page",
"command": "",
"host_access": "2",
"usrgrpid": "0",
"groupid": "0",
"description": "",
"confirmation": "Are you sure you want to open page *UNKNOWN*?",
"type": "6",
"execute_on": "2",
"timeout": "30s",
"scope": "2",
"port": "",
"authtype": "0",
"username": "",
"password": "",
"publickey": "",
"privatekey": "",
"menu_path": "",
"url": "http://localhost/ui/zabbix.php?action=*UNKNOWN*",
"new_window": "1",
"manualinput": "1",
"manualinput_prompt": "Zabbix page to open:",
"manualinput_validator_type": "1",
"manualinput_validator": "dashboard.view,discovery.view",
"manualinput_default_value": "",
"parameters": []
},
{
"scriptid": "2",
"name": "Traceroute",
"command": "/usr/bin/traceroute {HOST.CONN} 2>&1",
"host_access": "2",
"usrgrpid": "0",
"groupid": "0",
"description": "",
"confirmation": "",
"type": "0",
"execute_on": "1",
"timeout": "30s",
"scope": "2",
"port": "",
"authtype": "0",
"username": "",
"password": "",
"publickey": "",
"privatekey": "",
"menu_path": "",
"url": "",
"new_window": "1",
"manualinput": "0",
"manualinput_prompt": "",
"manualinput_validator_type": "0",
"manualinput_validator": "",
"manualinput_default_value": "",
"parameters": []
}
]
},
"id": 1
}
Retrieve script with ID ”4” on host ”30079” with manualinput value ”dashboard.view”.
Request:
{
"jsonrpc": "2.0",
"method": "script.getscriptsbyhosts",
"params": [
{
"hostid": "30079",
"scriptid": "4",
"manualinput": "dashboard.view"
}
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"30079": [
{
"scriptid": "4",
"name": "Open Zabbix page",
"command": "",
"host_access": "2",
"usrgrpid": "0",
"groupid": "0",
"description": "",
"confirmation": "Are you sure you want to open page dashboard.view?",
"type": "6",
"execute_on": "2",
"timeout": "30s",
"scope": "2",
"port": "",
"authtype": "0",
"username": "",
"password": "",
"publickey": "",
"privatekey": "",
"menu_path": "",
"url": "http://localhost/ui/zabbix.php?action=dashboard.view",
"new_window": "1",
"manualinput": "1",
"manualinput_prompt": "Zabbix page to open:",
"manualinput_validator_type": "1",
"manualinput_validator": "dashboard.view,discovery.view",
"manualinput_default_value": "",
"parameters": []
}
]
},
"id": 1
}
Source
CScript::getScriptsByHosts() in ui/include/classes/api/services/CScript.php.
script.update
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
Return values
(object) Returns an object containing the IDs of the updated scripts under the scriptids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "script.update",
"params": {
"scriptid": "1",
"command": "/bin/ping -c 10 {HOST.CONN} 2>&1"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"scriptids": [
"1"
]
},
"id": 1
}
Request:
{
"jsonrpc": "2.0",
"method": "script.update",
"params": {
"scriptid": "1",
"command": "/bin/ping -c {MANUALINPUT} {HOST.CONN} 2>&1",
"manualinput": "1",
"manualinput_prompt": "Specify the number of ICMP packets to send with the ping command",
"manualinput_validator": "^(?:[1-9]|10)$",
"manualinput_validator_type": "0",
"manualinput_default_value": "10"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"scriptids": [
"1"
]
},
"id": 1
}
Source
CScript::update() in ui/include/classes/api/services/CScript.php.
Service
Object references:
• Service
• Status rule
• Service tag
• Service alarm
• Problem tag
Available methods:
• service.create - create new services
• service.delete - delete services
• service.get - retrieve services
• service.update - update services
Service object
serviceid ID ID of the service.
Property behavior:
- read-only
- required for update operations
algorithm integer Status calculation rule. Only applicable if child services exist.
Possible values:
0 - set status to OK;
1 - most critical if all children have problems;
2 - most critical of child services.
Property behavior:
- required for create operations
name string Name of the service.
Property behavior:
- required for create operations
sortorder integer Position of the service used for sorting.
Property behavior:
- required for create operations
weight integer Service weight.
Default: 0.
propagation_rule integer Status propagation rule.
Possible values:
0 - (default) propagate service status as is - without any changes;
1 - increase the propagated status by a given propagation_value
(by 1 to 5 severities);
2 - decrease the propagated status by a given propagation_value
(by 1 to 5 severities);
3 - ignore this service - the status is not propagated to the parent
service at all;
4 - set fixed service status using a given propagation_value.
Property behavior:
- required if propagation_value is set
propagation_value integer Status propagation value.
Property behavior:
- required if propagation_rule is set
status integer Whether the service is in OK or problem state.
Property behavior:
- read-only
description string Description of the service.
uuid string Universal unique identifier, used for linking imported services to
already existing ones. Auto-generated, if not given.
created_at integer Unix timestamp when service was created.
readonly boolean Access to the service.
Possible values:
0 - Read-write;
1 - Read-only.
Property behavior:
- read-only
Status rule
type integer Condition for setting (New status) status.
Possible values:
0 - if at least (N) child services have (Status) status or above;
1 - if at least (N%) of child services have (Status) status or above;
2 - if less than (N) child services have (Status) status or below;
3 - if less than (N%) of child services have (Status) status or below;
4 - if weight of child services with (Status) status or above is at least
(W);
5 - if weight of child services with (Status) status or above is at least
(N%);
6 - if weight of child services with (Status) status or below is less than
(W);
7 - if weight of child services with (Status) status or below is less than
(N%).
Where:
- N (W) is limit_value;
- (Status) is limit_status;
- (New status) is new_status.
Property behavior:
- required
limit_value integer Limit value.
Possible values:
- for N and W: 1-100000;
- for N%: 1-100.
Property behavior:
- required
limit_status integer Limit status.
Possible values:
-1 - OK;
0 - Not classified;
1 - Information;
2 - Warning;
3 - Average;
4 - High;
5 - Disaster.
Property behavior:
- required
new_status integer New status value.
Possible values:
0 - Not classified;
1 - Information;
2 - Warning;
3 - Average;
4 - High;
5 - Disaster.
Property behavior:
- required
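Status rules are supplied to service.create and service.update in a status_rules array (see selectStatusRules in service.get). A minimal sketch, assuming the condition property is named type as listed above, that raises the service to High when at least 50% of child services are Warning or above:
{
    "jsonrpc": "2.0",
    "method": "service.update",
    "params": {
        "serviceid": "5",
        "status_rules": [
            {
                "type": 1,
                "limit_value": 50,
                "limit_status": 2,
                "new_status": 4
            }
        ]
    },
    "id": 1
}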
Service tag
tag string Service tag name.
Property behavior:
- required
value string Service tag value.
Service alarm
Note:
Service alarms cannot be directly created, updated or deleted via the Zabbix API.
The service alarm objects represent a service’s state change. It has the following properties.
clock timestamp Time when the service state change has happened.
value integer Status of the service.
Problem tag
Problem tags allow linking services with problem events. The problem tag object has the following properties.
tag string Problem tag name.
Property behavior:
- required
operator integer Mapping condition operator.
Possible values:
0 - (default) equals;
2 - like.
value string Problem tag value.
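Problem tags are passed to service.create and service.update in a problem_tags array (see the service.update parameters later in this section); a minimal sketch, with tag name and value purely illustrative, that links problems tagged scope:availability to a service:
"problem_tags": [
    {
        "tag": "scope",
        "operator": 0,
        "value": "availability"
    }
]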
service.create
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
(object/array) Services to create.
In addition to the standard service properties, the method accepts the following parameters.
children array Child services to be linked to the service.
The child services must have only the serviceid property defined.
parents array Parent services to be linked to the service.
The parent services must have only the serviceid property defined.
Return values
(object) Returns an object containing the IDs of the created services under the serviceids property. The order of the returned
IDs matches the order of the passed services.
Examples
Creating a service
Create a service that will be switched to problem state, if at least one child has a problem.
Request:
{
"jsonrpc": "2.0",
"method": "service.create",
"params": {
"name": "Server 1",
"algorithm": 1,
"sortorder": 1
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"serviceids": [
"5"
]
},
"id": 1
}
Source
CService::create() in ui/include/classes/api/services/CService.php.
service.delete
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
(array) IDs of the services to delete.
Return values
(object) Returns an object containing the IDs of the deleted services under the serviceids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "service.delete",
"params": [
"4",
"5"
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"serviceids": [
"4",
"5"
]
},
"id": 1
}
Source
CService::delete() in ui/include/classes/api/services/CService.php.
service.get
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
evaltype integer Rules for tag searching.
Possible values:
0 - (default) And/Or;
2 - Or.
tags object/array Return only services with given tags. Exact match by tag and case-sensitive or case-insensitive search by tag value depending on operator value.
Format: [{"tag": "<tag>", "value": "<value>", "operator": "<operator>"}, ...].
An empty array returns all services.
selectChildren query Return a children property with the child services.
Supports count.
selectParents query Return a parents property with the parent services.
Supports count.
selectTags query Return a tags property with service tags.
Supports count.
selectProblemEvents query Return a problem_events property with an array of problem event
objects.
Supports count.
selectProblemTags query Return a problem_tags property with problem tags.
Supports count.
selectStatusRules query Return a status_rules property with status rules.
Supports count.
Return values
Examples
Request:
{
"jsonrpc": "2.0",
"method": "service.get",
"params": {
"output": "extend",
"selectChildren": "extend",
"selectParents": "extend"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"serviceid": "1",
"name": "My Service - 0001",
"status": "-1",
"algorithm": "2",
"sortorder": "0",
"weight": "0",
"propagation_rule": "0",
"propagation_value": "0",
"description": "My Service Description 0001.",
"uuid": "dfa4daeaea754e3a95c04d6029182681",
"created_at": "946684800",
"readonly": false,
"parents": [],
"children": []
},
{
"serviceid": "2",
"name": "My Service - 0002",
"status": "-1",
"algorithm": "2",
"sortorder": "0",
"weight": "0",
"propagation_rule": "0",
"propagation_value": "0",
"description": "My Service Description 0002.",
"uuid": "20ea0d85212841219130abeaca28c065",
"created_at": "946684800",
"readonly": false,
"parents": [],
"children": []
}
],
"id": 1
}
Source
CService::get() in ui/include/classes/api/services/CService.php.
service.update
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
In addition to the standard service properties, the method accepts the following parameters.
children array Child services to replace the current service children.
The child services must have only the serviceid property defined.
parents array Parent services to replace the current parent services.
The parent services must have only the serviceid property defined.
tags array Service tags to replace the current service tags.
problem_tags array Problem tags to replace the current problem tags.
Return values
(object) Returns an object containing the IDs of the updated services under the serviceids property.
Examples
Make the service with ID ”3” the parent of the service with ID ”5”.
Request:
{
"jsonrpc": "2.0",
"method": "service.update",
"params": {
"serviceid": "5",
"parents": [
{
"serviceid": "3"
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"serviceids": [
"5"
]
},
"id": 1
}
Add a downtime for service with ID ”4” scheduled weekly from Monday 22:00 till Tuesday 10:00.
Request:
{
"jsonrpc": "2.0",
"method": "service.update",
"params": {
"serviceid": "4",
"times": [
{
"type": "1",
"ts_from": "165600",
"ts_to": "201600"
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"serviceids": [
"4"
]
},
"id": 1
}
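Service tags set via the tags parameter are what SLA service tags and role service access rules are matched against; a minimal sketch that tags service ”5” so it would be picked up by the Database/MySQL SLA example later in this section:
{
    "jsonrpc": "2.0",
    "method": "service.update",
    "params": {
        "serviceid": "5",
        "tags": [
            {
                "tag": "Database",
                "value": "MySQL"
            }
        ]
    },
    "id": 1
}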
Source
CService::update() in ui/include/classes/api/services/CService.php.
Settings
Object references:
• Settings
Available methods:
• settings.get - retrieve settings
• settings.update - update settings
Settings object
default_lang string System language by default.
Default: en_GB.
default_timezone string System time zone by default.
For the full list of supported time zones please refer to PHP
documentation.
default_theme string Default theme.
Possible values:
blue-theme - (default) Blue;
dark-theme - Dark;
hc-light - High-contrast light;
hc-dark - High-contrast dark.
search_limit integer Limit for search and filter results.
Default: 1000.
max_overview_table_size integer Max number of columns and rows in Data overview and Trigger
overview dashboard widgets.
Default: 50.
max_in_table integer Max count of elements to show inside table cell.
Default: 50.
server_check_interval integer Show warning if Zabbix server is down.
Possible values:
0 - Do not show warning;
10 - (default) Show warning.
work_period string Working time.
Default: 1-5,09:00-18:00.
show_technical_errors integer Show technical errors (PHP/SQL) to non-Super admin users and to
users that are not part of user groups with debug mode enabled.
Possible values:
0 - (default) Do not show technical errors;
1 - Show technical errors.
history_period string Max period to display history data in Latest data, Web, and Data
overview dashboard widgets.
Accepts seconds and time unit with suffix.
Default: 24h.
period_default string Time filter default period.
Accepts seconds and time unit with suffix with month and year support
(30s, 1m, 2h, 1d, 1M, 1y).
Default: 1h.
max_period string Max period for time filter.
Accepts seconds and time unit with suffix with month and year support
(30s, 1m, 2h, 1d, 1M, 1y).
Default: 2y.
severity_color_0 string Color for ”Not classified” severity as a hexadecimal color code.
Default: 97AAB3.
severity_color_1 string Color for ”Information” severity as a hexadecimal color code.
Default: 7499FF.
severity_color_2 string Color for ”Warning” severity as a hexadecimal color code.
Default: FFC859.
severity_color_3 string Color for ”Average” severity as a hexadecimal color code.
Default: FFA059.
severity_color_4 string Color for ”High” severity as a hexadecimal color code.
Default: E97659.
severity_color_5 string Color for ”Disaster” severity as a hexadecimal color code.
Default: E45959.
severity_name_0 string Name for ”Not classified” severity.
Default: Not classified.
severity_name_1 string Name for ”Information” severity.
Default: Information.
severity_name_2 string Name for ”Warning” severity.
Default: Warning.
severity_name_3 string Name for ”Average” severity.
Default: Average.
severity_name_4 string Name for ”High” severity.
Default: High.
severity_name_5 string Name for ”Disaster” severity.
Default: Disaster.
custom_color integer Use custom event status colors.
Possible values:
0 - (default) Do not use custom event status colors;
1 - Use custom event status colors.
ok_period string Display OK triggers period.
Accepts seconds and time unit with suffix.
Default: 5m.
blink_period string On status change triggers blink period.
Accepts seconds and time unit with suffix.
Default: 2m.
problem_unack_color string Color for unacknowledged PROBLEM events as a hexadecimal color
code.
Default: CC0000.
problem_ack_color string Color for acknowledged PROBLEM events as a hexadecimal color code.
Default: CC0000.
ok_unack_color string Color for unacknowledged RESOLVED events as a hexadecimal color
code.
Default: 009900.
ok_ack_color string Color for acknowledged RESOLVED events as a hexadecimal color code.
Default: 009900.
problem_unack_style integer Blinking for unacknowledged PROBLEM events.
Possible values:
0 - Do not show blinking;
1 - (default) Show blinking.
problem_ack_style integer Blinking for acknowledged PROBLEM events.
Possible values:
0 - Do not show blinking;
1 - (default) Show blinking.
ok_unack_style integer Blinking for unacknowledged RESOLVED events.
Possible values:
0 - Do not show blinking;
1 - (default) Show blinking.
ok_ack_style integer Blinking for acknowledged RESOLVED events.
Possible values:
0 - Do not show blinking;
1 - (default) Show blinking.
url string Frontend URL.
discovery_groupid ID ID of the host group in which automatically discovered hosts will be placed.
default_inventory_mode integer Default host inventory mode.
Possible values:
-1 - (default) Disabled;
0 - Manual;
1 - Automatic.
alert_usrgrpid ID ID of the user group to which the database down alarm message will be sent.
snmptrap_logging integer Log unmatched SNMP traps.
Possible values:
0 - Do not log unmatched SNMP traps;
1 - (default) Log unmatched SNMP traps.
login_attempts integer Number of failed login attempts after which login form will be blocked.
Default: 5.
login_block string Time interval during which login form will be blocked if number of
failed login attempts exceeds defined in login_attempts field.
Accepts seconds and time unit with suffix.
Default: 30s.
validate_uri_schemes integer Validate URI schemes.
Possible values:
0 - Do not validate;
1 - (default) Validate.
uri_valid_schemes string Valid URI schemes.
Default: http,https,ftp,file,mailto,tel,ssh.
x_frame_options string X-Frame-Options HTTP header.
Default: SAMEORIGIN.
iframe_sandboxing_enabled
integer Use iframe sandboxing.
Possible values:
0 - Do not use;
1 - (default) Use.
iframe_sandboxing_exceptions
string Iframe sandboxing exceptions.
connect_timeout string Connection timeout with Zabbix server.
Default: 3s.
Property behavior:
- required
socket_timeout string Network default timeout.
Default: 3s.
Property behavior:
- required
media_type_test_timeout string Network timeout for media type test.
Default: 65s.
Property behavior:
- required
item_test_timeout string Network timeout for item tests.
Default: 60s.
Property behavior:
- required
script_timeout string Network timeout for script execution.
Default: 60s.
Property behavior:
- required
report_test_timeout string Network timeout for scheduled report test.
Default: 60s.
Property behavior:
- required
auditlog_enabled integer Whether to enable audit logging.
Possible values:
0 - Disable;
1 - (default) Enable.
auditlog_mode integer Whether to enable audit logging of low-level discovery, network
discovery and autoregistration activities performed by the server
(System user).
Possible values:
0 - Disable;
1 - (default) Enable.
ha_failover_delay string Failover delay in seconds.
Default: 1m.
Property behavior:
- read-only
geomaps_tile_provider string Geomap tile provider.
Possible values:
OpenStreetMap.Mapnik - (default) OpenStreetMap Mapnik;
OpenTopoMap - OpenTopoMap;
Stamen.TonerLite - Stamen Toner Lite;
Stamen.Terrain - Stamen Terrain;
USGS.USTopo - USGS US Topo;
USGS.USImagery - USGS US Imagery.
geomaps_tile_url string Geomap tile URL.
Property behavior:
- supported if geomaps_tile_provider is set to empty string
geomaps_max_zoom integer Geomap max zoom level.
Property behavior:
- supported if geomaps_tile_provider is set to empty string
geomaps_attribution string Geomap attribution text.
Property behavior:
- supported if geomaps_tile_provider is set to empty string
vault_provider integer Vault provider.
Possible values:
0 - (default) HashiCorp Vault;
1 - CyberArk Vault.
timeout_zabbix_agent, timeout_simple_check, timeout_snmp_agent, timeout_external_check, timeout_db_monitor, timeout_http_agent, timeout_ssh_agent, timeout_telnet_agent, timeout_script, timeout_browser string Spend no more than timeout_* seconds on processing.
Accepts seconds or time unit with suffix (e.g., 30s, 1m). Also accepts user macros.
Default: 3s.
Default for timeout_browser: 60s.
Property behavior:
- required
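Because these timeouts accept user macros as well as time suffixes, they can be changed through settings.update (described below); a minimal sketch, with the macro name purely illustrative:
{
    "jsonrpc": "2.0",
    "method": "settings.update",
    "params": {
        "timeout_zabbix_agent": "10s",
        "timeout_script": "{$TIMEOUT.SCRIPT}"
    },
    "id": 1
}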
settings.get
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
output query This parameter being common for all get methods is described in the reference commentary.
Return values
Examples
Request:
{
"jsonrpc": "2.0",
"method": "settings.get",
"params": {
"output": "extend"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"default_theme": "blue-theme",
"search_limit": "1000",
"max_in_table": "50",
"server_check_interval": "10",
"work_period": "1-5,09:00-18:00",
"show_technical_errors": "0",
"history_period": "24h",
"period_default": "1h",
"max_period": "2y",
"severity_color_0": "97AAB3",
"severity_color_1": "7499FF",
"severity_color_2": "FFC859",
"severity_color_3": "FFA059",
"severity_color_4": "E97659",
"severity_color_5": "E45959",
"severity_name_0": "Not classified",
"severity_name_1": "Information",
"severity_name_2": "Warning",
"severity_name_3": "Average",
"severity_name_4": "High",
"severity_name_5": "Disaster",
"custom_color": "0",
"ok_period": "5m",
"blink_period": "2m",
"problem_unack_color": "CC0000",
"problem_ack_color": "CC0000",
"ok_unack_color": "009900",
"ok_ack_color": "009900",
"problem_unack_style": "1",
"problem_ack_style": "1",
"ok_unack_style": "1",
"ok_ack_style": "1",
"discovery_groupid": "5",
"default_inventory_mode": "-1",
"alert_usrgrpid": "7",
"snmptrap_logging": "1",
"default_lang": "en_GB",
"default_timezone": "system",
"login_attempts": "5",
"login_block": "30s",
"validate_uri_schemes": "1",
"uri_valid_schemes": "http,https,ftp,file,mailto,tel,ssh",
"x_frame_options": "SAMEORIGIN",
"iframe_sandboxing_enabled": "1",
"iframe_sandboxing_exceptions": "",
"max_overview_table_size": "50",
"connect_timeout": "3s",
"socket_timeout": "3s",
"media_type_test_timeout": "65s",
"script_timeout": "60s",
"item_test_timeout": "60s",
"url": "",
"report_test_timeout": "60s",
"auditlog_enabled": "1",
"auditlog_mode": "1",
"ha_failover_delay": "1m",
"geomaps_tile_provider": "OpenStreetMap.Mapnik",
"geomaps_tile_url": "",
"geomaps_max_zoom": "0",
"geomaps_attribution": "",
"vault_provider": "0",
"timeout_zabbix_agent": "3s",
"timeout_simple_check": "3s",
"timeout_snmp_agent": "3s",
"timeout_external_check": "3s",
"timeout_db_monitor": "3s",
"timeout_http_agent": "3s",
"timeout_ssh_agent": "3s",
"timeout_telnet_agent": "3s",
"timeout_script": "3s"
},
"id": 1
}
Source
CSettings::get() in ui/include/classes/api/services/CSettings.php.
settings.update
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
Examples
Request:
{
"jsonrpc": "2.0",
"method": "settings.update",
"params": {
"login_attempts": "1",
"login_block": "1m"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
"login_attempts",
"login_block"
],
"id": 1
}
Source
CSettings::update() in ui/include/classes/api/services/CSettings.php.
SLA
This class is designed to work with SLA (Service Level Agreement) objects used to estimate the performance of IT infrastructure
and business services.
Object references:
• SLA
• SLA schedule
• SLA excluded downtime
• SLA service tag
Available methods:
• sla.create - create new SLAs
• sla.delete - delete SLAs
• sla.get - retrieve SLAs
• sla.getsli - retrieve SLI (Service Level Indicator) data
• sla.update - update SLAs
SLA object
The following objects are directly related to the sla (Service Level Agreement) API.
SLA
slaid ID ID of the SLA.
Property behavior:
- read-only
- required for update operations
name string Name of the SLA.
Property behavior:
- required for create operations
period integer Reporting period of the SLA.
Possible values:
0 - daily;
1 - weekly;
2 - monthly;
3 - quarterly;
4 - annually.
Property behavior:
- required for create operations
slo float Minimum acceptable Service Level Objective expressed as a percent. If
the Service Level Indicator (SLI) drops lower, the SLA is considered to
be in problem/unfulfilled state.
Property behavior:
- required for create operations
effective_date integer Effective date of the SLA.
timezone string Reporting time zone of the SLA.
For the full list of supported time zones please refer to PHP documentation.
Property behavior:
- required for create operations
status integer Status of the SLA.
Possible values:
0 - (default) disabled SLA;
1 - enabled SLA.
description string Description of the SLA.
SLA Schedule
The SLA schedule object defines periods where the connected service(s) are scheduled to be in working order. It has the following
properties.
period_from integer Starting time of the recurrent weekly period of time (inclusive).
Property behavior:
- required
period_to integer Ending time of the recurrent weekly period of time (exclusive).
Property behavior:
- required
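period_from and period_to are counted in seconds from the start of the week; judging by the sla.create example later in this section (where 0 to 601200 = 604800 - 3600 covers the whole week except the last hour of Saturday), the week starts on Sunday 00:00. A Monday 09:00-18:00 window would therefore run from (24+9)*3600 = 118800 to (24+18)*3600 = 151200; a minimal sketch of such a schedule entry:
"schedule": [
    {
        "period_from": 118800,
        "period_to": 151200
    }
]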
SLA excluded downtime
The excluded downtime object defines periods where the connected service(s) are scheduled to be out of working order, without affecting SLI, e.g., undergoing planned maintenance. It has the following properties.
name string Name of the excluded downtime.
Property behavior:
- required
period_from integer Starting time of the excluded downtime (inclusive).
Property behavior:
- required
period_to integer Ending time of the excluded downtime (exclusive).
Property behavior:
- required
SLA service tag
The SLA service tag object links services to include in the calculations for the SLA. It has the following properties.
Property Type Description
Property behavior:
- required
operator integer SLA service tag operator.
Possible values:
0 - (default) equals;
2 - contains.
value string SLA service tag value.
sla.create
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
service_tags object/array SLA service tags to be created for the SLA.
Parameter behavior:
- required
schedule array SLA schedule to be created for the SLA.
Specifying an empty parameter will be interpreted as a 24x7 schedule.
Default: 24x7 schedule.
excluded_downtimes array SLA excluded downtimes to be created for the SLA.
Return values
(object) Returns an object containing the IDs of the created SLAs under the slaids property. The order of the returned IDs
matches the order of the passed SLAs.
Examples
Creating an SLA
Create an SLA entry for:
• tracking uptime of SQL-engine related services;
• a custom schedule of all weekdays, excluding the last hour on Saturday;
• an effective date on the last day of the year 2022;
• a planned downtime of 1 hour and 15 minutes starting at midnight on the 4th of July;
• weekly SLA report calculation turned on;
• a minimum acceptable SLO of 99.9995%.
Request:
{
"jsonrpc": "2.0",
"method": "sla.create",
"params": [
{
"name": "Database Uptime",
"slo": "99.9995",
"period": "1",
"timezone": "America/Toronto",
"description": "Provide excellent uptime for main database engines.",
"effective_date": 1672444800,
"status": 1,
"schedule": [
{
"period_from": 0,
"period_to": 601200
}
],
"service_tags": [
{
"tag": "Database",
"operator": "0",
"value": "MySQL"
},
{
"tag": "Database",
"operator": "0",
"value": "PostgreSQL"
}
],
"excluded_downtimes": [
{
"name": "Software version upgrade rollout",
"period_from": "1648760400",
"period_to": "1648764900"
}
]
}
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"slaids": [
"5"
]
},
"id": 1
}
Source
CSla::create() in ui/include/classes/api/services/CSla.php.
sla.delete
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
Return values
(object) Returns an object containing the IDs of the deleted SLAs under the slaids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "sla.delete",
"params": [
"4",
"5"
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"slaids": [
"4",
"5"
]
},
"id": 1
}
Source
CSla::delete() in ui/include/classes/api/services/CSla.php.
sla.get
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
Supports count.
selectExcludedDowntimes query Return an excluded_downtimes property with SLA excluded
downtimes.
Supports count.
Supports count.
sortfield string/array Sort the result by the given properties.
Return values
Request:
{
"jsonrpc": "2.0",
"method": "sla.get",
"params": {
"output": "extend",
"selectSchedule": ["period_from", "period_to"],
"selectExcludedDowntimes": ["name", "period_from", "period_to"],
"selectServiceTags": ["tag", "operator", "value"],
"preservekeys": true
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"1": {
"slaid": "1",
"name": "Database Uptime",
"period": "1",
"slo": "99.9995",
"effective_date": "1672444800",
"timezone": "America/Toronto",
"status": "1",
"description": "Provide excellent uptime for main SQL database engines.",
"service_tags": [
{
"tag": "Database",
"operator": "0",
"value": "MySQL"
},
{
"tag": "Database",
"operator": "0",
"value": "PostgreSQL"
}
],
"schedule": [
{
"period_from": "0",
"period_to": "601200"
}
],
"excluded_downtimes": [
{
"name": "Software version upgrade rollout",
"period_from": "1648760400",
"period_to": "1648764900"
}
]
}
},
"id": 1
}
Source
CSla::get() in ui/include/classes/api/services/CSla.php.
sla.getsli
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
(object) Parameters containing the SLA ID, the reporting periods, and, optionally, the IDs of the services to calculate the SLI for.
Parameter behavior:
- required
period_from timestamp Starting date (inclusive) to report the SLI for.
Partitioning of periods
The following table demonstrates the arrangement of returned period slices based on combinations of parameters.
Note:
The returned periods will not precede the first available period based on the effective date of the SLA and will not exceed
the current period.
Parameters Description
Return values
serviceids array IDs of the services on which the SLI was calculated. The sorting order of the list is not defined, even if the
serviceids parameter was passed to the sla.getsli method.
sli array SLI data (as a two-dimensional array) for each reported period and
service.
SLI data
The SLI data returned for each reported period and service consists of:
uptime integer Amount of time service spent in an OK state during scheduled uptime,
less the excluded downtimes.
downtime integer Amount of time service spent in a not OK state during scheduled
uptime, less the excluded downtimes.
sli float SLI (per cent of total uptime), based on uptime and downtime.
error_budget integer Error budget (in seconds), based on the SLI and the SLO.
excluded_downtimes array Array of excluded downtimes in this reporting period.
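As a rough cross-check (the exact server-side rounding may differ), the SLI can be related to the two counters as sli = 100 × uptime / (uptime + downtime); for the first period of the example below this gives 100 × 1186212 / (1186212 + 0) = 100, matching the returned value.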
Examples
Calculating SLI
Retrieve SLI data on services with IDs ”50”, ”60” and ”70” that are linked to the SLA with ID ”5”. Retrieve data for 3 periods starting
from Nov 01, 2021.
Request:
{
"jsonrpc": "2.0",
"method": "sla.getsli",
"params": {
"slaid": "5",
"serviceids": [
50,
60,
70
],
"periods": 3,
"period_from": "1635724800"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"periods": [
{
"period_from": 1635724800,
"period_to": 1638316800
},
{
"period_from": 1638316800,
"period_to": 1640995200
},
{
"period_from": 1640995200,
"period_to": 1643673600
}
],
"serviceids": [
50,
60,
70
],
"sli": [
[
{
"uptime": 1186212,
"downtime": 0,
"sli": 100,
"error_budget": 0,
"excluded_downtimes": [
{
"name": "Excluded Downtime - 1",
"period_from": 1637836212,
"period_to": 1638316800
}
]
},
{
"uptime": 1186212,
"downtime": 0,
"sli": 100,
"error_budget": 0,
"excluded_downtimes": [
{
"name": "Excluded Downtime - 1",
"period_from": 1637836212,
"period_to": 1638316800
}
]
},
{
"uptime": 1186212,
"downtime": 0,
"sli": 100,
"error_budget": 0,
"excluded_downtimes": [
{
"name": "Excluded Downtime - 1",
"period_from": 1637836212,
"period_to": 1638316800
}
]
}
],
[
{
"uptime": 1147548,
"downtime": 0,
"sli": 100,
"error_budget": 0,
"excluded_downtimes": [
{
"name": "Excluded Downtime - 1",
"period_from": 1638439200,
"period_to": 1639109652
}
]
},
{
"uptime": 1147548,
"downtime": 0,
"sli": 100,
"error_budget": 0,
"excluded_downtimes": [
{
"name": "Excluded Downtime - 1",
"period_from": 1638439200,
"period_to": 1639109652
}
]
},
{
"uptime": 1147548,
"downtime": 0,
"sli": 100,
"error_budget": 0,
"excluded_downtimes": [
{
"name": "Excluded Downtime - 1",
"period_from": 1638439200,
"period_to": 1639109652
}
]
}
],
[
{
"uptime": 1674000,
"downtime": 0,
"sli": 100,
"error_budget": 0,
"excluded_downtimes": []
},
{
"uptime": 1674000,
"downtime": 0,
"sli": 100,
"error_budget": 0,
"excluded_downtimes": []
},
{
"uptime": 1674000,
"downtime": 0,
"sli": 100,
"error_budget": 0,
"excluded_downtimes": []
}
]
]
},
"id": 1
}
Source
CSla::getSli() in ui/include/classes/api/services/CSla.php
sla.update
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
Additionally to the standard SLA properties, the method accepts the following parameters.
service_tags array SLA service tags to replace the current SLA service tags.
schedule array SLA schedule to replace the current one.
Specifying an empty parameter will be interpreted as a 24x7 schedule.
excluded_downtimes array SLA excluded downtimes to replace the current ones.
Return values
(object) Returns an object containing the IDs of the updated SLAs under the slaids property.
Examples
Switch the SLA with ID ”5” to monthly calculation for NoSQL-related services, without changing its schedule or excluded
downtimes, and set the SLO to 95%.
Request:
{
"jsonrpc": "2.0",
"method": "sla.update",
"params": [
{
"slaid": "5",
"name": "NoSQL Database engines",
"slo": "95",
"period": 2,
"service_tags": [
{
"tag": "Database",
"operator": "0",
"value": "Redis"
},
{
"tag": "Database",
"operator": "0",
"value": "MongoDB"
}
]
}
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"slaids": [
"5"
]
},
"id": 1
}
Request:
{
"jsonrpc": "2.0",
"method": "sla.update",
"params": {
"slaid": "5",
"schedule": []
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"slaids": [
"5"
]
},
"id": 1
}
Add a planned 4-hour RAM upgrade downtime on the 6th of April, 2022 to the SLA with ID ”5”, while keeping its previously defined
software upgrade downtime planned for the 4th of July (excluded downtimes replace the current ones, so it must be passed anew).
Request:
{
"jsonrpc": "2.0",
"method": "sla.update",
"params": {
"slaid": "5",
"excluded_downtimes": [
{
"name": "Software version upgrade rollout",
"period_from": "1648760400",
"period_to": "1648764900"
},
{
"name": "RAM upgrade",
"period_from": "1649192400",
"period_to": "1649206800"
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"slaids": [
"5"
]
},
"id": 1
}
Source
CSla::update() in ui/include/classes/api/services/CSla.php.
Task
This class is designed to work with tasks (such as checking items or low-level discovery rules without config reload).
Object references:
• Task
• ’Execute now’ request object
• ’Refresh proxy configuration’ request object
• ’Diagnostic information’ request object
• Statistic request object
• Statistic result object
Available methods:
Task object
taskid ID ID of the task.
Property behavior:
- read-only
type integer Type of the task.
Possible values:
1 - Diagnostic information;
2 - Refresh proxy configuration;
6 - Execute now.
Property behavior:
- required
status integer Status of the task.
Possible values:
1 - new task;
2 - task in progress;
3 - task is completed;
4 - task is expired.
Property behavior:
- read-only
clock timestamp Time when the task was created.
Property behavior:
- read-only
ttl integer The time in seconds after which task expires.
Property behavior:
- read-only
Property behavior:
- supported if type is set to ”Diagnostic information” or ”Refresh proxy
configuration”
request object Task request object according to the task type:
Object of ’Execute now’ task is described in detail below;
Object of ’Refresh proxy configuration’ task is described in detail below;
Object of ’Diagnostic information’ task is described in detail below.
Property behavior:
- required
result object Result object of the diagnostic information task.
May contain NULL if result is not yet ready.
Result object is described in detail below.
Property behavior:
- read-only
The ’Execute now’ task request object has the following properties.
The ’Refresh proxy configuration’ task request object has the following properties.
The diagnostic information task request object has the following properties. Statistic request object for all types of properties is
described in detail below.
historycache object History cache statistic request. Available on server and proxy.
valuecache object Items cache statistic request. Available on server.
preprocessing object Preprocessing manager statistic request. Available on server and proxy.
alerting object Alert manager statistic request. Available on server.
lld object LLD manager statistic request. Available on server.
Statistic request object is used to define what type of information should be collected about server/proxy internal processes. It has
the following properties.
Property Type Description
Example: { “source.alerts”: 10 }
List of statistic fields available for each type of diagnostic information request
Following statistic fields can be requested for each type of diagnostic information request property.
List of sorting fields available for each type of diagnostic information request
Following statistic fields can be used to sort and limit requested information.
Possible values:
-1 - an error occurred while performing the task;
0 - task result is created.
Property behavior:
- read-only
data string/object Results according to the statistic request object of the particular diagnostic
information task.
Contains an error message string if an error occurred while performing the task.
task.create
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
Note that ’Execute now’ tasks can be created only for the following types of items/discovery rules:
• Zabbix agent
• SNMPv1/v2/v3 agent
• Simple check
• Internal check
• External check
• Database monitor
• HTTP agent
• IPMI agent
• SSH agent
• TELNET agent
• Calculated check
• JMX agent
• Dependent item
If an item or discovery rule is of type ”Dependent item”, then the top-level master item must be of one of the following types:
• Zabbix agent
• SNMPv1/v2/v3 agent
• Simple check
• Internal check
• External check
• Database monitor
• HTTP agent
• IPMI agent
• SSH agent
• TELNET agent
• Calculated check
• JMX agent
Return values
(object) Returns an object containing the IDs of the created tasks under the taskids property. One task is created for each
item and low-level discovery rule. The order of the returned IDs matches the order of the passed itemids.
Examples
Creating a task
Create ’Execute now’ tasks for two objects: an item and a low-level discovery rule.
Request:
{
"jsonrpc": "2.0",
"method": "task.create",
"params": [
{
"type": 6,
"request": {
"itemid": "10092"
}
},
{
"type": 6,
"request": {
"itemid": "10093"
}
}
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"taskids": [
"1",
"2"
]
},
"id": 1
}
Request:
{
"jsonrpc": "2.0",
"method": "task.create",
"params": [
{
"type": 2,
"request": {
"proxyids": ["10459", "10460"]
}
}
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"taskids": [
"1"
]
},
"id": 1
}
Request:
{
"jsonrpc": "2.0",
"method": "task.create",
"params": [
{
"type": 1,
"request": {
"alerting": {
"stats": [
"alerts"
],
"top": {
"media.alerts": 10
}
},
"lld": {
"stats": "extend",
"top": {
"values": 5
}
}
},
"proxyid": 0
}
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"taskids": [
"3"
]
},
"id": 1
}
See also
• Task
• ’Execute now’ request object
• ’Diagnostic information’ request object
• Statistic request object
Source
CTask::create() in ui/include/classes/api/services/CTask.php.
task.get
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
Parameter Type Description
output query These parameters being common for all get methods are described in
detail in the reference commentary.
preservekeys boolean
Return values
Retrieve task by ID
Retrieve all the data about the task with the ID ”1”.
Request:
{
"jsonrpc": "2.0",
"method": "task.get",
"params": {
"output": "extend",
"taskids": "1"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"taskid": "1",
"type": "7",
"status": "3",
"clock": "1601039076",
"ttl": "3600",
"proxyid": null,
"request": {
"alerting": {
"stats": [
"alerts"
],
"top": {
"media.alerts": 10
}
},
"lld": {
"stats": "extend",
"top": {
"values": 5
}
}
},
"result": {
"data": {
"alerting": {
"alerts": 0,
"top": {
"media.alerts": []
},
"time": 0.000663
},
"lld": {
"rules": 0,
"values": 0,
"top": {
"values": []
},
"time": 0.000442
}
},
"status": "0"
}
}
],
"id": 1
}
See also
• Task
• Statistic result object
Source
CTask::get() in ui/include/classes/api/services/CTask.php.
Template
Object references:
• Template
• Template tag
Available methods:
Template object
templateid ID ID of the template.
Property behavior:
- read-only
- required for update operations
host string Technical name of the template.
Property behavior:
- required for create operations
description text Description of the template.
name string Visible name of the template.
uuid string Universal unique identifier, used for linking imported templates to
already existing ones. Auto-generated, if not given.
vendor_name string Template vendor name.
Template tag
template.create
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
groups object/array Template groups to add the template to.
The template groups must have only the groupid property defined.
Parameter behavior:
- required
tags object/array Template tags.
templates object/array Templates to be linked to the template.
Return values
(object) Returns an object containing the IDs of the created templates under the templateids property. The order of the
returned IDs matches the order of the passed templates.
Examples
Creating a template
Create a template with tags and link two templates to this template.
Request:
{
"jsonrpc": "2.0",
"method": "template.create",
"params": {
"host": "Linux template",
"groups": {
"groupid": 1
},
"templates": [
{
"templateid": "11115"
},
{
"templateid": "11116"
}
],
"tags": [
{
"tag": "Host name",
"value": "{HOST.NAME}"
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"templateids": [
"11117"
]
},
"id": 1
}
Source
CTemplate::create() in ui/include/classes/api/services/CTemplate.php.
template.delete
Description
Deleting a template will cause deletion of all template entities (items, triggers, graphs, etc.). To leave template entities with the
hosts, but delete the template itself, first unlink the template from required hosts using one of these methods: template.update,
template.massupdate, host.update, host.massupdate.
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
Return values
(object) Returns an object containing the IDs of the deleted templates under the templateids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "template.delete",
"params": [
"13",
"32"
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"templateids": [
"13",
"32"
]
},
"id": 1
}
Source
CTemplate::delete() in ui/include/classes/api/services/CTemplate.php.
template.get
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
templateids ID/array Return only templates with the given template IDs.
groupids ID/array Return only templates that belong to the given template groups.
parentTemplateids ID/array Return only templates that the given template is linked to.
hostids ID/array Return only templates that are linked to the given hosts/templates.
graphids ID/array Return only templates that contain the given graphs.
itemids ID/array Return only templates that contain the given items.
triggerids ID/array Return only templates that contain the given triggers.
with_items flag Return only templates that have items.
with_triggers flag Return only templates that have triggers.
with_graphs flag Return only templates that have graphs.
Possible values:
0 - (default) And/Or;
2 - Or.
tags object/array Return only templates with given tags. Exact match by tag and
case-sensitive or case-insensitive search by tag value depending on
operator value.
Format: [{"tag": "<tag>", "value": "<value>", "operator": "<operator>"}, ...].
An empty array returns all templates.
Supports count.
selectTemplateGroups query Return the template groups that the template belongs to in the
templategroups property.
selectTemplates query Return templates to which the given template is linked in the
templates property.
Supports count.
selectParentTemplates query Return templates that are linked to the given template in the
parentTemplates property.
Supports count.
selectHttpTests query Return the web scenarios from the template in the httpTests
property.
Supports count.
selectItems query Return items from the template in the items property.
Supports count.
selectDiscoveries query Return low-level discoveries from the template in the discoveries
property.
Supports count.
selectTriggers query Return triggers from the template in the triggers property.
Supports count.
selectGraphs query Return graphs from the template in the graphs property.
Supports count.
selectMacros query Return the macros from the template in the macros property.
selectDashboards query Return dashboards from the template in the dashboards property.
Supports count.
selectValueMaps query Return a valuemaps property with template value maps.
Return values
Retrieve all data about two templates named ”Linux” and ”Windows”.
Request:
{
"jsonrpc": "2.0",
"method": "template.get",
"params": {
"output": "extend",
"filter": {
"host": [
"Linux",
"Windows"
]
}
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"proxyid": "0",
"host": "Linux",
"status": "3",
"disable_until": "0",
"error": "",
"available": "0",
"errors_from": "0",
"lastaccess": "0",
"ipmi_authtype": "0",
"ipmi_privilege": "2",
"ipmi_username": "",
"ipmi_password": "",
"ipmi_disable_until": "0",
"ipmi_available": "0",
"snmp_disable_until": "0",
"snmp_available": "0",
"maintenanceid": "0",
"maintenance_status": "0",
"maintenance_type": "0",
"maintenance_from": "0",
"ipmi_errors_from": "0",
"snmp_errors_from": "0",
"ipmi_error": "",
"snmp_error": "",
"jmx_disable_until": "0",
"jmx_available": "0",
"jmx_errors_from": "0",
"jmx_error": "",
"name": "Linux",
"flags": "0",
"templateid": "10001",
"description": "",
"tls_connect": "1",
"tls_accept": "1",
"tls_issuer": "",
"tls_subject": "",
"tls_psk_identity": "",
"tls_psk": "",
"uuid": "282ffe33afc74cccaf1524d9aa9dc502"
},
{
"proxyid": "0",
"host": "Windows",
"status": "3",
"disable_until": "0",
"error": "",
"available": "0",
"errors_from": "0",
"lastaccess": "0",
"ipmi_authtype": "0",
"ipmi_privilege": "2",
"ipmi_username": "",
"ipmi_password": "",
"ipmi_disable_until": "0",
"ipmi_available": "0",
"snmp_disable_until": "0",
"snmp_available": "0",
"maintenanceid": "0",
"maintenance_status": "0",
"maintenance_type": "0",
"maintenance_from": "0",
"ipmi_errors_from": "0",
"snmp_errors_from": "0",
"ipmi_error": "",
"snmp_error": "",
"jmx_disable_until": "0",
"jmx_available": "0",
"jmx_errors_from": "0",
"jmx_error": "",
"name": "Windows",
"flags": "0",
"templateid": "10081",
"description": "",
"tls_connect": "1",
"tls_accept": "1",
"tls_issuer": "",
"tls_subject": "",
"tls_psk_identity": "",
"tls_psk": "",
"uuid": "522d17e1834049be879287b7c0518e5d"
}
],
"id": 1
}
Retrieve template groups that the template ”Linux by Zabbix agent” is a member of.
Request:
{
"jsonrpc": "2.0",
"method": "template.get",
"params": {
"output": ["hostid"],
"selectTemplateGroups": "extend",
"filter": {
"host": [
"Linux by Zabbix agent"
]
}
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"templateid": "10001",
"templategroups": [
{
"groupid": "10",
"name": "Templates/Operating systems",
"uuid": "846977d1dfed4968bc5f8bdb363285bc"
}
]
}
],
"id": 1
}
Retrieve hosts that have the ”10001” (Linux by Zabbix agent) template linked to them.
Request:
{
"jsonrpc": "2.0",
"method": "template.get",
"params": {
"output": "templateid",
"templateids": "10001",
"selectHosts": ["hostid", "name"]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"templateid": "10001",
"hosts": [
{
"hostid": "10084",
"name": "Zabbix server"
},
{
"hostid": "10603",
"name": "Host 1"
},
{
"hostid": "10604",
"name": "Host 2"
}
]
}
],
"id": 1
}
Request:
{
"jsonrpc": "2.0",
"method": "template.get",
"params": {
"output": ["hostid"],
"selectTags": "extend",
"evaltype": 0,
"tags": [
{
"tag": "Host name",
"value": "{HOST.NAME}",
"operator": 1
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"hostid": "10402",
"tags": [
{
"tag": "Host name",
"value": "{HOST.NAME}"
}
]
}
],
"id": 1
}
See also
• Template group
• Template
• User macro
• Host interface
Source
CTemplate::get() in ui/include/classes/api/services/CTemplate.php.
template.massadd
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
(object) Parameters containing the IDs of the templates to update and the objects to add to the templates.
The method accepts the following parameters.
Parameter behavior:
- required
groups object/array Template groups to add the given templates to.
The template groups must have only the groupid property defined.
macros object/array User macros to be created for the given templates.
templates_link object/array Templates to link to the given templates.
Return values
(object) Returns an object containing the IDs of the updated templates under the templateids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "template.massadd",
"params": {
"templates": [
{
"templateid": "10085"
},
{
"templateid": "10086"
}
],
"groups": [
{
"groupid": "2"
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"templateids": [
"10085",
"10086"
]
},
"id": 1
}
Request:
{
"jsonrpc": "2.0",
"method": "template.massadd",
"params": {
"templates": [
{
"templateid": "10073"
}
],
"templates_link": [
{
"templateid": "10106"
},
{
"templateid": "10104"
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"templateids": [
"10073"
]
},
"id": 1
}
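Similarly, the macros parameter could attach a user macro to several templates in one call; the macro name and value below are purely illustrative.
Request:
{
    "jsonrpc": "2.0",
    "method": "template.massadd",
    "params": {
        "templates": [
            {
                "templateid": "10085"
            },
            {
                "templateid": "10086"
            }
        ],
        "macros": [
            {
                "macro": "{$BACKUP.WINDOW}",
                "value": "03:00-05:00"
            }
        ]
    },
    "id": 1
}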
See also
• template.update
• Host
• Template group
• User macro
Source
CTemplate::massAdd() in ui/include/classes/api/services/CTemplate.php.
template.massremove
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
(object) Parameters containing the IDs of the templates to update and the objects that should be removed.
Parameter behavior:
- required
groupids ID/array IDs of the template groups from which to remove the given templates.
macros string/array IDs of the user macros to delete from the given templates.
templateids_clear ID/array IDs of the templates to unlink and clear from the given templates
(upstream).
templateids_link ID/array IDs of the templates to unlink from the given templates (upstream).
Return values
(object) Returns an object containing the IDs of the updated templates under the templateids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "template.massremove",
"params": {
"templateids": [
"10085",
"10086"
],
"groupids": "2"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"templateids": [
"10085",
"10086"
]
},
"id": 1
}
Request:
{
"jsonrpc": "2.0",
"method": "template.massremove",
"params": {
"templateids": "10085",
"templateids_link": [
"10106",
"10104"
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"templateids": [
"10085"
]
},
"id": 1
}
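To also remove the entities inherited from the unlinked template, the templateids_clear parameter could be used instead of templateids_link; the IDs below are illustrative only.
Request:
{
    "jsonrpc": "2.0",
    "method": "template.massremove",
    "params": {
        "templateids": "10085",
        "templateids_clear": [
            "10106"
        ]
    },
    "id": 1
}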
See also
• template.update
• User macro
Source
CTemplate::massRemove() in ui/include/classes/api/services/CTemplate.php.
template.massupdate
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
(object) Parameters containing the IDs of the templates to update and the objects to replace for the templates.
The method accepts the following parameters.
Parameter behavior:
- required
groups object/array Template groups to replace the current template groups the templates
belong to.
The template groups must have only the groupid property defined.
macros object/array User macros to replace all of the current user macros on the given
templates.
templates_clear object/array Templates to unlink and clear from the given templates.
Return values
(object) Returns an object containing the IDs of the updated templates under the templateids property.
Examples
Unlinking a template
Request:
{
"jsonrpc": "2.0",
"method": "template.massupdate",
"params": {
"templates": [
{
"templateid": "10085"
},
{
"templateid": "10086"
}
],
"templates_clear": [
{
"templateid": "10091"
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"templateids": [
"10085",
"10086"
]
},
"id": 1
}
Replace all user macros with the given user macro on multiple templates.
Request:
{
"jsonrpc": "2.0",
"method": "template.massupdate",
"params": {
"templates": [
{
"templateid": "10074"
},
{
"templateid": "10075"
},
{
"templateid": "10076"
},
{
"templateid": "10077"
}
],
"macros": [
{
"macro": "{$AGENT.TIMEOUT}",
"value": "5m",
"description": "Timeout after which agent is considered unavailable. Works only for agents
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"templateids": [
"10074",
"10075",
"10076",
"10077"
]
},
"id": 1
}
See also
• template.update
• template.massadd
• Template group
• User macro
Source
CTemplate::massUpdate() in ui/include/classes/api/services/CTemplate.php.
template.update
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
Additionally to the standard template properties, the method accepts the following parameters.
groups object/array Template groups to replace the current template groups the templates
belong to.
The template groups must have only the groupid property defined.
tags object/array Template tags to replace the current template tags.
macros object/array User macros to replace the current user macros on the given
templates.
templates object/array Templates to replace the currently linked templates. Templates that
are not passed are only unlinked.
Return values
(object) Returns an object containing the IDs of the updated templates under the templateids property.
Examples
Renaming a template
Request:
{
"jsonrpc": "2.0",
"method": "template.update",
"params": {
"templateid": "10086",
"name": "Template OS Linux"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"templateids": [
"10086"
]
},
"id": 1
}
Request:
{
"jsonrpc": "2.0",
"method": "template.update",
"params": {
"templateid": "10086",
"tags": [
{
"tag": "Host name",
"value": "{HOST.NAME}"
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"templateids": [
"10086"
]
},
"id": 1
}
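The templates parameter described above replaces the currently linked templates, and templates that are not passed are unlinked without clearing their entities. A sketch of such a request, with hypothetical template IDs:
Request:
{
    "jsonrpc": "2.0",
    "method": "template.update",
    "params": {
        "templateid": "10086",
        "templates": [
            {
                "templateid": "10088"
            }
        ]
    },
    "id": 1
}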
Source
CTemplate::update() in ui/include/classes/api/services/CTemplate.php.
Template dashboard
Object references:
• Template dashboard
• Template dashboard page
• Template dashboard widget
• Template dashboard widget field
Available methods:
• templatedashboard.update - update template dashboards
dashboardid ID ID of the template dashboard.
Property behavior:
- read-only
- required for update operations
name string Name of the template dashboard.
Property behavior:
- required for create operations
templateid ID ID of the template the dashboard belongs to.
Property behavior:
- constant
- required for create operations
display_period integer Default page display period (in seconds).
Default: 30.
auto_start integer Auto start slideshow.
Possible values:
0 - do not auto start slideshow;
1 - (default) auto start slideshow.
uuid string Universal unique identifier, used for linking imported template
dashboards to already existing ones. Auto-generated, if not given.
Property behavior:
- read-only
name string Dashboard page name.
Property behavior:
- read-only
type string Type of the dashboard widget.
Possible values:
actionlog - Action log;
clock - Clock;
(deprecated) dataover - Data overview;
discovery - Discovery status;
favgraphs - Favorite graphs;
favmaps - Favorite maps;
gauge - Gauge;
graph - Graph (classic);
graphprototype - Graph prototype;
honeycomb - Honeycomb;
hostavail - Host availability;
hostnavigator - Host navigator;
itemnavigator - Item navigator;
item - Item value;
map - Map;
navtree - Map Navigation Tree;
piechart - Pie chart;
plaintext - Plain text;
problemhosts - Problem hosts;
problems - Problems;
problemsbysv - Problems by severity;
slareport - SLA report;
svggraph - Graph;
systeminfo - System information;
tophosts - Top hosts;
toptriggers - Top triggers;
trigover - Trigger overview;
url - URL;
web - Web monitoring.
Property behavior:
- required
name string Custom widget name.
x integer A horizontal position from the left side of the dashboard.
view_mode integer The widget view mode.
Possible values:
0 - (default) default widget view;
1 - with hidden header.
fields array Array of the template dashboard widget field objects.
The template dashboard widget field object has the following properties.
Possible values:
0 - Integer;
1 - String;
4 - Item;
5 - Item prototype;
6 - Graph;
7 - Graph prototype;
8 - Map;
9 - Service;
10 - SLA;
11 - User;
12 - Action;
13 - Media type.
Property behavior:
- required
name string Widget field name.
Property behavior:
- required
value mixed Widget field value depending on the type.
Property behavior:
- required
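For illustration, a single widget field referencing an item could look like the fragment below; the field name and item ID are assumptions made for this sketch (the templatedashboard.create example that follows shows a real field of type 6 - Graph):
{
    "type": 4,
    "name": "itemid",
    "value": "23296"
}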
templatedashboard.create
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
Parameter Type Description
Parameter behavior:
- required
Return values
(object) Returns an object containing the IDs of the created template dashboards under the dashboardids property. The order
of the returned IDs matches the order of the passed template dashboards.
Examples
Create a template dashboard named “Graphs” with one Graph widget on a single dashboard page.
Request:
{
"jsonrpc": "2.0",
"method": "templatedashboard.create",
"params": {
"templateid": "10318",
"name": "Graphs",
"pages": [
{
"widgets": [
{
"type": "graph",
"x": 0,
"y": 0,
"width": 12,
"height": 5,
"view_mode": 0,
"fields": [
{
"type": 6,
"name": "graphid",
"value": "1123"
}
]
}
]
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"dashboardids": [
"32"
]
},
"id": 1
}
See also
• Template dashboard page
• Template dashboard widget
• Template dashboard widget field
Source
CTemplateDashboard::create() in ui/include/classes/api/services/CTemplateDashboard.php.
templatedashboard.delete
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
(object) Returns an object containing the IDs of the deleted template dashboards under the dashboardids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "templatedashboard.delete",
"params": [
"45",
"46"
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"dashboardids": [
"45",
"46"
]
},
"id": 1
}
Source
CTemplateDashboard::delete() in ui/include/classes/api/services/CTemplateDashboard.php.
templatedashboard.get
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
dashboardids ID/array Return only template dashboards with the given IDs.
templateids ID/array Return only template dashboards that belong to the given templates.
selectPages query Return a pages property with template dashboard pages, correctly
ordered.
sortfield string/array Sort the result by the given properties.
Return values
Request:
{
"jsonrpc": "2.0",
"method": "templatedashboard.get",
"params": {
"output": "extend",
"selectPages": "extend",
"templateids": "10001"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"dashboardid": "23",
"name": "Docker overview",
"templateid": "10001",
"display_period": "30",
"auto_start": "1",
"uuid": "6dfcbe0bc5ad400ea9c1c2dd7649282f",
"pages": [
{
"dashboard_pageid": "1",
"name": "",
"display_period": "0",
"widgets": [
{
"widgetid": "220",
"type": "graph",
"name": "",
"x": "0",
"y": "0",
"width": "36",
"height": "5",
"view_mode": "0",
"fields": [
{
"type": "6",
"name": "graphid",
"value": "1125"
}
]
},
{
"widgetid": "221",
"type": "graph",
"name": "",
"x": "12",
"y": "0",
"width": "36",
"height": "5",
"view_mode": "0",
"fields": [
{
"type": "6",
"name": "graphid",
"value": "1129"
}
]
},
{
"widgetid": "222",
"type": "graph",
"name": "",
"x": "0",
"y": "5",
"width": "36",
"height": "5",
"view_mode": "0",
"fields": [
{
"type": "6",
"name": "graphid",
"value": "1128"
}
]
},
{
"widgetid": "223",
"type": "graph",
"name": "",
"x": "12",
"y": "5",
"width": "36",
"height": "5",
"view_mode": "0",
"fields": [
{
"type": "6",
"name": "graphid",
"value": "1126"
}
]
},
{
"widgetid": "224",
"type": "graph",
"name": "",
"x": "0",
"y": "10",
"width": "36",
"height": "5",
"view_mode": "0",
"fields": [
{
"type": "6",
"name": "graphid",
"value": "1127"
}
]
}
]
}
]
}
],
"id": 1
}
See also
Source
CTemplateDashboard::get() in ui/include/classes/api/services/CTemplateDashboard.php.
templatedashboard.update
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
(object/array) Template dashboard properties to be updated.
The dashboardid property must be specified for each dashboard, all other properties are optional. Only the specified properties
will be updated.
Additionally to the standard template dashboard properties, the method accepts the following parameters.
pages array Template dashboard pages to replace the existing dashboard pages.
Return values
(object) Returns an object containing the IDs of the updated template dashboards under the dashboardids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "templatedashboard.update",
"params": {
"dashboardid": "23",
"name": "Performance graphs"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"dashboardids": [
"23"
]
},
"id": 1
}
Rename the first dashboard page, replace widgets on the second dashboard page and add a new page as the third one. Delete all
other dashboard pages.
Request:
{
"jsonrpc": "2.0",
"method": "templatedashboard.update",
"params": {
"dashboardid": "2",
"pages": [
{
"dashboard_pageid": 1,
"name": "Renamed Page"
},
{
"dashboard_pageid": 2,
"widgets": [
{
"type": "clock",
"x": 0,
"y": 0,
"width": 12,
"height": 3
}
]
},
{
"display_period": 60
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"dashboardids": [
"2"
]
},
"id": 1
}
See also
Source
CTemplateDashboard::update() in ui/include/classes/api/services/CTemplateDashboard.php.
Template group
Object references:
• Template group
Available methods:
The template group object has the following properties.
groupid ID ID of the template group.
Property behavior:
- read-only
- required for update operations
name string Name of the template group.
Property behavior:
- required for create operations
uuid string Universal unique identifier, used for linking imported template groups
to already existing ones. Auto-generated, if not given.
templategroup.create
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
(object/array) Template groups to create. The method accepts template groups with the standard template group properties.
Return values
(object) Returns an object containing the IDs of the created template groups under the groupids property. The order of the
returned IDs matches the order of the passed template groups.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "templategroup.create",
"params": {
"name": "Templates/Databases"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"groupids": [
"107820"
]
},
"id": 1
}
Source
CTemplateGroup::create() in ui/include/classes/api/services/CTemplateGroup.php.
templategroup.delete
Description
A template group cannot be deleted if it contains templates that belong only to this group.
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
(object) Returns an object containing the IDs of the deleted template groups under the groupids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "templategroup.delete",
"params": [
"107814",
"107815"
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"groupids": [
"107814",
"107815"
]
},
"id": 1
}
Source
CTemplateGroup::delete() in ui/include/classes/api/services/CTemplateGroup.php.
templategroup.get
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
(object) Parameters defining the desired output.
The method supports the following parameters.
graphids ID/array Return only template groups that contain templates with the given
graphs.
groupids ID/array Return only template groups with the given template group IDs.
templateids ID/array Return only template groups that contain the given templates.
triggerids ID/array Return only template groups that contain templates with the given
triggers.
with_graphs flag Return only template groups that contain templates with graphs.
with_graph_prototypes flag Return only template groups that contain templates with graph
prototypes.
with_httptests flag Return only template groups that contain templates with web checks.
with_items flag Return only template groups that contain templates with items.
Supports count.
limitSelects integer Limits the number of records returned by subselects.
Return values
Retrieve all data about two template groups named ”Templates/Databases” and ”Templates/Modules”.
Request:
{
"jsonrpc": "2.0",
"method": "templategroup.get",
"params": {
"output": "extend",
"filter": {
"name": [
"Templates/Databases",
"Templates/Modules"
]
}
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"groupid": "13",
"name": "Templates/Databases",
"uuid": "748ad4d098d447d492bb935c907f652f"
},
{
"groupid": "8",
"name": "Templates/Modules",
"uuid": "57b7ae836ca64446ba2c296389c009b7"
}
],
"id": 1
}
See also
• Template
Source
CTemplateGroup::get() in ui/include/classes/api/services/CTemplateGroup.php.
templategroup.massadd
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
(object) Parameters containing the IDs of the template groups to update and the objects to add to all the template groups.
The method accepts the following parameters.
Parameter Type Description
The template groups must have only the groupid property defined.
Parameter behavior:
- required
templates object/array Templates to add to all template groups.
Parameter behavior:
- required
Return values
(object) Returns an object containing the IDs of the updated template groups under the groupids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "templategroup.massadd",
"params": {
"groups": [
{
"groupid": "12"
},
{
"groupid": "13"
}
],
"templates": [
{
"templateid": "10486"
},
{
"templateid": "10487"
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"groupids": [
"12",
"13"
]
},
"id": 1
}
See also
• Template
Source
CTemplateGroup::massAdd() in ui/include/classes/api/services/CTemplateGroup.php.
templategroup.massremove
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
(object) Parameters containing the IDs of the template groups to update and the objects that should be removed.
Parameter behavior:
- required
templateids ID/array IDs of the templates to remove from all template groups.
Parameter behavior:
- required
Return values
(object) Returns an object containing the IDs of the updated template groups under the groupids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "templategroup.massremove",
"params": {
"groupids": [
"5",
"6"
],
"templateids": [
"30050",
"30001"
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"groupids": [
"5",
"6"
]
},
"id": 1
}
Source
CTemplateGroup::massRemove() in ui/include/classes/api/services/CTemplateGroup.php.
templategroup.massupdate
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
(object) Parameters containing the IDs of the template groups to update and the objects that should be updated.
The template groups must have only the groupid property defined.
Parameter behavior:
- required
templates object/array Templates to replace the current templates on the given template
groups.
All other templates, except those passed, will be removed from the
template groups.
Parameter behavior:
- required
Return values
(object) Returns an object containing the IDs of the updated template groups under the groupids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "templategroup.massupdate",
"params": {
"groups": [
{
"groupid": "8"
}
],
"templates": [
{
"templateid": "40050"
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"groupids": [
"8"
]
},
"id": 1
}
See also
• templategroup.update
• templategroup.massadd
• Template
Source
CTemplateGroup::massUpdate() in ui/include/classes/api/services/CTemplateGroup.php.
templategroup.propagate
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role
settings. See User roles for more information.
Parameters
The template groups must have only the groupid property defined.
Parameter behavior:
- required
permissions boolean Set to true to propagate the template group permissions to its subgroups.
Parameter behavior:
- required
Return values
(object) Returns an object containing the IDs of the propagated template groups under the groupids property.
Examples
Propagate template group permissions to its subgroups.
Request:
{
"jsonrpc": "2.0",
"method": "templategroup.propagate",
"params": {
"groups": [
{
"groupid": "15"
}
],
"permissions": true
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"groupids": [
"15"
]
},
"id": 1
}
See also
• templategroup.update
• templategroup.massadd
• Template
Source
CTemplateGroup::propagate() in ui/include/classes/api/services/CTemplateGroup.php.
templategroup.update
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
Return values
(object) Returns an object containing the IDs of the updated template groups under the groupids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "templategroup.update",
"params": {
"groupid": "7",
"name": "Templates/Databases"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"groupids": [
"7"
]
},
"id": 1
}
Source
CTemplateGroup::update() in ui/include/classes/api/services/CTemplateGroup.php.
Token
Object references:
• Token
Available methods:
Token object
tokenid ID ID of the token.
Property behavior:
- read-only
- required for update operations
name string Name of the token.
Property behavior:
- required for create operations
description text Description of the token.
userid ID ID of the user that the token has been assigned to.
Property behavior:
- constant
lastaccess timestamp Most recent date and time the token was authenticated.
Property behavior:
- read-only
status integer Token status.
Possible values:
0 - (default) enabled token;
1 - disabled token.
expires_at timestamp Token expiration date and time.
Property behavior:
- read-only
creator_userid ID ID of the user that created the token.
Property behavior:
- read-only
token.create
Description
Note:
The Manage API tokens permission is required for the user role to manage tokens for other users.
Attention:
A token created by this method also has to be generated before it is usable.
Parameters
Return values
(object) Returns an object containing the IDs of the created tokens under the tokenids property. The order of the returned IDs
matches the order of the passed tokens.
Examples
Create a token
Request:
{
"jsonrpc": "2.0",
"method": "token.create",
"params": {
"name": "Your token",
"userid": "2"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"tokenids": [
"188"
]
},
"id": 1
}
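Because a newly created token is not usable until it has been generated (see token.generate below), a follow-up request reusing the ID returned above could look like this illustrative sketch:
Request:
{
    "jsonrpc": "2.0",
    "method": "token.generate",
    "params": [
        "188"
    ],
    "id": 2
}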
Create a disabled token that expires on January 21st, 2021. This token will authenticate the current user.
Request:
{
"jsonrpc": "2.0",
"method": "token.create",
"params": {
"name": "Your token",
"status": "1",
"expires_at": "1611238072"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"tokenids": [
"189"
]
},
"id": 1
}
Source
CToken::create() in ui/include/classes/api/services/CToken.php.
token.delete
Description
Note:
The Manage API tokens permission is required for the user role to manage tokens for other users.
Parameters
(object) Returns an object containing the IDs of the deleted tokens under the tokenids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "token.delete",
"params": [
"188",
"192"
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"tokenids": [
"188",
"192"
]
},
"id": 1
}
Source
CToken::delete() in ui/include/classes/api/services/CToken.php.
token.generate
Description
Note:
The Manage API tokens permission is required for the user role to manage tokens for other users.
Attention:
A token can be generated by this method only if it has been created.
Parameters
(array) Returns an array of objects containing the ID of the generated token under the tokenid property and the generated
authorization string under the token property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "token.generate",
"params": [
"1",
"2"
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"tokenid": "1",
"token": "bbcfce79a2d95037502f7e9a534906d3466c9a1484beb6ea0f4e7be28e8b8ce2"
},
{
"tokenid": "2",
"token": "fa1258a83d518eabd87698a96bd7f07e5a6ae8aeb8463cae33d50b91dd21bd6d"
}
],
"id": 1
}
Source
CToken::generate() in ui/include/classes/api/services/CToken.php.
token.get
Description
Note:
Only Super admin user type is allowed to view tokens for other users.
Parameters
Parameter Type Description
limit integer
output query
preservekeys boolean
search object
searchByAny boolean
searchWildcardsEnabled boolean
sortorder string/array
startSearch boolean
Return values
Retrieve a token
Request:
{
"jsonrpc": "2.0",
"method": "token.get",
"params": {
"output": "extend",
"tokenids": "2"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"tokenid": "1",
"name": "The Token",
"description": "",
"userid": "1",
"lastaccess": "0",
"status": "0",
"expires_at": "1609406220",
"created_at": "1611239454",
"creator_userid": "1"
}
],
"id": 1
}
Source
CToken::get() in ui/include/classes/api/services/CToken.php.
token.update
Description
Note:
The Manage API tokens permission is required for the user role to manage tokens for other users.
Parameters
Return values
(object) Returns an object containing the IDs of the updated tokens under the tokenids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "token.update",
"params": {
"tokenid": "2",
"expires_at": "0"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"tokenids": [
"2"
]
},
"id": 1
}
Source
CToken::update() in ui/include/classes/api/services/CToken.php.
Trend
Object references:
• Float trend
• Integer trend
Available methods:
Trend object
Note:
Trend objects differ depending on the item’s type of information. They are created by the Zabbix server and cannot be
modified via the API.
Float trend
clock timestamp Timestamp of an hour for which the value was calculated. For example,
timestamp of ”04:00:00” means values calculated for period
”04:00:00-04:59:59”.
itemid ID ID of the related item.
num integer Number of values that were available for the hour.
value_min float Hourly minimum value.
value_avg float Hourly average value.
value_max float Hourly maximum value.
Integer trend
clock timestamp Timestamp of an hour for which the value was calculated. For example,
timestamp of ”04:00:00” means values calculated for period
”04:00:00-04:59:59”.
itemid ID ID of the related item.
num integer Number of values that were available for the hour.
value_min integer Hourly minimum value.
value_avg integer Hourly average value.
value_max integer Hourly maximum value.
trend.get
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
itemids ID/array Return only trends with the given item IDs.
time_from timestamp Return only values that have been collected after or at the given time.
time_till timestamp Return only values that have been collected before or at the given
time.
countOutput boolean Count the number of retrieved objects.
limit integer Limit the amount of retrieved objects.
output query Set fields to output.
Return values
• an array of objects;
• the count of retrieved objects, if the countOutput parameter has been used.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "trend.get",
"params": {
"output": [
"itemid",
"clock",
"num",
"value_min",
"value_avg",
"value_max"
],
"itemids": [
"23715"
],
"limit": "1"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"itemid": "23715",
"clock": "1446199200",
"num": "60",
"value_min": "0.165",
"value_avg": "0.2168",
"value_max": "0.35"
}
],
"id": 1
}
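A time-bounded variant of the same request is sketched below; it would count the hourly trend rows collected for the item within the given window (the timestamps are illustrative only).
Request:
{
    "jsonrpc": "2.0",
    "method": "trend.get",
    "params": {
        "itemids": "23715",
        "time_from": 1446188400,
        "time_till": 1446274800,
        "countOutput": true
    },
    "id": 1
}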
Source
CTrend::get() in ui/include/classes/api/services/CTrend.php.
Trigger
Object references:
• Trigger
• Trigger tag
Available methods:
Trigger object
The following objects are directly related to the trigger API.
Trigger
triggerid ID ID of the trigger.
Property behavior:
- read-only
- required for update operations
description string Name of the trigger.
Property behavior:
- required for create operations
expression string Reduced trigger expression.
Property behavior:
- required for create operations
event_name string Event name generated by the trigger.
opdata string Operational data.
comments string Additional description of the trigger.
error string Error text if there have been any problems when updating the state of
the trigger.
Property behavior:
- read-only
flags integer Origin of the trigger.
Possible values:
0 - (default) a plain trigger;
4 - a discovered trigger.
Property behavior:
- read-only
lastchange timestamp Time when the trigger last changed its state.
Property behavior:
- read-only
priority integer Severity of the trigger.
Possible values:
0 - (default) not classified;
1 - information;
2 - warning;
3 - average;
4 - high;
5 - disaster.
state integer State of the trigger.
Possible values:
0 - (default) trigger state is up to date;
1 - current trigger state is unknown.
Property behavior:
- read-only
status integer Whether the trigger is enabled or disabled.
Possible values:
0 - (default) enabled;
1 - disabled.
Property behavior:
- read-only
type integer Whether the trigger can generate multiple problem events.
Possible values:
0 - (default) do not generate multiple events;
1 - generate multiple events.
url string URL associated with the trigger.
url_name string Label for the URL associated with the trigger.
value integer Whether the trigger is in OK or problem state.
Possible values:
0 - (default) OK;
1 - problem.
Property behavior:
- read-only
recovery_mode integer OK event generation mode.
Possible values:
0 - (default) Expression;
1 - Recovery expression;
2 - None.
recovery_expression string Reduced trigger recovery expression.
correlation_mode integer OK event closes.
Possible values:
0 - (default) All problems;
1 - All problems if tag values match.
correlation_tag string Tag for matching.
manual_close integer Allow manual close.
Possible values:
0 - (default) No;
1 - Yes.
uuid string Universal unique identifier, used for linking imported triggers to
already existing ones. Auto-generated, if not given.
Property behavior:
- supported if the trigger belongs to a template
Trigger tag
trigger.create
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
Attention:
The trigger expression has to be given in its expanded form.
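For orientation, trigger.create expects the expanded form, while trigger.get returns the reduced form together with a functions array; both forms below are taken verbatim from the examples in this section:
Expanded form (as passed to trigger.create):
last(/Linux server/system.cpu.load[percpu,avg1])>5
Reduced form (as returned by trigger.get):
{13513}<10m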
Return values
(object) Returns an object containing the IDs of the created triggers under the triggerids property. The order of the returned
IDs matches the order of the passed triggers.
Examples
Creating a trigger
Request:
{
"jsonrpc": "2.0",
"method": "trigger.create",
"params": [
{
"description": "Processor load is too high on {HOST.NAME}",
"expression": "last(/Linux server/system.cpu.load[percpu,avg1])>5",
"dependencies": [
{
"triggerid": "17367"
}
]
},
{
"description": "Service status",
"expression": "length(last(/Linux server/log[/var/log/system,Service .* has stopped]))<>0",
"dependencies": [
{
"triggerid": "17368"
}
],
"tags": [
{
"tag": "service",
"value": "{{ITEM.VALUE}.regsub(\"Service (.*) has stopped\", \"\\1\")}"
},
{
"tag": "error",
"value": ""
}
]
}
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"triggerids": [
"17369",
"17370"
]
},
"id": 1
}
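The recovery_mode, recovery_expression, and manual_close properties described in the trigger object can also be set at creation time. A hedged sketch of a trigger with a separate recovery expression (the host name, item key, and thresholds are illustrative):
Request:
{
    "jsonrpc": "2.0",
    "method": "trigger.create",
    "params": {
        "description": "Processor load is high on {HOST.NAME}",
        "expression": "avg(/Linux server/system.cpu.load[percpu,avg1],5m)>5",
        "recovery_mode": 1,
        "recovery_expression": "avg(/Linux server/system.cpu.load[percpu,avg1],5m)<3",
        "manual_close": 1
    },
    "id": 1
}
As in the examples above, the response contains the ID of the created trigger under the triggerids property.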
Source
CTrigger::create() in ui/include/classes/api/services/CTrigger.php.
trigger.delete
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
(array) IDs of the triggers to delete.
Return values
(object) Returns an object containing the IDs of the deleted triggers under the triggerids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "trigger.delete",
"params": [
"12002",
"12003"
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"triggerids": [
"12002",
"12003"
]
},
"id": 1
}
Source
CTrigger::delete() in ui/include/classes/api/services/CTrigger.php.
trigger.get
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
evaltype integer Rules for tag searching.
Possible values:
0 - (default) And/Or;
2 - Or.
tags array Return only triggers with given tags. Exact match by tag and
case-sensitive or case-insensitive search by tag value depending on
operator value.
[{"tag": "<tag>", "value": "<value>",
Format:
"operator": "<operator>"}, ...].
An empty array returns all triggers.
filter object Return only those results that exactly match the given filter.
Accepts an object, where the keys are property names, and the values
are either a single value or an array of values to match against.
Return values
(integer/array) Returns either:
• an array of objects;
• the count of retrieved objects, if the countOutput parameter has been used.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "trigger.get",
"params": {
"triggerids": "14062",
"output": "extend",
"selectFunctions": "extend"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"triggerid": "14062",
"expression": "{13513}<10m",
"description": "{HOST.NAME} has been restarted (uptime < 10m)",
"url": "",
"status": "0",
"value": "0",
"priority": "2",
"lastchange": "0",
"comments": "The host uptime is less than 10 minutes",
"error": "",
"templateid": "10016",
"type": "0",
"state": "0",
"flags": "0",
"recovery_mode": "0",
"recovery_expression": "",
"correlation_mode": "0",
"correlation_tag": "",
"manual_close": "0",
"opdata": "",
"event_name": "",
"uuid": "",
"url_name": "",
"functions": [
{
"functionid": "13513",
"itemid": "24350",
"triggerid": "14062",
"parameter": "$",
"function": "last"
}
]
}
],
"id": 1
}
Retrieve the ID, name and severity of all triggers in problem state and sort them by severity in descending order.
Request:
{
"jsonrpc": "2.0",
"method": "trigger.get",
"params": {
"output": [
"triggerid",
"description",
"priority"
],
"filter": {
"value": 1
},
"sortfield": "priority",
"sortorder": "DESC"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"triggerid": "13907",
"description": "Zabbix self-monitoring processes < 100% busy",
"priority": "4"
},
{
"triggerid": "13824",
"description": "Zabbix discoverer processes more than 75% busy",
"priority": "3"
}
],
"id": 1
}
Retrieving a trigger with its tags
Request:
{
"jsonrpc": "2.0",
"method": "trigger.get",
"params": {
"output": [
"triggerid",
"description"
],
"selectTags": "extend",
"triggerids": [
"17578"
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"triggerid": "17370",
"description": "Service status",
"tags": [
{
"tag": "service",
"value": "{{ITEM.VALUE}.regsub(\"Service (.*) has stopped\", \"\\1\")}"
},
{
"tag": "error",
"value": ""
}
]
}
],
"id": 1
}
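The tags parameter described in the parameters above accepts an array of tag conditions. A hedged sketch of selecting triggers by an exact tag match (the tag name, value, and operator value are illustrative; operator 1 is assumed to mean an exact value match):
Request:
{
    "jsonrpc": "2.0",
    "method": "trigger.get",
    "params": {
        "output": [
            "triggerid",
            "description"
        ],
        "tags": [
            {
                "tag": "service",
                "value": "nginx",
                "operator": 1
            }
        ]
    },
    "id": 1
}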
See also
• Discovery rule
• Item
• Host
• Host group
• Template group
Source
CTrigger::get() in ui/include/classes/api/services/CTrigger.php.
trigger.update
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
In addition to the standard trigger properties, the method accepts the following parameters.
Attention:
The trigger expression has to be given in its expanded form.
Return values
(object) Returns an object containing the IDs of the updated triggers under the triggerids property.
Examples
Enabling a trigger
Request:
{
"jsonrpc": "2.0",
"method": "trigger.update",
"params": {
"triggerid": "13938",
"status": 0
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"triggerids": [
"13938"
]
},
"id": 1
}
Replacing trigger tags
Request:
{
"jsonrpc": "2.0",
"method": "trigger.update",
"params": {
"triggerid": "13938",
"tags": [
{
"tag": "service",
"value": "{{ITEM.VALUE}.regsub(\"Service (.*) has stopped\", \"\\1\")}"
},
{
"tag": "error",
"value": ""
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"triggerids": [
"13938"
]
},
"id": 1
}
Replacing dependencies
Request:
{
"jsonrpc": "2.0",
"method": "trigger.update",
"params": {
"triggerid": "22713",
"dependencies": [
{
"triggerid": "22712"
},
{
"triggerid": "22772"
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"triggerids": [
"22713"
]
},
"id": 1
}
Source
CTrigger::update() in ui/include/classes/api/services/CTrigger.php.
Trigger prototype
Object references:
• Trigger prototype
• Trigger prototype tag
Available methods:
• triggerprototype.create - creating new trigger prototypes;
• triggerprototype.delete - deleting trigger prototypes;
• triggerprototype.get - retrieving trigger prototypes;
• triggerprototype.update - updating trigger prototypes.
Trigger prototype object
The trigger prototype object has the following properties.
triggerid ID ID of the trigger prototype.
Property behavior:
- read-only
- required for update operations
description string Name of the trigger prototype.
Property behavior:
- required for create operations
expression string Reduced trigger expression.
Property behavior:
- required for create operations
event_name string Event name generated by the trigger.
opdata string Operational data.
comments string Additional comments to the trigger prototype.
priority integer Severity of the trigger prototype.
Possible values:
0 - (default) not classified;
1 - information;
2 - warning;
3 - average;
4 - high;
5 - disaster.
status integer Whether the trigger prototype is enabled or disabled.
Possible values:
0 - (default) enabled;
1 - disabled.
templateid ID ID of the parent template trigger prototype.
Property behavior:
- read-only
type integer Whether the trigger prototype can generate multiple problem events.
Possible values:
0 - (default) do not generate multiple events;
1 - generate multiple events.
url string URL associated with the trigger prototype.
url_name string Label for the URL associated with the trigger prototype.
recovery_mode integer OK event generation mode.
Possible values:
0 - (default) Expression;
1 - Recovery expression;
2 - None.
recovery_expression string Reduced trigger recovery expression.
correlation_mode integer OK event closes.
Possible values:
0 - (default) All problems;
1 - All problems if tag values match.
correlation_tag string Tag for matching.
manual_close integer Allow manual close.
Possible values:
0 - (default) No;
1 - Yes.
discover integer Trigger prototype discovery status.
Possible values:
0 - (default) new triggers will be discovered;
1 - new triggers will not be discovered and existing triggers will be
marked as lost.
uuid string Universal unique identifier, used for linking imported trigger prototypes
to already existing ones. Auto-generated, if not given.
Property behavior:
- supported if the trigger prototype belongs to a template
Trigger prototype tag
The trigger prototype tag object has the following properties.
tag string Trigger prototype tag name.
Property behavior:
- required
value string Trigger prototype tag value.
triggerprototype.create
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
dependencies array Triggers and trigger prototypes that the trigger prototype is dependent
on.
Attention:
The trigger expression has to be given in its expanded form and must contain at least one item prototype.
Return values
(object) Returns an object containing the IDs of the created trigger prototypes under the triggerids property. The order of
the returned IDs matches the order of the passed trigger prototypes.
Examples
Create a trigger prototype to detect when a file system has less than 20% free disk space.
Request:
{
"jsonrpc": "2.0",
"method": "triggerprototype.create",
"params": {
"description": "Free disk space is less than 20% on volume {#FSNAME}",
"expression": "last(/Zabbix server/vfs.fs.size[{#FSNAME},pfree])<20",
"tags": [
{
"tag": "volume",
"value": "{#FSNAME}"
},
{
"tag": "type",
"value": "{#FSTYPE}"
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"triggerids": [
"17372"
]
},
"id": 1
}
Source
CTriggerPrototype::create() in ui/include/classes/api/services/CTriggerPrototype.php.
triggerprototype.delete
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
(array) IDs of the trigger prototypes to delete.
Return values
(object) Returns an object containing the IDs of the deleted trigger prototypes under the triggerids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "triggerprototype.delete",
"params": [
"12002",
"12003"
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"triggerids": [
"12002",
"12003"
]
},
"id": 1
}
Source
CTriggerPrototype::delete() in ui/include/classes/api/services/CTriggerPrototype.php.
triggerprototype.get
Description
The method allows retrieving trigger prototypes according to the given parameters.
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
active flag Return only enabled trigger prototypes that belong to monitored hosts.
discoveryids ID/array Return only trigger prototypes that belong to the given LLD rules.
functions string/array Return only triggers that use the given functions.
filter object Return only those results that exactly match the given filter.
Accepts an object, where the keys are property names, and the values
are either a single value or an array of values to match against.
Return values
(integer/array) Returns either:
• an array of objects;
• the count of retrieved objects, if the countOutput parameter has been used.
Examples
Retrieve all trigger prototypes and their functions from an LLD rule.
Request:
{
"jsonrpc": "2.0",
"method": "triggerprototype.get",
"params": {
"output": "extend",
"selectFunctions": "extend",
"discoveryids": "22450"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"triggerid": "13272",
"expression": "{12598}<20",
"description": "Free inodes is less than 20% on volume {#FSNAME}",
"url": "",
"status": "0",
"value": "0",
"priority": "2",
"lastchange": "0",
"comments": "",
"error": "",
"templateid": "0",
"type": "0",
"state": "0",
"flags": "2",
"recovery_mode": "0",
"recovery_expression": "",
"correlation_mode": "0",
"correlation_tag": "",
"manual_close": "0",
"opdata": "",
"discover": "0",
"event_name": "",
"uuid": "6ce467d05e8745409a177799bed34bb3",
"url_name": "",
"functions": [
{
"functionid": "12598",
"itemid": "22454",
"triggerid": "13272",
"parameter": "$",
"function": "last"
}
]
},
{
"triggerid": "13266",
"expression": "{13500}<20",
"description": "Free disk space is less than 20% on volume {#FSNAME}",
"url": "",
"status": "0",
"value": "0",
"priority": "2",
"lastchange": "0",
"comments": "",
"error": "",
"templateid": "0",
"type": "0",
"state": "0",
"flags": "2",
"recovery_mode": "0",
"recovery_expression": "",
"correlation_mode": "0",
"correlation_tag": "",
"manual_close": "0",
"opdata": "",
"discover": "0",
"event_name": "",
"uuid": "74a1fc62bfe24b7eabe4e244c70dc384",
"url_name": "",
"functions": [
{
"functionid": "13500",
"itemid": "22686",
"triggerid": "13266",
"parameter": "$",
"function": "last"
}
]
}
],
"id": 1
}
Retrieve a specific trigger prototype with its tags.
Request:
{
"jsonrpc": "2.0",
"method": "triggerprototype.get",
"params": {
"output": [
"triggerid",
"description"
],
"selectTags": "extend",
"triggerids": [
"17373"
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"triggerid": "17373",
"description": "Free disk space is less than 20% on volume {#FSNAME}",
"tags": [
{
"tag": "volume",
"value": "{#FSNAME}"
},
{
"tag": "type",
"value": "{#FSTYPE}"
}
]
}
],
"id": 1
}
See also
• Discovery rule
• Item
• Host
• Host group
• Template group
Source
CTriggerPrototype::get() in ui/include/classes/api/services/CTriggerPrototype.php.
triggerprototype.update
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
In addition to the standard trigger prototype properties, the method accepts the following parameters.
dependencies array Triggers and trigger prototypes that the trigger prototype is dependent
on.
Attention:
The trigger expression has to be given in its expanded form and must contain at least one item prototype.
Return values
(object) Returns an object containing the IDs of the updated trigger prototypes under the triggerids property.
Examples
Enabling a trigger prototype
Request:
{
"jsonrpc": "2.0",
"method": "triggerprototype.update",
"params": {
"triggerid": "13938",
"status": 0
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"triggerids": [
"13938"
]
},
"id": 1
}
Replace tags for one trigger prototype.
Request:
{
"jsonrpc": "2.0",
"method": "triggerprototype.update",
"params": {
"triggerid": "17373",
"tags": [
{
"tag": "volume",
"value": "{#FSNAME}"
},
{
"tag": "type",
"value": "{#FSTYPE}"
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"triggerids": [
"17373"
]
},
"id": 1
}
Source
CTriggerPrototype::update() in ui/include/classes/api/services/CTriggerPrototype.php.
User
Object references:
• User
• Media
Available methods:
• user.checkAuthentication - checking and prolonging user sessions;
• user.create - creating new users;
• user.delete - deleting users;
• user.get - retrieving users;
• user.login - logging in to the API;
• user.logout - logging out of the API;
• user.provision - provisioning users;
• user.resettotp - resetting TOTP secrets;
• user.unblock - unblocking users;
• user.update - updating users.
User object
The user object has the following properties.
userid ID ID of the user.
Property behavior:
- read-only
- required for update operations
username string User’s name.
Property behavior:
- required for create operations
- read-only for provisioned users if the user is linked to a user directory
(userdirectoryid is set to a valid value that is not ”0”), and user
directory provisioning status is enabled (provision_status of User
directory object is set to ”1”), and authentication status of all LDAP or
SAML provisioning is enabled (ldap_jit_status of Authentication
object is set to ”Enabled for configured LDAP IdPs” or
saml_jit_status of Authentication object is set to ”Enabled for
configured SAML IdPs”)
passwd string User’s password.
The value of this parameter can be an empty string if the user is linked
to a user directory.
Property behavior:
- write-only
roleid ID ID of the role of the user.
Note that users without a role can log into Zabbix only using LDAP or
SAML authentication, provided their LDAP/SAML information matches
the user group mappings configured in Zabbix.
attempt_clock timestamp Time of the last unsuccessful login attempt.
Property behavior:
- read-only
attempt_failed integer Recent failed login attempt count.
Property behavior:
- read-only
attempt_ip string IP address from which the last unsuccessful login attempt was made.
Property behavior:
- read-only
autologin integer Whether to enable auto-login.
Possible values:
0 - (default) auto-login disabled;
1 - auto-login enabled.
autologout string User session lifetime. Accepts seconds and a time unit with suffix. If set to 0s, the session will never expire.
Default: 15m.
lang string Language code of the user's language, for example, en_GB.
refresh string Automatic refresh period. Accepts seconds and a time unit with suffix.
Default: 30s.
rows_per_page integer Amount of object rows shown per page.
Default: 50.
surname string Surname of the user.
theme string User’s theme.
Possible values:
default - (default) system default;
blue-theme - Blue;
dark-theme - Dark.
ts_provisioned timestamp Time when the latest provisioning operation was made.
Property behavior:
- read-only
url string URL of the page to redirect the user to after logging in.
userdirectoryid ID ID of the user directory that the user is linked to.
For login operations the value of this property will have priority over the
userdirectoryid property of user groups that the user belongs to.
Default: 0.
Property behavior:
- read-only
timezone string User’s time zone, for example, Europe/London, UTC.
For the full list of supported time zones please refer to PHP
documentation.
Media
The media object has the following properties.
mediatypeid ID ID of the media type used by the media.
Property behavior:
- required
sendto string/array Address, user name or other identifier of the recipient.
Property behavior:
- required
active integer Whether the media is enabled.
Possible values:
0 - (default) enabled;
1 - disabled.
severity integer Trigger severities to send notifications about.
Severities are stored in binary form, with each bit representing the corresponding severity. For example, ”12” equals ”1100” in binary and means that notifications will be sent from triggers with severities ”warning” and ”average”.
Refer to the trigger object page for a list of supported trigger severities.
Default: 63.
A worked example of this bitmask is shown after this property list.
period string Time when the notifications can be sent as a time period or user
macros separated by a semicolon.
Default: 1-7,00:00-24:00.
userdirectory_mediaid ID User directory media mapping ID for provisioned media.
Property behavior:
- read-only
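As a worked example of the severity bitmask described above: ”high” corresponds to bit 4 (decimal 16) and ”disaster” to bit 5 (decimal 32), so 16 + 32 = 48 restricts notifications to those two severities. A hedged sketch of applying such a mask via user.update (the user ID, media type ID, and address are illustrative):
Request:
{
    "jsonrpc": "2.0",
    "method": "user.update",
    "params": {
        "userid": "12",
        "medias": [
            {
                "mediatypeid": "1",
                "sendto": [
                    "[email protected]"
                ],
                "severity": 48
            }
        ]
    },
    "id": 1
}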
user.checkAuthentication
Description
object user.checkAuthentication
This method checks and prolongs the user session.
Attention:
Calling the user.checkAuthentication method using the parameter sessionid prolongs the user session by default.
Parameters
extend boolean Whether to prolong the user session.
Parameter behavior:
- supported if sessionid is set
sessionid string User authentication token.
Parameter behavior:
- required if token is not set
secret string Random 32-character string, generated on user login.
token string User API token.
Parameter behavior:
- required if sessionid is not set
Return values
(object) Returns an object containing information about the user.
In addition to the standard user properties, the following information is returned:
debug_mode integer Whether debug mode is enabled for the user.
Refer to the debug_mode property of the User group object for a list of
possible values.
deprovisioned boolean Whether the user belongs to a deprovisioned users group.
gui_access string User’s authentication method to the frontend.
Refer to the gui_access property of the User group object for a list of
possible values.
secret string Random 32-character string, generated on user login.
type integer User type.
Refer to the type property of the Role object for a list of possible
values.
userip string IP address of the user.
Examples
Check and prolong a user session using the user authentication token, and return additional information about the user.
Request:
{
"jsonrpc": "2.0",
"method": "user.checkAuthentication",
"params": {
"sessionid": "673b8ba11562a35da902c66cf5c23fa2"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"userid": "1",
"username": "Admin",
"name": "Zabbix",
"surname": "Administrator",
"url": "",
"autologin": "1",
"autologout": "0",
"lang": "ru_RU",
"refresh": "0",
"theme": "default",
"attempt_failed": "0",
"attempt_ip": "127.0.0.1",
"attempt_clock": "1355919038",
"rows_per_page": "50",
"timezone": "Europe/Riga",
"roleid": "3",
"userdirectoryid": "0",
"ts_provisioned": "0",
"type": 3,
"userip": "127.0.0.1",
"debug_mode": 0,
"gui_access": "0",
"deprovisioned": false,
"auth_type": 0,
"sessionid": "673b8ba11562a35da902c66cf5c23fa2",
"secret": "0e329b933e46984e49a5c1051ecd0751"
},
"id": 1
}
Check a user session using the user API token, and return additional information about the user.
Request:
{
"jsonrpc": "2.0",
"method": "user.checkAuthentication",
"params": {
"token": "00aff470e07c12d707e50d98cfe39edef9e6ec349c14728dbdfbc8ddc5ea3eae"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"userid": "1",
"username": "Admin",
"name": "Zabbix",
"surname": "Administrator",
"url": "",
"autologin": "1",
"autologout": "0",
"lang": "ru_RU",
"refresh": "0",
"theme": "default",
"attempt_failed": "0",
"attempt_ip": "127.0.0.1",
"attempt_clock": "1355919338",
"rows_per_page": "50",
"timezone": "Europe/Riga",
"roleid": "3",
"userdirectoryid": "0",
"ts_provisioned": "0",
"type": 3,
"userip": "127.0.0.1",
"debug_mode": 0,
"gui_access": "1",
"deprovisioned": false,
"auth_type": 0
},
"id": 1
}
Source
CUser::checkAuthentication() in ui/include/classes/api/services/CUser.php.
user.create
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Note:
The strength of the user password is validated according to the password policy rules defined by the Authentication API. See Authentication API for more information.
Parameters
In addition to the standard user properties, the method accepts the following parameters.
usrgrps array User groups to add the user to.
The user groups must have only the usrgrpid property defined.
medias array User media to be created.
Return values
(object) Returns an object containing the IDs of the created users under the userids property. The order of the returned IDs
matches the order of the passed users.
Examples
Creating a user
Create a new user, add the user to a user group, and create a new media for the user.
Request:
{
"jsonrpc": "2.0",
"method": "user.create",
"params": {
"username": "John",
"passwd": "Doe123",
"roleid": "5",
"usrgrps": [
{
"usrgrpid": "7"
}
],
"medias": [
{
"mediatypeid": "1",
"sendto": [
"[email protected]"
],
"active": 0,
"severity": 63,
"period": "1-7,00:00-24:00"
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"userids": [
"12"
]
},
"id": 1
}
See also
• Authentication
• Media
• User group
• Role
Source
CUser::create() in ui/include/classes/api/services/CUser.php.
user.delete
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
(array) IDs of the users to delete.
Return values
(object) Returns an object containing the IDs of the deleted users under the userids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "user.delete",
"params": [
"1",
"5"
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"userids": [
"1",
"5"
]
},
"id": 1
}
Source
CUser::delete() in ui/include/classes/api/services/CUser.php.
user.get
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
mediaids ID/array Return only users that use the given media.
mediatypeids ID/array Return only users that use the given media types.
userids ID/array Return only users with the given IDs.
usrgrpids ID/array Return only users that belong to the given user groups.
getAccess flag Adds additional information about user permissions.
sortorder string/array
startSearch boolean
Return values
(integer/array) Returns either:
• an array of objects;
• the count of retrieved objects, if the countOutput parameter has been used.
Examples
Retrieving users
Request:
{
"jsonrpc": "2.0",
"method": "user.get",
"params": {
"output": "extend"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"userid": "1",
"username": "Admin",
"name": "Zabbix",
"surname": "Administrator",
"url": "",
"autologin": "1",
"autologout": "0",
"lang": "en_GB",
"refresh": "0s",
"theme": "default",
"attempt_failed": "0",
"attempt_ip": "",
"attempt_clock": "0",
"rows_per_page": "50",
"timezone": "default",
"roleid": "3",
"userdirectoryid": "0",
"ts_provisioned": "0"
},
{
"userid": "2",
"username": "guest",
"name": "",
"surname": "",
"url": "",
"autologin": "0",
"autologout": "15m",
"lang": "default",
"refresh": "30s",
"theme": "default",
"attempt_failed": "0",
"attempt_ip": "",
"attempt_clock": "0",
"rows_per_page": "50",
"timezone": "default",
"roleid": "4",
"userdirectoryid": "0",
"ts_provisioned": "0"
},
{
"userid": "3",
"username": "user",
"name": "Zabbix",
"surname": "User",
"url": "",
"autologin": "0",
"autologout": "0",
"lang": "ru_RU",
"refresh": "15s",
"theme": "dark-theme",
"attempt_failed": "0",
"attempt_ip": "",
"attempt_clock": "0",
"rows_per_page": "100",
"timezone": "default",
"roleid": "1",
"userdirectoryid": "0",
"ts_provisioned": "0"
}
],
"id": 1
}
Retrieving a user with role details
Request:
{
"jsonrpc": "2.0",
"method": "user.get",
"params": {
"output": ["userid", "username"],
"selectRole": "extend",
"userids": "12"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"userid": "12",
"username": "John",
"role": {
"roleid": "5",
"name": "Operator",
"type": "1",
"readonly": "0"
}
}
],
"id": 1
}
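The getAccess flag listed in the parameters above adds permission-related information (such as gui_access, debug_mode, and users_status) to each returned user. A minimal sketch (the user ID is illustrative):
Request:
{
    "jsonrpc": "2.0",
    "method": "user.get",
    "params": {
        "output": [
            "userid",
            "username"
        ],
        "getAccess": true,
        "userids": "12"
    },
    "id": 1
}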
See also
• Media
• Media type
• User group
• Role
Source
CUser::get() in ui/include/classes/api/services/CUser.php.
user.login
Description
Warning:
When using this method, you also need to do user.logout to prevent the generation of a large number of open session
records.
Attention:
This method is only available to unauthenticated users who do not belong to any user group with enabled multi-factor
authentication. This method must be called without the auth parameter in the JSON-RPC request.
Parameters
username string User name.
password string User password.
userData flag Return information about the authenticated user.
Return values
(string/object) If the userData parameter is used, returns an object containing information about the authenticated user.
In addition to the standard user properties, the following information is returned:
debug_mode integer Whether debug mode is enabled for the user.
Refer to the debug_mode property of the User group object for a list of
possible values.
deprovisioned boolean Whether the user belongs to a deprovisioned users group.
gui_access string User’s authentication method to the frontend.
Refer to the gui_access property of the User group object for a list of
possible values.
mfaid integer ID of the MFA method to use for the user during login.
Returns ”0” if MFA is disabled globally or for all user groups that the
user belongs to.
secret string Random 32-character string, generated on user login.
sessionid string Authentication token, which must be used in the following API requests.
type integer User type.
Refer to the type property of the Role object for a list of possible
values.
userip string IP address of the user.
Note:
If a user has been successfully authenticated after one or more failed attempts, the method will return the current values
for the attempt_clock, attempt_failed and attempt_ip properties and then reset them.
If the userData parameter is not used, the method returns an authentication token.
Note:
The generated authentication token should be remembered and used in the auth parameter of the following JSON-RPC
requests. It is also required when using HTTP authentication.
Examples
Authenticating a user
Authenticate a user.
Request:
{
"jsonrpc": "2.0",
"method": "user.login",
"params": {
"username": "Admin",
"password": "zabbix"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": "0424bd59b807674191e7d77572075f33",
"id": 1
}
Authenticate a user and return detailed information about the user.
Request:
{
"jsonrpc": "2.0",
"method": "user.login",
"params": {
"username": "Admin",
"password": "zabbix",
"userData": true
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"userid": "1",
"username": "Admin",
"name": "Zabbix",
"surname": "Administrator",
"url": "",
"autologin": "1",
"autologout": "0",
"lang": "ru_RU",
"refresh": "0",
"theme": "default",
"attempt_failed": "0",
"attempt_ip": "127.0.0.1",
"attempt_clock": "1355919038",
"rows_per_page": "50",
"timezone": "Europe/Riga",
"roleid": "3",
"userdirectoryid": "0",
"type": 3,
"userip": "127.0.0.1",
"debug_mode": 0,
"gui_access": "0",
"mfaid": "1",
"deprovisioned": false,
"auth_type": 0,
"sessionid": "5b56eee8be445e98f0bd42b435736e42",
"secret": "cd0ba923319741c6586f3d866423a8f4"
},
"id": 1
}
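As noted above, the returned sessionid (or the plain token from the first example) is then passed in the auth parameter of subsequent JSON-RPC requests. A minimal sketch of a follow-up call reusing the token obtained above (the called method and output fields are illustrative):
Request:
{
    "jsonrpc": "2.0",
    "method": "host.get",
    "params": {
        "output": [
            "hostid",
            "name"
        ]
    },
    "auth": "0424bd59b807674191e7d77572075f33",
    "id": 2
}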
See also
• user.logout
Source
CUser::login() in ui/include/classes/api/services/CUser.php.
user.logout
Description
string/object user.logout(array)
This method allows logging out of the API and invalidates the current authentication token.
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
(array) The method accepts an empty array.
Return values
(boolean) Returns true if the user has been logged out successfully.
Examples
Logging out
Request:
{
"jsonrpc": "2.0",
"method": "user.logout",
"params": [],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": true,
"id": 1
}
See also
• user.login
Source
CUser::logout() in ui/include/classes/api/services/CUser.php.
user.provision
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
(array) IDs of the users to provision.
Return values
(object) Returns an object containing the IDs of the provisioned users under the userids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "user.provision",
"params": [
"1",
"5"
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"userids": [
"1",
"5"
]
},
"id": 1
}
Source
CUser::provision() in ui/include/classes/api/services/CUser.php.
user.resettotp
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
(array) IDs of the users for whom to reset TOTP secrets.
Note:
User sessions for the specified users will also be deleted (except for the user sending the request).
Return values
(object) Returns an object containing the IDs of the users for which TOTP secrets have been reset, under the userids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "user.resettotp",
"params": [
"1",
"5"
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"userids": [
"1",
"5"
]
},
"id": 1
}
See also
• MFA object
Source
CUser::resettotp() in ui/include/classes/api/services/CUser.php.
user.unblock
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
(array) IDs of the users to unblock.
Return values
(object) Returns an object containing the IDs of the unblocked users under the userids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "user.unblock",
"params": [
"1",
"5"
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"userids": [
"1",
"5"
]
},
"id": 1
}
Source
CUser::unblock() in ui/include/classes/api/services/CUser.php.
user.update
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Note:
The strength of the user password is validated according to the password policy rules defined by the Authentication API. See Authentication API for more information.
Parameters
In addition to the standard user properties, the method accepts the following parameters.
current_passwd string User's current password.
The value of this parameter can be an empty string if the user is linked
to a user directory.
Parameter behavior:
- write-only
- required if passwd of User object is set and user changes own user
password
usrgrps array User groups to replace existing user groups.
The user groups must have only the usrgrpid property defined.
medias array User media to replace existing, non-provisioned media. Provisioned
media can be omitted when updating media.
Return values
(object) Returns an object containing the IDs of the updated users under the userids property.
Examples
Renaming a user
Request:
{
"jsonrpc": "2.0",
"method": "user.update",
"params": {
"userid": "1",
"name": "John",
"surname": "Doe"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"userids": [
"1"
]
},
"id": 1
}
Changing a user's role
Request:
{
"jsonrpc": "2.0",
"method": "user.update",
"params": {
"userid": "12",
"roleid": "6"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"userids": [
"12"
]
},
"id": 1
}
See also
• Authentication
Source
CUser::update() in ui/include/classes/api/services/CUser.php.
User directory
Object references:
• User directory
• Media type mappings
• Provisioning groups mappings
Available methods:
• userdirectory.create - creating new user directories;
• userdirectory.delete - deleting user directories;
• userdirectory.get - retrieving user directories;
• userdirectory.test - testing user directories;
• userdirectory.update - updating user directories.
User directory object
The user directory object has the following properties.
userdirectoryid ID ID of the user directory.
Property behavior:
- read-only
- required for update operations
idp_type integer Type of the authentication protocol used by the identity provider for
the user directory.
Note that only one user directory of type SAML can exist.
Possible values:
1 - User directory of type LDAP;
2 - User directory of type SAML.
Property behavior:
- required for create operations
group_name string LDAP/SAML user directory attribute that contains the group name used
to map groups between the LDAP/SAML user directory and Zabbix.
Example: cn
Property behavior:
- required if provision_status is set to ”Enabled” and saml_jit_status of Authentication object is set to ”Enabled for configured SAML IdPs”
user_username string LDAP/SAML user directory attribute (also SCIM attribute if
scim_status is set to ”SCIM provisioning is enabled”) that contains
the user’s name which is used as the value for the User object property
name when the user is provisioned.
provision_status integer User directory provisioning status.
Possible values:
0 - (default) Disabled (provisioning of users created by this user
directory is disabled);
1 - Enabled (provisioning of users created by this user directory is
enabled; additionally, the status of LDAP or SAML provisioning
(ldap_jit_status or saml_jit_status of Authentication object)
must be enabled).
provision_groups array Array of provisioning groups mappings objects for mapping LDAP/SAML
user group pattern to Zabbix user group and user role.
Property behavior:
- required if provision_status is set to ”Enabled”
provision_media array Array of media type mappings objects for mapping user’s LDAP/SAML
media attributes (e.g., email) to Zabbix user media for sending
notifications.
LDAP-specific
properties:
Property behavior:
- required if idp_type is set to ”User directory of type LDAP”
host string Host name, IP or URI of the LDAP server.
URI must contain schema (ldap:// or ldaps://), host, and port
(optional).
Examples:
host.example.com
127.0.0.1
ldap://ldap.example.com:389
Property behavior:
- required if idp_type is set to ”User directory of type LDAP”
port integer Port of the LDAP server.
Property behavior:
- required if idp_type is set to ”User directory of type LDAP”
base_dn string LDAP user directory base path to user accounts.
Examples:
ou=Users,dc=example,dc=org
ou=Users,ou=system (for OpenLDAP)
DC=company,DC=com (for Microsoft Active Directory)
uid=%{user},dc=example,dc=com (for direct user binding;
placeholder ”%{user}” is mandatory)
Property behavior:
- required if idp_type is set to ”User directory of type LDAP”
search_attribute string LDAP user directory attribute by which to identify the user account
from the information provided in the login request.
Examples:
uid (for OpenLDAP)
sAMAccountName (for Microsoft Active Directory)
Property behavior:
- required if idp_type is set to ”User directory of type LDAP”
bind_dn string LDAP server account for binding and searching over the LDAP server.
Examples:
uid=ldap_search,ou=system (for OpenLDAP)
CN=ldap_search,OU=user_group,DC=company,DC=com (for Microsoft
Active Directory)
CN=Admin,OU=Users,OU=Zabbix,DC=zbx,DC=local
Property behavior:
- supported if idp_type is set to ”User directory of type LDAP”
bind_password string LDAP password of the account for binding and searching over the LDAP
server.
Property behavior:
- supported if idp_type is set to ”User directory of type LDAP”
Property behavior:
- supported if idp_type is set to ”User directory of type LDAP”
group_basedn string LDAP user directory base path to groups; used to configure a user
membership check in the LDAP user directory.
Example: ou=Groups,dc=example,dc=com
Property behavior:
- supported if idp_type is set to ”User directory of type LDAP”
group_filter string Filter string for retrieving LDAP user directory groups that the user is a
member of; used to configure a user membership check in the LDAP
user directory.
Default: (%{groupattr}=%{user})
Examples:
- (member=uid=%{ref},ou=Users,dc=example,dc=com) will match
”User1” if an LDAP group object contains the ”member” attribute with
the value ”uid=User1,ou=Users,dc=example,dc=com”, and will return
the group that ”User1” is a member of;
-
(%{groupattr}=cn=%{ref},ou=Users,ou=Zabbix,DC=example,DC=com)
will match ”User1” if an LDAP group object contains the attribute
specified in the group_member property with the value
”cn=User1,ou=Users,ou=Zabbix,DC=example,DC=com”, and will
return the group that ”User1” is a member of.
Property behavior:
- supported if idp_type is set to ”User directory of type LDAP”
group_member string LDAP user directory attribute that contains information about the
group members; used to configure a user membership check in the
LDAP user directory.
Property behavior:
- supported if idp_type is set to ”User directory of type LDAP”
group_membership string LDAP user directory attribute that contains information about the
groups that a user belongs to.
Example: memberOf
Property behavior:
- supported if idp_type is set to ”User directory of type LDAP”
search_filter string Custom filter string used to locate and authenticate a user in an LDAP
user directory based on the information provided in the login request.
Default: (%{attr}=%{user})
Property behavior:
- supported if idp_type is set to ”User directory of type LDAP”
start_tls integer LDAP server configuration option that allows the communication with
the LDAP server to be secured using Transport Layer Security (TLS).
Possible values:
0 - (default) Disabled;
1 - Enabled.
Property behavior:
- supported if idp_type is set to ”User directory of type LDAP”
user_ref_attr string LDAP user directory attribute used to reference a user object. The
value of user_ref_attr is used to get values from the specified
attribute in the user directory and place them instead of the
%{ref} placeholder in the group_filter string.
Property behavior:
- supported if idp_type is set to ”User directory of type LDAP”
SAML-specific
properties:
idp_entityid string URI that identifies the identity provider and is used to communicate
with the identity provider in SAML messages.
Example: https://idp.example.com/idp
Property behavior:
- required if idp_type is set to ”User directory of type SAML”
sp_entityid string URL or any string that identifies the identity provider’s service provider.
Examples:
https://idp.example.com/sp
zabbix
Property behavior:
- required if idp_type is set to ”User directory of type SAML”
username_attribute string SAML user directory attribute (also SCIM attribute if scim_status is
set to ”SCIM provisioning is enabled”) that contains the user’s
username which is compared with the value of the User object
property username when authenticating.
Property behavior:
- required if idp_type is set to ”User directory of type SAML”
sso_url string URL of the identity provider’s SAML single sign-on service, to which
Zabbix will send the SAML authentication requests.
Example: http://idp.example.com/idp/sso/saml
Property behavior:
- required if idp_type is set to ”User directory of type SAML”
slo_url string URL of the identity provider’s SAML single log-out service, to which
Zabbix will send the SAML logout requests.
Example: https://idp.example.com/idp/slo/saml
Property behavior:
- supported if idp_type is set to ”User directory of type SAML”
encrypt_nameid integer Whether the SAML name ID should be encrypted.
Possible values:
0 - (default) Do not encrypt name ID;
1 - Encrypt name ID.
Property behavior:
- supported if idp_type is set to ”User directory of type SAML”
encrypt_assertions integer Whether the SAML assertions should be encrypted.
Possible values:
0 - (default) Do not encrypt assertions;
1 - Encrypt assertions.
Property behavior:
- supported if idp_type is set to ”User directory of type SAML”
nameid_format string Name ID format of the SAML identity provider’s service provider.
Examples:
urn:oasis:names:tc:SAML:2.0:nameid-format:persistent
urn:oasis:names:tc:SAML:2.0:nameid-format:transient
urn:oasis:names:tc:SAML:2.0:nameid-format:kerberos
urn:oasis:names:tc:SAML:2.0:nameid-format:entity
Property behavior:
- supported if idp_type is set to ”User directory of type SAML”
scim_status integer Whether SCIM provisioning for SAML is enabled or disabled.
Possible values:
0 - (default) SCIM provisioning is disabled;
1 - SCIM provisioning is enabled.
Property behavior:
- supported if idp_type is set to ”User directory of type SAML”
sign_assertions integer Whether the SAML assertions should be signed with a SAML signature.
Possible values:
0 - (default) Do not sign assertions;
1 - Sign assertions.
Property behavior:
- supported if idp_type is set to ”User directory of type SAML”
sign_authn_requests integer Whether the SAML AuthN requests should be signed with a SAML
signature.
Possible values:
0 - (default) Do not sign AuthN requests;
1 - Sign AuthN requests.
Property behavior:
- supported if idp_type is set to ”User directory of type SAML”
sign_messages integer Whether the SAML messages should be signed with a SAML signature.
Possible values:
0 - (default) Do not sign messages;
1 - Sign messages.
Property behavior:
- supported if idp_type is set to ”User directory of type SAML”
sign_logout_requests integer Whether the SAML logout requests should be signed with a SAML
signature.
Possible values:
0 - (default) Do not sign logout requests;
1 - Sign logout requests.
Property behavior:
- supported if idp_type is set to ”User directory of type SAML”
sign_logout_responses integer Whether the SAML logout responses should be signed with a SAML
signature.
Possible values:
0 - (default) Do not sign logout responses;
1 - Sign logout responses.
Property behavior:
- supported if idp_type is set to ”User directory of type SAML”
Media type mappings
The media type mapping object has the following properties.
userdirectory_mediaid ID Media type mapping ID.
Property behavior:
- read-only
name string Visible name in the list of media type mappings.
Property behavior:
- required
mediatypeid ID ID of the media type to be created; used as the value for the Media object property
mediatypeid.
Property behavior:
- required
attribute string LDAP/SAML user directory attribute (also SCIM attribute if scim_status is set to ”SCIM
provisioning is enabled”) that contains the user’s media (e.g., [email protected]) which is
used as the value for the Media object property sendto.
If present in data received from the LDAP/SAML identity provider, and the value is not empty,
this will trigger media creation for the provisioned user.
Property behavior:
- required
active integer User media active property value when media is created for the provisioned user.
Possible values:
0 - (default) enabled;
1 - disabled.
severity integer User media severity property value when media is created for the provisioned user.
Default: 63.
period string User media period property value when media is created for the provisioned user.
Default: 1-7,00:00-24:00.
Provisioning groups mappings
The provisioning groups mapping object has the following properties.
name string Full name of a group (e.g., Zabbix administrators) in the LDAP/SAML user directory (also SCIM if scim_status is set to ”SCIM provisioning is enabled”).
Supports the wildcard character ”*”.
Unique across all provisioning groups mappings.
Property behavior:
- required
roleid ID ID of the user role to assign to the user.
If multiple provisioning groups mappings are matched, the role of the highest user type (User,
Admin, or Super admin) is assigned to the user. If there are multiple roles with the same user
type, the first role (sorted in alphabetical order) is assigned to the user.
Property behavior:
- required
user_groups array Array of Zabbix user group ID objects. Each object has the following properties:
usrgrpid - (ID) ID of Zabbix user group to assign to the user.
If multiple provisioning groups mappings are matched, the Zabbix user groups of all matched mappings are assigned to the user.
Property behavior:
- required
userdirectory.create
Description
Note:
This method is only available to Super admin user type.
Parameters
Return values
(object) Returns an object containing the IDs of the created user directories under the userdirectoryids property. The order
of the returned IDs matches the order of the passed user directories.
Examples
Create a user directory to authenticate users with StartTLS over LDAP. Note that to authenticate users over LDAP, LDAP authentication must be enabled.
Request:
{
"jsonrpc": "2.0",
"method": "userdirectory.create",
"params": {
"idp_type": "1",
"name": "LDAP API server #1",
"host": "ldap://local.ldap",
"port": "389",
"base_dn": "ou=Users,dc=example,dc=org",
"bind_dn": "cn=ldap_search,dc=example,dc=org",
"bind_password": "ldapsecretpassword",
"search_attribute": "uid",
"start_tls": "1"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"userdirectoryids": [
"3"
]
},
"id": 1
}
Create a user directory to authenticate users over LDAP (with JIT provisioning enabled). Note that to authenticate users over LDAP,
LDAP authentication must be enabled.
Request:
{
"jsonrpc": "2.0",
"method": "userdirectory.create",
"params": {
"idp_type": "1",
"name": "AD server",
"provision_status": "1",
"description": "",
"host": "host.example.com",
"port": "389",
"base_dn": "DC=zbx,DC=local",
"search_attribute": "sAMAccountName",
"bind_dn": "CN=Admin,OU=Users,OU=Zabbix,DC=zbx,DC=local",
"start_tls": "0",
"search_filter": "",
"group_basedn": "OU=Zabbix,DC=zbx,DC=local",
"group_name": "CN",
"group_member": "member",
"group_filter": "(%{groupattr}=CN=%{ref},OU=Users,OU=Zabbix,DC=zbx,DC=local)",
"group_membership": "",
"user_username": "givenName",
"user_lastname": "sn",
"user_ref_attr": "CN",
"provision_media": [
{
"name": "example.com",
"mediatypeid": "1",
"attribute": "[email protected]"
}
],
"provision_groups": [
{
"name": "*",
"roleid": "4",
"user_groups": [
{
"usrgrpid": "8"
}
]
},
{
"name": "Zabbix administrators",
"roleid": "2",
"user_groups": [
{
"usrgrpid": "7"
},
{
"usrgrpid": "8"
}
]
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"userdirectoryids": [
"2"
]
},
"id": 1
}
Source
CUserDirectory::create() in ui/include/classes/api/services/CUserDirectory.php.
userdirectory.delete
Description
This method allows deleting user directories. A user directory cannot be deleted when it is directly used for at least one user group. The default LDAP user directory cannot be deleted when authentication.ldap_configured is set to 1 or when there are more user directories left.
Note:
This method is only available to Super admin user type.
Parameters
(array) IDs of the user directories to delete.
Return values
(object) Returns an object containing the IDs of the deleted user directories under the userdirectoryids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "userdirectory.delete",
"params": [
"2",
"12"
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"userdirectoryids": [
"2",
"12"
]
},
"id": 1
}
Source
CUserDirectory::delete() in ui/include/classes/api/services/CUserDirectory.php.
userdirectory.get
Description
Note:
This method is only available to Super admin user type.
Parameters
userdirectoryids ID/array Return only user directories with the given IDs.
selectUsrgrps query Return a usrgrps property with user groups associated with a user
directory.
Supports count.
selectProvisionMedia query Return a provision_media property with media type mappings
associated with a user directory.
selectProvisionGroups query Return a provision_groups property with provisioning groups
mappings associated with a user directory.
sortfield string/array Sort the result by the given properties.
Accepts an object, where the keys are property names, and the values
are either a single value or an array of values.
Supports properties: userdirectoryid, idp_type, provision_status.
search object Return results that match the given pattern (case-insensitive).
Accepts an object, where the keys are property names, and the values
are strings to search for. If no additional options are given, this will
perform a LIKE "%…%" search.
Return values
(integer/array) Returns either:
• an array of objects;
• the count of retrieved objects, if the countOutput parameter has been used.
Examples
Retrieve all user directories with additional properties that display media type mappings and provisioning groups mappings associated with each user directory.
Request:
{
"jsonrpc": "2.0",
"method": "userdirectory.get",
"params": {
"output": "extend",
"selectProvisionMedia": "extend",
"selectProvisionGroups": "extend"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"userdirectoryid": "1",
"idp_type": "2",
"name": "",
"provision_status": "1",
"description": "",
"group_name": "groups",
"user_username": "",
"user_lastname": "",
"idp_entityid": "http://example.com/simplesaml/saml2/idp/metadata.php",
"sso_url": "http://example.com/simplesaml/saml2/idp/SSOService.php",
"slo_url": "",
"username_attribute": "uid",
"sp_entityid": "zabbix",
"nameid_format": "",
"sign_messages": "0",
"sign_assertions": "0",
"sign_authn_requests": "0",
"sign_logout_requests": "0",
"sign_logout_responses": "0",
"encrypt_nameid": "0",
"encrypt_assertions": "0",
"scim_status": "1",
"provision_media": [
{
"userdirectory_mediaid": "1",
"name": "example.com",
"mediatypeid": "1",
"attribute": "[email protected]",
"active": "0",
"severity": "63",
"period": "1-7,00:00-24:00"
}
],
"provision_groups": [
{
"name": "*",
"roleid": "1",
"user_groups": [
{
"usrgrpid": "13"
}
]
}
]
},
{
"userdirectoryid": "2",
"idp_type": "1",
"name": "AD server",
"provision_status": "1",
"description": "",
"host": "host.example.com",
"port": "389",
"base_dn": "DC=zbx,DC=local",
"search_attribute": "sAMAccountName",
"bind_dn": "CN=Admin,OU=Users,OU=Zabbix,DC=zbx,DC=local",
"start_tls": "0",
"search_filter": "",
"group_basedn": "OU=Zabbix,DC=zbx,DC=local",
"group_name": "CN",
"group_member": "member",
"group_filter": "(%{groupattr}=CN=%{ref},OU=Users,OU=Zabbix,DC=zbx,DC=local)",
"group_membership": "",
"user_username": "givenName",
"user_lastname": "sn",
"user_ref_attr": "CN",
"provision_media": [
{
"userdirectory_mediaid": "2",
"name": "example.com",
"mediatypeid": "1",
"attribute": "[email protected]",
"active": "0",
"severity": "63",
"period": "1-7,00:00-24:00"
}
],
"provision_groups": [
{
"name": "*",
"roleid": "4",
"user_groups": [
{
"usrgrpid": "8"
}
]
},
{
"name": "Zabbix administrators",
"roleid": "2",
"user_groups": [
{
"usrgrpid": "7"
},
{
"usrgrpid": "8"
}
]
}
]
},
{
"userdirectoryid": "3",
"idp_type": "1",
"name": "LDAP API server #1",
"provision_status": "0",
"description": "",
"host": "ldap://local.ldap",
"port": "389",
"base_dn": "ou=Users,dc=example,dc=org",
"search_attribute": "uid",
"bind_dn": "cn=ldap_search,dc=example,dc=org",
"start_tls": "1",
"search_filter": "",
"group_basedn": "",
"group_name": "",
"group_member": "",
"group_filter": "",
"group_membership": "",
"user_username": "",
"user_lastname": "",
"user_ref_attr": "",
"provision_media": [],
"provision_groups": []
}
],
"id": 1
}
See also
• User group
Source
CUserDirectory::get() in ui/include/classes/api/services/CUserDirectory.php.
userdirectory.test
Description
Note:
This method also allows testing what configured data matches the user directory settings for user provisioning (e.g., what user role, user groups, and user media will be assigned to the user). For this type of test, the API request should be made for a user directory that has provision_status set to enabled.
Note:
This method is only available to Super admin user type.
Parameters
Return values
Request:
{
"jsonrpc": "2.0",
"method": "userdirectory.test",
"params": {
"userdirectoryid": "3",
"host": "127.0.0.1",
"port": "389",
"base_dn": "ou=Users,dc=example,dc=org",
"search_attribute": "uid",
"bind_dn": "cn=ldap_search,dc=example,dc=org",
"bind_password": "password",
"test_username": "user1",
"test_password": "password"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": true,
"id": 1
}
Request:
{
"jsonrpc": "2.0",
"method": "userdirectory.test",
"params": {
"userdirectoryid": "3",
"host": "127.0.0.1",
"port": "389",
"base_dn": "ou=Users,dc=example,dc=org",
"search_attribute": "uid",
"bind_dn": "cn=ldap_search,dc=example,dc=org",
"test_username": "user2",
"test_password": "password"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"error": {
"code": -32500,
"message": "Application error.",
"data": "Incorrect user name or password or account is temporarily blocked."
},
"id": 1
}
Test user directory ”2” to see what configured data matches the user directory settings for provisioning user ”user3” (e.g., what user role, user groups, and user media will be assigned to the user).
Request:
{
"jsonrpc": "2.0",
"method": "userdirectory.test",
"params": {
"userdirectoryid": "2",
"host": "host.example.com",
"port": "389",
"base_dn": "DC=zbx,DC=local",
"search_attribute": "sAMAccountName",
"bind_dn": "CN=Admin,OU=Users,OU=Zabbix,DC=zbx,DC=local",
"test_username": "user3",
"test_password": "password"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"username": "user3",
"name": "John",
"surname": "Doe",
"medias": [],
"usrgrps": [
{
"usrgrpid": "8"
},
{
"usrgrpid": "7"
}
],
"roleid": "2",
"userdirectoryid": "2"
},
"id": 1
}
Source
CUserDirectory::test() in ui/include/classes/api/services/CUserDirectory.php.
userdirectory.update
Description
Note:
This method is only available to Super admin user type.
Parameters
Return values
(object) Returns an object containing the IDs of the updated user directories under the userdirectoryids property.
Examples
Update the bind password of a user directory.
Request:
{
"jsonrpc": "2.0",
"method": "userdirectory.update",
"params": {
"userdirectoryid": "3",
"bind_password": "newldappassword"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"userdirectoryids": [
"3"
]
},
"id": 1
}
Update provisioning groups mappings and media type mappings for user directory ”2”.
Request:
{
"jsonrpc": "2.0",
"method": "userdirectory.update",
"params": {
"userdirectoryid": "2",
"provision_media": [
{
"userdirectory_mediaid": "2"
}
],
"provision_groups": [
{
"name": "Zabbix administrators",
"roleid": "2",
"user_groups": [
{
"usrgrpid": "7"
},
{
"usrgrpid": "8"
},
{
"usrgrpid": "11"
}
]
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"userdirectoryids": [
"2"
]
},
"id": 1
}
Source
CUserDirectory::update() in ui/include/classes/api/services/CUserDirectory.php.
User group
Object references:
• User group
• Permission
• Tag-based permission
Available methods:
• usergroup.create - creating new user groups;
• usergroup.delete - deleting user groups;
• usergroup.get - retrieving user groups;
• usergroup.update - updating user groups.
User group object
The user group object has the following properties.
usrgrpid ID ID of the user group.
Property behavior:
- read-only
- required for update operations
name string Name of the user group.
Property behavior:
- required for create operations
debug_mode integer Whether debug mode is enabled or disabled.
Possible values:
0 - (default) disabled;
1 - enabled.
gui_access integer Frontend authentication method of the users in the group.
Possible values:
0 - (default) use the system default authentication method;
1 - use internal authentication;
2 - use LDAP authentication;
3 - disable access to the frontend.
mfa_status integer Whether MFA is enabled or disabled for the users in the group.
Possible values:
0 - disabled (for all configured MFA methods);
1 - enabled (for all configured MFA methods).
mfaid ID MFA method used for the users in the group.
Property behavior:
- supported if mfa_status of Authentication object is set to ”Enabled”
users_status integer Whether the user group is enabled or disabled.
Possible values:
0 - (default) enabled;
1 - disabled.
userdirectoryid ID ID of the user directory used for authentication.
Property behavior:
- supported if gui_access is set to ”use the system default
authentication method” or ”use LDAP authentication”
Permission
The permission object has the following properties.
id ID ID of the host group or template group to add permission to.
Property behavior:
- required
permission integer Access level to the host group or template group.
Possible values:
0 - access denied;
2 - read-only access;
3 - read-write access.
Property behavior:
- required for create operations
Tag-based permission
The tag-based permission object has the following properties.
groupid ID ID of the host group to add permission to.
Property behavior:
- required
tag string Tag name.
value string Tag value.
usergroup.create
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
Return values
(object) Returns an object containing the IDs of the created user groups under the usrgrpids property. The order of the
returned IDs matches the order of the passed user groups.
Examples
Create a user group Operation managers with denied access to host group ”2”, and add a user to it.
Request:
{
"jsonrpc": "2.0",
"method": "usergroup.create",
"params": {
"name": "Operation managers",
"hostgroup_rights": {
"id": "2",
"permission": 0
},
"users": [
{
"userid": "12"
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"usrgrpids": [
"20"
]
},
"id": 1
}
See also
• Permission
Source
CUserGroup::create() in ui/include/classes/api/services/CUserGroup.php.
usergroup.delete
Description
This method allows deleting user groups.
Attention:
Deprovisioned users group (the user group specified for disabled_usrgrpid in Authentication) cannot be deleted.
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
(array) IDs of the user groups to delete.
Return values
(object) Returns an object containing the IDs of the deleted user groups under the usrgrpids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "usergroup.delete",
"params": [
"20",
"21"
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"usrgrpids": [
"20",
"21"
]
},
"id": 1
}
Source
CUserGroup::delete() in ui/include/classes/api/services/CUserGroup.php.
usergroup.get
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
Parameter Type Description
mfaids ID/array Return only user groups with the given MFA methods.
mfa_status integer Return only user groups with the given MFA status.
selectHostGroupRights query Return user group host group permissions in the hostgroup_rights property.
Refer to the user group page for a list of access levels to host groups.
selectTemplateGroupRights query Return user group template group permissions in the templategroup_rights property.
Refer to the user group page for a list of access levels to template groups.
limitSelects integer Limits the number of records returned by subselects.
sortfield string/array Sort the result by the given properties.
Return values
(integer/array) Returns either:
• an array of objects;
• the count of retrieved objects, if the countOutput parameter has been used.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "usergroup.get",
"params": {
"output": "extend",
"status": 0
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"usrgrpid": "7",
"name": "Zabbix administrators",
"gui_access": "0",
"users_status": "0",
"debug_mode": "1",
"userdirectoryid": "0",
"mfa_status": "0",
"mfaid": "0"
},
{
"usrgrpid": "8",
"name": "Guests",
"gui_access": "0",
"users_status": "0",
"debug_mode": "0",
"userdirectoryid": "0",
"mfa_status": "0",
"mfaid": "0"
},
{
"usrgrpid": "11",
"name": "Enabled debug mode",
"gui_access": "0",
"users_status": "0",
"debug_mode": "1",
"userdirectoryid": "0",
"mfa_status": "0",
"mfaid": "0"
},
{
"usrgrpid": "12",
"name": "No access to the frontend",
"gui_access": "2",
"users_status": "0",
"debug_mode": "0",
"userdirectoryid": "0",
"mfa_status": "0",
"mfaid": "0"
},
{
"usrgrpid": "14",
"name": "Read only",
"gui_access": "0",
"users_status": "0",
"debug_mode": "0",
"userdirectoryid": "0",
"mfa_status": "0",
"mfaid": "0"
},
{
"usrgrpid": "18",
"name": "Deny",
"gui_access": "0",
"users_status": "0",
"debug_mode": "0",
"userdirectoryid": "0",
"mfa_status": "0",
"mfaid": "0"
}
],
"id": 1
}
See also
• User
Source
CUserGroup::get() in ui/include/classes/api/services/CUserGroup.php.
usergroup.update
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
Additionally to the standard user group properties, the method accepts the following parameters.
hostgroup_rights object/array Host group permissions to replace the current permissions assigned to
the user group.
templategroup_rights object/array Template group permissions to replace the current permissions
assigned to the user group.
tag_filters array Tag-based permissions to replace the current permissions assigned to
the user group.
users object/array Users to replace the current users assigned to the user group.
Return values
(object) Returns an object containing the IDs of the updated user groups under the usrgrpids property.
Examples
Enable a user group and provide read-write access for it to host groups ”2” and ”4”.
Request:
{
"jsonrpc": "2.0",
"method": "usergroup.update",
"params": {
"usrgrpid": "17",
"users_status": "0",
"hostgroup_rights": [
{
"id": "2",
"permission": 3
},
{
"id": "4",
"permission": 3
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"usrgrpids": [
"17"
]
},
"id": 1
}
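Tag-based permissions are replaced in the same way via the tag_filters parameter. A minimal sketch of such a request (the user group ID, host group ID, and the tag/value pair are placeholders for illustration only; the response is analogous to the one above):
Request:
{
    "jsonrpc": "2.0",
    "method": "usergroup.update",
    "params": {
        "usrgrpid": "17",
        "tag_filters": [
            {
                "groupid": "2",
                "tag": "service",
                "value": "mysql"
            }
        ]
    },
    "id": 1
}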
See also
• Permission
Source
CUserGroup::update() in ui/include/classes/api/services/CUserGroup.php.
User macro
This class is designed to work with host-level and global user macros.
Object references:
• Global macro
• Host macro
Available methods:
• usermacro.create - create new host macros
• usermacro.createglobal - create new global macros
• usermacro.delete - delete host macros
• usermacro.deleteglobal - delete global macros
• usermacro.get - retrieve host and global macros
• usermacro.update - update host macros
• usermacro.updateglobal - update global macros
Global macro
The global macro object has the following properties.
globalmacroid ID ID of the global macro.
Property behavior:
- read-only
- required for update operations
macro string Macro string.
Property behavior:
- required for create operations
value string Value of the macro.
Property behavior:
- write-only if type is set to ”Secret macro”
- required for create operations
type integer Type of macro.
Possible values:
0 - (default) Text macro;
1 - Secret macro;
2 - Vault secret.
description string Description of the macro.
Host macro
The host macro object defines a macro available on a host, host prototype or template. It has the following properties.
hostmacroid ID ID of the host macro.
Property behavior:
- read-only
- required for update operations
hostid ID ID of the host that the macro belongs to.
Property behavior:
- constant
- required for create operations
macro string Macro string.
Property behavior:
- required for create operations
value string Value of the macro.
Property behavior:
- write-only if type is set to ”Secret macro”
- required for create operations
type integer Type of macro.
Possible values:
0 - (default) Text macro;
1 - Secret macro;
2 - Vault secret.
description string Description of the macro.
automatic integer Whether the macro is managed by a user or by a discovery rule.
Possible values:
0 - (default) Macro is managed by user;
1 - Macro is managed by discovery rule.
usermacro.create
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
Return values
(object) Returns an object containing the IDs of the created host macros under the hostmacroids property. The order of the
returned IDs matches the order of the passed host macros.
Examples
Create a host macro ”{$SNMP_COMMUNITY}” with the value ”public” on host ”10198”.
Request:
{
"jsonrpc": "2.0",
"method": "usermacro.create",
"params": {
"hostid": "10198",
"macro": "{$SNMP_COMMUNITY}",
"value": "public"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"hostmacroids": [
"11"
]
},
"id": 1
}
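A secret macro is created in the same way by setting type to ”1”. The host ID, macro name and value below are illustrative placeholders; note that the value of a secret macro cannot be read back through the API afterwards:
Request:
{
    "jsonrpc": "2.0",
    "method": "usermacro.create",
    "params": {
        "hostid": "10198",
        "macro": "{$DB.PASSWORD}",
        "value": "s3cret",
        "type": "1",
        "description": "Database password"
    },
    "id": 1
}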
Source
CUserMacro::create() in ui/include/classes/api/services/CUserMacro.php.
usermacro.createglobal
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
Return values
(object) Returns an object containing the IDs of the created global macros under the globalmacroids property. The order of
the returned IDs matches the order of the passed global macros.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "usermacro.createglobal",
"params": {
"macro": "{$SNMP_COMMUNITY}",
"value": "public"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"globalmacroids": [
"6"
]
},
"id": 1
}
Source
CUserMacro::createGlobal() in ui/include/classes/api/services/CUserMacro.php.
usermacro.delete
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
(array) IDs of the host macros to delete.
Return values
(object) Returns an object containing the IDs of the deleted host macros under the hostmacroids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "usermacro.delete",
"params": [
"32",
"11"
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"hostmacroids": [
"32",
"11"
]
},
"id": 1
}
Source
CUserMacro::delete() in ui/include/classes/api/services/CUserMacro.php.
usermacro.deleteglobal
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
(array) IDs of the global macros to delete.
Return values
(object) Returns an object containing the IDs of the deleted global macros under the globalmacroids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "usermacro.deleteglobal",
"params": [
"32",
"11"
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"globalmacroids": [
"32",
"11"
]
},
"id": 1
}
Source
CUserMacro::deleteGlobal() in ui/include/classes/api/services/CUserMacro.php.
usermacro.get
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
Parameter Type Description
selectTemplates query Return templates that the host macro belongs to in the templates
property.
Return values
(integer/array) Returns either:
• an array of objects;
• the count of retrieved objects, if the countOutput parameter has been used.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "usermacro.get",
"params": {
"output": "extend",
"hostids": "10198"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"hostmacroid": "9",
"hostid": "10198",
"macro": "{$INTERFACE}",
"value": "eth0",
"description": "",
"type": "0",
"automatic": "0"
},
{
"hostmacroid": "11",
"hostid": "10198",
"macro": "{$SNMP_COMMUNITY}",
"value": "public",
"description": "",
"type": "0",
"automatic": "0"
}
],
"id": 1
}
Request:
{
"jsonrpc": "2.0",
"method": "usermacro.get",
"params": {
"output": "extend",
"globalmacro": true
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"globalmacroid": "6",
"macro": "{$SNMP_COMMUNITY}",
"value": "public",
"description": "",
"type": "0"
}
],
"id": 1
}
Source
CUserMacro::get() in ui/include/classes/api/services/CUserMacro.php.
usermacro.update
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
The hostmacroid property must be defined for each host macro; all other properties are optional. Only the passed properties will be updated; all others will remain unchanged.
Return values
(object) Returns an object containing the IDs of the updated host macros under the hostmacroids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "usermacro.update",
"params": {
"hostmacroid": "1",
"value": "public"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"hostmacroids": [
"1"
]
},
"id": 1
}
Convert a discovery-rule-created ”automatic” macro to ”manual” and change its value to ”new-value”.
Request:
{
"jsonrpc": "2.0",
"method": "usermacro.update",
"params": {
"hostmacroid": "1",
"value": "new-value",
"automatic": "0"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"hostmacroids": [
"1"
]
},
"id": 1
}
Source
CUserMacro::update() in ui/include/classes/api/services/CUserMacro.php.
usermacro.updateglobal
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
Return values
(object) Returns an object containing the IDs of the updated global macros under the globalmacroids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "usermacro.updateglobal",
"params": {
"globalmacroid": "1",
"value": "public"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"globalmacroids": [
"1"
]
},
"id": 1
}
Source
CUserMacro::updateGlobal() in ui/include/classes/api/services/CUserMacro.php.
Value map
Object references:
• Value map
• Value mappings
Available methods:
• valuemap.create - create new value maps
• valuemap.delete - delete value maps
• valuemap.get - retrieve value maps
• valuemap.update - update value maps
The value map object has the following properties.
valuemapid ID ID of the value map.
Property behavior:
- read-only
- required for update operations
hostid ID ID of the host or template that the value map belongs to.
Property behavior:
- constant
- required for create operations
name string Name of the value map.
Property behavior:
- required for create operations
mappings array Value mappings for current value map. The mapping object is
described in detail below.
Property behavior:
- required for create operations
uuid string Universal unique identifier, used for linking imported value maps to
already existing ones. Auto-generated, if not given.
Property behavior:
- supported if the value map belongs to a template
Value mappings
The value mappings object defines value mappings of the value map. It has the following properties.
type integer Mapping match type.
Possible values:
0 - (default) mapping will be applied if value is equal;
1 - mapping will be applied if value is greater or equal¹;
2 - mapping will be applied if value is less or equal¹;
3 - mapping will be applied if value is in range (ranges are inclusive; multiple ranges, separated by comma character, can be defined)¹;
4 - mapping will be applied if value matches a regular expression²;
5 - if no matches are found, mapping will not be applied, and the default value will be used.
value string Original value.
If type is set to ”0”, ”1”, ”2”, ”3”, ”4”, then value cannot be empty.
Property behavior:
- required if type is set to ”1”, ”2”, ”3”, ”4”
- supported if type is set to ”5”
newvalue string Value to which the original value is mapped to.
Property behavior:
- required
¹ supported only for items having value type ”numeric unsigned”, ”numeric float”.
² supported only for items having value type ”character”.
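As a sketch of how the different match types are combined, a mappings array with a range match, a regular expression match and a default entry could look as follows (the value map ID and the mapped values are placeholders; range and regular expression mappings only take effect for the item value types noted above):
Request:
{
    "jsonrpc": "2.0",
    "method": "valuemap.update",
    "params": {
        "valuemapid": "4",
        "mappings": [
            {
                "type": "3",
                "value": "1-5,10",
                "newvalue": "Degraded"
            },
            {
                "type": "4",
                "value": "^error",
                "newvalue": "Error"
            },
            {
                "type": "5",
                "newvalue": "Unknown"
            }
        ]
    },
    "id": 1
}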
valuemap.create
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
Return values
(object) Returns an object containing the IDs of the created value maps under the valuemapids property. The order of the returned IDs matches the order of the passed value maps.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "valuemap.create",
"params": {
"hostid": "50009",
"name": "Service state",
"mappings": [
{
"type": "1",
"value": "1",
"newvalue": "Up"
},
{
"type": "5",
"newvalue": "Down"
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"valuemapids": [
"1"
]
},
"id": 1
}
Source
CValueMap::create() in ui/include/classes/api/services/CValueMap.php.
valuemap.delete
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
(array) IDs of the value maps to delete.
Return values
(object) Returns an object containing the IDs of the deleted value maps under the valuemapids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "valuemap.delete",
"params": [
"1",
"2"
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"valuemapids": [
"1",
"2"
]
},
"id": 1
}
Source
CValueMap::delete() in ui/include/classes/api/services/CValueMap.php.
valuemap.get
Description
integer/array valuemap.get(object parameters)
The method allows retrieving value maps according to the given parameters.
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
valuemapids ID/array Return only value maps with the given IDs.
selectMappings query Return the value mappings for current value map in the mappings
property.
Supports count.
sortfield string/array Sort the result by the given properties.
Return values
(integer/array) Returns either:
• an array of objects;
• the count of retrieved objects, if the countOutput parameter has been used.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "valuemap.get",
"params": {
"output": "extend"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"valuemapid": "4",
"name": "APC Battery Replacement Status"
},
{
"valuemapid": "5",
"name": "APC Battery Status"
},
{
"valuemapid": "7",
"name": "Dell Open Manage System Status"
}
],
"id": 1
}
Request:
{
"jsonrpc": "2.0",
"method": "valuemap.get",
"params": {
"output": "extend",
"selectMappings": "extend",
"valuemapids": ["4"]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"valuemapid": "4",
"name": "APC Battery Replacement Status",
"mappings": [
{
"type": "0",
"value": "1",
"newvalue": "unknown"
},
{
"type": "0",
"value": "2",
"newvalue": "notInstalled"
},
{
"type": "0",
"value": "3",
"newvalue": "ok"
},
{
"type": "0",
"value": "4",
"newvalue": "failed"
},
{
"type": "0",
"value": "5",
"newvalue": "highTemperature"
},
{
"type": "0",
"value": "6",
"newvalue": "replaceImmediately"
},
{
"type": "0",
"value": "7",
"newvalue": "lowCapacity"
}
]
}
],
"id": 1
}
Source
CValueMap::get() in ui/include/classes/api/services/CValueMap.php.
valuemap.update
Description
Note:
This method is only available to Super admin user type. Permissions to call the method can be revoked in user role settings.
See User roles for more information.
Parameters
Return values
(object) Returns an object containing the IDs of the updated value maps under the valuemapids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "valuemap.update",
"params": {
"valuemapid": "2",
"name": "Device status"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"valuemapids": [
"2"
]
},
"id": 1
}
Request:
{
"jsonrpc": "2.0",
"method": "valuemap.update",
"params": {
"valuemapid": "2",
"mappings": [
{
"type": "0",
"value": "0",
"newvalue": "Online"
},
{
"type": "0",
"value": "1",
"newvalue": "Offline"
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"valuemapids": [
"2"
]
},
"id": 1
}
Source
CValueMap::update() in ui/include/classes/api/services/CValueMap.php.
Web scenario
Object references:
• Web scenario
• Web scenario tag
• Scenario step
• HTTP field
Available methods:
• httptest.create - create new web scenarios
• httptest.delete - delete web scenarios
• httptest.get - retrieve web scenarios
• httptest.update - update web scenarios
The web scenario object has the following properties.
httptestid ID ID of the web scenario.
Property behavior:
- read-only
- required for update operations
hostid ID ID of the host that the web scenario belongs to.
Property behavior:
- constant
- required for create operations
name string Name of the web scenario.
Property behavior:
- required for create operations
agent string User agent string that will be used by the web scenario.
Default: Zabbix
authentication integer Authentication method that will be used by the web scenario.
Possible values:
0 - (default) none;
1 - basic HTTP authentication;
2 - NTLM authentication.
delay string Execution interval of the web scenario.
Accepts seconds or time unit with suffix (e.g., 30s, 1m, 2h, 1d), or a
user macro.
Default: 1m.
headers array HTTP headers that will be sent when performing a request.
http_password string Password used for basic HTTP or NTLM authentication.
http_proxy string Proxy that will be used by the web scenario given as
http://[username[:password]@]proxy.example.com[:port].
http_user string User name used for basic HTTP or NTLM authentication.
retries integer Number of times a web scenario will try to execute each step before
failing.
Default: 1.
ssl_cert_file string Name of the SSL certificate file used for client authentication (must be
in PEM format).
ssl_key_file string Name of the SSL private key file used for client authentication (must
be in PEM format).
ssl_key_password string SSL private key password.
status integer Whether the web scenario is enabled.
Possible values:
0 - (default) enabled;
1 - disabled.
templateid ID ID of the parent template web scenario.
Property behavior:
- read-only
variables array Web scenario variables.
verify_host integer Whether to validate that the host name for the connection matches the
one in the host’s certificate.
Possible values:
0 - (default) skip host verification;
1 - verify host.
verify_peer integer Whether to validate that the host’s certificate is authentic.
Possible values:
0 - (default) skip peer verification;
1 - verify peer.
uuid string Global unique identifier, used for linking imported web scenarios to
already existing ones. Auto-generated, if not given.
Property behavior:
- supported if the web scenario belongs to a template
Scenario step
The scenario step object defines a specific web scenario check. It has the following properties.
name string Name of the scenario step.
Property behavior:
- required
no integer Sequence number of the step in a web scenario.
Property behavior:
- required
url string URL to be checked.
Property behavior:
- required
follow_redirects integer Whether to follow HTTP redirects.
Possible values:
0 - don’t follow redirects;
1 - (default) follow redirects.
headers array HTTP headers that will be sent when performing a request. Scenario
step headers will overwrite headers specified for the web scenario.
posts string/array HTTP POST variables as a string (raw post data) or as an array of HTTP
fields (form field data).
required string Text that must be present in the response.
retrieve_mode integer Part of the HTTP response that the scenario step must retrieve.
Possible values:
0 - (default) only body;
1 - only headers;
2 - headers and body.
status_codes string Ranges of required HTTP status codes, separated by commas.
timeout string Request timeout in seconds. Accepts seconds, time unit with suffix, or
a user macro.
HTTP field
The HTTP field object defines the name and value that is used to specify the web scenario variables, HTTP headers, and POST fields
or query fields. It has the following properties.
name string Name of header/variable/POST or GET field.
Property behavior:
- required
value string Value of header/variable/POST or GET field.
Property behavior:
- required
httptest.create
Description
Note:
Creating a web scenario will automatically create a set of web monitoring items.
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
Return values
(object) Returns an object containing the IDs of the created web scenarios under the httptestids property. The order of the
returned IDs matches the order of the passed web scenarios.
Examples
Create a web scenario to monitor the company home page. The scenario will have two steps: one to check the home page and one to check the ”About” page, making sure both return HTTP status code 200.
Request:
{
"jsonrpc": "2.0",
"method": "httptest.create",
"params": {
"name": "Homepage check",
"hostid": "10085",
"steps": [
{
"name": "Homepage",
"url": "http://example.com",
"status_codes": "200",
"no": 1
},
{
"name": "Homepage / About",
"url": "http://example.com/about",
"status_codes": "200",
"no": 2
}
]
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"httptestids": [
"5"
]
},
"id": 1
}
See also
• Scenario step
Source
CHttpTest::create() in ui/include/classes/api/services/CHttpTest.php.
httptest.delete
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
(array) IDs of the web scenarios to delete.
Return values
(object) Returns an object containing the IDs of the deleted web scenarios under the httptestids property.
Examples
Delete two web scenarios.
Request:
{
"jsonrpc": "2.0",
"method": "httptest.delete",
"params": [
"2",
"3"
],
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"httptestids": [
"2",
"3"
]
},
"id": 1
}
Source
CHttpTest::delete() in ui/include/classes/api/services/CHttpTest.php.
httptest.get
Description
Note:
This method is available to users of any type. Permissions to call the method can be revoked in user role settings. See
User roles for more information.
Parameters
groupids ID/array Return only web scenarios that belong to the given host groups.
hostids ID/array Return only web scenarios that belong to the given hosts.
httptestids ID/array Return only web scenarios with the given IDs.
inherited boolean If set to true return only web scenarios inherited from a template.
monitored boolean If set to true return only enabled web scenarios that belong to
monitored hosts.
templated boolean If set to true return only web scenarios that belong to templates.
templateids ID/array Return only web scenarios that belong to the given templates.
expandName flag Expand macros in the name of the web scenario.
expandStepName flag Expand macros in the names of scenario steps.
evaltype integer Rules for tag searching.
Possible values:
0 - (default) And/Or;
2 - Or.
tags array Return only web scenarios with given tags. Exact match by tag and
case-sensitive or case-insensitive search by tag value depending on
operator value.
[{"tag": "<tag>", "value": "<value>",
Format:
"operator": "<operator>"}, ...].
An empty array returns all web scenarios.
Supports count.
selectTags query Return web scenario tags in the tags property.
sortfield string/array Sort the result by the given properties.
Return values
(integer/array) Returns either:
• an array of objects;
• the count of retrieved objects, if the countOutput parameter has been used.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "httptest.get",
"params": {
"output": "extend",
"selectSteps": "extend",
"httptestids": "9"
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"httptestid": "9",
"name": "Homepage check",
"delay": "1m",
"status": "0",
"variables": [],
"agent": "Zabbix",
"authentication": "0",
"http_user": "",
"http_password": "",
"hostid": "10084",
"templateid": "0",
"http_proxy": "",
"retries": "1",
"ssl_cert_file": "",
"ssl_key_file": "",
"ssl_key_password": "",
"verify_peer": "0",
"verify_host": "0",
"headers": [],
"steps": [
{
"httpstepid": "36",
"httptestid": "9",
"name": "Homepage",
"no": "1",
"url": "http://example.com",
"timeout": "15s",
"posts": "",
"required": "",
"status_codes": "200",
"variables": [
{
"name":"{var}",
"value":"12"
}
],
"follow_redirects": "1",
"retrieve_mode": "0",
"headers": [],
"query_fields": []
},
{
"httpstepid": "37",
"httptestid": "9",
"name": "Homepage / About",
"no": "2",
"url": "http://example.com/about",
"timeout": "15s",
"posts": "",
"required": "",
"status_codes": "200",
"variables": [],
"follow_redirects": "1",
"retrieve_mode": "0",
"headers": [],
"query_fields": []
}
]
}
],
"id": 1
}
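For illustration, web scenarios can also be filtered by tags. In the sketch below the tag name, value, and operator ”1” (assumed here to request an exact value match) are placeholders for your own filtering rules:
Request:
{
    "jsonrpc": "2.0",
    "method": "httptest.get",
    "params": {
        "output": ["httptestid", "name"],
        "selectTags": "extend",
        "evaltype": 0,
        "tags": [
            {
                "tag": "component",
                "value": "health",
                "operator": 1
            }
        ]
    },
    "id": 1
}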
See also
• Host
• Scenario step
Source
CHttpTest::get() in ui/include/classes/api/services/CHttpTest.php.
httptest.update
Description
Note:
This method is only available to Admin and Super admin user types. Permissions to call the method can be revoked in user
role settings. See User roles for more information.
Parameters
Additionally to the standard web scenario properties, the method accepts the following parameters.
steps object/array Scenario steps to replace the existing scenario steps.
Return values
(object) Returns an object containing the IDs of the updated web scenarios under the httptestids property.
Examples
Request:
{
"jsonrpc": "2.0",
"method": "httptest.update",
"params": {
"httptestid": "5",
"status": 0
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"httptestids": [
"5"
]
},
"id": 1
}
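HTTP fields from the Scenario step and HTTP field objects are passed in the same way. The following sketch replaces the scenario steps with a single step that submits form-field POST data and a custom header (the scenario ID, URL, and field names are placeholders, not an existing configuration):
Request:
{
    "jsonrpc": "2.0",
    "method": "httptest.update",
    "params": {
        "httptestid": "5",
        "steps": [
            {
                "name": "Login",
                "no": 1,
                "url": "http://example.com/login",
                "status_codes": "200",
                "posts": [
                    {
                        "name": "login",
                        "value": "user"
                    },
                    {
                        "name": "password",
                        "value": "secret"
                    }
                ],
                "headers": [
                    {
                        "name": "X-Requested-With",
                        "value": "XMLHttpRequest"
                    }
                ]
            }
        ]
    },
    "id": 1
}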
See also
• Scenario step
Source
CHttpTest::update() in ui/include/classes/api/services/CHttpTest.php.
Attention:
Zabbix API always returns values as strings or arrays only.
Property behavior
Some of the object properties are marked with short labels to describe their behavior. The following labels are used:
• read-only - the value of the property is set automatically and cannot be defined or changed by the user, even in some specific
conditions (e.g., read-only for inherited objects or discovered objects);
• write-only - the value of the property can be set, but cannot be accessed after;
• constant - the value of the property can be set when creating an object, but cannot be changed after;
• supported - the value of the property is not required to be set, but is allowed to be set in some specific conditions (e.g.,
supported if type is set to ”Simple check”, ”External check”, ”SSH agent”, ”TELNET agent”, or ”HTTP agent”);
• required - the value of the property is required to be set for all operations (except get operations) or in some specific
conditions (e.g., required for create operations; required if operationtype is set to ”global script” and opcommand_hst is
not set).
Note:
For update operations a property is considered as ”set” when setting it during the update operation.
Parameter behavior
Some of the operation parameters are marked with short labels to describe their behavior for the operation. The following labels
are used:
• read-only - the value of the parameter is set automatically and cannot be defined or changed by the user, even in some
specific conditions (e.g., read-only for inherited objects or discovered objects);
• write-only - the value of the parameter can be set, but cannot be accessed after;
• supported - the value of the parameter is not required to be set, but is allowed to be set in some specific conditions (e.g.,
supported if operating_mode of Proxy object is set to ”passive proxy”);
• required - the value of the parameter is required to be set.
Reserved ID value ”0”
Reserved ID value ”0” can be used to filter elements and to remove referenced objects. For example, to remove a referenced proxy from a host, proxyid should be set to 0 (”proxyid”: ”0”); to filter hosts monitored by the server, the proxyids option should be set to 0 (”proxyids”: ”0”).
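A minimal sketch of the second case, selecting only hosts that are monitored directly by Zabbix server (the output fields are chosen for illustration only):
Request:
{
    "jsonrpc": "2.0",
    "method": "host.get",
    "params": {
        "output": ["hostid", "host"],
        "proxyids": "0"
    },
    "id": 1
}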
Common ”get” method parameters
The following parameters are supported by all get methods:
countOutput boolean Return the number of records in the result instead of the actual data.
editable boolean If set to true return only objects that the user has write permissions to.
Default: false.
excludeSearch boolean Return results that do not match the criteria given in the search parameter.
filter object Return only those results that exactly match the given filter.
Accepts an object, where the keys are property names, and the values are either a single
value or an array of values to match against.
output query Object properties to be returned.
Default: extend.
preservekeys boolean Use IDs as keys in the resulting array.
search object Return results that match the given pattern (case-insensitive).
Accepts an object, where the keys are property names, and the values are strings to search
for. If no additional options are given, this will perform a LIKE "%…%" search.
searchByAny boolean If set to true return results that match any of the criteria given in the filter or search parameter instead of all of them.
Default: false.
searchWildcardsEnabled boolean If set to true enables the use of ”*” as a wildcard character in the search parameter.
Default: false.
sortfield string/array Sort the result by the given properties. Refer to a specific API get method description for a list
of properties that can be used for sorting. Macros are not expanded before sorting.
sortorder string/array Order of sorting. If an array is passed, then each value will be matched to the corresponding property given in the sortfield parameter.
Possible values:
ASC - (default) ascending;
DESC - descending.
startSearch boolean The search parameter will compare the beginning of fields, that is, perform a LIKE "…%"
search instead.
Does the user have permission to write to hosts whose names begin with ”MySQL” or ”Linux”?
Request:
{
"jsonrpc": "2.0",
"method": "host.get",
"params": {
"countOutput": true,
"search": {
"host": ["MySQL", "Linux"]
},
"editable": true,
"startSearch": true,
"searchByAny": true
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": "0",
"id": 1
}
Note:
Zero result means no hosts with read/write permissions.
Mismatch counting
Count the number of hosts whose names do not contain the substring ”ubuntu”
Request:
{
"jsonrpc": "2.0",
"method": "host.get",
"params": {
"countOutput": true,
"search": {
"host": "ubuntu"
},
"excludeSearch": true
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": "44",
"id": 1
}
Find hosts whose name contains word ”server” and have interface ports ”10050” or ”10071”. Sort the result by host name in
descending order and limit it to 5 hosts.
Request:
{
"jsonrpc": "2.0",
"method": "host.get",
"params": {
"output": ["hostid", "host"],
"selectInterfaces": ["port"],
"filter": {
"port": ["10050", "10071"]
},
"search": {
"host": "*server*"
},
"searchWildcardsEnabled": true,
"searchByAny": true,
"sortfield": "host",
"sortorder": "DESC",
"limit": 5
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": [
{
"hostid": "50003",
"host": "WebServer-Tomcat02",
"interfaces": [
{
"port": "10071"
}
]
},
{
"hostid": "50005",
"host": "WebServer-Tomcat01",
"interfaces": [
{
"port": "10071"
}
]
},
{
"hostid": "50004",
"host": "WebServer-Nginx",
"interfaces": [
{
"port": "10071"
}
]
},
{
"hostid": "99032",
"host": "MySQL server 01",
"interfaces": [
{
"port": "10050"
}
]
},
{
"hostid": "99061",
"host": "Linux server 01",
"interfaces": [
{
"port": "10050"
}
]
}
],
"id": 1
}
If you add the parameter ”preservekeys” to the previous request, the result is returned as an associative array, where the keys are the IDs of the objects.
Request:
{
"jsonrpc": "2.0",
"method": "host.get",
"params": {
"output": ["hostid", "host"],
"selectInterfaces": ["port"],
"filter": {
"port": ["10050", "10071"]
},
"search": {
"host": "*server*"
},
"searchWildcardsEnabled": true,
"searchByAny": true,
"sortfield": "host",
"sortorder": "DESC",
"limit": 5,
"preservekeys": true
},
"id": 1
}
Response:
{
"jsonrpc": "2.0",
"result": {
"50003": {
"hostid": "50003",
"host": "WebServer-Tomcat02",
"interfaces": [
{
"port": "10071"
}
]
},
"50005": {
"hostid": "50005",
"host": "WebServer-Tomcat01",
"interfaces": [
{
"port": "10071"
}
]
},
"50004": {
"hostid": "50004",
"host": "WebServer-Nginx",
"interfaces": [
{
"port": "10071"
}
]
},
"99032": {
"hostid": "99032",
"host": "MySQL server 01",
"interfaces": [
{
"port": "10050"
}
]
},
"99061": {
"hostid": "99061",
"host": "Linux server 01",
"interfaces": [
{
"port": "10050"
}
]
}
},
"id": 1
}
authentication
connector
ZBXNEXT-8735 Added new property item_value_type, which is supported if data_type is set to ”Item values” (0).
ZBXNEXT-8735 Added new property attempt_interval, which is supported if max_attempts is greater than 1.
dashboard
ZBXNEXT-8316, ZBXNEXT-9193, ZBX-24488, ZBX-24490 Renamed dashboard widget type plaintext to itemhistory, replaced
its dashboard widget fields itemids.0, style, show_as_html with columns.0.itemid, layout, columns.0.display, and
added new dashboard widget fields.
ZBXNEXT-8496 Replaced dashboard widget fields columns.0.timeshift, columns.0.aggregate_interval with columns.0.time_period.from, columns.0.time_period.to in tophosts widget.
ZBXNEXT-2299 Replaced dashboard widget field unacknowledged with two new fields acknowledgement_status and
acknowledged_by_me in problems widget.
ZBXNEXT-8245 Removed dashboard widget field adv_conf in clock and item widgets.
ZBXNEXT-8145 Changed dashboard widget field naming: complex data fields renamed from str.str.index1.index2 to
str.index1.str.index2 (e.g. thresholds.0.threshold.1, ds.0.hosts.1); fields referencing database objects renamed
from str to str.index1 (e.g. itemid.0, severities.0).
ZBXNEXT-8145 Replaced dashboard widget field filter_widget_reference with sysmapid._reference, and removed field
source_type in map widget.
ZBXNEXT-8145 Replaced dashboard widget field dynamic with override_hostid._reference in gauge, graph, graphprototype,
item, plaintext, and url widgets.
ZBXNEXT-8145 Replaced dashboard widget fields graph_time with time_period._reference, time_from with time_period.from,
time_to with time_period.to in svggraph widget.
ZBXNEXT-9044 Changed the value range of the dashboard widget parameters x (from 0-23 to 0-71) and y (from 0-62 to 0-63) as well as width (from 1–24 to 1–72) and height (from 2–32 to 1–64).
discoveryrule
event
host
hostgroup
item
ZBXNEXT-7726 The params property is now required for preprocessing steps of the type ”Check for not supported value”.
ZBXNEXT-7578 item.get, item.create, item.update: The properties headers and query_fields changed from name-
indexed object to array of objects with separate name and value properties.
item prototype
ZBXNEXT-7726 The params property is now required for preprocessing steps of the type ”Check for not supported value”.
ZBXNEXT-7578 itemprototype.get, itemprototype.create, itemprototype.update: The properties headers and
query_fields changed from name-indexed object to array of objects with separate name and value properties.
problem
proxy
script
ZBXNEXT-8880 script.create and script.update: Parameter execute_on value ”1” (run on Zabbix server) will be supported only if execution of global scripts is enabled on Zabbix server.
ZBXNEXT-8121 script.getscriptsbyhosts: Method no longer accepts an array of host IDs. It now accepts an object with the following parameters: hostid, scriptid, manualinput.
ZBXNEXT-8121 script.getscriptsbyevents: Method no longer accepts an array of event IDs. It now accepts an object with the following parameters: eventid, scriptid, manualinput.
task
templatedashboard
ZBXNEXT-9044 Changed the value range of the dashboard widget parameters x (from 0-23 to 0-71 ) and y (from 0-62 to 0-63) as
well as width (from 1–24 to 1–72) and height (from 2–32 to 1–64).
user
userdirectory
usergroup
ZBXNEXT-8760 usergroup.update: Added restriction on changes of group users for provisioned users.
action
ZBXNEXT-6524 Added support of two new values in operationtype property (13 - Add host tags, 14 - Remove host tags) and new property optag for two eventsource action types (1 - Discovery, 2 - Autoregistration) available only in the operations property.
ZBX-21850 action.get: Filter conditions will be sorted in the order of their placement in the formula.
auditlog
ZBXNEXT-8541 Added new audit log entry action (12 - Push) and resource type (53 - History).
authentication
correlation
ZBX-21850 correlation.get: Filter conditions will be sorted in the order of their placement in the formula.
dashboard
actionlog, graph, graphprototype, and toptriggers widgets.
dcheck
discoveryrule
drule
graph
ZBXNEXT-2020 graph.get: Method now also supports status property if selectGraphDiscovery parameter is used.
history
host
hostgroup
ZBXNEXT-2020 hostgroup.get: Method now also supports status property if selectGroupDiscoveries parameter is used.
item
ZBXNEXT-7578 It is now possible to store more data for the query_fields property and to have repeated header and query_fields entries.
ZBXNEXT-2020 item.get: Method now also supports status, ts_disable and disable_source properties if selectItemDiscovery
parameter is used.
item prototype
mediatype
mfa
ZBXNEXT-6876 Added new MFA API with methods mfa.create, mfa.update, mfa.get, mfa.delete.
problem
proxy
ZBXNEXT-9150 Added new property timeout_browser.
ZBXNEXT-8758 Added new read-only property state.
ZBXNEXT-8758 proxy.get: Added new parameters selectAssignedHosts and selectProxyGroup.
ZBXNEXT-8758 proxy.get: Parameter selectHosts now supports count.
ZBXNEXT-1096 Added new properties custom_timeouts, timeout_zabbix_agent, timeout_simple_check, timeout_snmp_agent, timeout_external_check, timeout_db_monitor, timeout_http_agent, timeout_ssh_agent, timeout_telnet_agent, timeout_script.
ZBXNEXT-8500 Added address and port properties for passive Zabbix proxies.
proxygroup
script
settings
templatedashboard
hostavail, map, navtree, problemhosts, problems, problemsbysv, slareport, svggraph, systeminfo, tophosts,
trigover, web.
ZBXNEXT-8086 Added new template dashboard widget field types (8 - Map, 9 - Service, 10 - SLA, 11 - User, 12 - Action, 13 - Media
type).
ZBXNEXT-8331 Added new template dashboard widget type piechart.
trigger
ZBXNEXT-2020 trigger.get: Method now also supports status, ts_disable and disable_source properties if
selectTriggerDiscovery parameter is used.
user
usergroup
7.0.1
dashboard
ZBXNEXT-9215 Added new dashboard widget field ds.0.itemids.0._reference in svggraph and piechart widgets.
ZBXNEXT-8331 Added new dashboard widget field stroke in piechart widget.
20 Extensions
Overview Although Zabbix offers a multiplicity of features, there is always room for additional functionality. Extensions are a
convenient way of modifying and enhancing the monitoring capabilities of Zabbix without changing its source code.
You can extend Zabbix functionality either by using built-in extension options (trapper items, user parameters, etc.) or by using or
creating custom extensions (loadable modules, plugins, etc.).
This section provides an overview with references to all the options for extending Zabbix.
Trapper items
Trapper items are items that accept incoming data instead of querying for it. Trapper items are useful for sending specific data to
Zabbix server or proxy, for example, periodic availability and performance data in the case of long-running user scripts. Sending
data to Zabbix server or proxy is possible using the Zabbix sender utility or Zabbix sender protocol. Sending data to Zabbix server
is also possible using the history.push API method.
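For example, a single value could be pushed with a history.push request of the following shape (the item ID and value are placeholders; the target item must be configured to accept trapped data):
{
    "jsonrpc": "2.0",
    "method": "history.push",
    "params": [
        {
            "itemid": "10600",
            "value": "75"
        }
    ],
    "id": 1
}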
External checks
An external check is an item for executing checks by running an executable, for example, a shell script or a binary.
External checks are executed by Zabbix server or proxy (when host is monitored by proxy), and do not require an agent running
on the host being monitored.
User parameters
A user parameter is a user-defined command (associated with a user-defined key) that, when executed, can retrieve the data you
need from the host where Zabbix agent is running. User parameters are useful for configuring agent or agent 2 items that are not
predefined in Zabbix.
Note: system.run[] items are disabled by default and, if used, must be enabled (allowed) and defined in the Zabbix agent or
agent 2 configuration file (AllowKey configuration parameter).
Attention:
User-defined commands in items such as external checks, user parameters and system.run[] Zabbix agent items are
executed from the OS user that is used to run Zabbix components. To execute these commands, this user must have the
necessary permissions.
HTTP agent items
An HTTP agent item is an item for executing data requests over HTTP/HTTPS. HTTP agent items are useful for sending requests to
HTTP endpoints to retrieve data from services such as Elasticsearch and OpenWeatherMap, for checking the status of Zabbix API
or the status of Apache or Nginx web server, etc. HTTP agent items (with trapping enabled) can also function as trapper items.
Script items
A script item is an item for executing user-defined JavaScript code that retrieves data over HTTP/HTTPS. Script items are useful
when the functionality provided by HTTP agent items is not enough. For example, in demanding data collection scenarios that
require multiple steps or complex logic, a script item can be configured to make an HTTP call, then process the data received, and
then pass the transformed value to a second HTTP call.
Note:
HTTP agent items and script items are supported by Zabbix server and proxy, and do not require an agent running on the
host being monitored.
Loadable modules
Loadable modules, written in C, are a versatile and performance-minded option for extending the functionality of Zabbix components (server, proxy, agent) on UNIX platforms. A loadable module is basically a shared library used by Zabbix daemon and loaded
on startup. The library should contain certain functions, so that a Zabbix process may detect that the file is indeed a module it can
load and work with.
Loadable modules have a number of benefits, including the ability to add new metrics or implement any other logic (for exam-
ple, Zabbix history data export), great performance, and the option to develop, use and share the functionality they provide. It
contributes to trouble-free maintenance and helps to deliver new functionality easier and independently of the Zabbix core code
base.
Loadable modules are especially useful in a complex monitoring setup. When monitoring embedded systems, having a large
number of monitored parameters or heavy scripts with complex logic or long startup time, extensions such as user parameters,
system.run[] Zabbix agent items, and external checks will have an impact on performance. Loadable modules offer a way of
extending Zabbix functionality without sacrificing performance.
Plugins
Plugins provide an alternative to loadable modules (written in C). However, plugins are a way to extend Zabbix agent 2 only.
A plugin is a Go package that defines the structure and implements one or several plugin interfaces (Exporter, Collector, Configurator, Runner, Watcher). Two types of Zabbix agent 2 plugins are supported: built-in plugins, which are compiled into the agent, and loadable plugins, which are built separately and loaded by the agent on startup.
For instructions and tutorials on writing your own plugins, see Developer center.
Webhooks
A webhook is a Zabbix media type that provides an option to extend Zabbix alerting capabilities to external software such as
helpdesk systems, chats, or messengers. Similarly to script items, webhooks are useful for making HTTP calls using custom
JavaScript code, for example, to push notifications to different platforms such as Microsoft Teams, Discord, and Jira. It is also
possible to return some data (for example, about created helpdesk tickets) that is then displayed in Zabbix.
Existing webhooks are available in the Zabbix Git repository. For custom webhook development, see Webhook development
guidelines.
Alert scripts
An alert script is a Zabbix media type that provides an option to create an alternative way (script) to handle Zabbix alerts. Alert
scripts are useful if you are not satisfied with the existing media types for sending alerts in Zabbix.
Frontend customization
Custom themes
It is possible to change Zabbix frontend visual appearance by using custom themes. See the instructions on creating and applying
your own themes.
Frontend modules
Frontend modules provide an option to extend Zabbix frontend functionality by adding third-party modules or by developing your
own. With frontend modules you can add new menu items, their respective views, actions, etc.
Global scripts
A global script is a user-defined set of commands that can be executed on a monitoring target (by shell (/bin/sh)
interpreter), depending on the configured scope and user permissions. Global scripts can be configured for the following actions:
• Action operation
• Manual host action
• Manual event action
Global scripts are useful in many cases. For example, if configured for action operations or manual host actions, you can use global
scripts to automatically or manually execute remote commands such as restarting an application (web server, middleware, CRM,
etc.) or freeing disk space (removing older files, cleaning /tmp, etc). Or, another example, if configured for manual event actions,
you can use global scripts to manage problem tickets in external systems.
Attention:
User-defined commands are executed from the OS user that is used to run Zabbix components. To execute these com-
mands, this user must have the necessary permissions.
Zabbix API
Zabbix API is an HTTP-based API that is part of Zabbix frontend. With Zabbix API, you can, for example, programmatically retrieve and modify Zabbix configuration or access historical data.
Zabbix API consists of a multiplicity of methods that are nominally grouped into separate APIs. Each method performs a specific
task. For the available methods, as well as an overview of the functions provided by Zabbix API, see Zabbix API Method reference.
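For example, retrieving the API version is a single JSON-RPC call that requires no authentication (assuming the standard frontend endpoint api_jsonrpc.php):
{
    "jsonrpc": "2.0",
    "method": "apiinfo.version",
    "params": [],
    "id": 1
}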
1 Loadable modules
Overview
You can extend Zabbix functionality in many ways, for example, with user parameters, external checks, and system.run[] Zabbix
agent items. These work very well, but have one major drawback, namely fork(). Zabbix has to fork a new process every time
it handles a user metric, which is not good for performance. Normally this is not a big deal; however, it could be a serious issue
when monitoring embedded systems, having a large number of monitored parameters or heavy scripts with complex logic or long
startup time.
Support of loadable modules offers ways for extending Zabbix agent, server and proxy without sacrificing performance.
A loadable module is basically a shared library used by Zabbix daemon and loaded on startup. The library should contain certain
functions, so that a Zabbix process may detect that the file is indeed a module it can load and work with.
Loadable modules have a number of benefits. Great performance and ability to implement any logic are very important, but
perhaps the most important advantage is the ability to develop, use and share Zabbix modules. It contributes to trouble-free
maintenance and helps to deliver new functionality easier and independently of the Zabbix core code base.
Module licensing and distribution in binary form is governed by the AGPL-3.0 license (modules are linked with Zabbix at runtime and use Zabbix headers; the whole Zabbix code is licensed under the AGPL-3.0 license since Zabbix 7.0). Binary compatibility is
not guaranteed by Zabbix.
Module API stability is guaranteed during one Zabbix LTS (Long Term Support) release cycle. Stability of Zabbix API is not guaranteed
(technically it is possible to call Zabbix internal functions from a module, but there is no guarantee that such modules will work).
Module API
In order for a shared library to be treated as a Zabbix module, it should implement and export several functions. There are currently six functions in the Zabbix module API, only one of which is mandatory; the other five are optional.
Mandatory interface
int zbx_module_api_version(void);
This function should return the API version implemented by the module; in order for the module to be loaded, this version must match the module API version supported by Zabbix, which is ZBX_MODULE_API_VERSION, so the function should return this constant. The old constant ZBX_MODULE_API_VERSION_ONE, previously used for this purpose, is now defined to equal ZBX_MODULE_API_VERSION to preserve source compatibility, but its usage is not recommended.
Optional interface
int zbx_module_init(void);
This function should perform the necessary initialization for the module (if any). If successful, it should return ZBX_MODULE_OK.
Otherwise, it should return ZBX_MODULE_FAIL. In the latter case Zabbix will not start.
ZBX_METRIC *zbx_module_item_list(void);
This function should return a list of items supported by the module. Each item is defined in a ZBX_METRIC structure, see the section
below for details. The list is terminated by a ZBX_METRIC structure with ”key” field of NULL.
void zbx_module_item_timeout(int timeout);
If the module exports zbx_module_item_list(), then this function is used by Zabbix to specify the timeout setting in the Zabbix configuration file that the item checks implemented by the module should obey. Here, the ”timeout” parameter is in seconds.
ZBX_HISTORY_WRITE_CBS zbx_module_history_write_cbs(void);
This function should return the callback functions that Zabbix server will use to export history of different data types. Callback functions are provided as fields of the ZBX_HISTORY_WRITE_CBS structure; a field can be NULL if the module is not interested in the history of a certain type.
int zbx_module_uninit(void);
This function should perform the necessary uninitialization (if any) like freeing allocated resources, closing file descriptors, etc.
All functions are called once on Zabbix startup when the module is loaded, with the exception of zbx_module_uninit(), which is
called once on Zabbix shutdown when the module is unloaded.
Defining items
Each item is defined in a ZBX_METRIC structure:
typedef struct
{
char *key;
unsigned flags;
int (*function)();
char *test_param;
}
ZBX_METRIC;
Here, key is the item key (e.g., ”dummy.random”), flags is either CF_HAVEPARAMS or 0 (depending on whether the item accepts
parameters or not), function is a C function that implements the item (e.g., ”zbx_module_dummy_random”), and test_param is
the parameter list to be used when Zabbix agent is started with the ”-p” flag (e.g., ”1,1000”, can be NULL). An example definition
may look like this:
static ZBX_METRIC keys[] =
/* KEY FLAG FUNCTION TEST PARAMETERS */
{
{"dummy.ping", 0, dummy_ping, NULL},
{"dummy.echo", CF_HAVEPARAMS, dummy_echo, "a message"},
{"dummy.random", CF_HAVEPARAMS, dummy_random, "1,1000"},
{NULL}
}
Each function that implements an item should accept two pointer parameters, the first one of type AGENT_REQUEST and the
second one of type AGENT_RESULT:
int zbx_module_dummy_item(AGENT_REQUEST *request, AGENT_RESULT *result)
{
...
SET_UI64_RESULT(result, 0);
return SYSINFO_RET_OK;
}
These functions should return SYSINFO_RET_OK if the item value was successfully obtained. Otherwise, they should return SYSINFO_RET_FAIL. See example ”dummy” module below for details on how to obtain information from AGENT_REQUEST and how to set information in AGENT_RESULT.
Attention:
History export via module is no longer supported by Zabbix proxy.
Module can specify functions to export history data by type: Numeric (float), Numeric (unsigned), Character, Text and Log:
typedef struct
{
void (*history_float_cb)(const ZBX_HISTORY_FLOAT *history, int history_num);
void (*history_integer_cb)(const ZBX_HISTORY_INTEGER *history, int history_num);
void (*history_string_cb)(const ZBX_HISTORY_STRING *history, int history_num);
void (*history_text_cb)(const ZBX_HISTORY_TEXT *history, int history_num);
void (*history_log_cb)(const ZBX_HISTORY_LOG *history, int history_num);
}
ZBX_HISTORY_WRITE_CBS;
Each of them should take ”history” array of ”history_num” elements as arguments. Depending on history data type to be exported,
”history” is an array of the following structures, respectively:
typedef struct
{
zbx_uint64_t itemid;
int clock;
int ns;
double value;
}
ZBX_HISTORY_FLOAT;
typedef struct
{
zbx_uint64_t itemid;
int clock;
int ns;
zbx_uint64_t value;
}
ZBX_HISTORY_INTEGER;
typedef struct
{
zbx_uint64_t itemid;
int clock;
int ns;
const char *value;
}
ZBX_HISTORY_STRING;
typedef struct
{
zbx_uint64_t itemid;
int clock;
int ns;
const char *value;
}
ZBX_HISTORY_TEXT;
typedef struct
{
zbx_uint64_t itemid;
int clock;
int ns;
const char *value;
const char *source;
int timestamp;
int logeventid;
int severity;
}
ZBX_HISTORY_LOG;
Callbacks will be used by Zabbix server history syncer processes at the end of the history sync procedure, after data is written into the Zabbix database and saved in the value cache.
Attention:
In case of an internal error in a history export module, it is recommended that the module is written in such a way that it does not block the whole monitoring process until it recovers, but discards data instead and allows Zabbix server to continue running.
Building modules
Modules are currently meant to be built inside Zabbix source tree, because the module API depends on some data structures that
are defined in Zabbix headers.
The most important header for loadable modules is include/module.h, which defines these data structures. Other necessary
system headers that help include/module.h to work properly are stdlib.h and stdint.h.
With this information in mind, everything is ready for the module to be built. The module should include stdlib.h, stdint.h and
module.h, and the build script should make sure that these files are in the include path. See example ”dummy” module below
for details.
Another useful header is include/zbxcommon.h, which defines the zabbix_log() function that can be used for logging and debugging purposes.
Configuration parameters
Zabbix agent, server and proxy support two configuration parameters to deal with modules:
LoadModulePath - full path to the location of loadable modules;
LoadModule - module(s) to load at startup; the parameter can be specified multiple times.
For example, to extend Zabbix agent we could add the following parameters:
LoadModulePath=/usr/local/lib/zabbix/agent/
LoadModule=mariadb.so
LoadModule=apache.so
LoadModule=kernel.so
LoadModule=/usr/local/lib/zabbix/dummy.so
Upon agent startup it will load the mariadb.so, apache.so and kernel.so modules from the /usr/local/lib/zabbix/agent directory while
dummy.so will be loaded from /usr/local/lib/zabbix. The agent will fail to start if a module is missing, in case of bad permissions or
if a shared library is not a Zabbix module.
Frontend configuration
Loadable modules are supported by Zabbix agent, server and proxy. Therefore, item type in Zabbix frontend depends on where the
module is loaded. If the module is loaded into the agent, then the item type should be ”Zabbix agent” or ”Zabbix agent (active)”.
If the module is loaded into server or proxy, then the item type should be ”Simple check”.
History export through Zabbix modules does not need any frontend configuration. If the module is successfully loaded by server
and provides zbx_module_history_write_cbs() function which returns at least one non-NULL callback function then history export
will be enabled automatically.
Dummy module
Zabbix includes a sample module written in C language. The module is located under src/modules/dummy:
alex@alex:~trunk/src/modules/dummy$ ls -l
-rw-rw-r-- 1 alex alex 9019 Apr 24 17:54 dummy.c
-rw-rw-r-- 1 alex alex 67 Apr 24 17:54 Makefile
-rw-rw-r-- 1 alex alex 245 Apr 24 17:54 README
The module is well documented; it can be used as a template for your own modules.
After ./configure has been run in the root of Zabbix source tree as described above, just run make in order to build dummy.so.
/*
** Zabbix
** Copyright (C) 2001-2020 Zabbix SIA
**
** This program is free software; you can redistribute it and/or modify
** it under the terms of the GNU General Public License as published by
** the Free Software Foundation; either version 2 of the License, or
** (at your option) any later version.
**
** This program is distributed in the hope that it will be useful,
** but WITHOUT ANY WARRANTY; without even the implied warranty of
** MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
** GNU General Public License for more details.
**
** You should have received a copy of the GNU General Public License
** along with this program; if not, write to the Free Software
** Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
**/
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <stdint.h>
#include "module.h"
/* the variable keeps timeout setting for item processing */
static int item_timeout = 0;
/* module SHOULD define internal functions as static and use a naming pattern different from Zabbix internal */
/* symbols (zbx_*) and loadable module API functions (zbx_module_*) to avoid conflicts */
static int dummy_ping(AGENT_REQUEST *request, AGENT_RESULT *result);
static int dummy_echo(AGENT_REQUEST *request, AGENT_RESULT *result);
static int dummy_random(AGENT_REQUEST *request, AGENT_RESULT *result);
static ZBX_METRIC keys[] =
/* KEY FLAG FUNCTION TEST PARAMETERS */
{
{"dummy.ping", 0, dummy_ping, NULL},
{"dummy.echo", CF_HAVEPARAMS, dummy_echo, "a message"},
{"dummy.random", CF_HAVEPARAMS, dummy_random, "1,1000"},
{NULL}
};
/******************************************************************************
* *
* Function: zbx_module_api_version *
* *
* Purpose: returns version number of the module interface *
* *
* Return value: ZBX_MODULE_API_VERSION - version of module.h module is *
* compiled with, in order to load module successfully Zabbix *
* MUST be compiled with the same version of this header file *
* *
******************************************************************************/
int zbx_module_api_version(void)
{
return ZBX_MODULE_API_VERSION;
}
/******************************************************************************
* *
* Function: zbx_module_item_timeout *
* *
* Purpose: set timeout value for processing of items *
* *
* Parameters: timeout - timeout in seconds, 0 - no timeout set *
* *
******************************************************************************/
void zbx_module_item_timeout(int timeout)
{
item_timeout = timeout;
}
/******************************************************************************
* *
* Function: zbx_module_item_list *
* *
* Purpose: returns list of item keys supported by the module *
* *
* Return value: list of item keys *
* *
******************************************************************************/
ZBX_METRIC *zbx_module_item_list(void)
{
return keys;
}
static int dummy_ping(AGENT_REQUEST *request, AGENT_RESULT *result)
{
SET_UI64_RESULT(result, 1);
return SYSINFO_RET_OK;
}
static int dummy_echo(AGENT_REQUEST *request, AGENT_RESULT *result)
{
char *param;
if (1 != request->nparam)
{
/* set optional error message */
SET_MSG_RESULT(result, strdup("Invalid number of parameters."));
return SYSINFO_RET_FAIL;
}
param = get_rparam(request, 0);
SET_STR_RESULT(result, strdup(param));
return SYSINFO_RET_OK;
}
/******************************************************************************
* *
* Function: dummy_random *
* *
* Purpose: a main entry point for processing of an item *
* *
* Parameters: request - structure that contains item key and parameters *
* request->key - item key without parameters *
* request->nparam - number of parameters *
* request->params[N-1] - pointers to item key parameters *
* request->types[N-1] - item key parameters types: *
* REQUEST_PARAMETER_TYPE_UNDEFINED (key parameter is empty) *
* REQUEST_PARAMETER_TYPE_ARRAY (array) *
* REQUEST_PARAMETER_TYPE_STRING (quoted or unquoted string) *
* *
* result - structure that will contain result *
* *
* Return value: SYSINFO_RET_FAIL - function failed, item will be marked *
* as not supported by zabbix *
* SYSINFO_RET_OK - success *
* *
* Comment: get_rparam(request, N-1) can be used to get a pointer to the Nth *
* parameter starting from 0 (first parameter). Make sure it exists *
* by checking value of request->nparam. *
* In the same manner get_rparam_type(request, N-1) can be used to *
* get a parameter type. *
* *
******************************************************************************/
static int dummy_random(AGENT_REQUEST *request, AGENT_RESULT *result)
{
char *param1, *param2;
int from, to;
if (2 != request->nparam)
{
/* set optional error message */
SET_MSG_RESULT(result, strdup("Invalid number of parameters."));
return SYSINFO_RET_FAIL;
}
param1 = get_rparam(request, 0);
param2 = get_rparam(request, 1);
/* there is no strict validation of parameters for simplicity's sake */
from = atoi(param1);
to = atoi(param2);
if (from > to)
{
SET_MSG_RESULT(result, strdup("Invalid range specified."));
return SYSINFO_RET_FAIL;
}
SET_UI64_RESULT(result, from + rand() % (to - from + 1));
return SYSINFO_RET_OK;
}
/******************************************************************************
* *
* Function: zbx_module_init *
* *
* Purpose: the function is called on agent startup *
* It should be used to call any initialization routines *
* *
* Return value: ZBX_MODULE_OK - success *
* ZBX_MODULE_FAIL - module initialization failed *
* *
* Comment: the module won't be loaded in case of ZBX_MODULE_FAIL *
* *
******************************************************************************/
int zbx_module_init(void)
{
/* initialization for dummy.random */
srand(time(NULL));
return ZBX_MODULE_OK;
}
/******************************************************************************
* *
* Function: zbx_module_uninit *
* *
* Purpose: the function is called on agent shutdown *
* It should be used to cleanup used resources if there are any *
* *
* Return value: ZBX_MODULE_OK - success *
* ZBX_MODULE_FAIL - function failed *
* *
******************************************************************************/
int zbx_module_uninit(void)
{
return ZBX_MODULE_OK;
}
/******************************************************************************
* *
* Functions: dummy_history_float_cb *
* dummy_history_integer_cb *
* dummy_history_string_cb *
* dummy_history_text_cb *
* dummy_history_log_cb *
* *
* Purpose: callback functions for storing historical data of types float, *
* integer, string, text and log respectively in external storage *
* *
* Parameters: history - array of historical data *
* history_num - number of elements in history array *
* *
******************************************************************************/
static void dummy_history_float_cb(const ZBX_HISTORY_FLOAT *history, int history_num)
{
int i;
for (i = 0; i < history_num; i++)
{
/* do something with history[i].itemid, history[i].clock, history[i].value, ... */
}
}
static void dummy_history_integer_cb(const ZBX_HISTORY_INTEGER *history, int history_num)
{
int i;
for (i = 0; i < history_num; i++)
{
/* do something with history[i].itemid, history[i].clock, history[i].value, ... */
}
}
/******************************************************************************
* *
* Function: zbx_module_history_write_cbs *
* *
* Purpose: returns a set of module functions Zabbix will call to export *
* different types of historical data *
* *
* Return value: structure with callback function pointers (can be NULL if *
* module is not interested in data of certain types) *
* *
******************************************************************************/
ZBX_HISTORY_WRITE_CBS zbx_module_history_write_cbs(void)
{
static ZBX_HISTORY_WRITE_CBS dummy_callbacks =
{
dummy_history_float_cb,
dummy_history_integer_cb,
dummy_history_string_cb,
dummy_history_text_cb,
dummy_history_log_cb,
};
return dummy_callbacks;
}
Support for loadable modules is implemented for the Unix platform only; it does not work for Windows agents.
In some cases a module may need to read module-related configuration parameters from zabbix_agentd.conf. This is currently not
supported. If you need your module to use some configuration parameters, you should probably implement parsing of a module-specific
configuration file.
2 Plugins
Overview
Plugins provide an option to extend the monitoring capabilities of Zabbix. Plugins are written in the Go programming language and
are supported by Zabbix agent 2 only. Plugins provide an alternative to loadable modules (written in C) and other methods for
extending Zabbix functionality. The following features are specific to Zabbix agent 2 and its plugins:
• support of scheduled and flexible intervals for both passive and active checks;
• task queue management with respect to schedule and task concurrency;
• plugin-level timeouts;
• compatibility check of Zabbix agent 2 and its plugins on start up.
Since Zabbix 6.0, plugins don’t have to be integrated into the agent 2 directly and can be added as loadable plugins, thus making
the creation process of additional plugins for gathering new monitoring metrics easier.
This page lists Zabbix native and loadable plugins, and describes plugin configuration principles from the user perspective.
For instructions and tutorials on writing your own plugins, see Developer center.
For more information on the communication process between Zabbix agent 2 and a loadable plugin, as well as the metrics collection
process, see Connection diagram.
Configuring plugins
This section provides common plugin configuration principles and best practices.
All plugins are configured using Plugins.* parameter, which can either be part of the Zabbix agent 2 configuration file or a plugin’s
own configuration file. If a plugin uses a separate configuration file, path to this file should be specified in the Include parameter
of Zabbix agent 2 configuration file.
Plugins.<PluginName>.<Parameter>=<Value>
For example, to perform active checks that do not have Scheduling update interval immediately after the agent restart for
the Uptime plugin only, set Plugins.Uptime.System.ForceActiveChecksOnStart=1 in the configuration file. Similarly, to set a
custom limit for concurrent checks for the CPU plugin, set Plugins.CPU.System.Capacity=N in the configuration file.
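Put together, such settings are plain key=value lines in the Zabbix agent 2 configuration file; the Capacity value below is only an illustrative number:
Plugins.Uptime.System.ForceActiveChecksOnStart=1
Plugins.CPU.System.Capacity=50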
Default values
You can set default values for the connection-related parameters (URI, username, password, etc.) in the configuration file in the
format:
Plugins.<PluginName>.Default.<Parameter>=<Value>
If a value for such parameter is not provided in an item key or in the named session parameters, the plugin will use the default
value. If a default parameter is also undefined, hardcoded defaults will be used.
Note:
If an item key does not have any parameters, Zabbix agent 2 will attempt to collect the metric using values defined in the
default parameters section.
Named sessions
Named sessions represent an additional level of plugin parameters and can be used to specify separate sets of authentication
parameters for each of the instances being monitored. Each named session parameter should have the following structure:
Plugins.<PluginName>.Sessions.<SessionName>.<Parameter>=<Value>
A session name can be used as a connString item key parameter instead of specifying a URI, username, and/or password separately.
In item keys, the first parameter can be either a connString or a URI. If the first key parameter doesn’t match any session name,
it will be treated as a URI. Note that passing embedded URI credentials in the item key is not supported; use named session
parameters instead.
It is possible to override session parameters by specifying new values in the item key parameters (see example).
If a parameter is not defined for the named session, Zabbix agent 2 will use the value defined in the default plugin parameter.
Parameter priority
Zabbix agent 2 plugins search for connection-related parameter values in the following order:
1. The first item key parameter is compared to session names. If no match is found, it is treated as an actual value; in this
case, step 3 will be skipped. If a match is found, the parameter value (usually, a URI) must be defined in the named session.
2. Other parameters will be taken from the item key if defined.
3. If an item key parameter (for example, password) is empty, plugin will look for the corresponding named session parameter.
4. If the session parameter is also not specified, the value defined in the corresponding default parameter will be used.
5. If all else fails, the plugin will use the hardcoded default value.
Example 1
Configuration parameters:
Plugins.Mysql.Sessions.MySQL1.Uri=tcp://127.0.0.1:3306
Plugins.Mysql.Sessions.MySQL1.User=mysql1_user
Plugins.Mysql.Sessions.MySQL1.Password=unique_password
Plugins.Mysql.Sessions.MySQL2.Uri=tcp://192.0.2.0:3306
Plugins.Mysql.Sessions.MySQL2.User=mysql2_user
Plugins.Mysql.Sessions.MySQL2.Password=different_password
As a result of this configuration, each session name may be used as a connString in an item key, e.g., mysql.ping[MySQL1] or
mysql.ping[MySQL2].
Example 2
Configuration parameters:
Plugins.PostgreSQL.Sessions.Session1.Uri=tcp://192.0.2.234:5432
Plugins.PostgreSQL.Sessions.Session1.User=old_username
Plugins.PostgreSQL.Sessions.Session1.Password=session_password
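With such a session defined, an item key that passes only the session name as its first parameter, for example pgsql.ping[Session1] (key name used purely for illustration), would take the URI, user and password from Session1; values supplied directly in the item key parameters would override the corresponding session values.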
Example 3
Configuration parameters:
Plugins.PostgreSQL.Default.Uri=tcp://192.0.2.234:5432
Plugins.PostgreSQL.Default.User=zabbix
Plugins.PostgreSQL.Default.Password=password
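With these defaults in place, an item key with no connection parameters, e.g. pgsql.ping[] (again an assumed key name), would fall back to the default URI, user and password; hardcoded plugin defaults would only be used for parameters that are not defined here either.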
Connections
Some plugins support gathering metrics from multiple instances simultaneously. Both local and remote instances can be monitored.
TCP and Unix-socket connections are supported.
It is recommended to configure plugins to keep connections to instances in an open state. The benefits are reduced network
congestion, latency, and CPU and memory usage due to the lower number of connections. The client library takes care of this.
Note:
The time period for which unused connections should remain open is determined by the Plugins.<PluginName>.KeepAlive
parameter (for example, Plugins.Memcached.KeepAlive).
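For example (the value is an illustrative number of seconds, not a recommended setting):
Plugins.Memcached.KeepAlive=300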
Plugins
Built-in
The following plugins for Zabbix agent 2 are available out-of-the-box. Click on the plugin name to go to the plugin repository with
additional information.
Plugin name   Description   Supported item keys   Comments
Agent   Metrics of the Zabbix agent being used.   agent.hostname, agent.ping, agent.version   Supported keys have the same parameters as Zabbix agent keys.
Ceph   Ceph monitoring.   ceph.df.details, ceph.osd.stats, ceph.osd.discovery, ceph.osd.dump, ceph.ping, ceph.pool.discovery, ceph.status
CPU   System CPU monitoring (number of CPUs/CPU cores, discovered CPUs, utilization percentage).   system.cpu.discovery, system.cpu.num, system.cpu.util   Supported keys have the same parameters as Zabbix agent keys.
Docker   Monitoring of Docker containers.   docker.container_info, docker.container_stats, docker.containers, docker.containers.discovery, docker.data_usage, docker.images, docker.images.discovery, docker.info, docker.ping   See also: Configuration parameters
File   File metrics collection.   vfs.file.cksum, vfs.file.contents, vfs.file.exists, vfs.file.md5sum, vfs.file.regexp, vfs.file.regmatch, vfs.file.size, vfs.file.time   Supported keys have the same parameters as Zabbix agent keys.
Kernel   Kernel monitoring.   kernel.maxfiles, kernel.maxproc   Supported keys have the same parameters as Zabbix agent keys.
Log   Log file monitoring.   log, log.count, logrt, logrt.count   Supported keys have the same parameters as Zabbix agent keys. See also: Plugin configuration parameters (Unix/Windows)
Memcached   Memcached server monitoring.   memcached.ping, memcached.stats
Modbus   Reads Modbus data.   modbus.get   Supported keys have the same parameters as Zabbix agent keys.
MQTT   Receives published values of MQTT topics.   mqtt.get   To configure encrypted connection to the MQTT broker, specify the TLS parameters in the agent configuration file as named session or default parameters. Currently, TLS parameters cannot be passed as item key parameters.
MySQL   Monitoring of MySQL and its forks.   mysql.custom.query, mysql.db.discovery, mysql.db.size, mysql.get_status_variables, mysql.ping, mysql.replication.discovery, mysql.replication.get_slave_status, mysql.version   To configure encrypted connection to the database, specify the TLS parameters in the agent configuration file as named session or default parameters. Currently, TLS parameters cannot be passed as item key parameters.
NetIf   Monitoring of network interfaces.   net.if.collisions, net.if.discovery, net.if.in, net.if.out, net.if.total   Supported keys have the same parameters as Zabbix agent keys.
Oracle   Oracle Database monitoring.   oracle.diskgroups.stats, oracle.diskgroups.discovery, oracle.archive.info, oracle.archive.discovery, oracle.cdb.info, oracle.custom.query, oracle.datafiles.stats, oracle.db.discovery, oracle.fra.stats, oracle.instance.info, oracle.pdb.info, oracle.pdb.discovery, oracle.pga.stats, oracle.ping, oracle.proc.stats, oracle.redolog.info, oracle.sga.stats, oracle.sessions.stats, oracle.sys.metrics, oracle.sys.params, oracle.ts.stats, oracle.ts.discovery, oracle.user.info, oracle.version   Install the Oracle Instant Client before using the plugin.
Proc   Process CPU utilization percentage.   proc.cpu.util   Supported key has the same parameters as Zabbix agent key.
Redis   Redis server monitoring.   redis.config, redis.info, redis.ping, redis.slowlog.count
Smart   S.M.A.R.T. monitoring.   smart.attribute.discovery, smart.disk.discovery, smart.disk.get   Sudo/root access rights to smartctl are required for the user executing Zabbix agent 2. The minimum required smartctl version is 7.1. See also: Plugin configuration parameters (Unix/Windows)
Systemd   Monitoring of systemd services.   systemd.unit.discovery, systemd.unit.get, systemd.unit.info
TCP   TCP connection availability check.   net.tcp.port   Supported key has the same parameters as Zabbix agent key.
UDP   Monitoring of the UDP services availability and performance.   net.udp.service, net.udp.service.perf   Supported keys have the same parameters as Zabbix agent keys.
Uname   Retrieval of information about the system.   system.hostname, system.sw.arch, system.uname   Supported keys have the same parameters as Zabbix agent keys.
Uptime   System uptime metrics collection.   system.uptime   Supported key has the same parameters as Zabbix agent key.
VFSDev   VFS metrics collection.   vfs.dev.discovery, vfs.dev.read, vfs.dev.write   Supported keys have the same parameters as Zabbix agent keys.
WebCertificate   Monitoring of TLS/SSL website certificates.   web.certificate.get
WebPage   Web page monitoring.   web.page.get, web.page.perf, web.page.regexp   Supported keys have the same parameters as Zabbix agent keys.
ZabbixAsync   Asynchronous metrics collection.   net.tcp.listen, net.udp.listen, sensor, system.boottime, system.cpu.intr, system.cpu.load, system.cpu.switches, system.hw.cpu, system.hw.macaddr, system.localtime, system.sw.os, system.swap.in, system.swap.out, vfs.fs.discovery   Supported keys have the same parameters as Zabbix agent keys.
ZabbixStats   Zabbix server/proxy internal metrics or number of delayed items in a queue.   zabbix.stats   Supported keys have the same parameters as Zabbix agent keys.
ZabbixSync   Synchronous metrics collection.   net.dns, net.dns.record, net.tcp.service, net.tcp.service.perf, proc.mem, proc.num, system.hw.chassis, system.hw.devices, system.sw.packages, system.users.num, vfs.dir.count, vfs.dir.size, vfs.fs.get, vfs.fs.inode, vfs.fs.size, vm.memory.size   Supported keys have the same parameters as Zabbix agent keys.
Loadable
Note:
Loadable plugins, when launched with:
- -V --version - print plugin version and license information;
- -h --help - print help information.
Click on the plugin name to go to the plugin repository with additional information.
Plugin name   Description   Supported item keys   Comments
Ember+   Monitoring of Ember+.   ember.get   Currently only available to be built from the source (for both Unix and Windows).
See also:
Overview
This page provides the steps required to build a loadable plugin binary from the sources.
If the source tarball is downloaded, it is possible to build the plugin offline, i.e. without the internet connection.
The PostgreSQL plugin is used as an example. Other loadable plugins can be built in a similar way.
Steps
1. Download the plugin sources from Zabbix Cloud Images and Appliances. The official download page will be available soon.
2. Transfer the archive to the machine where you are going to build the plugin.
cd <path to directory>
5. Run:
make
6. The plugin executable may be placed anywhere as long as it is loadable by Zabbix agent 2. Specify the path to the plugin binary
in the plugin configuration file, e.g. in postgresql.conf for the PostgreSQL plugin:
Plugins.PostgreSQL.System.Path=/path/to/executable/zabbix-agent2-plugin-postgresql
7. Path to the plugin configuration file must be specified in the Include parameter of the Zabbix agent 2 configuration file:
Include=/path/to/plugin/configuration/file/postgresql.conf
Makefile targets
Loadable plugins provided by Zabbix have simple makefiles with the following targets:
Target Description
3 Frontend modules
Overview
It is possible to enhance Zabbix frontend functionality by adding third-party modules or by developing your own modules without
the need to change the source code of Zabbix.
Note that the module code will run with the same privileges as Zabbix source code. This means:
• third-party modules can be harmful. You must trust the modules you are installing;
• Errors in a third-party module code may crash the frontend. If this happens, just remove the module code from the frontend.
As soon as you reload Zabbix frontend, you’ll see a note saying that some modules are absent. Go to Module administration
(in Administration → General → Modules) and click Scan directory again to remove non-existent modules from the database.
Installation
Please always read the installation manual for a particular module. It is recommended to install new modules one by one to catch
failures easily.
• Make sure you have downloaded the module from a trusted source. Installation of harmful code may lead to consequences,
such as data loss
• Different versions of the same module (same ID) can be installed in parallel, but only a single version can be enabled at once
• Unpack your module within its own folder in the modules folder of the Zabbix frontend
• Ensure that your module folder contains at least the manifest.json file (a minimal example is shown after this list)
• Navigate to Module administration and click the Scan directory button
• New module will appear in the list along with its version, author, description and status
• Enable module by clicking on its status
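For orientation, a minimal manifest.json could look like the sketch below; the field values are placeholders, and the exact set of required fields should be verified against the frontend module documentation for your Zabbix version:
{
    "manifest_version": 2.0,
    "id": "example_module",
    "name": "Example module",
    "version": "1.0",
    "namespace": "ExampleModule",
    "description": "A minimal module manifest sketch."
}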
Troubleshooting:
Problem   Solution
Module did not appear in the list   Make sure that the manifest.json file exists in the modules/your-module/ folder of the Zabbix frontend. If it does, the module does not suit the current Zabbix version. If the manifest.json file does not exist, you have probably unpacked the module in the wrong directory.
Frontend crashed   The module code is not compatible with the current Zabbix version or server configuration. Please delete the module files and reload the frontend. You'll see a notice that some modules are absent. Go to Module administration and click Scan directory again to remove non-existent modules from the database.
Error message about identical namespace, ID or actions appears   The new module tried to register a namespace, ID or actions which are already registered by other enabled modules. Disable the conflicting module (mentioned in the error message) prior to enabling the new one.
Technical error messages appear   Report errors to the developer of the module.
Developing modules
21 Appendixes
references in these locations:
• Notifications (actions)
1 Database creation
Overview
A Zabbix database must be created during the installation of Zabbix server or proxy.
This section provides instructions for creating a Zabbix database. A separate set of instructions is available for each supported
database.
Note:
To improve database security by creating database roles/users with minimal privileges, see database creation best practices
for each supported database: <br><br>
• MySQL/MariaDB
• PostgreSQL/TimescaleDB
• Oracle
For configuring secure TLS connections, see Secure connection to the database.
UTF-8 is the only encoding supported by Zabbix. It is known to work without any security flaws. Users should be aware that there
are known security issues if using some of the other encodings. For switching to UTF-8, see Repairing Zabbix database character
set and collation.
Note:
If installing from Zabbix Git repository, you need to run the following command prior to proceeding to the next steps:
<br><br> make dbschema
MySQL/MariaDB
Character sets utf8 (aka utf8mb3) and utf8mb4 are supported (with utf8_bin and utf8mb4_bin collation respectively) for Zabbix
server/proxy to work properly with MySQL database. It is recommended to use utf8mb4 for new installations.
Deterministic triggers need to be created during the import of schema. On MySQL and MariaDB, if binary logging is enabled and the
database user does not have superuser privileges, this requires setting GLOBAL log_bin_trust_function_creators = 1 (unless
log_bin_trust_function_creators = 1 is already set in the MySQL configuration file).
If you are installing from Zabbix packages, proceed to the instructions for your platform.
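As a sketch of what typically precedes the import when installing from sources (database name, user name and host are assumptions; adjust them and the character set choice to your environment):
mysql -uroot -p<password>
mysql> create database zabbix character set utf8mb4 collate utf8mb4_bin;
mysql> create user zabbix@localhost identified by '<password>';
mysql> grant all privileges on zabbix.* to zabbix@localhost;
mysql> SET GLOBAL log_bin_trust_function_creators = 1;
mysql> quit;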
• Import the data into the database and set utf8mb4 character set as default. For a Zabbix proxy database, only schema.sql
should be imported (no images.sql nor data.sql).
cd database/mysql
mysql -uzabbix -p<password> zabbix < schema.sql
# stop here if you are creating database for Zabbix proxy
mysql -uzabbix -p<password> zabbix < images.sql
mysql -uzabbix -p<password> --default-character-set=utf8mb4 zabbix < data.sql
log_bin_trust_function_creators can be disabled after the schema has been successfully imported:
mysql -uroot -p<password>
mysql> SET GLOBAL log_bin_trust_function_creators = 0;
mysql> quit;
PostgreSQL
You need to have a database user with permissions to create database objects.
If you are installing from Zabbix packages, proceed to the instructions for your platform.
The following shell command will create user zabbix. Specify a password when prompted and repeat the password (note, you
may first be asked for sudo password):
sudo -u postgres createuser --pwprompt zabbix
• Create a database.
The following shell command will create the database zabbix (last parameter) with the previously created user as the owner (-O
zabbix).
sudo -u postgres createdb -O zabbix -E Unicode -T template0 zabbix
• Import the initial schema and data (assuming you are in the root directory of Zabbix sources). For a Zabbix proxy database,
only schema.sql should be imported (no images.sql nor data.sql).
cd database/postgresql
cat schema.sql | sudo -u zabbix psql zabbix
# stop here if you are creating database for Zabbix proxy
cat images.sql | sudo -u zabbix psql zabbix
cat data.sql | sudo -u zabbix psql zabbix
Attention:
The above commands are provided as an example that will work in most GNU/Linux installations. You can use different
commands depending on how your system/database is configured, for example: <br><br> psql -U <username>
<br><br> If you have any trouble setting up the database, please consult your database administrator.
TimescaleDB
Instructions for creating and configuring TimescaleDB are provided in a separate section.
Oracle
Instructions for creating and configuring Oracle database are provided in a separate section.
SQLite
MySQL/MariaDB
Historically, MySQL and derivatives used ’utf8’ as an alias for utf8mb3 - MySQL’s own 3-byte implementation of the standard UTF8,
which is 4-byte. Starting from MySQL 8.0.28 and MariaDB 10.6.1, ’utf8mb3’ character set is deprecated and at some point its
support will be dropped while ’utf8’ will become a reference to ’utf8mb4’. Since Zabbix 6.0, ’utf8mb4’ is supported. To avoid future
problems, it is highly recommended to use ’utf8mb4’. Another advantage of switching to ’utf8mb4’ is support of supplementary
Unicode characters.
Warning:
As versions before Zabbix 6.0 are not aware of utf8mb4, make sure to first upgrade Zabbix server and DB schema to 6.0.x
or later before executing utf8mb4 conversion.
For example:
2. Stop Zabbix.
| utf8mb4 | utf8mb4_bin |
+--------------------------+----------------------+
5. Load the script to fix character set and collation on table and column level:
7. If there are no errors, you may want to create a backup copy of the fixed database.
8. Start Zabbix.
Overview
Since Zabbix 6.0, primary keys are used for all tables in new installations. Since Zabbix 5.0, double precision data types are used
for all tables in new installations.
This section provides instructions for manually upgrading tables in existing installations - history tables to primary keys, and history
and trends tables to double precision data types.
• MySQL
• PostgreSQL
• TimescaleDB
• Oracle
Attention:
The instructions provided on this page are designed for advanced users. Note that these instructions might need to be
adjusted for your specific configuration.
Important notes
Export and import must be performed in tmux/screen to ensure that the session isn’t dropped.
This method can be used with a running Zabbix server, but it is recommended to stop the server for the time of the upgrade. The
MySQL Shell (mysqlsh) must be installed and able to connect to the DB.
• Log in to MySQL console as root (recommended) or as any user with FILE privileges.
Connect via mysqlsh. If using a socket connection, specifying the path might be required.
CSVPATH="/var/lib/mysql-files";
This upgrade method takes more time and should be used only if an upgrade with mysqlsh is not possible.
Table upgrade
• Log in to MySQL console as root (recommended) or any user with FILE privileges.
max_execution_time must be disabled before migrating data to avoid timeout during migration.
SET @@max_execution_time=0;
If the secure_file_priv value is a path to a directory, export/import will be performed for files in that directory. In this case, edit the paths
to files in the queries accordingly, or set the secure_file_priv value to an empty string for the duration of the upgrade.
If the secure_file_priv value is NULL, set it to the path that contains the exported table data ('/var/lib/mysql-files/' in the example above).
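To check the current value, you can run, for example:
mysql> SHOW VARIABLES LIKE 'secure_file_priv';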
max_execution_time must be disabled before exporting data to avoid timeout during export.
SET @@max_execution_time=0;
SELECT * INTO OUTFILE '/var/lib/mysql-files/history.csv' FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES TER
LOAD DATA INFILE '/var/lib/mysql-files/history.csv' IGNORE INTO TABLE history FIELDS TERMINATED BY ',' ESC
SELECT * INTO OUTFILE '/var/lib/mysql-files/history_uint.csv' FIELDS TERMINATED BY ',' ESCAPED BY '"' LINE
LOAD DATA INFILE '/var/lib/mysql-files/history_uint.csv' IGNORE INTO TABLE history_uint FIELDS TERMINATED
SELECT * INTO OUTFILE '/var/lib/mysql-files/history_str.csv' FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES
LOAD DATA INFILE '/var/lib/mysql-files/history_str.csv' IGNORE INTO TABLE history_str FIELDS TERMINATED BY
SELECT * INTO OUTFILE '/var/lib/mysql-files/history_log.csv' FIELDS TERMINATED BY ',' ESCAPED BY '"' LINES
LOAD DATA INFILE '/var/lib/mysql-files/history_log.csv' IGNORE INTO TABLE history_log FIELDS TERMINATED BY
SELECT * INTO OUTFILE '/var/lib/mysql-files/history_text.csv' FIELDS TERMINATED BY ',' ESCAPED BY '"' LINE
LOAD DATA INFILE '/var/lib/mysql-files/history_text.csv' IGNORE INTO TABLE history_text FIELDS TERMINATED
PostgreSQL
Export and import must be performed in tmux/screen to ensure that the session isn’t dropped. For installations with TimescaleDB,
skip this section and proceed to PostgreSQL + TimescaleDB.
Table upgrade
• Export current history, import it to the temp table, then insert the data into new tables while ignoring duplicates:
See tips for improving INSERT performance: PostgreSQL: Bulk Loading Huge Amounts of Data, Checkpoint Distance and Amount
of WAL.
• Export current history, import it to the temp table, then insert the data into new tables while ignoring duplicates:
CREATE TEMPORARY TABLE temp_history_uint (
itemid bigint NOT NULL,
clock integer DEFAULT '0' NOT NULL,
value numeric(20) DEFAULT '0' NOT NULL,
ns integer DEFAULT '0' NOT NULL
);
\copy temp_history_uint FROM '/tmp/history_uint.csv' DELIMITER ',' CSV
INSERT INTO history_uint SELECT * FROM temp_history_uint ON CONFLICT (itemid,clock,ns) DO NOTHING;
PostgreSQL + TimescaleDB
Export and import must be performed in tmux/screen to ensure that the session isn’t dropped. Zabbix server should be down
during the upgrade.
cat /usr/share/zabbix-sql-scripts/postgresql/timescaledb/option-patches/with-compression/history_
– If compression is disabled, run the script from /usr/share/zabbix-sql-scripts/postgresql/timescaledb/option-patc
cat /usr/share/zabbix-sql-scripts/postgresql/timescaledb/option-patches/without-compression/histo
• Run TimescaleDB hypertable migration scripts based on compression settings:
– If compression is enabled (on default installation), run scripts from /usr/share/zabbix-sql-scripts/postgresql/timesca
cat /usr/share/zabbix-sql-scripts/postgresql/timescaledb/option-patches/with-compression/history_
cat /usr/share/zabbix-sql-scripts/postgresql/timescaledb/option-patches/with-compression/history_
cat /usr/share/zabbix-sql-scripts/postgresql/timescaledb/option-patches/with-compression/history_
cat /usr/share/zabbix-sql-scripts/postgresql/timescaledb/option-patches/with-compression/history_
cat /usr/share/zabbix-sql-scripts/postgresql/timescaledb/option-patches/with-compression/history_
cat /usr/share/zabbix-sql-scripts/postgresql/timescaledb/option-patches/with-compression/trends_u
– If compression is disabled, run scripts from /usr/share/zabbix-sql-scripts/postgresql/timescaledb/option-patche
cat /usr/share/zabbix-sql-scripts/postgresql/timescaledb/option-patches/without-compression/histo
cat /usr/share/zabbix-sql-scripts/postgresql/timescaledb/option-patches/without-compression/histo
cat /usr/share/zabbix-sql-scripts/postgresql/timescaledb/option-patches/without-compression/histo
cat /usr/share/zabbix-sql-scripts/postgresql/timescaledb/option-patches/without-compression/histo
cat /usr/share/zabbix-sql-scripts/postgresql/timescaledb/option-patches/without-compression/histo
cat /usr/share/zabbix-sql-scripts/postgresql/timescaledb/option-patches/without-compression/trend
Oracle
Attention:
The support for Oracle DB is deprecated since Zabbix 7.0.
Export and import must be performed in tmux/screen to ensure that the session isn’t dropped. Zabbix server should be down
during the upgrade.
Table upgrade
• Install Oracle Data Pump (available in the Instant Client Tools package).
Data Pump must have read and write permissions to these directories.
Example:
• Create a directory object and grant read and write permissions to this object to the user used for Zabbix authentication
(’zabbix’ in the example below). Under sysdba role, run:
expdp zabbix/password@oracle_host/service \
DIRECTORY=history \
TABLES=history_old,history_uint_old,history_str_old,history_log_old,history_text_old \
PARALLEL=N
impdp zabbix/password@oracle_host/service \
DIRECTORY=history \
TABLES=history_uint_old \
REMAP_TABLE=history_old:history,history_uint_old:history_uint,history_str_old:history_str,history_log_old
data_options=SKIP_CONSTRAINT_ERRORS table_exists_action=APPEND PARALLEL=N CONTENT=data_only
• Prepare directories for Data Pump for each history table. Data Pump must have read and write permissions to these directo-
ries.
Example:
• Create a directory object and grant read and write permissions to this object to the user used for Zabbix authentication
(’zabbix’ in the example below). Under sysdba role, run:
• Export and import each table. Replace N with the desired thread count.
Post-migration
See also
• Preparing auditlog table for partitioning
Overview
Some databases (for example, MySQL) require the partitioning column to be part of the table’s unique constraint. Therefore, to
partition the auditlog table by time, the primary key must be changed from auditid to a composite key auditid + clock.
This section provides instructions for altering the primary key of the auditlog table.
Attention:
The instructions provided on this page are designed for advanced users. Note that these instructions might need to be
adjusted for your specific configuration. Altering the primary key can also be incompatible with future upgrade patches,
so manually handling future upgrades may be necessary. <br><br> Altering the primary key can be a resource-intensive
operation that takes a lot of time depending on the auditlog table size. Stopping Zabbix server and switching Zabbix
frontend to maintenance mode for the time of the alteration is recommended. However, if absolutely necessary, there is
a way to alter the primary key without downtime (see below).
Partitioning the auditlog table can improve, for example, housekeeping in large setups. Although Zabbix housekeeping currently
cannot take advantage of partitioned tables (except for TimescaleDB), you can disable Zabbix housekeeping and delete partitions
using scripts.
Since Zabbix 7.0, the auditlog table for TimescaleDB has been converted to a hypertable, which allows the housekeeper to drop
data by chunks. To upgrade the existing auditlog table to a hypertable, rerun the postgresql/timescaledb/schema.sql
script before starting Zabbix server. Zabbix server will log a warning, if started without running this script first. See also:
TimescaleDB setup.
MySQL
MySQL automatically rebuilds indexes for the primary key during the ALTER TABLE operation. However, it is highly recommended
to also manually rebuild indexes with the OPTIMIZE TABLE statement to ensure optimal database performance.
Rebuilding indexes may temporarily require as much additional disk space as the table itself uses. To obtain the current size of
data and indexes, you can execute the following statements:
ANALYZE TABLE auditlog;
SHOW TABLE STATUS LIKE 'auditlog';
If the available disk space is a concern, follow the Altering primary key without downtime instructions. Other options are also
available:
• Increasing the sort_buffer_size MySQL parameter may help to reduce disk space usage when manually rebuilding
indexes. However, modifying this variable may impact overall memory usage of the database.
• Consider freeing up space by deleting potentially unnecessary data.
• Consider decreasing the Data storage period housekeeper parameter before executing the housekeeper.
1. Drop the current auditlog table primary key and add the new primary key.
ALTER TABLE auditlog DROP PRIMARY KEY, ADD PRIMARY KEY (auditid, clock);
2. Rebuild indexes (optional but highly recommended, see Important notes on rebuilding indexes).
OPTIMIZE TABLE auditlog;
Manual method of altering the primary key is described here. Alternatively, you can use the pt-online-schema-change toolkit from
Percona. This toolkit performs the following actions automatically, while also minimizing the space used for altering the auditlog
table.
1. Create a new table with the new primary key and create indexes.
CREATE TABLE `auditlog_new` (
`auditid` varchar(25) NOT NULL,
`userid` bigint unsigned NULL,
`username` varchar(100) DEFAULT '' NOT NULL,
`clock` integer DEFAULT '0' NOT NULL,
`ip` varchar(39) DEFAULT '' NOT NULL,
`action` integer DEFAULT '0' NOT NULL,
`resourcetype` integer DEFAULT '0' NOT NULL,
`resourceid` bigint unsigned NULL,
`resource_cuid` varchar(25) NULL,
`resourcename` varchar(255) DEFAULT '' NOT NULL,
`recordsetid` varchar(25) NOT NULL,
`details` longtext NOT NULL,
PRIMARY KEY (auditid,clock)
) ENGINE=InnoDB;
CREATE INDEX `auditlog_1` ON `auditlog_new` (`userid`,`clock`);
CREATE INDEX `auditlog_2` ON `auditlog_new` (`clock`);
CREATE INDEX `auditlog_3` ON `auditlog_new` (`resourcetype`,`resourceid`);
2. Swap tables.
RENAME TABLE auditlog TO auditlog_old, auditlog_new TO auditlog;
3. Copy the data from auditlog_old into the new auditlog table. This can be done in chunks (multiple INSERT INTO statements with
WHERE clock clauses as needed) to avoid excessive resource usage.
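A sketch of the copy step; the clock boundary below is an arbitrary example, split the ranges to match your data volume:
INSERT INTO auditlog SELECT * FROM auditlog_old WHERE clock < 1672531200;
INSERT INTO auditlog SELECT * FROM auditlog_old WHERE clock >= 1672531200;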
PostgreSQL
PostgreSQL automatically rebuilds indexes for the primary key during the ALTER TABLE operation. However, it is highly rec-
ommended to also manually rebuild indexes with the REINDEX TABLE CONCURRENTLY statement to ensure optimal database
performance.
Rebuilding indexes may temporarily require up to three times the disk space currently used by indexes. To obtain the current size
of indexes, you can execute the following query:
SELECT pg_size_pretty(pg_indexes_size('auditlog'));
If the available disk space is a concern, follow the Altering primary key without downtime instructions. Other options are also
available:
• Increasing the maintenance_work_mem PostgreSQL parameter may help to reduce disk space usage when manually re-
building indexes. However, modifying this variable may impact overall memory usage of the database.
• If you have another disk or tablespace with more available space, you might consider changing the temporary storage
location for the index rebuild. You can set the temp_tablespaces PostgreSQL parameter to specify a different tablespace
for temporary objects.
• Consider freeing up space by deleting potentially unnecessary data.
• Consider decreasing the Data storage period housekeeper parameter before executing the housekeeper.
1. Drop the current auditlog table primary key and add the new primary key.
ALTER TABLE auditlog DROP CONSTRAINT auditlog_pkey;
ALTER TABLE auditlog ADD PRIMARY KEY (auditid,clock);
2. Rebuild indexes (optional but highly recommended, see Important notes on rebuilding indexes).
REINDEX TABLE CONCURRENTLY auditlog;
Manual method of altering the primary key is described here. Alternatively, the pg_repack extension can be considered for
creating a new table, copying data, and swapping tables.
1. Create a new table with the new primary key and create indexes.
CREATE TABLE auditlog_new (
auditid varchar(25) NOT NULL,
userid bigint NULL,
username varchar(100) DEFAULT '' NOT NULL,
clock integer DEFAULT '0' NOT NULL,
ip varchar(39) DEFAULT '' NOT NULL,
action integer DEFAULT '0' NOT NULL,
resourcetype integer DEFAULT '0' NOT NULL,
resourceid bigint NULL,
resource_cuid varchar(25) NULL,
resourcename varchar(255) DEFAULT '' NOT NULL,
recordsetid varchar(25) NOT NULL,
details text DEFAULT '' NOT NULL,
PRIMARY KEY (auditid,clock)
);
CREATE INDEX auditlog_new_1 ON auditlog_new (userid,clock);
CREATE INDEX auditlog_new_2 ON auditlog_new (clock);
CREATE INDEX auditlog_new_3 ON auditlog_new (resourcetype,resourceid);
2. Swap tables.
ALTER TABLE auditlog RENAME TO auditlog_old;
ALTER TABLE auditlog_new RENAME TO auditlog;
3. Copy the data from auditlog_old into the new auditlog table. This can be done in chunks (multiple INSERT INTO statements with
WHERE clock clauses as needed) to avoid excessive resource usage.
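As with MySQL, a sketch of the copy step with an arbitrary example boundary:
INSERT INTO auditlog SELECT * FROM auditlog_old WHERE clock < 1672531200;
INSERT INTO auditlog SELECT * FROM auditlog_old WHERE clock >= 1672531200;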
Oracle
Oracle automatically rebuilds indexes for the primary key during the ALTER TABLE operation. However, it is highly recommended
to also manually rebuild indexes with the ALTER INDEX <index> REBUILD PARALLEL statements to ensure optimal database
performance.
If the available disk space is a concern, follow the Altering primary key without downtime instructions. Other options are also
available:
• Increasing the SORT_AREA_SIZE Oracle parameter may help to reduce disk space usage when manually rebuilding indexes.
However, modifying this variable will impact the overall memory usage of the database.
• You can set parallel degree using the PARALLEL clause, for example: ALTER INDEX auditlog_1 REBUILD PARALLEL 4
• Consider freeing up space by deleting potentially unnecessary data.
• Consider decreasing the Data storage period housekeeper parameter before executing the housekeeper.
2. Drop the current auditlog table primary key and add the new primary key.
ALTER TABLE auditlog DROP CONSTRAINT <constraint_name>;
ALTER TABLE auditlog ADD CONSTRAINT auditlog_pk PRIMARY KEY (auditid, clock);
3. Rebuild indexes (optional but highly recommended, see Important notes on rebuilding indexes).
ALTER INDEX auditlog_pk REBUILD PARALLEL;
ALTER INDEX auditlog_1 REBUILD PARALLEL;
ALTER INDEX auditlog_2 REBUILD PARALLEL;
ALTER INDEX auditlog_3 REBUILD PARALLEL;
1. Create a new table with the new primary key and create indexes.
2. Swap tables.
ALTER TABLE auditlog RENAME TO auditlog_old;
ALTER TABLE auditlog_new RENAME TO auditlog;
3. Copy the data from auditlog_old into the new auditlog table. This can be done in chunks (multiple INSERT INTO statements with
WHERE clock clauses as needed) to avoid excessive resource usage.
See also
Overview
This section provides Zabbix setup steps and configuration examples for secure TLS connections between Zabbix frontend and the database, and between Zabbix server/proxy and the database.
To set up connection encryption within the DBMS itself, see the official vendor documentation for details.
All examples are based on the GA releases of MySQL CE (8.0) and PgSQL (13) available through official repositories using CentOS
8.
Requirements
Note:
It is recommended to avoid an OS in end-of-life status, especially in the case of new installations.
• Database engine (RDBMS) installed and maintained from the official repository provided by the developer. Operating systems
are often shipped with outdated database software versions for which encryption support is not implemented, for example,
RHEL 7-based systems with PostgreSQL 9.2 or MariaDB 5.5 without encryption support.
Terminology
Setting this option enforces the use of a TLS connection to the database from Zabbix server/proxy and from the frontend:
Zabbix configuration
• Mark the Database TLS encryption checkbox in the Configure DB connection step to enable transport encryption.
• Mark the Verify database certificate checkbox that appears when TLS encryption field is checked to enable encryption with
certificates.
Note:
For MySQL, the Database TLS encryption checkbox is disabled, if Database host is set to localhost, because connection
that uses a socket file (on Unix) or shared memory (on Windows) cannot be encrypted.
For PostgreSQL, the TLS encryption checkbox is disabled, if the value of the Database host field begins with a slash or the
field is empty.
The following parameters become available in the TLS encryption in certificates mode (if both checkboxes are marked):
Parameter   Description
Database TLS CA file   Specify the full path to a valid TLS certificate authority (CA) file.
Database TLS key file   Specify the full path to a valid TLS key file.
Database TLS certificate file   Specify the full path to a valid TLS certificate file.
Database host verification   Mark this checkbox to activate host verification. Disabled for MySQL, because the PHP MySQL library does not allow skipping the peer certificate validation step.
Database TLS cipher list   Specify a custom list of valid ciphers. The format of the cipher list must conform to the OpenSSL standard. Available for MySQL only.
Attention:
TLS parameters must point to valid files. If they point to non-existent or invalid files, it will lead to an authorization error.
If certificate files are writable, the frontend generates a warning in the System information report that ”TLS certificate files
must be read-only” (displayed only if the PHP user is the owner of the certificate).
Use cases
Zabbix frontend uses the GUI to define the possible options: required, verify_ca, verify_full. Specify the required options in the
installation wizard step Configure DB connection. These options are mapped to the configuration file (zabbix.conf.php) in the
following manner:
GUI settings Configuration file Description Result
Or:
...
// Used for TLS connection without Cipher list defined - selected by MySQL server
$DB['ENCRYPTION'] = true;
$DB['KEY_FILE'] = '<key_file_path>';
$DB['CERT_FILE'] = '<cert_file_path>';
$DB['CA_FILE'] = '<ca_file_path>';
$DB['VERIFY_HOST'] = true;
$DB['CIPHER_LIST'] = '';
...
See also: Encryption configuration examples for MySQL, Encryption configuration examples for PostgreSQL.
Secure connections to the database can be configured with the respective parameters in the Zabbix server and/or proxy configu-
ration file.
Configuration Result
Overview
This section provides several encryption configuration examples for CentOS 8.2 and MySQL 8.0.21 and can be used as a quickstart
guide for encrypting the connection to the database.
Attention:
If MySQL host is set to localhost, encryption options will not be available. In this case a connection between Zabbix frontend
and the database uses a socket file (on Unix) or shared memory (on Windows) and cannot be encrypted.
Note:
List of encryption combinations is not limited to the ones listed on this page. There are a lot more combinations available.
Pre-requisites
To see which users are using an encrypted connection, run the following query (Performance Schema should be turned ON):
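One query that can be used for this purpose (a generic Performance Schema query, given here as an assumption rather than a Zabbix requirement):
mysql> SELECT sbt.variable_value AS tls_version, t2.variable_value AS cipher,
    processlist_user AS user, processlist_host AS host
    FROM performance_schema.status_by_thread AS sbt
    JOIN performance_schema.threads AS t ON t.thread_id = sbt.thread_id
    JOIN performance_schema.status_by_thread AS t2 ON t2.thread_id = t.thread_id
    WHERE sbt.variable_name = 'Ssl_version' AND t2.variable_name = 'Ssl_cipher'
    ORDER BY tls_version;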
MySQL configuration
Modern versions of the database are ready out-of-the-box for ’required’ encryption mode. A server-side certificate will be created
after initial setup and launch.
mysql> CREATE ROLE 'zbx_srv_role', 'zbx_web_role';
mysql> GRANT SELECT, UPDATE, DELETE, INSERT, CREATE, DROP, ALTER, INDEX, REFERENCES ON zabbix.* TO 'zbx_srv_role';
mysql> GRANT SELECT, UPDATE, DELETE, INSERT ON zabbix.* TO 'zbx_web_role';
Run the following to check the connection (a socket connection cannot be used to test secure connections):
mysql> status
--------------
mysql Ver 8.0.21 for Linux on x86_64 (MySQL Community Server - GPL)
Connection id: 62
Current database:
Current user: [email protected]
SSL: Cipher in use is TLS_AES_256_GCM_SHA384
ERROR:
No query specified
Frontend
To enable transport-only encryption for connections between Zabbix frontend and the database:
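In zabbix.conf.php terms, this corresponds to a configuration along the following lines (a sketch of what the setup wizard produces for the 'required' mode; verify against your generated file):
...
$DB['ENCRYPTION'] = true;
$DB['KEY_FILE'] = '';
$DB['CERT_FILE'] = '';
$DB['CA_FILE'] = '';
$DB['VERIFY_HOST'] = false;
$DB['CIPHER_LIST'] = '';
...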
Server
To enable transport-only encryption for connections between server and the database, configure /etc/zabbix/zabbix_server.conf:
...
DBHost=10.211.55.9
DBName=zabbix
DBUser=zbx_srv
DBPassword=<strong_password>
DBTLSConnect=required
...
Verify CA mode
Copy the required MySQL CA certificate to the Zabbix frontend server and assign proper permissions to allow the web server to read this file.
Note:
Verify CA mode doesn’t work on RHEL 7 due to older MySQL libraries.
Frontend
To enable encryption with certificate verification for connections between Zabbix frontend and the database:
Alternatively, this can be set in /etc/zabbix/web/zabbix.conf.php:
...
$DB['ENCRYPTION'] = true;
$DB['KEY_FILE'] = '';
$DB['CERT_FILE'] = '';
$DB['CA_FILE'] = '/etc/ssl/mysql/ca.pem';
$DB['VERIFY_HOST'] = false;
$DB['CIPHER_LIST'] = '';
...
To troubleshoot, use the command-line client to check whether a connection is possible for the required user:
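For example, a check along these lines (host and CA path taken from the examples on this page; the user name is an assumption, use the frontend database user configured in your environment):
mysql -u zbx_web -p -h 10.211.55.9 --ssl-mode=VERIFY_CA --ssl-ca=/etc/ssl/mysql/ca.pem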
To enable encryption with certificate verification for connections between Zabbix server and the database, configure
/etc/zabbix/zabbix_server.conf:
...
DBHost=10.211.55.9
DBName=zabbix
DBUser=zbx_srv
DBPassword=<strong_password>
DBTLSConnect=verify_ca
DBTLSCAFile=/etc/ssl/mysql/ca.pem
...
Verify Full mode
MySQL configuration
[mysqld]
...
# in this examples keys are located in the MySQL CE datadir directory
ssl_ca=ca.pem
ssl_cert=server-cert.pem
ssl_key=server-key.pem
require_secure_transport=ON
tls_version=TLSv1.3
...
Keys for the MySQL CE server and client (Zabbix frontend) should be created manually according to the MySQL CE documentation:
Creating SSL and RSA certificates and keys using MySQL or Creating SSL certificates and keys using openssl
Attention:
The MySQL server certificate should have the Common Name field set to the FQDN of the database host, as Zabbix frontend will use the
DNS name to communicate with the database, or to the IP address of the database host.
To enable encryption with full verification for connections between Zabbix frontend and the database:
Note that Database host verification is checked and grayed out - this step cannot be skipped for MySQL.
Warning:
Cipher list should be empty, so that the frontend and the server can negotiate a required cipher from those supported by both ends.
...
// Used for TLS connection with strictly defined Cipher list.
$DB['ENCRYPTION'] = true;
$DB['KEY_FILE'] = '/etc/ssl/mysql/client-key.pem';
$DB['CERT_FILE'] = '/etc/ssl/mysql/client-cert.pem';
$DB['CA_FILE'] = '/etc/ssl/mysql/ca.pem';
$DB['VERIFY_HOST'] = true;
$DB['CIPHER_LIST'] = 'TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256:TLS_AES_1
...
// or
...
// Used for TLS connection without Cipher list defined - selected by MySQL server
$DB['ENCRYPTION'] = true;
$DB['KEY_FILE'] = '/etc/ssl/mysql/client-key.pem';
$DB['CERT_FILE'] = '/etc/ssl/mysql/client-cert.pem';
$DB['CA_FILE'] = '/etc/ssl/mysql/ca.pem';
$DB['VERIFY_HOST'] = true;
$DB['CIPHER_LIST'] = '';
...
Server
To enable encryption with full verification for connections between Zabbix server and the database, configure /etc/zabbix/zabbix_server.conf:
...
DBHost=10.211.55.9
DBName=zabbix
DBUser=zbx_srv
DBPassword=<strong_password>
DBTLSConnect=verify_full
DBTLSCAFile=/etc/ssl/mysql/ca.pem
DBTLSCertFile=/etc/ssl/mysql/client-cert.pem
DBTLSKeyFile=/etc/ssl/mysql/client-key.pem
...
Overview
This section provides several encryption configuration examples for CentOS 8.2 and PostgreSQL 13.
Note:
Connection between Zabbix frontend and PostgreSQL cannot be encrypted (parameters in GUI are disabled), if the value
of Database host field begins with a slash or the field is empty.
Pre-requisites
PostgreSQL is not configured to accept TLS connections out-of-the-box. Please follow the instructions from the PostgreSQL documentation
for certificate preparation with postgresql.conf and for user access control through pg_hba.conf.
By default, the PostgreSQL socket is bound to localhost; for remote network connections, allow PostgreSQL to listen on the real network
interface (e.g., via the listen_addresses parameter).
/var/lib/pgsql/13/data/postgresql.conf:
...
ssl = on
ssl_ca_file = 'root.crt'
ssl_cert_file = 'server.crt'
ssl_key_file = 'server.key'
ssl_ciphers = 'HIGH:MEDIUM:+3DES:!aNULL'
ssl_prefer_server_ciphers = on
ssl_min_protocol_version = 'TLSv1.3'
...
For access control adjust /var/lib/pgsql/13/data/pg_hba.conf:
...
# require
hostssl all all 0.0.0.0/0 md5
# verify CA
hostssl all all 0.0.0.0/0 md5 clientcert=verify-ca
Frontend
To enable transport-only encryption for connections between Zabbix frontend and the database:
Server
To enable transport-only encryption for connections between server and the database, configure /etc/zabbix/zabbix_server.conf:
...
DBHost=10.211.55.9
DBName=zabbix
DBUser=zbx_srv
DBPassword=<strong_password>
DBTLSConnect=required
...
Verify CA mode
Frontend
To enable encryption with certificate authority verification for connections between Zabbix frontend and the database:
Alternatively, this can be set in /etc/zabbix/web/zabbix.conf.php:
...
$DB['ENCRYPTION'] = true;
$DB['KEY_FILE'] = '';
$DB['CERT_FILE'] = '';
$DB['CA_FILE'] = '/etc/ssl/pgsql/root.crt';
$DB['VERIFY_HOST'] = false;
$DB['CIPHER_LIST'] = '';
...
Server
To enable encryption with certificate verification for connections between Zabbix server and the database, configure
/etc/zabbix/zabbix_server.conf:
...
DBHost=10.211.55.9
DBName=zabbix
DBUser=zbx_srv
DBPassword=<strong_password>
DBTLSConnect=verify_ca
DBTLSCAFile=/etc/ssl/pgsql/root.crt
...
Verify full mode
Frontend
To enable encryption with certificate and database host identity verification for connections between Zabbix frontend and the
database:
Alternatively, this can be set in /etc/zabbix/web/zabbix.conf.php:
$DB['ENCRYPTION'] = true;
$DB['KEY_FILE'] = '';
$DB['CERT_FILE'] = '';
$DB['CA_FILE'] = '/etc/ssl/pgsql/root.crt';
$DB['VERIFY_HOST'] = true;
$DB['CIPHER_LIST'] = '';
...
Server
To enable encryption with certificate and database host identity verification for connections between Zabbix server and the
database, configure /etc/zabbix/zabbix_server.conf:
...
DBHost=10.211.55.9
DBName=zabbix
DBUser=zbx_srv
DBPassword=<strong_password>
DBTLSConnect=verify_full
DBTLSCAFile=/etc/ssl/pgsql/root.crt
DBTLSCertFile=/etc/ssl/pgsql/client.crt
DBTLSKeyFile=/etc/ssl/pgsql/client.key
...
6 TimescaleDB setup
Overview
Zabbix supports TimescaleDB, a PostgreSQL-based database solution that automatically partitions data into time-based chunks to
support faster performance at scale.
Warning:
Currently, TimescaleDB is not supported by Zabbix proxy.
Instructions on this page can be used for creating TimescaleDB database or migrating from existing PostgreSQL tables to
TimescaleDB.
Configuration
We assume that TimescaleDB extension has been already installed on the database server (see installation instructions).
TimescaleDB extension must also be enabled for the specific DB by executing:
echo "CREATE EXTENSION IF NOT EXISTS timescaledb CASCADE;" | sudo -u postgres psql zabbix
Running this command requires database administrator privileges.
Note:
If you use a database schema other than ’public’ you need to add a SCHEMA clause to the command above. E.g.:
echo "CREATE EXTENSION IF NOT EXISTS timescaledb SCHEMA yourschema CASCADE;" | sudo -u
postgres psql zabbix
For new installations, run the postgresql/timescaledb/schema.sql script. The script must be run after the regular Post-
greSQL database has been created with initial schema/data (see database creation).
Attention:
Please ignore warning messages informing that the best practices are not followed while running schema.sql script on
TimescaleDB version 2.9.0 and higher. Regardless of this warning, the configuration will be completed successfully.
The migration of existing history, trends and audit log data may take a lot of time. Zabbix server and frontend must be down for
the period of migration.
In order to use partitioned housekeeping for history and trends, both these options must be enabled. It is also possible to enable
override individually either for history only or trends only.
To successfully remove compressed data by housekeeper, both Override item history period and Override item trend period options
must be enabled. If override is disabled and tables have compressed chunks, the housekeeper will not remove data from these
tables, and warnings about incorrect configuration will be displayed in the Housekeeping and System information sections.
All of these parameters can be changed in Administration → Housekeeping after the installation.
Note:
You may want to run the timescaledb-tune tool provided by TimescaleDB to optimize PostgreSQL configuration parameters
in your postgresql.conf.
TimescaleDB compression
Native TimescaleDB compression is supported for all Zabbix tables that are TimescaleDB hypertables. During the upgrade or
migration to TimescaleDB, initial compression of the large tables may take a lot of time.
Note that compression is supported under the ”timescale” Timescale Community license and it is not supported under ”apache”
Apache 2.0 license. If Zabbix detects that compression is not supported a warning message is written into the Zabbix server log
and users cannot enable compression in the frontend.
Note:
Users are encouraged to get familiar with TimescaleDB compression documentation before using compression.
• Compressed chunk modifications (inserts, deletes, updates) are not allowed
• Schema changes for compressed tables are not allowed.
Compression settings can be changed in the History and trends compression block in Administration → Housekeeping section of
Zabbix frontend.
Enable compression (default: Enabled)   Checking or unchecking the checkbox does not activate/deactivate compression immediately. Because compression is handled by the housekeeper, the changes will take effect in up to 2 x HousekeepingFrequency hours (set in zabbix_server.conf). After disabling compression, new chunks that fall into the compression period will not be compressed. However, all previously compressed data will stay compressed. To uncompress previously compressed chunks, follow the instructions in the TimescaleDB documentation.
7 Elasticsearch setup
Attention:
Elasticsearch support is experimental!
Zabbix supports the storage of historical data by means of Elasticsearch instead of a database. Users can choose the storage place
for historical data between a compatible database and Elasticsearch. The setup procedure described in this section is applicable to
Elasticsearch version 7.X. In case an earlier or later version of Elasticsearch is used, some functionality may not work as intended.
Warning:
If all history data is stored in Elasticsearch, trends are not calculated nor stored in the database. With no trends calculated
and stored, the history storage period may need to be extended.
Configuration
To ensure proper communication between all elements involved, make sure the server configuration file and frontend configuration file parameters are properly configured.
Example parameters to fill the Zabbix server configuration file:
HistoryStorageURL=http://test.elasticsearch.lan:9200
HistoryStorageTypes=str,log,text
This configuration forces Zabbix Server to store history values of numeric types in the corresponding database and textual history
data in Elasticsearch.
The following item value types are supported: uint, dbl, str, log, text.
In the frontend configuration file, the Elasticsearch URL can be given per value type (as an array), or as a single string if the same URL is used for all types:
// Elasticsearch url (can be string if same url is used for all types).
$HISTORY['url'] = [
'uint' => 'http://localhost:9200',
'text' => 'http://localhost:9200'
];
// Value types stored in Elasticsearch.
$HISTORY['types'] = ['uint', 'text'];
Example parameter values to fill the Zabbix frontend configuration file with:
$HISTORY['url'] = 'http://test.elasticsearch.lan:9200';
$HISTORY['types'] = ['str', 'text', 'log'];
This configuration forces Zabbix to store Text, Character and Log history values in Elasticsearch.
It is also required to make $HISTORY global in conf/zabbix.conf.php to ensure everything is working properly (see conf/zabbix.conf.php.example for how to do it):
// Zabbix GUI configuration file.
global $DB, $HISTORY;
Installing Elasticsearch and creating mapping
The final two steps of making things work are installing Elasticsearch itself and creating the mappings.
Note:
Mapping is a data structure in Elasticsearch (similar to a table in a database). Mapping for all history data types is available
here: database/elasticsearch/elasticsearch.map.
Warning:
Creating mapping is mandatory. Some functionality will be broken if mapping is not created according to the instruction.
To create the mapping for the text type, send the following request to Elasticsearch:
curl -X PUT \
 http://your-elasticsearch.here:9200/text \
 -H 'content-type:application/json' \
 -d '{
   "settings": {
      "index": {
         "number_of_replicas": 1,
         "number_of_shards": 5
      }
   },
   "mappings": {
      "properties": {
         "itemid": {
            "type": "long"
         },
         "clock": {
            "format": "epoch_second",
            "type": "date"
         },
         "value": {
            "fields": {
               "analyzed": {
                  "index": true,
                  "type": "text",
                  "analyzer": "standard"
               }
            },
            "index": false,
            "type": "text"
         }
      }
   }
}'
A similar request must be executed to create the mappings for the Character and Log history values, with the corresponding type (str or log) substituted.
Note:
For additional information on working with Elasticsearch, please refer to the Requirements page.
Note:
The housekeeper does not delete any data from Elasticsearch.
This section describes the additional steps required to work with pipelines and ingest nodes. First, templates for the indices must be created, so that Elasticsearch can apply a valid mapping to indices that are created automatically. For example, the following command can be used to create a template for the uint index:
curl -X PUT \
 http://your-elasticsearch.here:9200/_template/uint_template \
 -H 'content-type:application/json' \
 -d '{
   "index_patterns": [
      "uint*"
   ],
   "settings": {
      "index": {
         "number_of_replicas": 1,
         "number_of_shards": 5
      }
   },
   "mappings": {
      "properties": {
         "itemid": {
            "type": "long"
         },
         "clock": {
            "format": "epoch_second",
            "type": "date"
         },
         "value": {
            "type": "long"
         }
      }
   }
}'
"index_patterns" field to
To create other templates, user should change the URL (last part is the name of template), change
match index name and to set valid mapping, which can be taken from database/elasticsearch/elasticsearch.map.
For example, the following command can be used to create a template for text index:
curl -X PUT \
 http://your-elasticsearch.here:9200/_template/text_template \
 -H 'content-type:application/json' \
 -d '{
   "index_patterns": [
      "text*"
   ],
   "settings": {
      "index": {
         "number_of_replicas": 1,
         "number_of_shards": 5
      }
   },
   "mappings": {
      "properties": {
         "itemid": {
            "type": "long"
         },
         "clock": {
            "format": "epoch_second",
            "type": "date"
         },
         "value": {
            "fields": {
               "analyzed": {
                  "index": true,
                  "type": "text",
                  "analyzer": "standard"
               }
            },
            "index": false,
            "type": "text"
         }
      }
   }
}'
This is required to allow Elasticsearch to set a valid mapping for indices created automatically. Then it is required to create the pipeline definition. A pipeline is a kind of preprocessing of data before the data is put into indices. The following command can be used to create a pipeline for the uint index:
curl -X PUT \
 http://your-elasticsearch.here:9200/_ingest/pipeline/uint-pipeline \
 -H 'content-type:application/json' \
 -d '{
   "description": "daily uint index naming",
   "processors": [
      {
         "date_index_name": {
            "field": "clock",
            "date_formats": [
               "UNIX"
            ],
            "index_name_prefix": "uint-",
            "date_rounding": "d"
         }
      }
   ]
}'
The user can change the rounding parameter ("date_rounding") to set a specific index rotation period. To create other pipelines, the user should change the URL (the last part is the name of the pipeline) and change the "index_name_prefix" field to match the index name.
Additionally, storing history data in multiple date-based indices should also be enabled via the HistoryStorageDateIndex parameter in the Zabbix server configuration:
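A minimal example, assuming the 0/1 toggle semantics of the HistoryStorageDateIndex parameter described later in this manual:
HistoryStorageDateIndex=1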
The following steps may help you troubleshoot problems with Elasticsearch setup:
1. Check if the mapping is correct (GET request to required index URL like http://localhost:9200/uint).
2. Check if shards are not in failed state (restart of Elasticsearch should help).
3. Check the configuration of Elasticsearch. Configuration should allow access from the Zabbix frontend host and the Zabbix
server host.
4. Check Elasticsearch logs.
If you are still experiencing problems with your installation, please create a bug report with all the information from this list (mapping, error logs, configuration, version, etc.).
In SUSE Linux Enterprise Server 15 you need to configure php-fpm (the path to configuration file may vary slightly depending on
the service pack):
cp /etc/php8/fpm/php-fpm.conf{.default,}
cp /etc/php8/fpm/php-fpm.d/www.conf{.default,}
sed -i 's/user = nobody/user = wwwrun/; s/group = nobody/group = www/' /etc/php8/fpm/php-fpm.d/www.conf
Since Zabbix 5.0.0, the systemd service file for Zabbix agent in official packages explicitly includes directives for User and Group.
Both are set to zabbix.
It is no longer possible to configure which user Zabbix agent runs as via zabbix_agentd.conf file, because the agent will bypass
this configuration and run as the user specified in the systemd service file. To run Zabbix agent as root you need to make the
modifications described below.
Zabbix agent
To override the default user and group for Zabbix agent, create a systemd override file (for example, by running systemctl edit zabbix-agent) containing:
[Service]
User=root
Group=root
Reload daemons and restart the zabbix-agent service:
systemctl daemon-reload
systemctl restart zabbix-agent
For Zabbix agent this re-enables the functionality of configuring user in the zabbix_agentd.conf file. Now you need to set
User=root and AllowRoot=1 configuration parameters in the agent configuration file.
Zabbix agent 2
To override the default user and group for Zabbix agent 2, create a systemd override file (for example, by running systemctl edit zabbix-agent2) containing:
[Service]
User=root
Group=root
Reload daemons and restart the zabbix-agent2 service:
systemctl daemon-reload
systemctl restart zabbix-agent2
For Zabbix agent 2 this completely determines the user that it runs as. No additional modifications are required.
Configuring agent
Both generations of Zabbix agents run as a Windows service. For Zabbix agent 2, replace agentd with agent2 in the instructions
below.
You can run a single instance of Zabbix agent or multiple instances of the agent on a Microsoft Windows host. A single instance
may use either:
• the default configuration file, located in the same directory as the agent binary;
• a configuration file specified in the command line.
In case of multiple instances each agent instance must have its own configuration file (one of the instances can use the default
configuration file).
• conf/zabbix_agentd.conf should be copied manually to the directory where zabbix_agentd.exe will be installed;
• conf/zabbix_agent2.conf and the conf/zabbix_agent2.d directory should be copied manually to the directory
where zabbix_agent2.exe will be installed.
See the configuration file options for details on configuring Zabbix Windows agent.
Hostname parameter
To perform active checks on a host Zabbix agent needs to have the hostname defined. Moreover, the hostname value set on the
agent side should exactly match the ”Host name” configured for the host in the frontend.
The hostname value on the agent side can be defined by either the Hostname or HostnameItem parameter in the agent config-
uration file - or the default values are used if any of these parameters are not specified.
The default value for HostnameItem parameter is the value returned by the ”system.hostname” agent key. For Windows, it returns
result of the gethostname() function, which queries namespace providers to determine the local host name. If no namespace
provider responds, the NetBIOS name is returned.
The default value for Hostname is the value returned by the HostnameItem parameter. So, in effect, if both these parameters are
unspecified, the actual hostname will be the host NetBIOS name; Zabbix agent will use NetBIOS host name to retrieve the list of
active checks from Zabbix server and send results to it.
The ”system.hostname” key supports two optional parameters - type and transform.
Type determines the type of the name the item should return:
• netbios (default) - returns the NetBIOS host name, which is limited to 15 symbols and is in UPPERCASE only;
• host - case-sensitive, returns the full, real Windows host name (without a domain);
• shorthost - returns part of the hostname before the first dot. It will return a full string if the name does not contain a dot.
• fqdn - returns Fully Qualified Domain Name (without the trailing dot).
So, to simplify the configuration of the zabbix_agentd.conf file and make it unified, the following approaches can be used:
1. Leave Hostname or HostnameItem parameters undefined and Zabbix agent will use NetBIOS host name as the hostname.
2. Leave Hostname parameter undefined and define HostnameItem like this:
HostnameItem=system.hostname[host] - for Zabbix agent to use the full, real (case-sensitive) Windows host name as
the hostname
HostnameItem=system.hostname[shorthost,lower] - for Zabbix agent to use only part of the hostname before the
first dot, converted into lowercase.
HostnameItem=system.hostname[fqdn] - for Zabbix agent to use the Fully Qualified Domain Name as the hostname.
Host name is also used as part of Windows service name which is used for installing, starting, stopping and uninstalling the Windows
service. For example, if Zabbix agent configuration file specifies Hostname=Windows_db_server, then the agent will be installed
as a Windows service ”Zabbix Agent [Windows_db_server]”. Therefore, to have a different Windows service name for each
Zabbix agent instance, each instance must use a different host name.
Before installing the agent, copy conf/zabbix_agentd.conf manually to the directory where zabbix_agentd.exe will be in-
stalled.
To install a single instance of Zabbix agent with the default configuration file:
zabbix_agentd.exe --install
Attention:
On a 64-bit system, a 64-bit Zabbix agent version is required for all checks related to running 64-bit processes to work
correctly.
If you wish to use a configuration file other than the default one, you should use the following command for service installation:
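For example, assuming the agent's standard --config option and an illustrative file path:
zabbix_agentd.exe --config <your_configuration_file> --install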
Starting agent
To start the agent service, you can use Control Panel or do it from command line.
To start a single instance of Zabbix agent with the default configuration file:
zabbix_agentd.exe --start
To start a single instance of Zabbix agent with another configuration file:
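For example (using the --config option; the path is illustrative):
zabbix_agentd.exe --config <your_configuration_file> --start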
To stop the agent service, you can use Control Panel or do it from command line.
To stop a single instance of Zabbix agent started with the default configuration file:
zabbix_agentd.exe --stop
To stop a single instance of Zabbix agent started with another configuration file:
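For example (using the --config option; the path is illustrative):
zabbix_agentd.exe --config <your_configuration_file> --stop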
To uninstall a single instance of Zabbix agent using the default configuration file:
zabbix_agentd.exe --uninstall
To uninstall a single instance of Zabbix agent using a non-default configuration file:
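For example (using the --config option; the path is illustrative):
zabbix_agentd.exe --config <your_configuration_file> --uninstall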
Zabbix agent for Windows does not support non-standard Windows configurations where CPUs are distributed non-uniformly across
NUMA nodes. If logical CPUs are distributed non-uniformly, then CPU performance metrics may not be available for some CPUs.
For example, if there are 72 logical CPUs with 2 NUMA nodes, both nodes must have 36 CPUs each.
Overview
This section provides guidelines for configuring single sign-on and user provisioning into Zabbix from Microsoft Azure Active Direc-
tory using SAML 2.0 authentication.
Creating application
1. Log into your account at Microsoft Azure. For testing purposes, you may create a free trial account in Microsoft Azure.
2. From the main menu (see top left of the screen) select Azure Active Directory.
3. Select Enterprise applications -> Add new application -> Create your own application.
4. Add the name of your app and select the Integrate any other application... option. After that, click on Create.
1. In your application page, go to Set up single sign on and click on Get started. Then select SAML.
• In Identifier (Entity ID) set a unique name to identify your app to Azure Active Directory, for example, zabbix;
• In Reply URL (Assertion Consumer Service URL) set the Zabbix single sign-on endpoint: https://<zabbix-instance-url>/zabbix/index_sso.php?acs
Note that this field requires ”https”. To make that work with Zabbix, it is necessary to add to conf/zabbix.conf.php the
following line:
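A setting commonly used for this purpose is the SAML proxy-headers option shown below; treat the exact line as an assumption and verify it against your frontend configuration:
$SSO['SETTINGS'] = ['use_proxy_headers' => true];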
The attribute names are arbitrary. Different attribute names may be used, however, it is required that they match the respective
field value in Zabbix SAML settings.
• Click on Add a group claim to add an attribute for passing groups to Zabbix:
4. In SAML Certificates download the certificate provided by Azure and place it into conf/certs of the Zabbix frontend installation.
Set 644 permissions to it by running:
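For example, assuming the certificate was saved as idp.crt (the file name is illustrative), run from the frontend installation directory:
chmod 644 conf/certs/idp.crt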
Zabbix configuration
1. In Zabbix, go to the SAML settings and fill the configuration options based on the Azure configuration:
Zabbix field Setup field in Azure Sample value
1. In your Azure AD application page, from the main menu open the Provisioning page. Click on Get started and then select
Automatic provisioning mode:
2. Now you can add all the attributes that will be passed with SCIM to Zabbix. To do that, click on Mappings and then on Provision
Azure Active Directory Users.
At the bottom of the Attribute Mapping list, enable Show advanced options, and then click on Edit attribute list for customappsso.
At the bottom of the attribute list, add your own attributes with type ’String’:
Save the list.
3. Now you can add mappings for the added attributes. At the bottom of the Attribute Mapping list, click on Add New Mapping and
create mappings as shown below:
4. As a prerequisite of user provisioning into Zabbix, you must have users and groups configured in Azure.
To do that, select Azure Active Directory from the main Azure menu (see top left of the screen) and then add users/groups in the
respective Users and Groups pages.
5. When users and groups have been created in Azure AD, you can go to the Users and groups menu of your application and add
them to the app.
6. Go to the Provisioning menu of your app, and click on Start provisioning to have users provisioned to Zabbix.
Note that the Users PATCH request in Azure does not support changes in media.
This section provides guidelines for configuring Okta to enable SAML 2.0 authentication and user provisioning for Zabbix.
Okta configuration
Select ”SAML 2.0” as the sign-in method and click on Next.
5. In SAML configuration, enter the values provided below, then click on Next.
• In General add:
– Single sign-on URL: http://<your-zabbix-url>/zabbix/index_sso.php?acs
Note the use of "http", and not "https", so that the acs parameter is not cut out in the request. The Use this for Recipient URL and Destination URL checkbox should also be marked.
– Audience URI (SP Entity ID): zabbix
Note that this value will be used within the SAML assertion as a unique service provider identifier (if not matching, the
operation will be rejected). It is possible to specify a URL or any string of data in this field.
– Default RelayState:
Leave this field blank; if a custom redirect is required, it can be added in Zabbix in the Users → Users settings.
– Fill in other fields according to your preferences.
• In Attribute Statements/Group Attribute Statements add:
These attribute statements are inserted into the SAML assertions shared with Zabbix.
The attribute names used here are arbitrary examples. You may use different attribute names, however, it is required that they
match the respective field value in Zabbix SAML settings.
If you want to configure SAML sign-in into Zabbix without JIT user provisioning, then only the email attribute is required.
Note:
If planning to use an encrypted connection, generate the private and public encryption certificates, then upload the public
certificate to Okta. The certificate upload form appears when Assertion Encryption is set to ”Encrypted” (click Show
Advanced Settings to find this parameter).
6. In the next tab, select ”I’m a software vendor. I’d like to integrate my app with Okta” and press ”Finish”.
7. Navigate to the ”Assignments” tab of the newly-created application and click on the Assign button, then select ”Assign to
People” from the drop-down.
8. In a popup that appears, assign the app to people that will use SAML 2.0 to authenticate with Zabbix, then click on Save and go
back.
9. Navigate to the ”Sign On” tab and click on the View Setup Instructions button.
Setup instructions will be opened in a new tab; keep this tab open while configuring Zabbix.
Zabbix configuration
1. In Zabbix, go to the SAML settings and fill the configuration options based on setup instructions from Okta:
Zabbix field Setup field in Okta Sample value
2. Download the certificate provided in the Okta SAML setup instructions into ui/conf/certs folder as idp.crt.
SCIM provisioning
1. To turn on SCIM provisioning, go to ”General” -> ”App Settings” of the application in Okta.
Mark the Enable SCIM provisioning checkbox. As a result, a new Provisioning tab appears.
• In SCIM connector base URL specify the path to the Zabbix frontend and append api_scim.php to it, i.e.:
https://<your-zabbix-url>/zabbix/api_scim.php
• Unique identifier field for users: email
• Authentication mode: HTTP header
• In Authorization enter a valid API token with Super admin rights
Attention:
If you are using Apache, you may need to change the default Apache configuration in /etc/apache2/apache2.conf by
adding the following line:
SetEnvIf Authorization "(.*)" HTTP_AUTHORIZATION=$1
Otherwise Apache does not send the Authorization header in the request.
3. Click on Test Connector Configuration to test the connection. If all is correct a success message will be displayed.
4. In ”Provisioning” -> ”To App”, make sure to mark the following checkboxes:
• Create Users
• Update User Attributes
• Deactivate Users
This will make sure that these request types will be sent to Zabbix.
5. Make sure that all attributes defined in SAML are defined in SCIM. You can access the profile editor for your app in ”Provisioning”
-> ”To App”, by clicking on Go to Profile Editor.
Click on Add Attribute. Fill the values for Display name, Variable name, External name with the SAML attribute name, for example,
user_name.
8. Add users in the ”Assignments” tab. The users need to be added beforehand in Directory -> People. All these assignments will be sent as requests to Zabbix.
9. Add groups in the ”Push Groups” tab. The user group mapping pattern in Zabbix SAML settings must match a group specified
here. If there is no match, the user cannot be created in Zabbix.
Information about group members is sent every time a change is made.
Overview
This section provides guidelines for configuring single sign-on and user provisioning into Zabbix from OneLogin using SAML 2.0
authentication.
OneLogin configuration
Creating application
1. Log into your account at OneLogin. For testing purposes, you may create a free developer account in OneLogin.
3. Click on ”Add App” and search for the appropriate app. The guidelines in this page are based on the SCIM Provisioner with SAML
(SCIM v2 Enterprise, full SAML) app example.
4. To begin with, you may want to customize the display name of your app. You may also want to add the icon and app details.
After that, click on Save.
1. In Configuration -> Application details, set the Zabbix single sign-on endpoint http://<zabbix-instance-url>/zabbix/index_sso.php?acs as the value of these fields:
Note the use of ”http”, and not ”https”, so that the acs parameter is not cut out in the request.
It is also possible to use ”https”. To make that work with Zabbix, it is necessary to add to conf/zabbix.conf.php the following
line:
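As in the Azure example above, a likely candidate is the SAML proxy-headers setting (an assumption; verify against your frontend configuration):
$SSO['SETTINGS'] = ['use_proxy_headers' => true];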
Note that for user provisioning to work, OneLogin needs to receive in response a 'name' attribute with 'givenName' and 'familyName', even if it was not required by the service provider. Thus it is necessary to specify this in the schema in the application configuration part.
• SCIM Bearer Token: enter a Zabbix API token with Super admin permissions.
3. In the Provisioning page, enable the Provisioning option:
• Make sure that the ’scimusername’ matches the user login value in OneLogin (e.g. email);
• Mark the Include in User Provisioning option for the ’Groups’ parameter;
• Click on ”+” to create the custom parameters that are required for SAML assertions and user provisioning such as
user_name, user_lastname, user_email, and user_mobile:
When adding a parameter, make sure to mark both the Include in SAML assertion and Include in User Provisioning options.
• Add a ’group’ parameter that matches user roles in OneLogin. User roles will be passed as a string, separated by a semicolon (;). The OneLogin user roles will be used for creating user groups in Zabbix:
Verify the list of parameters:
5. In the Rules page, create user role mappings to the default Groups parameter.
You may use a regular expression to pass specific roles as groups. The role names should not contain ; as OneLogin uses it as a
separator when sending an attribute with several roles.
Zabbix configuration
1. In Zabbix, go to the SAML settings and fill the configuration options based on the OneLogin configuration:
Zabbix field Setup field in OneLogin Sample value
It is also required to configure user group mapping. Media mapping is optional. Click on Update to save these settings.
2. Download the certificate provided by OneLogin and place it into conf/certs of the Zabbix frontend installation, as idp.crt.
Set 644 permissions to it by running:
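For example, from the Zabbix frontend installation directory:
chmod 644 conf/certs/idp.crt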
It is possible to use a different certificate name and location. In that case, make sure to add to conf/zabbix.conf.php the
following line:
$SSO['IDP_CERT'] = 'path/to/certname.crt';
SCIM user provisioning
With user provisioning enabled, it is now possible to add/update users and their roles in OneLogin and have them immediately
provisioned to Zabbix.
Add the user to a user role and to the application that will provision the user:
When saving the user, it will be provisioned to Zabbix. In Application -> Users you can check the provisioning status of current
application users:
If successfully provisioned, the user can be seen in the Zabbix user list.
Overview
This section contains instructions for creating Oracle database and configuring connections between the database and Zabbix
server, proxy, and frontend.
Attention:
The support for Oracle DB is deprecated since Zabbix 7.0.
Database creation
We assume that a zabbix database user with password password exists and has permissions to create database objects in the ORCL service located on the Oracle database server host. Zabbix requires a Unicode database character set and a UTF8 national character set. Check the current settings:
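For example, the character sets can be queried from the standard v$nls_parameters view:
sqlplus> select parameter,value from v$nls_parameters where parameter in ('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');
After confirming the character sets, load the Zabbix schema: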
cd /path/to/zabbix-sources/database/oracle
sqlplus zabbix/password@oracle_host/ORCL
sqlplus> @schema.sql
# stop here if you are creating database for Zabbix proxy
sqlplus> @images.sql
sqlplus> @data.sql
Note:
Please set the initialization parameter CURSOR_SHARING=FORCE for best performance.
Connection setup
• Easy Connect
• Net Service Name
Connection configuration parameters for Zabbix server and Zabbix proxy can be set in the configuration files. Important parameters
for the server and proxy are DBHost, DBUser, DBName and DBPassword. The same parameters are important for the frontend:
$DB[”SERVER”], $DB[”PORT”], $DB[”DATABASE”], $DB[”USER”], $DB[”PASSWORD”].
Zabbix uses the following connection string format to establish a connection:
DBUser/DBPassword[@<connect_identifier>]
<connect_identifier> can be specified either in the form of ”Net Service Name” or ”Easy Connect”:
@[[//]Host[:Port]/<service_name> | <net_service_name>]
Easy Connect
• Host - the host name or IP address of the database server computer (DBHost parameter in the configuration file).
• Port - the listening port on the database server (DBPort parameter in the configuration file; if not set the default 1521 port
will be used).
• <service_name> - the service name of the database you want to access (DBName parameter in the configuration file).
Example
Database parameters set in the server or proxy configuration file (zabbix_server.conf and zabbix_proxy.conf):
DBHost=localhost
DBPort=1521
DBUser=myusername
DBName=ORCL
DBPassword=mypassword
Connection string used by Zabbix to establish connection:
DBUser/DBPassword@DBHost:DBPort/DBName
During Zabbix frontend installation, set the corresponding parameters in the Configure DB connection step of the setup wizard:
Alternatively, these parameters can be set in the frontend configuration file (zabbix.conf.php):
$DB["TYPE"] = 'ORACLE';
$DB["SERVER"] = 'localhost';
$DB["PORT"] = '1521';
$DB["DATABASE"] = 'ORCL';
$DB["USER"] = 'myusername';
$DB["PASSWORD"] = 'mypassword';
Net service name
In order to use the service name for creating a connection, this service name has to be defined in the tnsnames.ora file located on
both the database server and the client systems. The easiest way to make sure that the connection will succeed is to define the
location of tnsnames.ora file in the TNS_ADMIN environment variable. The default location of the tnsnames.ora file is:
$ORACLE_HOME/network/admin/
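For example, if the file is kept elsewhere, the variable can be exported in the environment of the Zabbix processes (the path below is illustrative):
export TNS_ADMIN=/etc/oracle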
A simple tnsnames.ora file example:
ORCL =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = ORCL)
)
)
To set configuration parameters for the ”Net Service Name” connection method, use one of the following options:
• Leave the DBHost parameter empty and set DBName to the net service name:
DBHost=
DBName=ORCL
• Set both parameters and leave both empty:
DBHost=
DBName=
In the second case, the TWO_TASK environment variable has to be set. It specifies the default remote Oracle service (service
name). When this variable is defined, the connector connects to the specified database by using an Oracle listener that accepts
connection requests. This variable is for use on Linux and UNIX only. Use the LOCAL environment variable for Microsoft Windows.
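For instance, on Linux, with the ORCL net service name from the example above:
export TWO_TASK=ORCL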
Example
Connect to a database using Net Service Name set as ORCL and the default port. Database parameters set in the server or proxy
configuration file (zabbix_server.conf and zabbix_proxy.conf):
DBHost=
#DBPort=
DBUser=myusername
DBName=ORCL
DBPassword=mypassword
During Zabbix frontend installation, set the corresponding parameters in the Configure DB connection step of the setup wizard:
• Database host:
• Database port: 0
• Database name: ORCL
• User: myusername
• Password: mypassword
Alternatively, these parameters can be set in the frontend configuration file (zabbix.conf.php):
$DB["TYPE"] = 'ORACLE';
$DB["SERVER"] = '';
$DB["PORT"] = '0';
$DB["DATABASE"] = 'ORCL';
$DB["USER"] = 'myusername';
$DB["PASSWORD"] = 'mypassword';
Connection string used by Zabbix to establish connection:
DBUser/DBPassword@ORCL
Known issues
To improve performance, you can convert the field types from nclob to nvarchar2, see known issues.
Overview
This section provides instructions on installing Zabbix web service and configuring Zabbix to enable generation of scheduled
reports.
Installation
A new Zabbix web service process and the Google Chrome browser should be installed to enable generation of scheduled reports. The web service may be installed on the same machine where the Zabbix server is installed or on a different machine. The Google Chrome browser should be installed on the same machine where the web service is installed.
The official zabbix-web-service package is available in the Zabbix repository. Google Chrome browser is not included into these
packages and has to be installed separately.
To compile Zabbix web service from sources, see Installing Zabbix web service.
After the installation, run zabbix_web_service on the machine, where the web service is installed:
zabbix_web_service
Configuration
To ensure proper communication between all elements involved make sure server configuration file and frontend configuration
parameters are properly configured.
Zabbix server
The following parameters in Zabbix server configuration file need to be updated: WebServiceURL and StartReportWriters.
WebServiceURL
This parameter is required to enable communication with the web service. The URL should be in the format <host:port>/report.
• By default, the web service listens on port 10053. A different port can be specified in the web service configuration file.
• Specifying the /report path is mandatory (the path is hardcoded and cannot be changed).
Example:
WebServiceURL=http://localhost:10053/report
StartReportWriters
This parameter determines how many report writer processes should be started. If it is not set or equals 0, report generation is
disabled. Based on the number and frequency of reports required, it is possible to enable from 1 to 100 report writer processes.
Example:
StartReportWriters=3
Zabbix frontend
A Frontend URL parameter should be set to enable communication between Zabbix frontend and Zabbix web service:
Note:
Once the setup procedure is completed, you may want to configure and send a test report to make sure everything works
correctly.
Overview
In order to use any other language than English in Zabbix web interface, its locale should be installed on the web server. Additionally,
the PHP gettext extension is required for the translations to work.
Installing locales
To list the locales currently installed on the web server, run:
locale -a
If some languages that are needed are not listed, open the /etc/locale.gen file and uncomment the required locales. Since Zabbix
uses UTF-8 encoding, you need to select locales with UTF-8 charset.
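For example, an uncommented line enabling a UTF-8 locale in /etc/locale.gen looks like this (the locale itself is just an illustration):
en_US.UTF-8 UTF-8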
Now run:
locale-gen
Restart the web server.
The locales should now be installed. It may be required to reload Zabbix frontend page in browser using Ctrl + F5 for new languages
to appear.
Installing Zabbix
If installing Zabbix directly from Zabbix git repository, translation files should be generated manually. To generate translation files,
run:
make gettext
locale/make_mo.sh
This step is not needed when installing Zabbix from packages or source tar.gz files.
Selecting a language
• When installing web interface - in the frontend installation wizard. Selected language will be set as system default.
• After the installation, system default language can be changed in the Administration→General→GUI menu section.
• Language for a particular user can be changed in the user profile.
If a locale for a language is not installed on the machine, this language will be greyed out in Zabbix language selector. An orange
icon is displayed next to the language selector if at least one locale is missing. Upon pressing on this icon the following message
will be displayed: ”You are not able to choose some of the languages, because locales for them are not installed on the web server.”
3 Process configuration
1 Zabbix server
Overview
The parameters supported by the Zabbix server configuration file (zabbix_server.conf) are listed in this section.
The parameters are listed without additional information. Click on the parameter to see the full details.
Parameter Description
ProxyDataFrequency Determines how often Zabbix server requests history data from a Zabbix proxy.
ServiceManagerSyncFrequency
Determines how often Zabbix will synchronize the configuration of a service manager.
SNMPTrapperFile The temporary file used for passing data from the SNMP trap daemon to the server.
SocketDir The directory to store the IPC sockets used by internal Zabbix services.
SourceIP The source IP address.
SSHKeyLocation The location of public and private keys for SSH checks and actions.
SSLCertLocation The location of SSL client certificate files for client authentication.
SSLKeyLocation The location of SSL private key files for client authentication.
SSLCALocation Override the location of certificate authority (CA) files for SSL server certificate verification.
StartAgentPollers The number of pre-forked instances of asynchronous Zabbix agent pollers.
StartAlerters The number of pre-forked instances of alerters.
StartBrowserPollers The number of pre-forked instances of browser item pollers.
StartConnectors The number of pre-forked instances of connector workers.
StartDBSyncers The number of pre-forked instances of history syncers.
StartDiscoverers The number of pre-forked instances of discovery workers.
StartEscalators The number of pre-forked instances of escalators.
StartHistoryPollers The number of pre-forked instances of history pollers.
StartHTTPAgentPollers The number of pre-forked instances of asynchronous HTTP agent pollers.
StartHTTPPollers The number of pre-forked instances of HTTP pollers.
StartIPMIPollers The number of pre-forked instances of IPMI pollers.
StartJavaPollers The number of pre-forked instances of Java pollers.
StartLLDProcessors The number of pre-forked instances of low-level discovery (LLD) workers.
StartODBCPollers The number of pre-forked instances of ODBC pollers.
StartPingers The number of pre-forked instances of ICMP pingers.
StartPollersUnreachable The number of pre-forked instances of pollers for unreachable hosts (including IPMI and Java).
StartPollers The number of pre-forked instances of pollers.
StartPreprocessors The number of pre-started instances of preprocessing workers.
StartProxyPollers The number of pre-forked instances of pollers for passive proxies.
StartReportWriters The number of pre-forked instances of report writers.
StartSNMPPollers The number of pre-forked instances of asynchronous SNMP pollers.
StartSNMPTrapper If set to 1, an SNMP trapper process will be started.
StartTimers The number of pre-forked instances of timers.
StartTrappers The number of pre-forked instances of trappers.
StartVMwareCollectors The number of pre-forked VMware collector instances.
StatsAllowedIP A list of comma-delimited IP addresses, optionally in CIDR notation, or DNS names of external
Zabbix instances. The stats request will be accepted only from the addresses listed here.
Timeout Specifies how long to wait for connection to proxy, agent, Zabbix web service, or SNMP checks
(except SNMP walk[OID] and get[OID] items), in seconds.
TLSCAFile The full pathname of a file containing the top-level CA(s) certificates for peer certificate
verification, used for encrypted communications between Zabbix components.
TLSCertFile The full pathname of a file containing the server certificate or certificate chain, used for
encrypted communications between Zabbix components.
TLSCipherAll The GnuTLS priority string or OpenSSL (TLS 1.2) cipher string. Override the default ciphersuite
selection criteria for certificate- and PSK-based encryption.
TLSCipherAll13 The cipher string for OpenSSL 1.1.1 or newer in TLS 1.3. Override the default ciphersuite
selection criteria for certificate- and PSK-based encryption.
TLSCipherCert The GnuTLS priority string or OpenSSL (TLS 1.2) cipher string. Override the default ciphersuite
selection criteria for certificate-based encryption.
TLSCipherCert13 The cipher string for OpenSSL 1.1.1 or newer in TLS 1.3. Override the default ciphersuite
selection criteria for certificate-based encryption.
TLSCipherPSK The GnuTLS priority string or OpenSSL (TLS 1.2) cipher string. Override the default ciphersuite
selection criteria for PSK-based encryption.
TLSCipherPSK13 The cipher string for OpenSSL 1.1.1 or newer in TLS 1.3. Override the default ciphersuite
selection criteria for PSK-based encryption.
TLSCRLFile The full pathname of a file containing revoked certificates. This parameter is used for encrypted
communications between Zabbix components.
TLSKeyFile The full pathname of a file containing the server private key, used for encrypted communications
between Zabbix components.
TmpDir The temporary directory.
TrapperTimeout Specifies how many seconds the trapper may spend processing new data.
TrendCacheSize The size of the trend cache.
All parameters are non-mandatory unless explicitly stated that the parameter is mandatory.
Note that:
• The default values reflect daemon defaults, not the values in the shipped configuration files;
• Zabbix supports configuration files only in UTF-8 encoding without BOM;
• Comments starting with ”#” are only supported in the beginning of the line.
Parameter details
AlertScriptsPath
The location of custom alert scripts (depends on the datadir compile-time installation variable).
Default: /usr/local/share/zabbix/alertscripts
AllowRoot
Allow the server to run as ’root’. If disabled and the server is started by ’root’, the server will try to switch to the ’zabbix’ user
instead. Has no effect if started under a regular user.
AllowSoftwareUpdateCheck
AllowUnsupportedDBVersions
CacheSize
The size of the configuration cache, in bytes. The shared memory size for storing host, item and trigger data.
CacheUpdateFrequency
This parameter determines how often Zabbix will perform the configuration cache update in seconds. See also runtime control
options.
DBHost
The database host name.
With MySQL localhost or empty string results in using a socket. With PostgreSQL only empty string results in attempt to use socket. With Oracle empty string results in using the Net Service Name connection method; in this case consider using the TNS_ADMIN environment variable to specify the directory of the tnsnames.ora file.
Default: localhost
DBName
The database name.
With Oracle, if the Net Service Name connection method is used, specify the service name from tnsnames.ora or set to empty string; set the TWO_TASK environment variable if DBName is set to empty string.
Mandatory: Yes
DBPassword
DBPort
The database port when not using local socket.
With Oracle, if the Net Service Name connection method is used, this parameter will be ignored; the port number from the tnsnames.ora file will be used instead.
Range: 1024-65535
DBSchema
DBSocket
DBUser
DBTLSConnect
Setting this option to one of the following values enforces a TLS connection to the database:
required - connect using TLS;
verify_ca - connect using TLS and verify certificate;
verify_full - connect using TLS, verify certificate and verify that database identity specified by DBHost matches its certificate.
With MySQL, starting from 5.7.11, and PostgreSQL, the following values are supported: required, verify_ca, verify_full. With MariaDB, starting from version 10.2.6, the required and verify_full values are supported.
By default not set to any option and the behavior depends on database configuration.
DBTLSCAFile
The full pathname of a file containing the top-level CA(s) certificates for database certificate verification.
DBTLSCertFile
The full pathname of a file containing the Zabbix server certificate for authenticating to database.
DBTLSKeyFile
The full pathname of a file containing the private key for authenticating to database.
DBTLSCipher
The list of encryption ciphers that Zabbix server permits for TLS protocols up through TLS v1.2. Supported only for MySQL.
DBTLSCipher13
The list of encryption ciphersuites that Zabbix server permits for the TLS v1.3 protocol. Supported only for MySQL, starting from
version 8.0.16.
DebugLevel
Specify the debug level:
0 - basic information about starting and stopping of Zabbix processes;
1 - critical information;
2 - error information;
3 - warnings;
4 - for debugging (produces lots of information);
5 - extended debugging (produces even more information).
See also runtime control options.
EnableGlobalScripts
Default: 0; Values: 0 - disable, 1 - enable
ExportDir
The directory for real-time export of events, history and trends in newline-delimited JSON format. If set, enables the real-time
export.
ExportFileSize
The maximum size per export file in bytes. Used for rotation if ExportDir is set.
Default: 1G; Range: 1M-1G
ExportType
The list of comma-delimited entity types (events, history, trends) for real-time export (all types by default). Valid only if ExportDir is set.
Note that if ExportType is specified, but ExportDir is not, then this is a configuration error and the server will not start.
Example for history and trends export only:
ExportType=history,trends
Example for event export only:
ExportType=events
ExternalScripts
The location of external scripts (depends on the datadir compile-time installation variable).
Default: /usr/local/share/zabbix/externalscripts
Fping6Location
The location of fping6. Make sure that the fping6 binary has root ownership and the SUID flag set. Make it empty (”Fping6Location=”) if your fping utility is capable of processing IPv6 addresses.
Default: /usr/sbin/fping6
FpingLocation
The location of fping. Make sure that the fping binary has root ownership and the SUID flag set.
Default: /usr/sbin/fping
HANodeName
The high availability cluster node name. When empty the server is working in standalone mode and a node with empty name is
created.
HistoryCacheSize
The size of the history cache, in bytes. The shared memory size for storing history data.
HistoryIndexCacheSize
The size of the history index cache, in bytes. The shared memory size for indexing the history data stored in history cache. The
index cache size needs roughly 100 bytes to cache one item.
HistoryStorageDateIndex
Enable preprocessing of history values in history storage to store values in different indices based on date.
HistoryStorageURL
The history storage HTTP[S] URL. This parameter is used for Elasticsearch setup.
HistoryStorageTypes
A comma-separated list of value types to be sent to the history storage. This parameter is used for Elasticsearch setup.
Default: uint,dbl,str,log,text
HousekeepingFrequency
This parameter determines how often Zabbix will perform the housekeeping procedure, in hours. Housekeeping is removing outdated information from the database.
Note: To prevent housekeeper from being overloaded (for example, when history and trend periods are greatly reduced), no more than 4 times HousekeepingFrequency hours of outdated information are deleted in one housekeeping cycle, for each item. Thus, if HousekeepingFrequency is 1, no more than 4 hours of outdated information (starting from the oldest entry) will be deleted per cycle.
Note: To lower load on server startup, housekeeping is postponed for 30 minutes after server start. Thus, if HousekeepingFrequency is 1, the very first housekeeping procedure after server start will run after 30 minutes, and will repeat with one hour delay thereafter.
It is possible to disable automatic housekeeping by setting HousekeepingFrequency to 0. In this case the housekeeping procedure can only be started by the housekeeper_execute runtime control option and the period of outdated information deleted in one housekeeping cycle is 4 times the period since the last housekeeping cycle, but not less than 4 hours and not greater than 4 days.
See also runtime control options.
Include
You may include individual files or all files in a directory in the configuration file. To only include relevant files in the specified
directory, the asterisk wildcard character is supported for pattern matching. See special notes about limitations.
Example:
Include=/absolute/path/to/config/files/*.conf
JavaGateway
The IP address (or hostname) of Zabbix Java gateway. Only required if Java pollers are started.
JavaGatewayPort
ListenBacklog
The maximum number of pending connections in the TCP queue.
The default value is a hard-coded constant, which depends on the system.
The maximum supported value depends on the system too; too high values may be silently truncated to the 'implementation-specified maximum'.
ListenIP
A list of comma-delimited IP addresses that the trapper should listen on.
Trapper will listen on all network interfaces if this parameter is missing.
Default: 0.0.0.0
ListenPort
LoadModule
The module to load at server startup. Modules are used to extend the functionality of the server. The module must be located in the directory specified by LoadModulePath or the path must precede the module name. If the preceding path is absolute (starts with '/') then LoadModulePath is ignored.
Formats:
LoadModule=<module.so>
LoadModule=<path/module.so>
LoadModule=</abs_path/module.so>
It is allowed to include multiple LoadModule parameters.
LoadModulePath
The full path to the location of server modules. The default depends on compilation options.
LogFile
LogFileSize
The maximum size of the log file in MB.
0 - disable automatic log rotation.
Note: If the log file size limit is reached and file rotation fails, for whatever reason, the existing log file is truncated and started anew.
Default: 1; Range: 0-1024; Mandatory: Yes, if LogType is set to file; otherwise no
LogSlowQueries
Determines how long a database query may take before being logged, in milliseconds.
0 - don't log slow queries.
This option becomes enabled starting with DebugLevel=3.
LogType
The type of the log output:
file - write log to file specified by LogFile parameter;
system - write log to syslog;
console - write log to standard output.
Default: file
MaxConcurrentChecksPerPoller
The maximum number of asynchronous checks that can be executed at once by each HTTP agent poller, agent poller or SNMP
poller. See StartHTTPAgentPollers, StartAgentPollers, and StartSNMPPollers.
MaxHousekeeperDelete
No more than 'MaxHousekeeperDelete' rows (corresponding to [tablename], [field], [value]) will be deleted per one task in one housekeeping cycle.
If set to 0 then no limit is used at all. In this case you must know what you are doing, so as not to overload the database!²
This parameter applies only to deleting history and trends of already deleted items.
NodeAddress
IP or hostname with optional port to override how the frontend should connect to the server.
Format: <address>[:<port>]
If IP or hostname is not set, the value of ListenIP will be used. If ListenIP is not set, the value localhost will be used.
If port is not set, the value of ListenPort will be used. If ListenPort is not set, the value 10051 will be used.
This option can be overridden by the address specified in the frontend configuration.
See also: HANodeName parameter; Enabling high availability.
Default: 'localhost:10051'
PidFile
Default: /tmp/zabbix_server.pid
ProblemHousekeepingFrequency
Determines how often Zabbix will delete problems for deleted triggers in seconds.
ProxyConfigFrequency
Determines how often Zabbix server sends configuration data to a Zabbix proxy in seconds. Used only for proxies in a passive
mode.
ProxyDataFrequency
Determines how often Zabbix server requests history data from a Zabbix proxy in seconds. Used only for proxies in the passive
mode.
ServiceManagerSyncFrequency
Determines how often Zabbix will synchronize the configuration of a service manager in seconds.
SNMPTrapperFile
Temporary file used for passing data from the SNMP trap daemon to the server.<br>Must be the same as in zabbix_trap_receiver.pl
or SNMPTT configuration file.
Default: /tmp/zabbix_traps.tmp
SocketDir
Default: /tmp
SourceIP
Source IP address for:
- outgoing connections to Zabbix proxy and Zabbix agent;
- agentless connections (VMware, SSH, JMX, SNMP, Telnet and simple checks);
- HTTP agent connections;
- script item JavaScript HTTP requests;
- preprocessing JavaScript HTTP requests;
- sending notification emails (connections to SMTP server);
- webhook notifications (JavaScript HTTP connections);
- connections to the Vault
SSHKeyLocation
Location of public and private keys for SSH checks and actions.
SSLCertLocation
Location of SSL client certificate files for client authentication.
This parameter is used in web monitoring only.
SSLKeyLocation
Location of SSL private key files for client authentication.
This parameter is used in web monitoring only.
SSLCALocation
Override the location of certificate authority (CA) files for SSL server certificate verification. If not set, system-wide directory will be used.
Note that the value of this parameter will be set as libcurl option CURLOPT_CAPATH. For libcurl versions before 7.42.0, this only has effect if libcurl was compiled to use OpenSSL. For more information see the cURL web page.
This parameter is used in web monitoring and in SMTP authentication.
StartAgentPollers
StartAlerters
StartBrowserPollers
StartConnectors
The number of pre-forked instances of connector workers. The connector manager process is started automatically when a connector worker is started.
StartDBSyncers
The number of pre-forked instances of history syncers.
Note: Be careful when changing this value, increasing it may do more harm than good. Roughly, the default value should be enough to handle up to 4000 NVPS.
StartDiscoverers
StartEscalators
StartHistoryPollers
The number of pre-forked instances of history pollers.
Only required for calculated checks.
StartHTTPAgentPollers
Default: 1; Range: 0-1000
StartHTTPPollers
The number of pre-forked instances of HTTP pollers¹.
StartIPMIPollers
StartJavaPollers
The number of pre-forked instances of Java pollers¹.
StartLLDProcessors
The number of pre-forked instances of low-level discovery (LLD) workers¹.
The LLD manager process is automatically started when an LLD worker is started.
StartODBCPollers
The number of pre-forked instances of ODBC pollers¹.
StartPingers
The number of pre-forked instances of ICMP pingers¹.
StartPollersUnreachable
The number of pre-forked instances of pollers for unreachable hosts (including IPMI and Java)¹.
At least one poller for unreachable hosts must be running if regular, IPMI or Java pollers are started.
StartPollers
The number of pre-forked instances of pollers¹.
StartPreprocessors
The number of pre-started instances of preprocessing workers¹.
StartProxyPollers
The number of pre-forked instances of pollers for passive proxies¹.
StartReportWriters
The number of pre-forked instances of report writers.
If set to 0, scheduled report generation is disabled.
The report manager process is automatically started when a report writer is started.
StartSNMPPollers
StartSNMPTrapper
StartTimers
StartTrappers
The number of pre-forked instances of trappers¹.
Trappers accept incoming connections from Zabbix sender, active agents and active proxies.
StartVMwareCollectors
StatsAllowedIP
A list of comma-delimited IP addresses, optionally in CIDR notation, or DNS names of external Zabbix instances. Stats request will be accepted only from the addresses listed here. If this parameter is not set no stats requests will be accepted.
If IPv6 support is enabled then '127.0.0.1', '::127.0.0.1', '::ffff:127.0.0.1' are treated equally and '::/0' will allow any IPv4 or IPv6 address. '0.0.0.0/0' can be used to allow any IPv4 address.
Example:
StatsAllowedIP=127.0.0.1,192.168.1.0/24,::1,2001:db8::/32,zabbix.example.com
Timeout
Specifies how long to wait for connection to proxy, agent, Zabbix web service, or SNMP checks (except SNMP walk[OID] and
get[OID] items), in seconds.
Default: 3; Range: 1-30
TLSCAFile
The full pathname of a file containing the top-level CA(s) certificates for peer certificate verification, used for encrypted communi-
cations between Zabbix components.
TLSCertFile
The full pathname of a file containing the server certificate or certificate chain, used for encrypted communications between Zabbix
components.
TLSCipherAll
The GnuTLS priority string or OpenSSL (TLS 1.2) cipher string. Override the default ciphersuite selection criteria for certificate- and
PSK-based encryption.
Example for GnuTLS:
NONE:+VERS-TLS1.2:+ECDHE-RSA:+RSA:+ECDHE-PSK:+PSK:+AES-128-GCM:+AES-128-CBC:+AEAD:+SHA256:+SHA1:+CURVE-ALL
Example for OpenSSL:
EECDH+aRSA+AES128:RSA+aRSA+AES128:kECDHEPSK+AES128:kPSK+AES128
TLSCipherAll13
The cipher string for OpenSSL 1.1.1 or newer in TLS 1.3. Override the default ciphersuite selection criteria for certificate- and PSK-based encryption.
Example:
TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256
TLSCipherCert
The GnuTLS priority string or OpenSSL (TLS 1.2) cipher string. Override the default ciphersuite selection criteria for certificate-based
encryption.
Example for GnuTLS:
NONE:+VERS-TLS1.2:+ECDHE-RSA:+RSA:+AES-128-GCM:+AES-128-CBC:+AEAD:+SHA256:+SHA1:+CURVE-ALL:+COMP-NULL:+SIG
Example for OpenSSL:
EECDH+aRSA+AES128:RSA+aRSA+AES128
TLSCipherCert13
The cipher string for OpenSSL 1.1.1 or newer in TLS 1.3. Override the default ciphersuite selection criteria for certificate-based
encryption.
TLSCipherPSK
The GnuTLS priority string or OpenSSL (TLS 1.2) cipher string. Override the default ciphersuite selection criteria for PSK-based
encryption.
Example for GnuTLS:
NONE:+VERS-TLS1.2:+ECDHE-PSK:+PSK:+AES-128-GCM:+AES-128-CBC:+AEAD:+SHA256:+SHA1:+CURVE-ALL:+COMP-NULL:+SIG
Example for OpenSSL:
kECDHEPSK+AES128:kPSK+AES128
TLSCipherPSK13
The cipher string for OpenSSL 1.1.1 or newer in TLS 1.3. Override the default ciphersuite selection criteria for PSK-based encryption.
Example:
TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256
TLSCRLFile
The full pathname of a file containing revoked certificates. This parameter is used for encrypted communications between Zabbix
components.
TLSKeyFile
The full pathname of a file containing the server private key, used for encrypted communications between Zabbix components.
TmpDir
Default: /tmp
TrapperTimeout
Specifies how many seconds the trapper may spend processing new data.
TrendCacheSize
The size of the trend cache, in bytes.
The shared memory size for storing trends data.
TrendFunctionCacheSize
The size of the trend function cache, in bytes.
The shared memory size for caching calculated trend function data.
UnavailableDelay
Determines how often host is checked for availability during the unavailability period in seconds.
UnreachableDelay
Determines how often host is checked for availability during the unreachability period in seconds.
UnreachablePeriod
User
Drop privileges to a specific, existing user on the system.
Only has effect if run as 'root' and AllowRoot is disabled.
Default: zabbix
ValueCacheSize
The size of the history value cache, in bytes.
The shared memory size for caching item history data requests.
Setting to 0 disables the value cache (not recommended).
When the value cache runs out of the shared memory a warning message is written to the server log every 5 minutes.
Vault
Specifies the vault provider:
HashiCorp - HashiCorp KV Secrets Engine version 2;
CyberArk - CyberArk Central Credential Provider.
Must match the vault provider set in the frontend.
Default: HashiCorp
VaultDBPath
Specifies a location, from where database credentials should be retrieved by keys. Depending on the vault, can be vault path or query.
The keys used for HashiCorp are 'password' and 'username'.
Example:
secret/zabbix/database
The keys used for CyberArk are ’Content’ and ’UserName’.
Example:
AppID=zabbix_server&Query=Safe=passwordSafe;Object=zabbix_proxy_database
This option can only be used if DBUser and DBPassword are not specified.
VaultPrefix
A custom prefix for the vault path or query, depending on the vault; if not specified, the most suitable defaults will be used.
VaultTLSCertFile
The name of the SSL certificate file used for client authentication. The certificate file must be in PEM format. If the certificate file also contains the private key, leave the SSL key file field empty. The directory containing this file is specified by the configuration parameter SSLCertLocation.
This option can be omitted but is recommended for CyberArkCCP vault.
VaultTLSKeyFile
The name of the SSL private key file used for client authentication. <br> The private key file must be in PEM1 format. <br> The
directory containing this file is specified by the configuration parameter SSLKeyLocation. <br>This option can be omitted but is
recommended for CyberArkCCP vault.
VaultToken
The HashiCorp Vault authentication token that should have been generated exclusively for Zabbix server with read-only permission
to the paths specified in Vault macros and read-only permission to the path specified in the optional VaultDBPath configuration
parameter.<br>It is an error if VaultToken and VAULT_TOKEN environment variable are defined at the same time.
VaultURL
The vault server HTTP[S] URL. The system-wide CA certificates directory will be used if SSLCALocation is not specified.
Default: https://127.0.0.1:8200
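Putting the vault-related parameters together, a minimal HashiCorp sketch could look as follows (the secret path matches the example above, the token value is a placeholder; DBUser and DBPassword must not be set when credentials come from the vault):
Vault=HashiCorp
VaultURL=https://127.0.0.1:8200
VaultDBPath=secret/zabbix/database
VaultToken=<read-only token generated for Zabbix server>
Alternatively, the token may be supplied via the VAULT_TOKEN environment variable instead of VaultToken, but not both at the same time.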
VMwareCacheSize
The shared memory size for storing VMware data.<br>A VMware internal check zabbix[vmware,buffer,...] can be used to monitor
the VMware cache usage (see Internal checks).<br>Note that shared memory is not allocated if there are no vmware collector
instances configured to start.
VMwareFrequency
The delay in seconds between data gathering from a single VMware service.<br>This delay should be set to the least update
interval of any VMware monitoring item.
VMwarePerfFrequency
The delay in seconds between performance counter statistics retrieval from a single VMware service. This delay should be set to
the least update interval of any VMware monitoring item that uses VMware performance counters.
VMwareTimeout
The maximum number of seconds a vmware collector will wait for a response from VMware service (vCenter or ESX hypervisor).
WebServiceURL
Example:
WebServiceURL=http://localhost:10053/report
WebDriverURL
Example:
WebDriverURL=http://localhost:4444
Footnotes
1
Note that too many data gathering processes (pollers, unreachable pollers, ODBC pollers, HTTP pollers, Java pollers, pingers,
trappers, proxypollers) together with IPMI manager, SNMP trapper and preprocessing workers can exhaust the per-process file
descriptor limit for the preprocessing manager.
Warning:
This will cause Zabbix server to stop (usually shortly after the start, but sometimes it can take more time). The configuration
file should be revised or the limit should be raised to avoid this situation.
2
When a lot of items are deleted it increases the load to the database, because the housekeeper will need to remove all the
history data that these items had. For example, if we only have to remove 1 item prototype, but this prototype is linked to 50 hosts
and for every host the prototype is expanded to 100 real items, 5000 items in total have to be removed (1*50*100). If 500 is set
for MaxHousekeeperDelete (MaxHousekeeperDelete=500), the housekeeper process will have to remove up to 2500000 values
(5000*500) for the deleted items from history and trends tables in one cycle.
2 Zabbix proxy
Overview
The parameters supported by the Zabbix proxy configuration file (zabbix_proxy.conf) are listed in this section.
The parameters are listed without additional information. Click on the parameter to see the full details.
Parameter Description
DBTLSCertFile The full pathname of a file containing the Zabbix proxy certificate for authenticating to database.
DBTLSKeyFile The full pathname of a file containing the private key for authenticating to database.
DBTLSCipher The list of encryption ciphers that Zabbix proxy permits for TLS protocols up through TLS v1.2.
Supported only for MySQL.
DBTLSCipher13 The list of encryption ciphersuites that Zabbix proxy permits for the TLS v1.3 protocol. Supported
only for MySQL, starting from version 8.0.16.
DebugLevel The debug level.
EnableRemoteCommands Whether remote commands from Zabbix server are allowed.
ExternalScripts The location of external scripts.
Fping6Location The location of fping6.
FpingLocation The location of fping.
HistoryCacheSize The size of the history cache.
HistoryIndexCacheSize The size of the history index cache.
Hostname A unique, case sensitive proxy name.
HostnameItem The item used for setting Hostname if it is undefined.
HousekeepingFrequency How often Zabbix will perform the housekeeping procedure in hours.
Include You may include individual files or all files in a directory in the configuration file.
JavaGateway The IP address (or hostname) of Zabbix Java gateway.
JavaGatewayPort The port that Zabbix Java gateway listens on.
ListenBacklog The maximum number of pending connections in the TCP queue.
ListenIP A list of comma-delimited IP addresses that the trapper should listen on.
ListenPort The listen port for trapper.
LoadModule The module to load at proxy startup.
LoadModulePath The full path to the location of proxy modules.
LogFile The name of the log file.
LogFileSize The maximum size of the log file.
LogRemoteCommands Enable logging of executed shell commands as warnings.
LogSlowQueries How long a database query may take before being logged.
LogType The type of the log output.
MaxConcurrentChecksPerPoller
The maximum number of asynchronous checks that can be executed at once by each HTTP
agent poller, agent poller or SNMP poller.
PidFile The name of the PID file.
ProxyBufferMode Specifies history, discovery and autoregistration data storage mechanism (disk/memory/hybrid).
ProxyConfigFrequency How often the proxy retrieves configuration data from Zabbix server in seconds.
ProxyLocalBuffer The proxy will keep data locally for N hours, even if the data have already been synced with the
server.
ProxyMemoryBufferAge The maximum age of data in proxy memory buffer in seconds.
ProxyMemoryBufferSize The size of shared memory cache for collected history, discovery and auto registration data.
ProxyMode The proxy operating mode (active/passive).
ProxyOfflineBuffer The proxy will keep data for N hours in case of no connectivity with Zabbix server.
Server If ProxyMode is set to active mode: Zabbix server IP address or DNS name (address:port) or
cluster (address:port;address2:port) to get configuration data from and send data to.
If ProxyMode is set to passive mode: List of comma delimited IP addresses, optionally in CIDR
notation, or DNS names of Zabbix server.
SNMPTrapperFile The temporary file used for passing data from the SNMP trap daemon to the proxy.
SocketDir The directory to store the IPC sockets used by internal Zabbix services.
SourceIP The source IP address.
SSHKeyLocation The location of public and private keys for SSH checks and actions.
SSLCertLocation The location of SSL client certificate files for client authentication.
SSLKeyLocation The location of SSL private key files for client authentication.
SSLCALocation Override the location of certificate authority (CA) files for SSL server certificate verification.
StartAgentPollers The number of pre-forked instances of asynchronous Zabbix agent pollers.
StartBrowserPollers The number of pre-forked instances of browser item pollers.
StartDBSyncers The number of pre-forked instances of history syncers.
StartDiscoverers The number of pre-forked instances of discovery workers.
StartHTTPAgentPollers The number of pre-forked instances of asynchronous HTTP agent pollers.
StartHTTPPollers The number of pre-forked instances of HTTP pollers.
StartIPMIPollers The number of pre-forked instances of IPMI pollers.
StartJavaPollers The number of pre-forked instances of Java pollers.
StartODBCPollers The number of pre-forked instances of ODBC pollers.
StartPingers The number of pre-forked instances of ICMP pingers.
StartPollersUnreachable The number of pre-forked instances of pollers for unreachable hosts (including IPMI and Java).
StartPollers The number of pre-forked instances of pollers.
StartPreprocessors The number of pre-started instances of preprocessing workers.
StartSNMPPollers The number of pre-forked instances of asynchronous SNMP pollers.
StartSNMPTrapper If set to 1, an SNMP trapper process will be started.
StartTrappers The number of pre-forked instances of trappers.
StartVMwareCollectors The number of pre-forked VMware collector instances.
StatsAllowedIP A list of comma-delimited IP addresses, optionally in CIDR notation, or DNS names of external
Zabbix instances. The stats request will be accepted only from the addresses listed here.
Timeout Specifies how long to wait for connection to server, agent, Zabbix web service, or SNMP checks
(except SNMP walk[OID] and get[OID] items), in seconds.
TLSAccept What incoming connections to accept from Zabbix server.
TLSCAFile The full pathname of a file containing the top-level CA(s) certificates for peer certificate
verification, used for encrypted communications between Zabbix components.
TLSCertFile The full pathname of a file containing the server certificate or certificate chain, used for
encrypted communications between Zabbix components.
TLSCipherAll The GnuTLS priority string or OpenSSL (TLS 1.2) cipher string. Override the default ciphersuite
selection criteria for certificate- and PSK-based encryption.
TLSCipherAll13 The cipher string for OpenSSL 1.1.1 or newer in TLS 1.3. Override the default ciphersuite
selection criteria for certificate- and PSK-based encryption.
TLSCipherCert The GnuTLS priority string or OpenSSL (TLS 1.2) cipher string. Override the default ciphersuite
selection criteria for certificate-based encryption.
TLSCipherCert13 The cipher string for OpenSSL 1.1.1 or newer in TLS 1.3. Override the default ciphersuite
selection criteria for certificate-based encryption.
TLSCipherPSK The GnuTLS priority string or OpenSSL (TLS 1.2) cipher string. Override the default ciphersuite
selection criteria for PSK-based encryption.
TLSCipherPSK13 The cipher string for OpenSSL 1.1.1 or newer in TLS 1.3. Override the default ciphersuite
selection criteria for PSK-based encryption.
TLSConnect How the proxy should connect to Zabbix server.
TLSCRLFile The full pathname of a file containing revoked certificates. This parameter is used for encrypted
communications between Zabbix components.
TLSKeyFile The full pathname of a file containing the proxy private key, used for encrypted communications
between Zabbix components.
TLSPSKFile The full pathname of a file containing the proxy pre-shared key, used for encrypted
communications with Zabbix server.
TLSPSKIdentity The pre-shared key identity string, used for encrypted communications with Zabbix server.
TLSServerCertIssuer The allowed server certificate issuer.
TLSServerCertSubject The allowed server certificate subject.
TmpDir The temporary directory.
TrapperTimeout How many seconds the trapper may spend processing new data.
UnavailableDelay How often a host is checked for availability during the unavailability period.
UnreachableDelay How often a host is checked for availability during the unreachability period.
UnreachablePeriod After how many seconds of unreachability treat the host as unavailable.
User Drop privileges to a specific, existing user on the system.
Vault The vault provider.
VaultDBPath The location, from where database credentials should be retrieved by keys.
VaultPrefix Custom prefix for the vault path or query.
VaultTLSCertFile The name of the SSL certificate file used for client authentication.
VaultTLSKeyFile The name of the SSL private key file used for client authentication.
VaultToken The HashiCorp vault authentication token.
VaultURL The vault server HTTP[S] URL.
VMwareCacheSize The shared memory size for storing VMware data.
VMwareFrequency The delay in seconds between data gathering from a single VMware service.
VMwarePerfFrequency The delay in seconds between performance counter statistics retrieval from a single VMware
service.
VMwareTimeout The maximum number of seconds a vmware collector will wait for a response from VMware
service.
WebDriverURL WebDriver interface HTTP[S] URL.
All parameters are non-mandatory unless explicitly stated that the parameter is mandatory.
Note that:
• The default values reflect daemon defaults, not the values in the shipped configuration files;
• Zabbix supports configuration files only in UTF-8 encoding without BOM;
• Comments starting with ”#” are only supported in the beginning of the line.
Parameter details
AllowRoot
Allow the proxy to run as ’root’. If disabled and the proxy is started by ’root’, the proxy will try to switch to the ’zabbix’ user instead.
Has no effect if started under a regular user.
AllowUnsupportedDBVersions
CacheSize
The size of the configuration cache, in bytes. The shared memory size for storing host and item data.
ConfigFrequency
This parameter is deprecated (use ProxyConfigFrequency instead).<br>How often the proxy retrieves configuration data from
Zabbix server in seconds.<br>Active proxy parameter. Ignored for passive proxies (see ProxyMode parameter).
DataSenderFrequency
The proxy will send collected data to the server every N seconds. Note that an active proxy will still poll Zabbix server every second
for remote command tasks.<br>Active proxy parameter. Ignored for passive proxies (see ProxyMode parameter).
DBHost
The database host name.<br>With MySQL localhost or empty string results in using a socket. With PostgreSQL only empty
string results in attempt to use socket. With Oracle empty string results in using the Net Service Name connection method; in this
case consider using the TNS_ADMIN environment variable to specify the directory of the tnsnames.ora file.
Default: localhost
DBName
The database name or path to the database file for SQLite3 (the multi-process architecture of Zabbix does not allow using an in-memory database, e.g. :memory:, file::memory:?cache=shared or file:memdb1?mode=memory&cache=shared).<br>Warning:
Do not attempt to use the same database the Zabbix server is using.<br>With Oracle, if the Net Service Name connection method
is used, specify the service name from tnsnames.ora or set to empty string; set the TWO_TASK environment variable if DBName
is set to empty string.
Mandatory: Yes
DBPassword
The database password. Comment this line if no password is used. Ignored for SQLite.
DBPort
The database port when not using local socket. Ignored for SQLite.<br>With Oracle, if the Net Service Name connection method
is used, this parameter will be ignored; the port number from the tnsnames.ora file will be used instead.
Range: 1024-65535
DBSchema
DBSocket
The path to the MySQL socket file.<br>The database port when not using local socket. Ignored for SQLite.
Default: 3306
DBUser
DBTLSConnect
Setting this option enforces the use of a TLS connection to the database:<br>required - connect using TLS<br>verify_ca - connect using
TLS and verify certificate<br>verify_full - connect using TLS, verify certificate and verify that database identity specified by DBHost
matches its certificate<br>On MySQL starting from 5.7.11 and PostgreSQL the following values are supported: ”required”, ”verify”,
”verify_full”.<br>On MariaDB starting from version 10.2.6 ”required” and ”verify_full” values are supported.<br>By default not
set to any option and the behavior depends on database configuration.
DBTLSCAFile
The full pathname of a file containing the top-level CA(s) certificates for database certificate verification.
DBTLSCertFile
The full pathname of a file containing the Zabbix proxy certificate for authenticating to database.
DBTLSKeyFile
The full pathname of a file containing the private key for authenticating to the database.
DBTLSCipher
The list of encryption ciphers that Zabbix proxy permits for TLS protocols up through TLS v1.2. Supported only for MySQL.
DBTLSCipher13
The list of encryption ciphersuites that Zabbix proxy permits for the TLS v1.3 protocol. Supported only for MySQL, starting from
version 8.0.16.
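As an illustration, a verified TLS connection to a MySQL database could be enforced with something like the following (the certificate paths are hypothetical):
DBTLSConnect=verify_full
DBTLSCAFile=/etc/zabbix/mysql/ca.pem
DBTLSCertFile=/etc/zabbix/mysql/client-cert.pem
DBTLSKeyFile=/etc/zabbix/mysql/client-key.pem
With verify_full, the database identity specified by DBHost must match the database server certificate, as described above.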
DebugLevel
Specify the debug level:<br>0 - basic information about starting and stopping of Zabbix processes<br>1 - critical informa-
tion;<br>2 - error information;<br>3 - warnings;<br>4 - for debugging (produces lots of information);<br>5 - extended debugging
(produces even more information).
EnableRemoteCommands
ExternalScripts
The location of external scripts (depends on the datadir compile-time installation variable).
Default: /usr/local/share/zabbix/externalscripts
Fping6Location
The location of fping6. Make sure that the fping6 binary has root ownership and the SUID flag set. Make it empty (”Fping6Location=”) if your fping utility is capable of processing IPv6 addresses.
Default: /usr/sbin/fping6
FpingLocation
The location of fping. Make sure that the fping binary has root ownership and the SUID flag set.
Default: /usr/sbin/fping
HistoryCacheSize
The size of the history cache, in bytes. The shared memory size for storing history data.
HistoryIndexCacheSize
The size of the history index cache, in bytes. The shared memory size for indexing the history data stored in history cache. The
index cache size needs roughly 100 bytes to cache one item.
Hostname
A unique, case sensitive proxy name. Make sure the proxy name is known to the server.<br>Allowed characters: alphanumeric,
’.’, ’ ’, ’_’ and ’-’. Maximum length: 128
HostnameItem
The item used for setting Hostname if it is undefined (this will be run on the proxy similarly as on an agent). Ignored if Hostname
is set.<br>Does not support UserParameters, performance counters or aliases, but does support system.run[].
Default: system.hostname
HousekeepingFrequency
How often Zabbix will perform housekeeping procedure (in hours). Housekeeping is removing outdated information from the
database.<br>Note: To lower load on proxy startup housekeeping is postponed for 30 minutes after proxy start. Thus, if House-
keepingFrequency is 1, the very first housekeeping procedure after proxy start will run after 30 minutes, and will repeat every
hour thereafter.<br>It is possible to disable automatic housekeeping by setting HousekeepingFrequency to 0. In this case the
housekeeping procedure can only be started by housekeeper_execute runtime control option.
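For example, automatic housekeeping could be disabled and run on demand instead (a sketch only):
HousekeepingFrequency=0
The housekeeping procedure can then be triggered manually with the housekeeper_execute runtime control option, e.g. zabbix_proxy -R housekeeper_execute.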
Include
You may include individual files or all files in a directory in the configuration file.<br>To only include relevant files in the specified
directory, the asterisk wildcard character is supported for pattern matching.<br>See special notes about limitations.
Example:
Include=/absolute/path/to/config/files/*.conf
JavaGateway
The IP address (or hostname) of Zabbix Java gateway. Only required if Java pollers are started.
JavaGatewayPort
ListenBacklog
The maximum number of pending connections in the TCP queue.<br>The default value is a hard-coded constant, which depends
on the system.<br>The maximum supported value depends on the system, too high values may be silently truncated to the
’implementation-specified maximum’.
ListenIP
A list of comma-delimited IP addresses that the trapper should listen on.<br>Trapper will listen on all network interfaces if this
parameter is missing.
Default: 0.0.0.0
ListenPort
LoadModule
The module to load at proxy startup. Modules are used to extend the functionality of the proxy. The module must be located in the
directory specified by LoadModulePath or the path must precede the module name. If the preceding path is absolute (starts with ’/’)
then LoadModulePath is ignored.<br>Formats:<br>LoadModule=<module.so><br>LoadModule=<path/module.so><br>LoadModule=</abs_path/module.so><br>It is allowed to include multiple LoadModule parameters.
LoadModulePath
The full path to the location of proxy modules. The default depends on compilation options.
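A hypothetical sketch combining the two parameters (the directory and module name are placeholders, not shipped defaults):
LoadModulePath=/usr/lib/zabbix/modules
LoadModule=dummy_module.so
Because the module name contains no path, it is looked up in LoadModulePath.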
LogFile
Mandatory: Yes, if LogType is set to file; otherwise no
LogFileSize
The maximum size of a log file in MB.<br>0 - disable automatic log rotation.<br>Note: If the log file size limit is reached and file
rotation fails, for whatever reason, the existing log file is truncated and started anew.
LogRemoteCommands
LogType
The type of the log output:<br>file - write log to the file specified by LogFile parameter;<br>system - write log to sys-
log;<br>console - write log to standard output.
Default: file
LogSlowQueries
How long a database query may take before being logged (in milliseconds).<br>0 - don’t log slow queries.<br>This option becomes
enabled starting with DebugLevel=3.
MaxConcurrentChecksPerPoller
The maximum number of asynchronous checks that can be executed at once by each HTTP agent poller, agent poller or SNMP
poller. See StartHTTPAgentPollers, StartAgentPollers, and StartSNMPPollers.
PidFile
Default: /tmp/zabbix_proxy.pid
ProxyBufferMode
Specifies history, network discovery and autoregistration data storage mechanism: disk - data are stored in database and uploaded
from database; memory - data are stored in memory and uploaded from memory. If buffer runs out of memory the old data will be
discarded. On shutdown the buffer is discarded. hybrid - the proxy buffer normally works like in the memory mode until it runs out
of memory or the oldest record exceeds the configured age. If that happens the buffer is flushed to database and it works like in
disk mode until all data have been uploaded and it starts working with memory again. On shutdown the memory buffer is flushed
to database.
ProxyConfigFrequency
How often the proxy retrieves configuration data from Zabbix server in seconds.<br>Active proxy parameter. Ignored for passive
proxies (see ProxyMode parameter).
ProxyLocalBuffer
The proxy will keep data locally for N hours, even if the data have already been synced with the server.<br>This parameter may
be used if local data will be used by third-party applications.
ProxyMemoryBufferAge
The maximum age of data in the proxy memory buffer, in seconds. When enabled (not zero) and records in the proxy memory buffer are older than this value, the proxy buffer is forced to switch to database mode until all records are uploaded to the server. This parameter must be less than or equal to the ProxyOfflineBuffer parameter.
ProxyMemoryBufferSize
The size of the shared memory cache for collected history, discovery and autoregistration data, in bytes. If enabled (not zero), the proxy will keep history, discovery and autoregistration data in memory unless the cache is full or stored records are older than the defined ProxyMemoryBufferAge. This parameter cannot be used together with the ProxyLocalBuffer parameter.
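A hybrid-buffer sketch with illustrative values only:
ProxyBufferMode=hybrid
ProxyMemoryBufferSize=64M
ProxyMemoryBufferAge=1800
With these hypothetical settings the proxy keeps up to 64 MB of collected data in memory and falls back to the database whenever the buffer fills up or a record becomes older than 1800 seconds.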
ProxyMode
The proxy operating mode.<br>0 - proxy in the active mode<br>1 - proxy in the passive mode<br>Note that (sensitive) proxy
configuration data may become available to parties having access to the Zabbix server trapper port when using an active proxy.
This is possible because anyone may pretend to be an active proxy and request configuration data; authentication does not take
place.
ProxyOfflineBuffer
The proxy will keep data for N hours in case of no connectivity with Zabbix server.<br>Older data will be lost.
Server
If ProxyMode is set to active mode:<br>Zabbix server IP address or DNS name (address:port) or cluster (address:port;address2:port)
to get configuration data from and send data to.<br>If port is not specified, the default port is used.<br>Cluster nodes must be
separated by a semicolon.<br><br>If ProxyMode is set to passive mode:<br>List of comma delimited IP addresses, optionally
in CIDR notation, or DNS names of Zabbix server. Incoming connections will be accepted only from the addresses listed here. If
IPv6 support is enabled then ’127.0.0.1’, ’::127.0.0.1’, ’::ffff:127.0.0.1’ are treated equally.<br>’::/0’ will allow any IPv4 or IPv6
address. ’0.0.0.0/0’ can be used to allow any IPv4 address.
Example:
Server=127.0.0.1,192.168.1.0/24,::1,2001:db8::/32,zabbix.example.com
Mandatory: yes
SNMPTrapperFile
A temporary file used for passing data from the SNMP trap daemon to the proxy.<br>Must be the same as in zabbix_trap_receiver.pl
or SNMPTT configuration file.
Default: /tmp/zabbix_traps.tmp
SocketDir
Default: /tmp
SourceIP
The source IP address for:<br>- outgoing connections to Zabbix server;<br>- agentless connections (VMware, SSH, JMX, SNMP, Tel-
net and simple checks);<br>- HTTP agent connections;<br>- script item JavaScript HTTP requests;<br>- preprocessing JavaScript
HTTP requests;<br>- connections to the Vault
SSHKeyLocation
The location of public and private keys for SSH checks and actions.
SSLCertLocation
The location of SSL client certificate files for client authentication.<br>This parameter is used in web monitoring only.
SSLKeyLocation
The location of SSL private key files for client authentication.<br>This parameter is used in web monitoring only.
SSLCALocation
The location of certificate authority (CA) files for SSL server certificate verification.<br>Note that the value of this parameter will
be set as the CURLOPT_CAPATH libcurl option. For libcurl versions before 7.42.0, this only has effect if libcurl was compiled to use
OpenSSL. For more information see the cURL web page.<br>This parameter is used in web monitoring and in SMTP authentication.
StartAgentPollers
StartBrowserPollers
StartDBSyncers
The number of pre-forked instances of history syncers.<br>Note: Be careful when changing this value, increasing it may do more
harm than good.
StartDiscoverers
StartHTTPAgentPollers
StartHTTPPollers
StartIPMIPollers
StartJavaPollers
StartODBCPollers
StartPingers
StartPollersUnreachable
The number of pre-forked instances of pollers for unreachable hosts (including IPMI and Java). At least one poller for unreachable
hosts must be running if regular, IPMI or Java pollers are started.
StartPollers
StartPreprocessors
StartSNMPPollers
StartSNMPTrapper
Default: 0<br> Range: 0-1
StartTrappers
The number of pre-forked instances of trappers.<br>Trappers accept incoming connections from Zabbix sender and active agents.
StartVMwareCollectors
StatsAllowedIP
A list of comma-delimited IP addresses, optionally in CIDR notation, or DNS names of external Zabbix instances. The stats request
will be accepted only from the addresses listed here. If this parameter is not set no stats requests will be accepted.<br>If IPv6
support is enabled then ’127.0.0.1’, ’::127.0.0.1’, ’::ffff:127.0.0.1’ are treated equally and ’::/0’ will allow any IPv4 or IPv6 address.
’0.0.0.0/0’ can be used to allow any IPv4 address.
Example:
StatsAllowedIP=127.0.0.1,192.168.1.0/24,::1,2001:db8::/32,zabbix.example.com
Timeout
Specifies how long to wait for connection to server, agent, Zabbix web service, or SNMP checks (except SNMP walk[OID] and
get[OID] items), in seconds.
Default: 3<br> Range: 1-30
TLSAccept
What incoming connections to accept from Zabbix server. Used for a passive proxy, ignored on an active proxy. Multiple val-
ues can be specified, separated by comma:<br>unencrypted - accept connections without encryption (default)<br>psk - accept
connections with TLS and a pre-shared key (PSK)<br>cert - accept connections with TLS and a certificate
Mandatory: yes for passive proxy, if TLS certificate or PSK parameters are defined (even for unencrypted connection); otherwise
no
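For a passive proxy that should accept only certificate-based connections from the server, a sketch could look like this (the file paths are hypothetical; the referenced parameters are described below):
TLSAccept=cert
TLSCAFile=/etc/zabbix/ssl/zabbix_ca.crt
TLSCertFile=/etc/zabbix/ssl/zabbix_proxy.crt
TLSKeyFile=/etc/zabbix/ssl/zabbix_proxy.key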
TLSCAFile
The full pathname of a file containing the top-level CA(s) certificates for peer certificate verification, used for encrypted communi-
cations between Zabbix components.
TLSCertFile
The full pathname of a file containing the proxy certificate or certificate chain, used for encrypted communications between Zabbix
components.
TLSCipherAll
The GnuTLS priority string or OpenSSL (TLS 1.2) cipher string. Override the default ciphersuite selection criteria for certificate- and
PSK-based encryption.
Example for GnuTLS:
NONE:+VERS-TLS1.2:+ECDHE-RSA:+RSA:+ECDHE-PSK:+PSK:+AES-128-GCM:+AES-128-CBC:+AEAD:+SHA256:+SHA1:+CURVE-ALL
Example for OpenSSL:
EECDH+aRSA+AES128:RSA+aRSA+AES128:kECDHEPSK+AES128:kPSK+AES128
TLSCipherAll13
The cipher string for OpenSSL 1.1.1 or newer in TLS 1.3. Override the default ciphersuite selection criteria for certificate- and
PSK-based encryption.
Example:
TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256
TLSCipherCert
The GnuTLS priority string or OpenSSL (TLS 1.2) cipher string. Override the default ciphersuite selection criteria for certificate-based
encryption.
Example for GnuTLS:
NONE:+VERS-TLS1.2:+ECDHE-RSA:+RSA:+AES-128-GCM:+AES-128-CBC:+AEAD:+SHA256:+SHA1:+CURVE-ALL:+COMP-NULL:+SIG
Example for OpenSSL:
EECDH+aRSA+AES128:RSA+aRSA+AES128
TLSCipherCert13
The cipher string for OpenSSL 1.1.1 or newer in TLS 1.3. Override the default ciphersuite selection criteria for certificate-based
encryption.
TLSCipherPSK
The GnuTLS priority string or OpenSSL (TLS 1.2) cipher string. Override the default ciphersuite selection criteria for PSK-based
encryption.
Example for GnuTLS:
NONE:+VERS-TLS1.2:+ECDHE-PSK:+PSK:+AES-128-GCM:+AES-128-CBC:+AEAD:+SHA256:+SHA1:+CURVE-ALL:+COMP-NULL:+SIG
Example for OpenSSL:
kECDHEPSK+AES128:kPSK+AES128
TLSCipherPSK13
The cipher string for OpenSSL 1.1.1 or newer in TLS 1.3. Override the default ciphersuite selection criteria for PSK-based encryption.
Example:
TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256
TLSConnect
How the proxy should connect to Zabbix server. Used for an active proxy, ignored on a passive proxy. Only one value can be
specified:<br>unencrypted - connect without encryption (default)<br>psk - connect using TLS and a pre-shared key (PSK)<br>cert
- connect using TLS and a certificate
Mandatory: yes for active proxy, if TLS certificate or PSK parameters are defined (even for unencrypted connection); otherwise no
TLSCRLFile
The full pathname of a file containing revoked certificates. This parameter is used for encrypted communications between Zabbix
components.
TLSKeyFile
The full pathname of a file containing the proxy private key, used for encrypted communications between Zabbix components.
TLSPSKFile
The full pathname of a file containing the proxy pre-shared key, used for encrypted communications with Zabbix server.
TLSPSKIdentity
The pre-shared key identity string, used for encrypted communications with Zabbix server.
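A PSK sketch for an active proxy (the identity string and file path are hypothetical):
TLSConnect=psk
TLSPSKIdentity=PSK_PROXY_01
TLSPSKFile=/etc/zabbix/zabbix_proxy.psk
The PSK file holds a hex string (it can be generated, for example, with openssl rand -hex 32); the same identity and key must also be configured for this proxy on the Zabbix server side.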
TLSServerCertIssuer
TLSServerCertSubject
TmpDir
Default: /tmp
TrapperTimeout
How many seconds the trapper may spend processing new data.
UnavailableDelay
How often a host is checked for availability during the unavailability period in seconds.
UnreachableDelay
How often a host is checked for availability during the unreachability period in seconds.
UnreachablePeriod
User
Drop privileges to a specific, existing user on the system.<br>Only has effect if run as ’root’ and AllowRoot is disabled.
Default: zabbix
Vault
The vault provider:<br>HashiCorp - HashiCorp KV Secrets Engine version 2<br>CyberArk - CyberArk Central Credential
Provider<br>Must match the vault provider set in the frontend.
Default: HashiCorp
VaultDBPath
The location from where database credentials should be retrieved by keys. Depending on the vault, can be vault path or query.<br>
The keys used for HashiCorp are ’password’ and ’username’.
Example:
secret/zabbix/database
The keys used for CyberArk are ’Content’ and ’UserName’.
Example:
AppID=zabbix_server&Query=Safe=passwordSafe;Object=zabbix_proxy_database
This option can only be used if DBUser and DBPassword are not specified.
VaultPrefix
A custom prefix for the vault path or query, depending on the vault; if not specified, the most suitable defaults will be used.
VaultTLSCertFile
The name of the SSL certificate file used for client authentication. The certificate file must be in PEM1 format.<br>If the certificate
file contains also the private key, leave the SSL key file field empty.<br>The directory containing this file is specified by the
SSLCertLocation configuration parameter.<br>This option can be omitted, but is recommended for CyberArkCCP vault.
VaultTLSKeyFile
The name of the SSL private key file used for client authentication. The private key file must be in PEM1 format.<br>The di-
rectory containing this file is specified by the SSLKeyLocation configuration parameter.<br>This option can be omitted, but is
recommended for CyberArkCCP vault.
VaultToken
The HashiCorp vault authentication token that should have been generated exclusively for Zabbix proxy with read-only permission
to the path specified in the optional VaultDBPath configuration parameter.<br>It is an error if VaultToken and the VAULT_TOKEN
environment variable are defined at the same time.
VaultURL
The vault server HTTP[S] URL. The system-wide CA certificates directory will be used if SSLCALocation is not specified.
Default: https://127.0.0.1:8200
VMwareCacheSize
The shared memory size for storing VMware data.<br>A VMware internal check zabbix[vmware,buffer,...] can be used to monitor
the VMware cache usage (see Internal checks).<br>Note that shared memory is not allocated if there are no vmware collector
instances configured to start.
VMwareFrequency
The delay in seconds between data gathering from a single VMware service.<br>This delay should be set to the least update
interval of any VMware monitoring item.
VMwarePerfFrequency
The delay in seconds between performance counter statistics retrieval from a single VMware service.<br>This delay should be set
to the least update interval of any VMware monitoring item that uses VMware performance counters.
VMwareTimeout
The maximum number of seconds a vmware collector will wait for a response from VMware service (vCenter or ESX hypervisor).
WebDriverURL
The WebDriver interface HTTP[S] URL.
Example:
WebDriverURL=http://localhost:4444
3 Zabbix agent
Overview
The parameters supported by the Zabbix agent configuration file (zabbix_agentd.conf) are listed in this section.
The parameters are listed without additional information. Click on the parameter to see the full details.
Parameter Description
All parameters are non-mandatory unless explicitly stated that the parameter is mandatory.
Note that:
• The default values reflect daemon defaults, not the values in the shipped configuration files;
• Zabbix supports configuration files only in UTF-8 encoding without BOM;
• Comments starting with ”#” are only supported in the beginning of the line.
Parameter details
Alias
Sets an alias for an item key. It can be used to substitute a long and complex item key with a shorter and simpler one.<br>
Multiple Alias parameters may be present. Multiple parameters with the same Alias key are not allowed.<br> Different Alias keys
may reference the same item key.<br> Aliases can be used in HostMetadataItem but not in HostnameItem parameter.
Alias=zabbix.userid:vfs.file.regexp[/etc/passwd,"^zabbix:.:([0-9]+)",,,,\1]
Now the zabbix.userid shorthand key may be used to retrieve data.
Alias=cpu.util:system.cpu.util
Alias=cpu.util[*]:system.cpu.util[*]
This allows using the cpu.util key to get the CPU utilization percentage with default parameters, as well as using cpu.util[all, idle, avg15] to get specific data about CPU utilization.
Example 3: Running multiple low-level discovery rules processing the same discovery items.
Alias=vfs.fs.discovery[*]:vfs.fs.discovery
Now it is possible to set up several discovery rules using vfs.fs.discovery with different parameters for each rule, e.g.,
vfs.fs.discovery[foo], vfs.fs.discovery[bar], etc.
AllowKey
Allow the execution of those item keys that match a pattern. The key pattern is a wildcard expression that supports the ”*”
character to match any number of any characters.<br>Multiple key matching rules may be defined in combination with DenyKey.
The parameters are processed one by one according to their appearance order. See also: Restricting agent checks.
AllowRoot
Allow the agent to run as ’root’. If disabled and the agent is started by ’root’, the agent will try to switch to user ’zabbix’ instead.
Has no effect if started under a regular user.
BufferSend
BufferSize
The maximum number of values in the memory buffer. The agent will send all collected data to the Zabbix server or proxy if the
buffer is full.
DebugLevel
Specify the debug level:<br>0 - basic information about starting and stopping of Zabbix processes<br>1 - critical informa-
tion;<br>2 - error information;<br>3 - warnings;<br>4 - for debugging (produces lots of information);<br>5 - extended debugging
(produces even more information).
DenyKey
Deny the execution of those item keys that match a pattern. The key pattern is a wildcard expression that supports the ”*”
character to match any number of any characters.<br>Multiple key matching rules may be defined in combination with AllowKey.
The parameters are processed one by one according to their appearance order. See also: Restricting agent checks.
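As a sketch of how the two parameters combine (the command itself is hypothetical), the following permits one specific remote command and denies all other system.run[] keys:
AllowKey=system.run[df -h]
DenyKey=system.run[*]
Because the rules are processed in order of appearance, the first matching rule wins: df -h is allowed, any other remote command is denied.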
EnableRemoteCommands
Whether remote commands from Zabbix server are allowed. This parameter is deprecated, use AllowKey=system.run[*]
or DenyKey=system.run[*] instead.<br>It is an internal alias for AllowKey/DenyKey parameters depending on value:<br>0 -
DenyKey=system.run[*]<br>1 - AllowKey=system.run[*]
HeartbeatFrequency
The frequency of heartbeat messages in seconds. Used for monitoring the availability of active checks.<br>0 - heartbeat messages
disabled.
HostInterface
An optional parameter that defines the host interface. The host interface is used at host autoregistration process. If not defined,
the value will be acquired from HostInterfaceItem.<br>The agent will issue an error and not start if the value is over the limit of
255 characters.
HostInterfaceItem
An optional parameter that defines an item used for getting the host interface.<br>Host interface is used at host autoregistration
process.<br>During an autoregistration request the agent will log a warning message if the value returned by the specified item is
over the limit of 255 characters.<br>The system.run[] item is supported regardless of AllowKey/DenyKey values.<br>This option
is only used when HostInterface is not defined.
HostMetadata
An optional parameter that defines host metadata. Host metadata is used only at host autoregistration process (active agent).
If not defined, the value will be acquired from HostMetadataItem.<br>The agent will issue an error and not start if the specified
value is over the limit of 2034 bytes or a non-UTF-8 string.
HostMetadataItem
An optional parameter that defines a Zabbix agent item used for getting host metadata. This option is only used when HostMetadata
is not defined. User parameters and aliases are supported. The system.run[] item is supported regardless of AllowKey/DenyKey
values.<br>The HostMetadataItem value is retrieved on each autoregistration attempt and is used only at host autoregistration
process (active agent).<br>During an autoregistration request the agent will log a warning message if the value returned by the
specified item is over the limit of 65535 UTF-8 code points. The value returned by the item must be a UTF-8 string otherwise it will
be ignored.
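For example, host metadata for autoregistration could be taken from a built-in agent item:
HostMetadataItem=system.uname
A static value could be set instead with HostMetadata (e.g. HostMetadata=Linux db-server, a hypothetical string); HostMetadataItem is only consulted when HostMetadata is not defined, as noted above.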
Hostname
A list of comma-delimited, unique, case-sensitive hostnames. Required for active checks and must match hostnames as configured
on the server. The value is acquired from HostnameItem if undefined.<br>Allowed characters: alphanumeric, ’.’, ’ ’, ’_’ and ’-’.
Maximum length: 128 characters per hostname, 2048 characters for the entire line.
HostnameItem
An optional parameter that defines a Zabbix agent item used for getting the host name. This option is only used when Hostname is
not defined. User parameters or aliases are not supported, but the system.run[] item is supported regardless of AllowKey/DenyKey
values.
Default: system.hostname
Include
You may include individual files or all files in a directory in the configuration file. To only include relevant files in the specified
directory, the asterisk wildcard character is supported for pattern matching.<br>See special notes about limitations.
Example:
Include=/absolute/path/to/config/files/*.conf
ListenBacklog
The maximum number of pending connections in the TCP queue.<br>The default value is a hard-coded constant, which depends
on the system.<br>The maximum supported value depends on the system, too high values may be silently truncated to the
’implementation-specified maximum’.
ListenIP
Default: 0.0.0.0
ListenPort
The agent will listen on this port for connections from the server.
LoadModule
The module to load at agent startup. Modules are used to extend the functionality of the agent. The module must be located in the
directory specified by LoadModulePath or the path must precede the module name. If the preceding path is absolute (starts with ’/’)
then LoadModulePath is ignored.<br>Formats:<br>LoadModule=<module.so><br>LoadModule=<path/module.so><br>LoadModule=</abs_path/module.so><br>It is allowed to include multiple LoadModule parameters.
LoadModulePath
The full path to the location of agent modules. The default depends on compilation options.
LogFile
LogFileSize
The maximum size of a log file in MB.<br>0 - disable automatic log rotation.<br>Note: If the log file size limit is reached and file
rotation fails, for whatever reason, the existing log file is truncated and started anew.
LogRemoteCommands
Enable logging of the executed shell commands as warnings. Commands will be logged only if executed remotely. Log entries will
not be created if system.run[] is launched locally by HostMetadataItem, HostInterfaceItem or HostnameItem parameters.
LogType
The type of the log output:<br>file - write log to the file specified by LogFile parameter;<br>system - write log to sys-
log;<br>console - write log to standard output.
Default: file
MaxLinesPerSecond
The maximum number of new lines the agent will send per second to Zabbix server or proxy when processing ’log’ and ’logrt’ active
checks. The provided value will be overridden by the ’maxlines’ parameter, provided in the ’log’ or ’logrt’ item key.<br>Note:
Zabbix will process 10 times more new lines than set in MaxLinesPerSecond to seek the required string in log items.
PidFile
Default: /tmp/zabbix_agentd.pid
RefreshActiveChecks
How often the list of active checks is refreshed, in seconds. Note that after failing to refresh active checks the next refresh will be
attempted in 60 seconds.
Server
A list of comma-delimited IP addresses, optionally in CIDR notation, or DNS names of Zabbix servers and Zabbix proxies. In-
coming connections will be accepted only from the hosts listed here. If IPv6 support is enabled then ’127.0.0.1’, ’::127.0.0.1’,
’::ffff:127.0.0.1’ are treated equally and ’::/0’ will allow any IPv4 or IPv6 address. ’0.0.0.0/0’ can be used to allow any IPv4 address.
Note that ”IPv4-compatible IPv6 addresses” (0000::/96 prefix) are supported but deprecated by RFC4291. Spaces are allowed.
Example:
Server=127.0.0.1,192.168.1.0/24,::1,2001:db8::/32,zabbix.example.com
Mandatory: yes, if StartAgents is not explicitly set to 0
ServerActive
The Zabbix server/proxy address or cluster configuration to get active checks from. The server/proxy address is an IP address or DNS
name and optional port separated by colon.<br>Cluster configuration is one or more server addresses separated by semicolon.
Multiple Zabbix servers/clusters and Zabbix proxies can be specified, separated by comma. More than one Zabbix proxy should
not be specified from each Zabbix server/cluster. If Zabbix proxy is specified then Zabbix server/cluster for that proxy should
not be specified.<br>Multiple comma-delimited addresses can be provided to use several independent Zabbix servers in parallel.
Spaces are allowed.<br>If the port is not specified, default port is used.<br>IPv6 addresses must be enclosed in square brackets
if port for that host is specified. If port is not specified, square brackets for IPv6 addresses are optional.<br>If this parameter is
not specified, active checks are disabled.
ServerActive=127.0.0.1:10051
Example for multiple servers:
ServerActive=127.0.0.1:20051,zabbix.domain,[::1]:30051,::1,[12fc::1]
Example for high availability:
ServerActive=zabbix.cluster.node1;zabbix.cluster.node2:20051;zabbix.cluster.node3
Example for high availability with two clusters and one server:
ServerActive=zabbix.cluster.node1;zabbix.cluster.node2:20051,zabbix.cluster2.node1;zabbix.cluster2.node2,zabbix.domain
SourceIP
The source IP address for:<br>- outgoing connections to Zabbix server or Zabbix proxy;<br>- making connections while executing
some items (web.page.get, net.tcp.port, etc.).
StartAgents
The number of pre-forked instances of zabbix_agentd that process passive checks. If set to 0, passive checks are disabled and the
agent will not listen on any TCP port.
Timeout
Specifies timeout for communications (in seconds).<br> This parameter is used for defining the duration of various communication
operations:<br> - awaiting a response from Zabbix server;<br> - sending requests to Zabbix server, including active checks
configuration requests and item data;<br> - retrieving log data through logfile or Windows event log monitoring;<br> - sending
heartbeat messages;<br> - also used as a fallback in scenarios where server/proxy older than version 7.0 is sending checks without
timeouts.
TLSAccept
What incoming connections to accept. Used for passive checks. Multiple values can be specified, separated by comma:<br>unencrypted
- accept connections without encryption (default)<br>psk - accept connections with TLS and a pre-shared key (PSK)<br>cert -
accept connections with TLS and a certificate
Mandatory: yes, if TLS certificate or PSK parameters are defined (even for unencrypted connection); otherwise no
TLSCAFile
The full pathname of the file containing the top-level CA(s) certificates for peer certificate verification, used for encrypted commu-
nications between Zabbix components.
TLSCertFile
The full pathname of the file containing the agent certificate or certificate chain, used for encrypted communications with Zabbix
components.
TLSCipherAll
The GnuTLS priority string or OpenSSL (TLS 1.2) cipher string. Override the default ciphersuite selection criteria for certificate- and
PSK-based encryption.
Example for GnuTLS:
NONE:+VERS-TLS1.2:+ECDHE-RSA:+RSA:+ECDHE-PSK:+PSK:+AES-128-GCM:+AES-128-CBC:+AEAD:+SHA256:+SHA1:+CURVE-ALL
Example for OpenSSL:
EECDH+aRSA+AES128:RSA+aRSA+AES128:kECDHEPSK+AES128:kPSK+AES128
TLSCipherAll13
The cipher string for OpenSSL 1.1.1 or newer in TLS 1.3. Override the default ciphersuite selection criteria for certificate- and
PSK-based encryption.
Example:
TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256
TLSCipherCert
The GnuTLS priority string or OpenSSL (TLS 1.2) cipher string. Override the default ciphersuite selection criteria for certificate-based
encryption.
Example for GnuTLS:
NONE:+VERS-TLS1.2:+ECDHE-RSA:+RSA:+AES-128-GCM:+AES-128-CBC:+AEAD:+SHA256:+SHA1:+CURVE-ALL:+COMP-NULL:+SIG
Example for OpenSSL:
EECDH+aRSA+AES128:RSA+aRSA+AES128
TLSCipherCert13
The cipher string for OpenSSL 1.1.1 or newer in TLS 1.3. Override the default ciphersuite selection criteria for certificate-based
encryption.
TLSCipherPSK
The GnuTLS priority string or OpenSSL (TLS 1.2) cipher string. Override the default ciphersuite selection criteria for PSK-based
encryption.
Example for GnuTLS:
NONE:+VERS-TLS1.2:+ECDHE-PSK:+PSK:+AES-128-GCM:+AES-128-CBC:+AEAD:+SHA256:+SHA1:+CURVE-ALL:+COMP-NULL:+SIG
Example for OpenSSL:
kECDHEPSK+AES128:kPSK+AES128
TLSCipherPSK13
The cipher string for OpenSSL 1.1.1 or newer in TLS 1.3. Override the default ciphersuite selection criteria for PSK-based encryption.
Example:
TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256
TLSConnect
How the agent should connect to Zabbix server or proxy. Used for active checks. Only one value can be specified:<br>unencrypted
- connect without encryption (default)<br>psk - connect using TLS and a pre-shared key (PSK)<br>cert - connect using TLS and a
certificate
Mandatory: yes, if TLS certificate or PSK parameters are defined (even for unencrypted connection); otherwise no
TLSCRLFile
The full pathname of the file containing revoked certificates. This parameter is used for encrypted communications between Zabbix
components.
TLSKeyFile
The full pathname of the file containing the agent private key, used for encrypted communications between Zabbix components.
TLSPSKFile
The full pathname of the file containing the agent pre-shared key, used for encrypted communications with Zabbix server.
TLSPSKIdentity
The pre-shared key identity string, used for encrypted communications with Zabbix server.
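A PSK sketch for an agent that encrypts both active and passive checks (the identity and file path are hypothetical; the same key must be configured on the server/proxy side):
TLSConnect=psk
TLSAccept=psk
TLSPSKIdentity=PSK_AGENT_01
TLSPSKFile=/etc/zabbix/zabbix_agentd.psk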
TLSServerCertIssuer
TLSServerCertSubject
UnsafeUserParameters
Allow all characters to be passed in arguments to user-defined parameters. The following characters are not allowed: \ ’ ” ‘ * ? [ ]
{ } ~ $ ! & ; ( ) < > | # @<br>Additionally, newline characters are not allowed.
User
Drop privileges to a specific, existing user on the system.<br>Only has effect if run as ’root’ and AllowRoot is disabled.
Default: zabbix
UserParameter
Example:
UserParameter=system.test,who|wc -l
UserParameter=check_cpu,./custom_script.sh
UserParameterDir
The default search path for UserParameter commands. If used, the agent will change its working directory to the one specified here
before executing a command. Thereby, UserParameter commands can have a relative ./ prefix instead of a full path.<br>Only
one entry is allowed.
Example:
UserParameterDir=/opt/myscripts
See also
1. Differences in the Zabbix agent configuration for active and passive checks starting from version 2.0.0
4 Zabbix agent 2
Overview
Zabbix agent 2 is a new generation of Zabbix agent and may be used in place of Zabbix agent.
The parameters supported by the Zabbix agent 2 configuration file (zabbix_agent2.conf) are listed in this section.
The parameters are listed without additional information. Click on the parameter to see the full details.
Parameter Description
All parameters are non-mandatory unless explicitly stated that the parameter is mandatory.
Note that:
• The default values reflect process defaults, not the values in the shipped configuration files;
• Zabbix supports configuration files only in UTF-8 encoding without BOM;
• Comments starting with ”#” are only supported in the beginning of the line.
Parameter details
Alias
Sets an alias for an item key. It can be used to substitute a long and complex item key with a shorter and simpler one.<br>
Multiple Alias parameters may be present. Multiple parameters with the same Alias key are not allowed.<br> Different Alias keys
may reference the same item key.<br> Aliases can be used in HostMetadataItem but not in HostnameItem parameter.
Alias=zabbix.userid:vfs.file.regexp[/etc/passwd,"^zabbix:.:([0-9]+)",,,,\1]
Now the zabbix.userid shorthand key may be used to retrieve data.
Alias=cpu.util:system.cpu.util
Alias=cpu.util[*]:system.cpu.util[*]
This allows using the cpu.util key to get the CPU utilization percentage with default parameters, as well as using cpu.util[all, idle, avg15] to get specific data about CPU utilization.
Example 3: Running multiple low-level discovery rules processing the same discovery items.
Alias=vfs.fs.discovery[*]:vfs.fs.discovery
Now it is possible to set up several discovery rules using vfs.fs.discovery with different parameters for each rule, e.g.,
vfs.fs.discovery[foo], vfs.fs.discovery[bar], etc.
AllowKey
Allow the execution of those item keys that match a pattern. The key pattern is a wildcard expression that supports the ”*”
character to match any number of any characters.<br>Multiple key matching rules may be defined in combination with DenyKey.
The parameters are processed one by one according to their appearance order. See also: Restricting agent checks.
BufferSend
The time interval in seconds which determines how often values are sent from the buffer to Zabbix server. Note that if the buffer
is full, the data will be sent sooner.
BufferSize
The maximum number of values in the memory buffer. The agent will send all collected data to the Zabbix server or proxy if the
buffer is full. This parameter should only be used if persistent buffer is disabled (EnablePersistentBuffer=0).
ControlSocket
The control socket, used to send runtime commands with the ’-R’ option.
Default: /tmp/agent.sock
DebugLevel
Specify the debug level:<br>0 - basic information about starting and stopping of Zabbix processes<br>1 - critical informa-
tion;<br>2 - error information;<br>3 - warnings;<br>4 - for debugging (produces lots of information);<br>5 - extended debugging
(produces even more information).
DenyKey
Deny the execution of those item keys that match a pattern. The key pattern is a wildcard expression that supports the ”*”
character to match any number of any characters.<br>Multiple key matching rules may be defined in combination with AllowKey.
The parameters are processed one by one according to their appearance order. See also: Restricting agent checks.
EnablePersistentBuffer
Enable the usage of local persistent storage for active items. If persistent storage is disabled, the memory buffer will be used.
ForceActiveChecksOnStart
Perform active checks immediately after the restart for the first received configuration. Also available as a per-plugin configuration
parameter, for example: Plugins.Uptime.System.ForceActiveChecksOnStart=1
Default: 0<br> Values: 0 - disabled, 1 - enabled
HeartbeatFrequency
The frequency of heartbeat messages in seconds. Used for monitoring the availability of active checks.<br>0 - heartbeat messages
disabled.
HostInterface
An optional parameter that defines the host interface. The host interface is used at host autoregistration process. If not defined,
the value will be acquired from HostInterfaceItem.<br>The agent will issue an error and not start if the value is over the limit of
255 characters.
HostInterfaceItem
An optional parameter that defines an item used for getting the host interface.<br>Host interface is used at host autoregistra-
tion process. This option is only used when HostInterface is not defined.<br>The system.run[] item is supported regardless of
AllowKey/DenyKey values.<br>During an autoregistration request the agent will log a warning message if the value returned by
the specified item is over the limit of 255 characters.
HostMetadata
An optional parameter that defines host metadata. Host metadata is used only at host autoregistration process. If not defined,
the value will be acquired from HostMetadataItem.<br>The agent will issue an error and not start if the specified value is over the
limit of 2034 bytes or a non-UTF-8 string.
HostMetadataItem
An optional parameter that defines an item used for getting host metadata. This option is only used when HostMetadata is
not defined. User parameters and aliases are supported. The system.run[] item is supported regardless of AllowKey/DenyKey
values.<br>The HostMetadataItem value is retrieved on each autoregistration attempt and is used only at host autoregistration
process.<br>During an autoregistration request the agent will log a warning message if the value returned by the specified item
is over the limit of 65535 UTF-8 code points. The value returned by the item must be a UTF-8 string otherwise it will be ignored.
Hostname
A list of comma-delimited, unique, case-sensitive hostnames. Required for active checks and must match hostnames as configured
on the server. The value is acquired from HostnameItem if undefined.<br>Allowed characters: alphanumeric, ’.’, ’ ’, ’_’ and ’-’.
Maximum length: 128 characters per hostname, 2048 characters for the entire line.
HostnameItem
An optional parameter that defines an item used for getting the host name. This option is only used when Hostname is not defined.
User parameters or aliases are not supported, but the system.run[] item is supported regardless of AllowKey/DenyKey values.
Default: system.hostname
Include
You may include individual files or all files in a directory in the configuration file. During the installation Zabbix will create the
include directory in /usr/local/etc, unless modified during the compile time. The path can be relative to the zabbix_agent2.conf
file location.<br>To only include relevant files in the specified directory, the asterisk wildcard character is supported for pattern
matching.<br>See special notes about limitations.
Example:
Include=/absolute/path/to/config/files/*.conf
ListenIP
A list of comma-delimited IP addresses that the agent should listen on. The first IP address is sent to the Zabbix server, if connecting
to it, to retrieve the list of active checks.
Default: 0.0.0.0
ListenPort
The agent will listen on this port for connections from the server.
LogFile
LogFileSize
The maximum size of a log file in MB.<br>0 - disable automatic log rotation.<br>Note: If the log file size limit is reached and file
rotation fails, for whatever reason, the existing log file is truncated and started anew.
LogType
The type of the log output:<br>file - write log to the file specified by LogFile parameter;<br>system - write log to sys-
log;<br>console - write log to standard output
Default: file
PersistentBufferFile
The file where Zabbix agent 2 should keep the SQLite database. Must be a full filename. This parameter is only used if persistent
buffer is enabled (EnablePersistentBuffer=1).
PersistentBufferPeriod
The time period for which data should be stored when there is no connection to the server or proxy. Older data will be lost. Log
data will be preserved. This parameter is only used if persistent buffer is enabled (EnablePersistentBuffer=1).
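Taken together, enabling the persistent buffer might look roughly like the following sketch (the database file path and retention period are placeholder values):
EnablePersistentBuffer=1
PersistentBufferFile=/var/lib/zabbix/zabbix_agent2.db
PersistentBufferPeriod=1h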
PidFile
Default: /tmp/zabbix_agent2.pid
Plugins.<PluginName>.System.Capacity
The limit of checks per <PluginName> plugin that can be executed at the same time.
Default: 1000 Range: 1-1000
Plugins.Log.MaxLinesPerSecond
The maximum number of new lines the agent will send per second to Zabbix server or proxy when processing ’log’ and ’logrt’ active
checks. The provided value will be overridden by the ’maxlines’ parameter, provided in the ’log’ and ’logrt’ item key.<br>Note:
Zabbix will process 10 times more new lines than set in MaxLinesPerSecond to seek the required string in log items.
Plugins.SystemRun.LogRemoteCommands
Enable the logging of the executed shell commands as warnings. The commands will be logged only if executed remotely. Log
entries will not be created if system.run[] is launched locally by the HostMetadataItem, HostInterfaceItem or HostnameItem pa-
rameters.
PluginSocket
Default: /tmp/agent.plugin.sock
PluginTimeout
RefreshActiveChecks
How often the list of active checks is refreshed, in seconds. Note that after failing to refresh active checks the next refresh will be
attempted in 60 seconds.
Server
A list of comma-delimited IP addresses, optionally in CIDR notation, or DNS names of Zabbix servers or Zabbix proxies. Incoming con-
nections will be accepted only from the hosts listed here. If IPv6 support is enabled then ’127.0.0.1’, ’::127.0.0.1’, ’::ffff:127.0.0.1’
are treated equally and ’::/0’ will allow any IPv4 or IPv6 address. ’0.0.0.0/0’ can be used to allow any IPv4 address. Spaces are
allowed.
Example:
Server=127.0.0.1,192.168.1.0/24,::1,2001:db8::/32,zabbix.example.com
Mandatory: yes
ServerActive
The Zabbix server/proxy address or cluster configuration to get active checks from. The server/proxy address is an IP address
or DNS name and optional port separated by colon.<br>The cluster configuration is one or more server addresses separated by
semicolon. Multiple Zabbix servers/clusters and Zabbix proxies can be specified, separated by comma. More than one Zabbix proxy
should not be specified from each Zabbix server/cluster. If a Zabbix proxy is specified then Zabbix server/cluster for that proxy
should not be specified.<br>Multiple comma-delimited addresses can be provided to use several independent Zabbix servers in
parallel. Spaces are allowed.<br>If the port is not specified, default port is used.<br>IPv6 addresses must be enclosed in square
brackets if port for that host is specified. If port is not specified, square brackets for IPv6 addresses are optional.<br>If this
parameter is not specified, active checks are disabled.
ServerActive=127.0.0.1:10051
Example for multiple servers:
ServerActive=127.0.0.1:20051,zabbix.domain,[::1]:30051,::1,[12fc::1]
Example for high availability:
ServerActive=zabbix.cluster.node1;zabbix.cluster.node2:20051;zabbix.cluster.node3
Example for high availability with two clusters and one server:
ServerActive=zabbix.cluster.node1;zabbix.cluster.node2:20051,zabbix.cluster2.node1;zabbix.cluster2.node2,z
SourceIP
The source IP address for:<br>- outgoing connections to Zabbix server or Zabbix proxy;<br>- making connections while executing
some items (web.page.get, net.tcp.port, etc.).
StatusPort
If set, the agent will listen on this port for HTTP status requests (http://localhost:<port>/status).
Range: 1024-32767
Timeout
Specifies timeout for communications (in seconds).<br> This parameter is used for defining the duration of various communication
operations:<br> - awaiting a response from Zabbix server;<br> - sending requests to Zabbix server, including active checks
configuration requests and item data;<br> - retrieving log data through logfile or Windows event log monitoring;<br> - sending
heartbeat messages;<br> - also used as a fallback in scenarios where server/proxy older than version 7.0 is sending checks without
timeouts.
TLSAccept
The incoming connections to accept. Used for passive checks. Multiple values can be specified, separated by comma:<br>unencrypted
- accept connections without encryption (default)<br>psk - accept connections with TLS and a pre-shared key (PSK)<br>cert -
accept connections with TLS and a certificate
Mandatory: yes, if TLS certificate or PSK parameters are defined (even for unencrypted connection); otherwise no
TLSCAFile
The full pathname of the file containing the top-level CA(s) certificates for peer certificate verification, used for encrypted commu-
nications between Zabbix components.
TLSCertFile
The full pathname of the file containing the agent certificate or certificate chain, used for encrypted communications with Zabbix
components.
TLSConnect
How the agent should connect to Zabbix server or proxy. Used for active checks. Only one value can be specified:<br>unencrypted
- connect without encryption (default)<br>psk - connect using TLS and a pre-shared key (PSK)<br>cert - connect using TLS and a
certificate
Mandatory: yes, if TLS certificate or PSK parameters are defined (even for unencrypted connection); otherwise no
TLSCRLFile
The full pathname of the file containing revoked certificates. This parameter is used for encrypted communications between Zabbix
components.
TLSKeyFile
The full pathname of the file containing the agent private key, used for encrypted communications between Zabbix components.
TLSPSKFile
The full pathname of the file containing the agent pre-shared key, used for encrypted communications with Zabbix server.
TLSPSKIdentity
The pre-shared key identity string, used for encrypted communications with Zabbix server.
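As an illustration, a PSK-based setup for both incoming and outgoing connections could combine the TLS parameters above roughly as follows (the identity string and key file path are placeholders):
TLSConnect=psk
TLSAccept=psk
TLSPSKIdentity=PSK 001
TLSPSKFile=/etc/zabbix/zabbix_agent2.psk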
TLSServerCertIssuer
TLSServerCertSubject
UnsafeUserParameters
Allow all characters to be passed in arguments to user-defined parameters. The following characters are not allowed: \ ' " ` * ? [ ]
{ } ~ $ ! & ; ( ) < > | # @<br>Additionally, newline characters are not allowed.
UserParameter
Example:
UserParameter=system.test,who|wc -l
UserParameter=check_cpu,./custom_script.sh
UserParameterDir
The default search path for UserParameter commands. If used, the agent will change its working directory to the one specified here
before executing a command. Thereby, UserParameter commands can have a relative ./ prefix instead of a full path.<br>Only
one entry is allowed.
Example:
UserParameterDir=/opt/myscripts
Overview
The parameters supported by the Windows Zabbix agent configuration file (zabbix_agentd.conf) are listed in this section.
The parameters are listed without additional information. Click on the parameter to see the full details.
Parameter Description
All parameters are non-mandatory unless explicitly stated that the parameter is mandatory.
Note that:
• The default values reflect daemon defaults, not the values in the shipped configuration files;
• Zabbix supports configuration files only in UTF-8 encoding without BOM;
• Comments starting with ”#” are only supported in the beginning of the line.
Parameter details
Alias
Sets an alias for an item key. It can be used to substitute a long and complex item key with a shorter and simpler one.<br>
Multiple Alias parameters may be present. Multiple parameters with the same Alias key are not allowed.<br> Different Alias
keys may reference the same item key.<br> Aliases can be used in HostMetadataItem but not in HostnameItem or PerfCounter
parameter.
Example 1: Retrieving the paging file usage in percentage from the server.
Example 2: Getting the CPU load with default and custom parameters.
Alias=cpu.load:system.cpu.load
Alias=cpu.load[*]:system.cpu.load[*]
This allows using the cpu.load key to get the CPU load with default parameters, as well as cpu.load[percpu,avg15] to get specific
data about the CPU load.
Example 3: Running multiple low-level discovery rules processing the same discovery items.
Alias=vfs.fs.discovery[*]:vfs.fs.discovery
Now it is possible to set up several discovery rules using vfs.fs.discovery with different parameters for each rule, e.g.,
vfs.fs.discovery[foo], vfs.fs.discovery[bar], etc.
AllowKey
Allow the execution of those item keys that match a pattern. The key pattern is a wildcard expression that supports the ”*”
character to match any number of any characters.<br>Multiple key matching rules may be defined in combination with DenyKey.
The parameters are processed one by one according to their appearance order. See also: Restricting agent checks.
BufferSend
BufferSize
The maximum number of values in the memory buffer. The agent will send all collected data to the Zabbix server or proxy if the
buffer is full.
DebugLevel
Specify the debug level:<br>0 - basic information about starting and stopping of Zabbix processes<br>1 - critical informa-
tion;<br>2 - error information;<br>3 - warnings;<br>4 - for debugging (produces lots of information);<br>5 - extended debugging
(produces even more information).
DenyKey
Deny the execution of those item keys that match a pattern. The key pattern is a wildcard expression that supports the ”*”
character to match any number of any characters.<br>Multiple key matching rules may be defined in combination with AllowKey.
The parameters are processed one by one according to their appearance order. See also: Restricting agent checks.
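For instance, because the rules are processed in order of appearance, an agent could be limited to remote commands only with a sketch like the following (adjust the allowed keys to your environment):
AllowKey=system.run[*]
DenyKey=*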
EnableRemoteCommands
Whether remote commands from Zabbix server are allowed. This parameter is deprecated, use AllowKey=system.run[*]
or DenyKey=system.run[*] instead.<br>It is an internal alias for AllowKey/DenyKey parameters depending on value:<br>0 -
DenyKey=system.run[*]<br>1 - AllowKey=system.run[*]
HeartbeatFrequency
The frequency of heartbeat messages in seconds. Used for monitoring the availability of active checks.<br>0 - heartbeat messages
disabled.
HostInterface
An optional parameter that defines the host interface. The host interface is used at host autoregistration process. If not defined,
the value will be acquired from HostInterfaceItem.<br>The agent will issue an error and not start if the value is over the limit of
255 characters.
HostInterfaceItem
An optional parameter that defines an item used for getting the host interface.<br>Host interface is used at host autoregistration
process.<br>During an autoregistration request the agent will log a warning message if the value returned by the specified item is
over the limit of 255 characters.<br>The system.run[] item is supported regardless of AllowKey/DenyKey values.<br>This option
is only used when HostInterface is not defined.
HostMetadata
An optional parameter that defines host metadata. Host metadata is used only at host autoregistration process (active agent).
If not defined, the value will be acquired from HostMetadataItem.<br>The agent will issue an error and not start if the specified
value is over the limit of 2034 bytes or a non-UTF-8 string.
HostMetadataItem
An optional parameter that defines a Zabbix agent item used for getting host metadata. This option is only used when HostMetadata
is not defined. User parameters, performance counters and aliases are supported. The system.run[] item is supported regardless
of AllowKey/DenyKey values.<br>The HostMetadataItem value is retrieved on each autoregistration attempt and is used only at
host autoregistration process (active agent).<br>During an autoregistration request the agent will log a warning message if the
value returned by the specified item is over the limit of 65535 UTF-8 code points. The value returned by the item must be a UTF-8
string otherwise it will be ignored.
Hostname
A list of comma-delimited, unique, case-sensitive hostnames. Required for active checks and must match hostnames as configured
on the server. The value is acquired from HostnameItem if undefined.<br>Allowed characters: alphanumeric, ’.’, ’ ’, ’_’ and ’-’.
Maximum length: 128 characters per hostname, 2048 characters for the entire line.
HostnameItem
An optional parameter that defines a Zabbix agent item used for getting the host name. This option is only used when Hostname
is not defined. User parameters, performance counters or aliases are not supported, but the system.run[] item is supported
regardless of AllowKey/DenyKey values.<br>See also a more detailed description.
Default: system.hostname
Include
You may include individual files or all files in a directory in the configuration file (located in C:\Program Files\Zabbix Agent
by default if Zabbix agent is installed using Windows MSI installer packages; located in the folder specified during installation if
Zabbix agent is installed as a zip archive). All included files must have correct syntax, otherwise agent will not start.<br>To only
include relevant files in the specified directory, the asterisk wildcard character is supported for pattern matching.<br>See special
notes about limitations.
ListenBacklog
The maximum number of pending connections in the TCP queue.<br>The default value is a hard-coded constant, which depends
on the system.<br>The maximum supported value depends on the system, too high values may be silently truncated to the
’implementation-specified maximum’.
ListenIP
Default: 0.0.0.0
ListenPort
The agent will listen on this port for connections from the server.
LogFile
LogFileSize
The maximum size of a log file in MB.<br>0 - disable automatic log rotation.<br>Note: If the log file size limit is reached and file
rotation fails, for whatever reason, the existing log file is truncated and started anew.
LogRemoteCommands
Enable the logging of the executed shell commands as warnings. Commands will be logged only if executed remotely. Log entries
will not be created if system.run[] is launched locally by HostMetadataItem, HostInterfaceItem or HostnameItem parameters.
LogType
The type of the log output:<br>file - write log to the file specified by LogFile parameter;<br>system - write log to Windows Event
Log;<br>console - write log to standard output.
Default: file
MaxLinesPerSecond
The maximum number of new lines the agent will send per second to Zabbix server or proxy when processing ’log’, ’logrt’, and
’eventlog’ active checks. The provided value will be overridden by the ’maxlines’ parameter, provided in the ’log’, ’logrt’, or
’eventlog’ item key.<br>Note: Zabbix will process 10 times more new lines than set in MaxLinesPerSecond to seek the required
string in log items.
PerfCounter
Defines a new parameter <parameter_name> which is the average value for system performance counter <perf_counter_path>
for the specified time period <period> (in seconds).<br>Syntax: <parameter_name>,”<perf_counter_path>”,<period>
For example, if you wish to receive the average number of processor interrupts per second for the last minute, you can define a
new parameter ”interrupts” as the following:<br>
PerfCounter = interrupts,"\Processor(0)\Interrupts/sec",60
Please note the double quotes around the performance counter path. The parameter name (interrupts) is to be used as the item
key when creating an item. Samples for calculating the average value will be taken every second.<br>You may run ”typeperf -qx”
to get the list of all performance counters available in Windows.
PerfCounterEn
Defines a new parameter <parameter_name> which is the average value for system performance counter <perf_counter_path>
for the specified time period <period> (in seconds). Compared to PerfCounter, the perfcounter paths must be in English. Supported
only on Windows Server 2008/Vista and later.<br>Syntax: <parameter_name>,”<perf_counter_path>”,<period>
For example, if you wish to receive the average number of processor interrupts per second for the last minute, you can define a
new parameter ”interrupts” as the following:<br>
PerfCounterEn = interrupts,"\Processor(0)\Interrupts/sec",60
Please note the double quotes around the performance counter path. The parameter name (interrupts) is to be used as the
item key when creating an item. Samples for calculating the average value will be taken every second.<br>You can find
the list of English strings by viewing the following registry key: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows
NT\CurrentVersion\Perflib\009.
RefreshActiveChecks
How often the list of active checks is refreshed, in seconds. Note that after failing to refresh active checks the next refresh will be
attempted in 60 seconds.
Server
A list of comma-delimited IP addresses, optionally in CIDR notation, or DNS names of Zabbix servers or Zabbix proxies. Incoming con-
nections will be accepted only from the hosts listed here. If IPv6 support is enabled then ’127.0.0.1’, ’::127.0.0.1’, ’::ffff:127.0.0.1’
are treated equally and ’::/0’ will allow any IPv4 or IPv6 address. ’0.0.0.0/0’ can be used to allow any IPv4 address. Note that
”IPv4-compatible IPv6 addresses” (0000::/96 prefix) are supported but deprecated by RFC4291. Spaces are allowed.
Example:
Server=127.0.0.1,192.168.1.0/24,::1,2001:db8::/32,zabbix.example.com
Mandatory: yes, if StartAgents is not explicitly set to 0
ServerActive
The Zabbix server/proxy address or cluster configuration to get active checks from. The server/proxy address is an IP address
or DNS name and optional port separated by colon.<br>The cluster configuration is one or more server addresses separated by
semicolon. Multiple Zabbix servers/clusters and Zabbix proxies can be specified, separated by comma. More than one Zabbix proxy
should not be specified from each Zabbix server/cluster. If Zabbix proxy is specified then Zabbix server/cluster for that proxy should
not be specified.<br>Multiple comma-delimited addresses can be provided to use several independent Zabbix servers in parallel.
Spaces are allowed.<br>If the port is not specified, default port is used.<br>IPv6 addresses must be enclosed in square brackets
if port for that host is specified. If port is not specified, square brackets for IPv6 addresses are optional.<br>If this parameter is
not specified, active checks are disabled.
ServerActive=127.0.0.1:10051
Example for multiple servers:
ServerActive=127.0.0.1:20051,zabbix.domain,[::1]:30051,::1,[12fc::1]
Example for high availability:
ServerActive=zabbix.cluster.node1;zabbix.cluster.node2:20051;zabbix.cluster.node3
Example for high availability with two clusters and one server:
ServerActive=zabbix.cluster.node1;zabbix.cluster.node2:20051,zabbix.cluster2.node1;zabbix.cluster2.node2,z
Range: (*)
SourceIP
The source IP address for:<br>- outgoing connections to Zabbix server or Zabbix proxy;<br>- making connections while executing
some items (web.page.get, net.tcp.port, etc.).
StartAgents
The number of pre-forked instances of zabbix_agentd that process passive checks. If set to 0, passive checks are disabled and the
agent will not listen on any TCP port.
Timeout
Specifies timeout for communications (in seconds).<br> This parameter is used for defining the duration of various communication
operations:<br> - awaiting a response from Zabbix server;<br> - sending requests to Zabbix server, including active checks
configuration requests and item data;<br> - retrieving log data through logfile or Windows event log monitoring;<br> - sending
heartbeat messages;<br> - also used as a fallback in scenarios where server/proxy older than version 7.0 is sending checks without
timeouts.
TLSAccept
The incoming connections to accept. Used for passive checks. Multiple values can be specified, separated by comma:<br>unencrypted
- accept connections without encryption (default)<br>psk - accept connections with TLS and a pre-shared key (PSK)<br>cert -
accept connections with TLS and a certificate
Mandatory: yes, if TLS certificate or PSK parameters are defined (even for unencrypted connection); otherwise no
TLSCAFile
The full pathname of the file containing the top-level CA(s) certificates for peer certificate verification, used for encrypted commu-
nications between Zabbix components.
TLSCertFile
The full pathname of the file containing the agent certificate or certificate chain, used for encrypted communications with Zabbix
components.
TLSConnect
How the agent should connect to Zabbix server or proxy. Used for active checks. Only one value can be specified:<br>unencrypted
- connect without encryption (default)<br>psk - connect using TLS and a pre-shared key (PSK)<br>cert - connect using TLS and a
certificate
Mandatory: yes, if TLS certificate or PSK parameters are defined (even for unencrypted connection); otherwise no
TLSCRLFile
The full pathname of the file containing revoked certificates. This parameter is used for encrypted communications between Zabbix
components.
TLSKeyFile
The full pathname of the file containing the agent private key, used for encrypted communications between Zabbix components.
TLSPSKFile
The full pathname of the file containing the agent pre-shared key, used for encrypted communications with Zabbix server.
TLSPSKIdentity
The pre-shared key identity string, used for encrypted communications with Zabbix server.
TLSServerCertIssuer
TLSServerCertSubject
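For illustration, a certificate-based setup on Windows might combine the TLS parameters above along these lines (all file paths are placeholders):
TLSConnect=cert
TLSAccept=cert
TLSCAFile=C:\zabbix\zabbix_ca.crt
TLSCertFile=C:\zabbix\zabbix_agentd.crt
TLSKeyFile=C:\zabbix\zabbix_agentd.key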
UnsafeUserParameters
Allow all characters to be passed in arguments to user-defined parameters. The following characters are not allowed: \ ' " ` * ? [ ]
{ } ~ $ ! & ; ( ) < > | # @<br>Additionally, newline characters are not allowed.
UserParameter
A user-defined parameter to monitor. There can be several user-defined parameters.<br>Format: UserParameter=<key>,<shell
command><br>Note that the shell command must not return empty string or EOL only. Shell commands may have relative paths,
if the UserParameterDir parameter is specified.
Example:
UserParameter=system.test,who|wc -l
UserParameter=check_cpu,./custom_script.sh
UserParameterDir
The default search path for UserParameter commands. If used, the agent will change its working directory to the one specified
here before executing a command. Thereby, UserParameter commands can have a relative ./ prefix instead of a full path. Only
one entry is allowed.
Example:
UserParameterDir=/opt/myscripts
Note:
(*) The number of active servers listed in ServerActive plus the number of pre-forked instances for passive checks specified
in StartAgents must be less than 64.
See also
1. Differences in the Zabbix agent configuration for active and passive checks starting from version 2.0.0.
Overview
Zabbix agent 2 is a new generation of Zabbix agent and may be used in place of Zabbix agent.
The parameters supported by the Windows Zabbix agent 2 configuration file (zabbix_agent2.conf) are listed in this section.
The parameters are listed without additional information. Click on the parameter to see the full details.
Parameter Description
Plugins.SystemRun.LogRemoteCommands Enable the logging of the executed shell commands as warnings.
PluginSocket The path to the UNIX socket for loadable plugin communications.
PluginTimeout The timeout for connections with loadable plugins, in seconds.
RefreshActiveChecks How often the list of active checks is refreshed.
Server A list of comma-delimited IP addresses, optionally in CIDR notation, or DNS names of Zabbix servers and Zabbix proxies.
ServerActive The Zabbix server/proxy address or cluster configuration to get active checks from.
SourceIP The source IP address.
StatusPort If set, the agent will listen on this port for HTTP status requests (http://localhost:<port>/status).
Timeout Specifies timeout for communications (in seconds).
TLSAccept What incoming connections to accept.
TLSCAFile The full pathname of a file containing the top-level CA(s) certificates for peer certificate verification, used for encrypted communications between Zabbix components.
TLSCertFile The full pathname of a file containing the agent certificate or certificate chain, used for encrypted communications between Zabbix components.
TLSConnect How the agent should connect to Zabbix server or proxy.
TLSCRLFile The full pathname of a file containing revoked certificates. This parameter is used for encrypted communications between Zabbix components.
TLSKeyFile The full pathname of a file containing the agent private key, used for encrypted communications between Zabbix components.
TLSPSKFile The full pathname of a file containing the agent pre-shared key, used for encrypted communications with Zabbix server.
TLSPSKIdentity The pre-shared key identity string, used for encrypted communications with Zabbix server.
TLSServerCertIssuer The allowed server (proxy) certificate issuer.
TLSServerCertSubject The allowed server (proxy) certificate subject.
UnsafeUserParameters Allow all characters to be passed in arguments to user-defined parameters.
UserParameter A user-defined parameter to monitor.
UserParameterDir The default search path for UserParameter commands.
All parameters are non-mandatory unless explicitly stated that the parameter is mandatory.
Note that:
• The default values reflect process defaults, not the values in the shipped configuration files;
• Zabbix supports configuration files only in UTF-8 encoding without BOM;
• Comments starting with ”#” are only supported in the beginning of the line.
Parameter details
Alias
Sets an alias for an item key. It can be used to substitute a long and complex item key with a shorter and simpler one.<br>
Multiple Alias parameters may be present. Multiple parameters with the same Alias key are not allowed.<br> Different Alias keys
may reference the same item key.<br> Aliases can be used in HostMetadataItem but not in the HostnameItem parameter.
Example 1: Retrieving the paging file usage in percentage from the server.
Example 2: Getting the CPU load with default and custom parameters.
Alias=cpu.load:system.cpu.load
Alias=cpu.load[*]:system.cpu.load[*]
This allows using the cpu.load key to get the CPU load with default parameters, as well as cpu.load[percpu,avg15] to get specific
data about the CPU load.
Example 3: Running multiple low-level discovery rules processing the same discovery items.
Alias=vfs.fs.discovery[*]:vfs.fs.discovery
Now it is possible to set up several discovery rules using vfs.fs.discovery with different parameters for each rule, e.g.,
vfs.fs.discovery[foo], vfs.fs.discovery[bar], etc.
AllowKey
Allow the execution of those item keys that match a pattern. The key pattern is a wildcard expression that supports the ”*”
character to match any number of any characters.<br>Multiple key matching rules may be defined in combination with DenyKey.
The parameters are processed one by one according to their appearance order. See also: Restricting agent checks.
BufferSend
The time interval in seconds which determines how often values are sent from the buffer to Zabbix server.<br>Note that if the
buffer is full, the data will be sent sooner.
BufferSize
The maximum number of values in the memory buffer. The agent will send all collected data to the Zabbix server or proxy if the
buffer is full.<br>This parameter should only be used if persistent buffer is disabled (EnablePersistentBuffer=0).
ControlSocket
The control socket, used to send runtime commands with the ’-R’ option.
Default: \\.\pipe\agent.sock
DebugLevel
Specify the debug level:<br>0 - basic information about starting and stopping of Zabbix processes<br>1 - critical informa-
tion;<br>2 - error information;<br>3 - warnings;<br>4 - for debugging (produces lots of information);<br>5 - extended debugging
(produces even more information).
DenyKey
Deny the execution of those item keys that match a pattern. The key pattern is a wildcard expression that supports the ”*”
character to match any number of any characters.<br>Multiple key matching rules may be defined in combination with AllowKey.
The parameters are processed one by one according to their appearance order. See also: Restricting agent checks.
EnablePersistentBuffer
Enable the usage of local persistent storage for active items. If persistent storage is disabled, the memory buffer will be used.
ForceActiveChecksOnStart
Perform active checks immediately after the restart for the first received configuration. Also available as a per-plugin configuration
parameter, for example: Plugins.Uptime.System.ForceActiveChecksOnStart=1
Default: 0<br> Values: 0 - disabled, 1 - enabled
HeartbeatFrequency
The frequency of heartbeat messages in seconds. Used for monitoring the availability of active checks.<br>0 - heartbeat messages
disabled.
HostInterface
An optional parameter that defines the host interface. The host interface is used at host autoregistration process. If not defined,
the value will be acquired from HostInterfaceItem.<br>The agent will issue an error and not start if the value is over the limit of
255 characters.
HostInterfaceItem
An optional parameter that defines an item used for getting the host interface.<br>Host interface is used at host autoregistra-
tion process. This option is only used when HostInterface is not defined.<br>The system.run[] item is supported regardless of
AllowKey/DenyKey values.<br>During an autoregistration request the agent will log a warning message if the value returned by
the specified item is over the limit of 255 characters.
HostMetadata
An optional parameter that defines host metadata. Host metadata is used only at host autoregistration process (active agent).
If not defined, the value will be acquired from HostMetadataItem.<br>The agent will issue an error and not start if the specified
value is over the limit of 2034 bytes or a non-UTF-8 string.
Range: 0-2034 bytes
HostMetadataItem
An optional parameter that defines an item used for getting host metadata. This option is only used when HostMetadata is
not defined. User parameters and aliases are supported. The system.run[] item is supported regardless of AllowKey/DenyKey
values.<br>The HostMetadataItem value is retrieved on each autoregistration attempt and is used only at host autoregistration
process.<br>During an autoregistration request the agent will log a warning message if the value returned by the specified item
is over the limit of 65535 UTF-8 code points. The value returned by the item must be a UTF-8 string otherwise it will be ignored.
Hostname
A list of comma-delimited, unique, case-sensitive hostnames. Required for active checks and must match hostnames as configured
on the server. The value is acquired from HostnameItem if undefined.<br>Allowed characters: alphanumeric, ’.’, ’ ’, ’_’ and ’-’.
Maximum length: 128 characters per hostname, 2048 characters for the entire line.
HostnameItem
An optional parameter that defines an item used for getting the host name. This option is only used when Hostname is not defined.
User parameters or aliases are not supported, but the system.run[] item is supported regardless of AllowKey/DenyKey values.
Default: system.hostname
Include
You may include individual files or all files in a directory in the configuration file (located in C:\Program Files\Zabbix Agent
2 by default if Zabbix agent 2 is installed using Windows MSI installer packages; located in the folder specified during installation
if Zabbix agent 2 is installed as a zip archive). All included files must have correct syntax, otherwise the agent will not start. The path
can be relative to the zabbix_agent2.conf file location (e.g., Include=.\zabbix_agent2.d\plugins.d\*.conf).<br>To only
include relevant files in the specified directory, the asterisk wildcard character is supported for pattern matching.<br>See special
notes about limitations.
ListenIP
A list of comma-delimited IP addresses that the agent should listen on. The first IP address is sent to the Zabbix server, if connecting
to it, to retrieve the list of active checks.
Default: 0.0.0.0
ListenPort
The agent will listen on this port for connections from the server.
LogFile
LogFileSize
The maximum size of a log file in MB.<br>0 - disable automatic log rotation.<br>Note: If the log file size limit is reached and file
rotation fails, for whatever reason, the existing log file is truncated and started anew.
LogType
The type of the log output:<br>file - write log to the file specified by LogFile parameter;<br>console - write log to standard output.
Default: file
PersistentBufferFile
The file where Zabbix agent 2 should keep the SQLite database. Must be a full filename. This parameter is only used if persistent
buffer is enabled (EnablePersistentBuffer=1).
PersistentBufferPeriod
The time period for which data should be stored when there is no connection to the server or proxy. Older data will be lost. Log
data will be preserved. This parameter is only used if persistent buffer is enabled (EnablePersistentBuffer=1).
Plugins.<PluginName>.System.Capacity
The limit of checks per <PluginName> plugin that can be executed at the same time.
Default: 1000 Range: 1-1000
Plugins.Log.MaxLinesPerSecond
The maximum number of new lines the agent will send per second to Zabbix server or proxy when processing ’log’, ’logrt’ and
’eventlog’ active checks. The provided value will be overridden by the ’maxlines’ parameter, provided in the ’log’, ’logrt’ or
’eventlog’ item key.<br>Note: Zabbix will process 10 times more new lines than set in MaxLinesPerSecond to seek the required
string in log items.
Plugins.SystemRun.LogRemoteCommands
Enable the logging of the executed shell commands as warnings. The commands will be logged only if executed remotely. Log
entries will not be created if system.run[] is launched locally by the HostMetadataItem, HostInterfaceItem or HostnameItem pa-
rameters.
PluginSocket
Default: \\.\pipe\agent.plugin.sock
PluginTimeout
RefreshActiveChecks
How often the list of active checks is refreshed, in seconds. Note that after failing to refresh active checks the next refresh will be
attempted in 60 seconds.
Server
A list of comma-delimited IP addresses, optionally in CIDR notation, or DNS names of Zabbix servers or Zabbix proxies. Incoming con-
nections will be accepted only from the hosts listed here. If IPv6 support is enabled then ’127.0.0.1’, ’::127.0.0.1’, ’::ffff:127.0.0.1’
are treated equally and ’::/0’ will allow any IPv4 or IPv6 address. ’0.0.0.0/0’ can be used to allow any IPv4 address. Spaces are
allowed.
Example:
Server=127.0.0.1,192.168.1.0/24,::1,2001:db8::/32,zabbix.example.com
Mandatory: yes
ServerActive
The Zabbix server/proxy address or cluster configuration to get active checks from. The server/proxy address is an IP address
or DNS name and optional port separated by colon.<br>The cluster configuration is one or more server addresses separated by
semicolon. Multiple Zabbix servers/clusters and Zabbix proxies can be specified, separated by comma. More than one Zabbix proxy
should not be specified from each Zabbix server/cluster. If a Zabbix proxy is specified then Zabbix server/cluster for that proxy
should not be specified.<br>Multiple comma-delimited addresses can be provided to use several independent Zabbix servers in
parallel. Spaces are allowed.<br>If the port is not specified, default port is used.<br>IPv6 addresses must be enclosed in square
brackets if port for that host is specified. If port is not specified, square brackets for IPv6 addresses are optional.<br>If this
parameter is not specified, active checks are disabled.
ServerActive=127.0.0.1:10051
Example for multiple servers:
ServerActive=127.0.0.1:20051,zabbix.domain,[::1]:30051,::1,[12fc::1]
Example for high availability:
ServerActive=zabbix.cluster.node1;zabbix.cluster.node2:20051;zabbix.cluster.node3
Example for high availability with two clusters and one server:
ServerActive=zabbix.cluster.node1;zabbix.cluster.node2:20051,zabbix.cluster2.node1;zabbix.cluster2.node2,z
SourceIP
The source IP address for:<br>- outgoing connections to Zabbix server or Zabbix proxy;<br>- making connections while executing
some items (web.page.get, net.tcp.port, etc.).
StatusPort
If set, the agent will listen on this port for HTTP status requests (http://localhost:<port>/status).
Range: 1024-32767
Timeout
Specifies timeout for communications (in seconds).<br> This parameter is used for defining the duration of various communication
operations:<br> - awaiting a response from Zabbix server;<br> - sending requests to Zabbix server, including active checks
configuration requests and item data;<br> - retrieving log data through logfile or Windows event log monitoring;<br> - sending
heartbeat messages;<br> - also used as a fallback in scenarios where server/proxy older than version 7.0 is sending checks without
timeouts.
TLSAccept
The incoming connections to accept. Used for passive checks. Multiple values can be specified, separated by comma:<br>unencrypted
- accept connections without encryption (default)<br>psk - accept connections with TLS and a pre-shared key (PSK)<br>cert -
accept connections with TLS and a certificate
Mandatory: yes, if TLS certificate or PSK parameters are defined (even for unencrypted connection); otherwise no
TLSCAFile
The full pathname of the file containing the top-level CA(s) certificates for peer certificate verification, used for encrypted commu-
nications between Zabbix components.
TLSCertFile
The full pathname of the file containing the agent certificate or certificate chain, used for encrypted communications with Zabbix
components.
TLSConnect
How the agent should connect to Zabbix server or proxy. Used for active checks. Only one value can be specified:<br>unencrypted
- connect without encryption (default)<br>psk - connect using TLS and a pre-shared key (PSK)<br>cert - connect using TLS and a
certificate
Mandatory: yes, if TLS certificate or PSK parameters are defined (even for unencrypted connection); otherwise no
TLSCRLFile
The full pathname of the file containing revoked certificates. This parameter is used for encrypted communications between Zabbix
components.
TLSKeyFile
The full pathname of the file containing the agent private key, used for encrypted communications between Zabbix components.
TLSPSKFile
The full pathname of the file containing the agent pre-shared key, used for encrypted communications with Zabbix server.
TLSPSKIdentity
The pre-shared key identity string, used for encrypted communications with Zabbix server.
TLSServerCertIssuer
TLSServerCertSubject
UnsafeUserParameters
Allow all characters to be passed in arguments to user-defined parameters. The following characters are not allowed: \ ' " ` * ? [ ]
{ } ~ $ ! & ; ( ) < > | # @<br>Additionally, newline characters are not allowed.
UserParameter
Example:
UserParameter=system.test,who|wc -l
UserParameter=check_cpu,./custom_script.sh
UserParameterDir
The default search path for UserParameter commands. If used, the agent will change its working directory to the one specified here
before executing a command. Thereby, UserParameter commands can have a relative ./ prefix instead of a full path.<br>Only
one entry is allowed.
Example:
UserParameterDir=/opt/myscripts
Overview
This section contains descriptions of configuration file parameters for Zabbix agent 2 plugins. Please use the sidebar to access
information about the specific plugin.
1 Ceph plugin
Overview
This section lists parameters supported in the Ceph Zabbix agent 2 plugin configuration file (ceph.conf).
Note that:
• The default values reflect process defaults, not the values in the shipped configuration files;
• Zabbix supports configuration files only in UTF-8 encoding without BOM;
• Comments starting with ”#” are only supported at the beginning of the line.
Parameters
Plugins.Ceph.Default.ApiKey
Default API key for connecting to Ceph; used if no value is specified in an item key or named session.
Mandatory: no
Plugins.Ceph.Default.User
Default username for connecting to Ceph; used if no value is specified in an item key or named session.
Mandatory: no
Plugins.Ceph.Default.Uri
Default URI for connecting to Ceph; used if no value is specified in an item key or named session.
Mandatory: no Default: https://localhost:8003
Plugins.Ceph.InsecureSkipVerify
Determines whether an http client should verify the server’s certificate chain and host name. If true, TLS accepts any certificate presented by the server and any host name in that certificate. In this mode, TLS is susceptible to man-in-the-middle attacks (should be used only for testing).
Mandatory: no Range: false / true Default: false
Plugins.Ceph.KeepAlive
The maximum time of waiting (in seconds) before unused plugin connections are closed.
Mandatory: no Range: 60-900 Default: 300
Plugins.Ceph.Sessions.<SessionName>.ApiKey
Named session API key.<br><SessionName> - define name of a session for using in item keys.
Mandatory: no
Plugins.Ceph.Sessions.<SessionName>.User
Named session username.<br><SessionName> - define name of a session for using in item keys.
Mandatory: no
Plugins.Ceph.Sessions.<SessionName>.Uri
Connection string of a named session.<br><SessionName> - define name of a session for using in item keys.
Mandatory: no
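For illustration, a named session (here called ceph1, an arbitrary name) could be defined as follows and then referenced from item keys (the URI, user, and API key values are placeholders):
Plugins.Ceph.Sessions.ceph1.Uri=https://localhost:8003
Plugins.Ceph.Sessions.ceph1.User=zabbix
Plugins.Ceph.Sessions.ceph1.ApiKey=<api key>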
See also:
• Description of general Zabbix agent 2 configuration parameters: Zabbix agent 2 (UNIX) / Zabbix agent 2 (Windows)
• Instructions for configuring plugins
2 Docker plugin
Overview
This section lists parameters supported in the Docker Zabbix agent 2 plugin configuration file (docker.conf).
Note that:
• The default values reflect process defaults, not the values in the shipped configuration files;
• Zabbix supports configuration files only in UTF-8 encoding without BOM;
• Comments starting with ”#” are only supported at the beginning of the line.
Parameters
Plugins.Docker.Endpoint
Docker daemon unix-socket location. Must contain a scheme (only unix:// is supported).
Mandatory: no Default: unix:///var/run/docker.sock
Plugins.Docker.Timeout
Request execution timeout (how long to wait for a request to complete before shutting it down).
Mandatory: no Range: 1-30 Default: global timeout
See also:
• Description of general Zabbix agent 2 configuration parameters: Zabbix agent 2 (UNIX) / Zabbix agent 2 (Windows)
• Instructions for configuring plugins
3 Ember+ plugin
Overview
This section lists parameters supported in the Ember+ Zabbix agent 2 plugin configuration file (ember.conf).
The Ember+ plugin is a loadable plugin and is available and fully described in the Ember+ plugin repository.
This plugin is currently only available to be built from the source (for both Unix and Windows).
Note that:
• The default values reflect process defaults, not the values in the shipped configuration files;
• Zabbix supports configuration files only in UTF-8 encoding without BOM;
• Comments starting with ”#” are only supported at the beginning of the line.
Parameters
Plugins.EmberPlus.Default.Uri
The default URI to connect. The only supported schema is tcp://. A schema can be omitted. Embedded credentials will be ignored.
Mandatory: no Default: tcp://localhost:9998
Plugins.EmberPlus.KeepAlive
The maximum time of waiting (in seconds) before unused plugin connections are closed.
Mandatory: no Range: 60-900 Default: 300
Plugins.EmberPlus.Sessions.<SessionName>.Uri
The URI to connect, for the named session. The only supported schema is tcp://. A schema can be omitted. Embedded credentials will be ignored.<br><SessionName> - define name of a session for using in item keys.
Mandatory: no Default: tcp://localhost:9998
Plugins.EmberPlus.System.Path
Path to the Ember+ plugin executable.<br>Example usage:<br>Plugins.EmberPlus.System.Path=/usr/sbin/zabbix-agent2-plu
Mandatory: no
Plugins.EmberPlus.Timeout
The amount of time to wait for a server to respond when first connecting and on follow-up operations in the session.
Mandatory: no Range: 1-30 Default: global timeout
See also:
• Description of general Zabbix agent 2 configuration parameters: Zabbix agent 2 (UNIX) / Zabbix agent 2 (Windows)
• Instructions for configuring plugins
4 Memcached plugin
Overview
This section lists parameters supported in the Memcached Zabbix agent 2 plugin configuration file (memcached.conf).
Note that:
• The default values reflect process defaults, not the values in the shipped configuration files;
• Zabbix supports configuration files only in UTF-8 encoding without BOM;
• Comments starting with ”#” are only supported at the beginning of the line.
Parameters
Plugins.Memcached.Default.Password
Default password for connecting to Memcached; used if no value is specified in an item key or named session.
Mandatory: no
Plugins.Memcached.Default.Uri
Default URI for connecting to Memcached; used if no value is specified in an item key or named session.
Mandatory: no Default: tcp://localhost:11211
Plugins.Memcached.KeepAlive
The maximum time of waiting (in seconds) before unused plugin connections are closed.
Mandatory: no Range: 60-900 Default: 300
Plugins.Memcached.Sessions.<SessionName>.Password
Named session password.<br><SessionName> - define name of a session for using in item keys.
Mandatory: no
Plugins.Memcached.Sessions.<SessionName>.Uri
Connection string of a named session.<br><SessionName> - define name of a session for using in item keys.
Mandatory: no
See also:
• Description of general Zabbix agent 2 configuration parameters: Zabbix agent 2 (UNIX) / Zabbix agent 2 (Windows)
• Instructions for configuring plugins
5 Modbus plugin
Overview
This section lists parameters supported in the Modbus Zabbix agent 2 plugin configuration file (modbus.conf).
Note that:
• The default values reflect process defaults, not the values in the shipped configuration files;
• Zabbix supports configuration files only in UTF-8 encoding without BOM;
• Comments starting with ”#” are only supported at the beginning of the line.
Parameters
Plugins.Modbus.Sessions.<SessionName>.Endpoint
Endpoint is a connection string consisting of a protocol scheme, a host address and a port or serial port name and attributes.<br><SessionName> - define name of a session for using in item keys.
Mandatory: no
Plugins.Modbus.Sessions.<SessionName>.SlaveID
Slave ID of a named session.<br><SessionName> - define name of a session for using in item keys.<br>Example: Plugins.Modbus.Sessions.MB1.SlaveID=20<br>Note that this named session parameter is checked only if the value provided in the item key slave ID parameter is empty.
Mandatory: no
Plugins.Modbus.Sessions.<SessionName>.Timeout
Timeout of a named session.<br><SessionName> - define name of a session for using in item keys.<br>Example: Plugins.Modbus.Sessions.MB1.Timeout=2
Mandatory: no
If you need to set the request execution timeout (how long to wait for a request to complete before shutting it down), use the item
configuration form.
See also:
• Description of general Zabbix agent 2 configuration parameters: Zabbix agent 2 (UNIX) / Zabbix agent 2 (Windows)
• Instructions for configuring plugins
6 MongoDB plugin
Overview
This section lists parameters supported in the MongoDB Zabbix agent 2 plugin configuration file (mongo.conf).
The MongoDB plugin is a loadable plugin and is available and fully described in the MongoDB plugin repository.
Note that:
• The default values reflect process defaults, not the values in the shipped configuration files;
• Zabbix supports configuration files only in UTF-8 encoding without BOM;
• Comments starting with ”#” are only supported at the beginning of the line.
Parameters
Plugins.MongoDB.Default.Password
Default password for connecting to MongoDB; used if no value is specified in an item key or named session.
Mandatory: no
Plugins.MongoDB.Default.Uri
Default URI for connecting to MongoDB; used if no value is specified in an item key or named session.
Mandatory: no
Plugins.MongoDB.Sessions.<SessionName>.TLSConnect
Encryption type for communications between Zabbix agent 2 and monitored databases.<br><SessionName> - define name of a session for using in item keys.<br>Supported values:<br>required - require TLS connection;<br>verify_ca - verify certificates;<br>verify_full - verify certificates and IP address.
Mandatory: no
See also:
• Description of general Zabbix agent 2 configuration parameters: Zabbix agent 2 (UNIX) / Zabbix agent 2 (Windows)
• Instructions for configuring plugins
7 MQTT plugin
Overview
This section lists parameters supported in the MQTT Zabbix agent 2 plugin configuration file (mqtt.conf).
Note that:
• The default values reflect process defaults, not the values in the shipped configuration files;
• Zabbix supports configuration files only in UTF-8 encoding without BOM;
• Comments starting with ”#” are only supported at the beginning of the line.
Parameters
Plugins.MQTT.Default.Password
Default password for connecting to MQTT; used if no value is specified in an item key or named session.
Mandatory: no
Plugins.MQTT.Default.TLSCAFile
Full pathname of a file containing the top-level CA(s) certificates for peer certificate verification for encrypted communications between Zabbix agent 2 and MQTT broker; used if no value is specified in a named session.
Mandatory: no
Plugins.MQTT.Default.TLSCertFile
Full pathname of a file containing the agent certificate or certificate chain for encrypted communications between Zabbix agent 2 and MQTT broker; used if no value is specified in a named session.
Mandatory: no
Plugins.MQTT.Default.TLSKeyFile
Full pathname of a file containing the MQTT private key for encrypted communications between Zabbix agent 2 and MQTT broker; used if no value is specified in a named session.
Mandatory: no
Plugins.MQTT.Default.Topic
Default topic for MQTT subscription; used if no value is specified in an item key or named session.
Mandatory: no
Plugins.MQTT.Sessions.<SessionName>.Url
Connection string of a named session.<br><SessionName> - define name of a session for using in item keys.
Mandatory: no
If you need to set the request execution timeout (how long to wait for a request to complete before shutting it down), use the item
configuration form.
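For illustration, a broker subscription could be configured with a default topic and a named session (the broker URL, session name, and topic are placeholders):
Plugins.MQTT.Default.Topic=sensors/#
Plugins.MQTT.Sessions.local.Url=tcp://localhost:1883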
See also:
• Description of general Zabbix agent 2 configuration parameters: Zabbix agent 2 (UNIX) / Zabbix agent 2 (Windows)
• Instructions for configuring plugins
8 MSSQL plugin
Overview
This section lists parameters supported in the MSSQL Zabbix agent 2 plugin configuration file (mssql.conf).
The MSSQL plugin is a loadable plugin and is available and fully described in the MSSQL plugin repository.
Note that:
• The default values reflect process defaults, not the values in the shipped configuration files;
• Zabbix supports configuration files only in UTF-8 encoding without BOM;
• Comments starting with ”#” are only supported at the beginning of the line.
Parameters
Plugins.MSSQL.CustomQueriesDir
Specifies the file path to a directory containing user-defined .sql files with custom queries that the plugin can execute. The plugin loads all available .sql files in the configured directory at startup. This means that any changes to the custom query files will not be reflected until the plugin is restarted. The plugin is started and stopped together with Zabbix agent 2.
Mandatory: no Default: empty
Plugins.MSSQL.Default.CACertPath
The default file path to the public key certificate of the certificate authority (CA) that issued the certificate of the MSSQL server. The certificate must be in PEM format.
Mandatory: no
Plugins.MSSQL.Default.Database
The default database name to connect to.
Mandatory: no
Plugins.MSSQL.Default.Encrypt
Specifies the default connection encryption type. Possible values are:<br>true - data sending between plugin and server is encrypted;<br>false - data sending between plugin and server is not encrypted beyond the login packet;<br>strict - data sending between plugin and server is encrypted E2E using TDS8;<br>disable - data sending between plugin and server is not encrypted.
Mandatory: no
Plugins.MSSQL.Default.HostNameInCertificate
The common name (CN) of the certificate of the MSSQL server by default.
Mandatory: no
Plugins.MSSQL.Default.Password
The password to be sent to a protected MSSQL server by default.
Mandatory: no
Plugins.MSSQL.Default.TLSMinVersion
The minimum TLS version to use by default. Possible values are: 1.0, 1.1, 1.2, 1.3.
Mandatory: no
Plugins.MSSQL.Default.TrustServerCertificate
Whether the plugin should trust the server certificate without validating it by default. Possible values: true, false.
Mandatory: no
Plugins.MSSQL.Default.Uri
The default URI to connect. The only supported schema is sqlserver://. A schema can be omitted. Embedded credentials will be ignored.
Mandatory: no Default: sqlserver://localhost:1433
Plugins.MSSQL.Default.User
The default username to be sent to a protected MSSQL server.
Mandatory: no
Plugins.MSSQL.KeepAlive
The maximum time of waiting (in seconds) before unused plugin connections are closed.
Mandatory: no Range: 60-900 Default: 300
Plugins.MSSQL.Sessions.<SessionName>.CACertPath
The file path to the public key certificate of the certificate authority (CA) that issued the certificate of the MSSQL server for the named session. The certificate must be in PEM format.<br><SessionName> - define name of a session for using in item keys.
Mandatory: no
Plugins.MSSQL.Sessions.<SessionName>.Database
The database name to connect to for the named session.<br><SessionName> - define name of a session for using in item keys.
Mandatory: no
Plugins.MSSQL.Sessions.<SessionName>.Encrypt
Specifies the connection encryption type for the named session. Possible values are:<br>true - data sending between plugin and server is encrypted;<br>false - data sending between plugin and server is not encrypted beyond the login packet;<br>strict - data sending between plugin and server is encrypted E2E using TDS8;<br>disable - data sending between plugin and server is not encrypted.<br><SessionName> - define name of a session for using in item keys.
Mandatory: no
Plugins.MSSQL.Sessions.<SessionName>.HostNameInCertificate
The common name (CN) of the certificate of the MSSQL server for the named session.<br><SessionName> - define name of a session for using in item keys.
Mandatory: no
Plugins.MSSQL.Sessions.<SessionName>.Password
The password to be sent to a protected MSSQL server for the named session.<br><SessionName> - define name of a session for using in item keys.
Mandatory: no
Plugins.MSSQL.Sessions.<SessionName>.TLSMinVersion
The minimum TLS version to use for the named session. Possible values are: 1.0, 1.1, 1.2, 1.3.<br><SessionName> - define name of a session for using in item keys.
Mandatory: no
Plugins.MSSQL.Sessions.<SessionName>.TrustServerCertificate
Whether the plugin should trust the server certificate without validating it for the named session. Possible values: true, false.<br><SessionName> - define name of a session for using in item keys.
Mandatory: no
Plugins.MSSQL.Sessions.<SessionName>.Uri
The URI to connect, for the named session. The only supported schema is sqlserver://. A schema can be omitted. Embedded credentials will be ignored.<br><SessionName> - define name of a session for using in item keys.
Mandatory: no Default: sqlserver://localhost:1433
Plugins.MSSQL.Sessions.<SessionName>.User
The username to be sent to a protected MSSQL server for the named session.<br><SessionName> - define name of a session for using in item keys.
Mandatory: no
Plugins.MSSQL.System.Path
Path to the MSSQL plugin executable. Global setting for the MSSQL plugin. Applied to all connections.<br>Example usage:<br>Plugins.MSSQL.System.Path=/usr/sbin/zabbix-agent2-plugin/
Mandatory: no
Plugins.MSSQL.Timeout
The amount of time to wait for a server to respond when first connecting and on follow-up operations in the session.
Mandatory: no Range: 1-30 Default: global timeout
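For illustration, a minimal default connection plus a custom-queries directory might be configured roughly as follows (the URI, credentials, and directory are placeholders):
Plugins.MSSQL.Default.Uri=sqlserver://192.0.2.15:1433
Plugins.MSSQL.Default.User=zabbix
Plugins.MSSQL.Default.Password=<password>
Plugins.MSSQL.CustomQueriesDir=/etc/zabbix/mssql/queries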
See also:
• Description of general Zabbix agent 2 configuration parameters: Zabbix agent 2 (UNIX) / Zabbix agent 2 (Windows)
• Instructions for configuring plugins
9 MySQL plugin
Overview
This section lists parameters supported in the MySQL Zabbix agent 2 plugin configuration file (mysql.conf).
Note that:
• The default values reflect process defaults, not the values in the shipped configuration files;
• Zabbix supports configuration files only in UTF-8 encoding without BOM;
• Comments starting with ”#” are only supported at the beginning of the line.
Parameters
Plugins.Mysql.CallTimeout
Mandatory: no
Range: 1-30
Default: global timeout
The maximum amount of time in seconds to wait for a request to be done.
Plugins.Mysql.CustomQueriesPath
Mandatory: no
Default: empty
Full path to the directory used for storing custom queries.
Plugins.Mysql.Default.Password
Mandatory: no
Default password for connecting to MySQL; used if no value is specified in an item key or named session.
Plugins.Mysql.Default.TLSCAFile
Mandatory: no (yes, if Plugins.Mysql.Default.TLSConnect is set to verify_ca or verify_full)
Full pathname of a file containing the top-level CA(s) certificates for peer certificate verification for encrypted communications between Zabbix agent 2 and monitored databases; used if no value is specified in a named session.
Plugins.Mysql.Default.TLSCertFile
Mandatory: no (yes, if Plugins.Mysql.Default.TLSConnect is set to verify_ca or verify_full)
Full pathname of a file containing the agent certificate or certificate chain for encrypted communications between Zabbix agent 2 and monitored databases; used if no value is specified in a named session.
Plugins.Mysql.Default.TLSConnect
Mandatory: no
Encryption type for communications between Zabbix agent 2 and monitored databases; used if no value is specified in a named session.
Supported values:
required - require TLS connection;
verify_ca - verify certificates;
verify_full - verify certificates and IP address.
Plugins.Mysql.Default.TLSKeyFile
Mandatory: no (yes, if Plugins.Mysql.Default.TLSConnect is set to verify_ca or verify_full)
Full pathname of a file containing the database private key for encrypted communications between Zabbix agent 2 and monitored databases; used if no value is specified in a named session.
Plugins.Mysql.Default.Uri
Mandatory: no
Default: tcp://localhost:3306
Default URI for connecting to MySQL; used if no value is specified in an item key or named session.
Plugins.Mysql.Sessions.<SessionName>.TLSKeyFile
Mandatory: yes, if Plugins.Mysql.Sessions.<SessionName>.TLSCertFile is specified
Full pathname of a file containing the database private key used for encrypted communications between Zabbix agent 2 and monitored databases.
<SessionName> - define name of a session for using in item keys.
Plugins.Mysql.Sessions.<SessionName>.Uri
Mandatory: no
Connection string of a named session.
<SessionName> - define name of a session for using in item keys.
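For illustration (the session name and addresses are placeholders), a default connection and a named session could be defined in mysql.conf as follows:
Plugins.Mysql.Default.Uri=tcp://localhost:3306
Plugins.Mysql.Sessions.Replica1.Uri=tcp://192.0.2.20:3306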
See also:
• Description of general Zabbix agent 2 configuration parameters: Zabbix agent 2 (UNIX) / Zabbix agent 2 (Windows)
• Instructions for configuring plugins
10 Oracle plugin
Overview
This section lists parameters supported in the Oracle Zabbix agent 2 plugin configuration file (oracle.conf).
Note that:
• The default values reflect process defaults, not the values in the shipped configuration files;
• Zabbix supports configuration files only in UTF-8 encoding without BOM;
• Comments starting with ”#” are only supported at the beginning of the line.
Parameters
Plugins.Oracle.CallTimeout
Mandatory: no
Range: 1-30
Default: global timeout
The maximum wait time in seconds for a request to be completed.
Plugins.Oracle.ConnectTimeout
Mandatory: no
Range: 1-30
Default: global timeout
The maximum wait time in seconds for a connection to be established.
Plugins.Oracle.CustomQueriesPath
Mandatory: no
Full pathname of a directory containing .sql files with custom queries.
Disabled by default.
Example: /etc/zabbix/oracle/sql
Plugins.Oracle.Default.Password
Mandatory: no
Default password for connecting to Oracle; used if no value is specified in an item key or named session.
Plugins.Oracle.Default.Service
Mandatory: no
Default service name for connecting to Oracle (SID is not supported); used if no value is specified in an item key or named session.
Plugins.Oracle.Default.Uri
Mandatory: no
Default: tcp://localhost:1521
Default URI for connecting to Oracle; used if no value is specified in an item key or named session.
Plugins.Oracle.Sessions.<SessionName>.User
Mandatory: no
Named session username.
<SessionName> - define name of a session for using in item keys.
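For illustration (the address, service name and session user are placeholders), oracle.conf could define a default connection and a named session user:
Plugins.Oracle.Default.Uri=tcp://192.0.2.15:1521
Plugins.Oracle.Default.Service=ORCLPDB1
Plugins.Oracle.Sessions.Prod.User=zbx_monitor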
See also:
• Description of general Zabbix agent 2 configuration parameters: Zabbix agent 2 (UNIX) / Zabbix agent 2 (Windows)
• Instructions for configuring plugins
11 PostgreSQL plugin
Overview
This section lists parameters supported in the PostgreSQL Zabbix agent 2 plugin configuration file (postgresql.conf).
The PostgreSQL plugin is a loadable plugin and is available and fully described in the PostgreSQL plugin repository.
Note that:
• The default values reflect process defaults, not the values in the shipped configuration files;
• Zabbix supports configuration files only in UTF-8 encoding without BOM;
• Comments starting with ”#” are only supported at the beginning of the line.
Parameters
Plugins.PostgreSQL.Default.CacheMode
Mandatory: no
Default: prepare
Cache mode for the PostgreSQL connection.
Supported values:
prepare (default) - will create prepared statements on the PostgreSQL server;
describe - will use the anonymous prepared statement to describe a statement without creating a statement on the server.
Note that "describe" is primarily useful when the environment does not allow prepared statements, such as when running a connection pooler like PgBouncer.
Plugins.PostgreSQL.CallTimeout
Mandatory: no
Range: 1-30
Default: global timeout
Maximum wait time (in seconds) for a request to be completed.
Plugins.PostgreSQL.CustomQueriesPath
Mandatory: no
Default: disabled
Full pathname of the directory containing .sql files with custom queries.
Plugins.PostgreSQL.Default.Database
Mandatory: no
Default database for connecting to PostgreSQL; used if no value is specified in an item key or named session.
Plugins.PostgreSQL.Default.Password
Mandatory: no
Default password for connecting to PostgreSQL; used if no value is specified in an item key or named session.
Plugins.PostgreSQL.Default.TLSCAFile
Mandatory: no (yes, if Plugins.PostgreSQL.Default.TLSConnect is set to verify_ca or verify_full)
Full pathname of a file containing the top-level CA(s) certificate for peer certificate verification for encrypted communications between Zabbix agent 2 and monitored databases; used if no value is specified in a named session.
Plugins.PostgreSQL.Default.TLSCertFile
Mandatory: no (yes, if Plugins.PostgreSQL.Default.TLSConnect is set to verify_ca or verify_full)
Full pathname of a file containing the PostgreSQL certificate or certificate chain for encrypted communications between Zabbix agent 2 and monitored databases; used if no value is specified in a named session.
Plugins.PostgreSQL.Default.TLSConnect
Mandatory: no
Encryption type for communications between Zabbix agent 2 and monitored databases; used if no value is specified in a named session.
Supported values:
required - connect using TLS as transport mode without identity checks;
verify_ca - connect using TLS and verify certificate;
verify_full - connect using TLS, verify certificate and verify that database identity (CN) specified by DBHost matches its certificate.
Undefined encryption type means unencrypted connection.
Plugins.PostgreSQL.Default.TLSKeyFile
Mandatory: no (yes, if Plugins.PostgreSQL.Default.TLSConnect is set to verify_ca or verify_full)
Full pathname of a file containing the PostgreSQL private key for encrypted communications between Zabbix agent 2 and monitored databases; used if no value is specified in a named session.
Plugins.PostgreSQL.Default.Uri
Mandatory: no
Default URI for connecting to PostgreSQL; used if no value is specified in an item key or named session.
Plugins.PostgreSQL.Sessions.<SessionName>.TLSCertFile
Mandatory: yes, if Plugins.PostgreSQL.Sessions.<SessionName>.TLSKeyFile is specified
Full pathname of a file containing the PostgreSQL certificate or certificate chain.
<SessionName> - define name of a session for using in item keys.
Plugins.PostgreSQL.Sessions.<SessionName>.TLSConnect
Mandatory: no
Encryption type for PostgreSQL connection.
<SessionName> - define name of a session for using in item keys.
Supported values:
required - connect using TLS as transport mode without identity checks;
verify_ca - connect using TLS and verify certificate;
verify_full - connect using TLS, verify certificate and verify that database identity (CN) specified by DBHost matches its certificate.
Undefined encryption type means unencrypted connection.
Plugins.PostgreSQL.Sessions.<SessionName>.TLSKeyFile
Mandatory: yes, if Plugins.PostgreSQL.Sessions.<SessionName>.TLSCertFile is specified
Full pathname of a file containing the PostgreSQL private key.
<SessionName> - define name of a session for using in item keys.
Plugins.PostgreSQL.Sessions.<SessionName>.Uri
Mandatory: no
Connection string of a named session.
<SessionName> - define name of a session for using in item keys.
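For illustration (the session name and values are placeholders, and only parameters described above are used), postgresql.conf could define a default database and a named session that requires TLS:
Plugins.PostgreSQL.Default.Database=zabbix
Plugins.PostgreSQL.Sessions.Main.Uri=tcp://192.0.2.30:5432
Plugins.PostgreSQL.Sessions.Main.TLSConnect=required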
See also:
• Description of general Zabbix agent 2 configuration parameters: Zabbix agent 2 (UNIX) / Zabbix agent 2 (Windows)
• Instructions for configuring plugins
12 Redis plugin
Overview
This section lists parameters supported in the Redis Zabbix agent 2 plugin configuration file (redis.conf).
Note that:
• The default values reflect process defaults, not the values in the shipped configuration files;
• Zabbix supports configuration files only in UTF-8 encoding without BOM;
• Comments starting with ”#” are only supported at the beginning of the line.
Parameters
Plugins.Redis.Default.Password
Mandatory: no
Default password for connecting to Redis; used if no value is specified in an item key or named session.
Plugins.Redis.Default.Uri
Mandatory: no
Default: tcp://localhost:6379
Default URI for connecting to Redis; used if no value is specified in an item key or named session.
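For illustration (the address and password are placeholders), redis.conf could set a default connection:
Plugins.Redis.Default.Uri=tcp://192.0.2.40:6379
Plugins.Redis.Default.Password=<password>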
See also:
• Description of general Zabbix agent 2 configuration parameters: Zabbix agent 2 (UNIX) / Zabbix agent 2 (Windows)
• Instructions for configuring plugins
13 Smart plugin
Overview
This section lists parameters supported in the Smart Zabbix agent 2 plugin configuration file (smart.conf).
Note that:
• The default values reflect process defaults, not the values in the shipped configuration files;
• Zabbix supports configuration files only in UTF-8 encoding without BOM;
• Comments starting with ”#” are only supported at the beginning of the line.
Parameters
Plugins.Smart.Path
Mandatory: no
Default: smartctl
Path to the smartctl executable.
Plugins.Smart.Timeout
Mandatory: no
Range: 1-30
Default: global timeout
Request execution timeout (how long to wait for a request to complete before shutting it down).
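For illustration (the path is an example and depends on the system), smart.conf could point the plugin to a non-default smartctl location and raise the timeout:
Plugins.Smart.Path=/usr/local/sbin/smartctl
Plugins.Smart.Timeout=10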
See also:
• Description of general Zabbix agent 2 configuration parameters: Zabbix agent 2 (UNIX) / Zabbix agent 2 (Windows)
• Instructions for configuring plugins
8 Zabbix Java gateway
If you use the startup.sh and shutdown.sh scripts for starting Zabbix Java gateway, then you can specify the necessary configuration parameters in the settings.sh file. The startup and shutdown scripts source the settings file and take care of converting shell variables (listed in the first column) to Java properties (listed in the second column).
If you start Zabbix Java gateway manually by running java directly, then you specify the corresponding Java properties on the
command line.
Warning:
Port 10052 is not IANA registered.
9 Zabbix web service
Overview
The Zabbix web service is a process that is used for communication with external web services.
The parameters supported by the Zabbix web service configuration file (zabbix_web_service.conf) are listed in this section.
The parameters are listed without additional information. Click on the parameter to see the full details.
Parameter Description
AllowedIP A list of comma delimited IP addresses, optionally in CIDR notation, or DNS names of Zabbix
servers and Zabbix proxies.
DebugLevel The debug level.
ListenPort The service will listen on this port for connections from the server.
LogFile The name of the log file.
LogFileSize The maximum size of the log file.
LogType The type of the log output.
Timeout Spend no more than Timeout seconds on processing.
TLSAccept What incoming connections to accept.
TLSCAFile The full pathname of a file containing the top-level CA(s) certificates for peer certificate
verification, used for encrypted communications between Zabbix components.
TLSCertFile The full pathname of a file containing the service certificate or certificate chain, used for
encrypted communications between Zabbix components.
TLSKeyFile The full pathname of a file containing the service private key, used for encrypted
communications between Zabbix components.
All parameters are non-mandatory unless explicitly stated otherwise.
Note that:
• The default values reflect process defaults, not the values in the shipped configuration files;
• Zabbix supports configuration files only in UTF-8 encoding without BOM;
• Comments starting with ”#” are only supported at the beginning of the line.
Parameter details
AllowedIP
A list of comma delimited IP addresses, optionally in CIDR notation, or DNS names of Zabbix servers and Zabbix proxies. Incoming connections will be accepted only from the hosts listed here.
If IPv6 support is enabled then 127.0.0.1, ::127.0.0.1, ::ffff:127.0.0.1 are treated equally and ::/0 will allow any IPv4 or IPv6 address. 0.0.0.0/0 can be used to allow any IPv4 address.
Example:
127.0.0.1,192.168.1.0/24,::1,2001:db8::/32,zabbix.example.com
Mandatory: yes
DebugLevel
Specify the debug level:
0 - basic information about starting and stopping of Zabbix processes;
1 - critical information;
2 - error information;
3 - warnings;
4 - for debugging (produces lots of information);
5 - extended debugging (produces even more information).
ListenPort
The service will listen on this port for connections from the server.
LogFile
Example:
/tmp/zabbix_web_service.log
Mandatory: Yes, if LogType is set to file; otherwise no
LogFileSize
The maximum size of a log file in MB.
0 - disable automatic log rotation.
Note: If the log file size limit is reached and file rotation fails, for whatever reason, the existing log file is truncated and started anew.
LogType
The type of the log output:
file - write log to the file specified by LogFile parameter;
system - write log to syslog;
console - write log to standard output.
Default: file
Timeout
TLSAccept
What incoming connections to accept:
unencrypted - accept connections without encryption (default);
cert - accept connections with TLS and a certificate.
Default: unencrypted
TLSCAFile
The full pathname of the file containing the top-level CA(s) certificates for peer certificate verification, used for encrypted commu-
nications between Zabbix components.
TLSCertFile
The full pathname of the file containing the service certificate or certificate chain, used for encrypted communications with Zabbix
components.
TLSKeyFile
The full pathname of the file containing the service private key, used for encrypted communications between Zabbix components.
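For illustration (addresses and paths are examples), a minimal zabbix_web_service.conf could contain:
AllowedIP=127.0.0.1,192.168.1.0/24
LogType=file
LogFile=/var/log/zabbix/zabbix_web_service.log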
10 Inclusion
Overview
Additional files or directories can be included into server/proxy/agent configuration using the Include parameter.
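For example (paths are illustrative), an agent configuration file could pull in a single extra file or a whole directory of files:
Include=/etc/zabbix/zabbix_agentd.userparams.conf
Include=/etc/zabbix/zabbix_agentd.d/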
Notes on inclusion
If the Include parameter is used for including a file, the file must be readable.
If the Include parameter is used for including a directory:
• All files in the directory must be readable.
• No particular order of inclusion should be assumed (e.g. files are not included in alphabetical order). Therefore do not define
one parameter in several ”Include” files (e.g. to override a general setting with a specific one).
• All files in the directory are included into configuration.
• Beware of file backup copies automatically created by some text editors. For example, if editing the "include/my_specific.conf" file produces a backup copy "include/my_specific.conf.BAK", then both files will be included. Move "include/my_specific.conf.BAK" out of the "Include" directory. On Linux, contents of the "Include" directory can be checked with the "ls -al" command for unnecessary files.
4 Protocols
Overview
Request and response messages must begin with header and data length.
Passive proxy
Configuration request
The server will first send an empty proxy config request. This request is sent every ProxyConfigFrequency (server configuration parameter) seconds.
The proxy responds with the current proxy version, session token and configuration revision. The server responds with the configuration data that need to be updated.
server→proxy:
request string 'proxy config'
proxy→server:
version string Proxy version (<major>.<minor>.<build>).
session string Proxy configuration session token.
config_revision number Proxy configuration revision.
server→proxy:
full_sync number 1 - if full configuration data is sent; absent - otherwise (optional).
data array Object of table data. Absent if configuration has not been changed (optional).
<table> object One or more objects with <table> data (optional, depending on changes).
fields array Array of field names.
- string Field name.
data array Array of rows.
- array Array of columns.
- string,number Column value with type depending on column type in database schema.
proxy→server:
response string Request success information ('success' or 'failed').
version string Proxy version (<major>.<minor>.<build>).
Example:
server→proxy:
{
"request":"proxy config"
}
proxy→server:
{
"version": "7.0.0",
"session": "0033124949800811e5686dbfd9bcea98",
"config_revision": 0
}
server→proxy:
{
"full_sync": 1,
"data": {
"hosts": {
"fields": ["hostid", "host", "status", "ipmi_authtype", "ipmi_privilege", "ipmi_username", "ipmi_password"
"data": [
[10084, "Zabbix server", 0, -1, 2, "", "", "Zabbix server", 1, 1, "", "", "", ""]
]
},
"interface": {
"fields": ["interfaceid", "hostid", "main", "type", "useip", "ip", "dns", "port", "available"],
"data": [
[1, 10084, 1, 1, 1, "127.0.0.1", "", "10053", 1]
]
},
"interface_snmp": {
"fields": ["interfaceid", "version", "bulk", "community", "securityname", "securitylevel", "authpassphrase
"data": []
},
"host_inventory": {
"fields": ["hostid", "type", "type_full", "name", "alias", "os", "os_full", "os_short", "serialno_a", "ser
"data": [
[10084, "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "5
]
},
"items": {
"fields": ["itemid", "type", "snmp_oid", "hostid", "key_", "delay", "history", "status", "value_type", "tr
"data": [
[44161, 7, "", 10084, "agent.hostmetadata", "10s", "90d", 0, 1, "", "", "", "", 0, "", "", "", "", 0, null
[44162, 0, "", 10084, "agent.ping", "10s", "90d", 0, 3, "", "", "", "", 0, "", "", "", "", 0, 1, 0, "", nu
]
},
"item_rtdata": {
"fields": ["itemid", "lastlogsize", "mtime"],
"data": [
[44161, 0, 0],
[44162, 0, 0]
]
},
"item_preproc": {
"fields": ["item_preprocid", "itemid", "step", "type", "params", "error_handler", "error_handler_params"],
"data": []
},
"item_parameter": {
"fields": ["item_parameterid", "itemid", "name", "value"],
"data": []
},
"globalmacro": {
"fields": ["globalmacroid", "macro", "value", "type"],
"data": [
[2, "{$SNMP_COMMUNITY}", "public", 0]
]
},
"hosts_templates": {
"fields": ["hosttemplateid", "hostid", "templateid", "link_type"],
"data": []
},
"hostmacro": {
"fields": ["hostmacroid", "hostid", "macro", "value", "type", "automatic"],
"data": [
[5676, 10084, "{$M}", "AppID=zabbix_server&Query=Safe=passwordSafe;Object=zabbix:Content", 2, 0]
]
},
"drules": {
"fields": ["druleid", "name", "iprange", "delay"],
"data": [
[2, "Local network", "127.0.0.1", "10s"]
]
},
"dchecks": {
"fields": ["dcheckid", "druleid", "type", "key_", "snmp_community", "ports", "snmpv3_securityname", "snmpv
"data": [
[2, 2, 9, "system.uname", "", "10052", "", 0, "", "", 0, 0, 0, "", 1, 0]
]
},
"regexps": {
"fields": ["regexpid", "name"],
"data": [
[1, "File systems for discovery"],
[2, "Network interfaces for discovery"],
[3, "Storage devices for SNMP discovery"],
[4, "Windows service names for discovery"],
[5, "Windows service startup states for discovery"]
]
},
"expressions": {
"fields": ["expressionid", "regexpid", "expression", "expression_type", "exp_delimiter", "case_sensitive"]
"data": [
[1, 1, "^(btrfs|ext2|ext3|ext4|reiser|xfs|ffs|ufs|jfs|jfs2|vxfs|hfs|apfs|refs|ntfs|fat32|zfs)$", 3, ",", 0
[3, 3, "^(Physical memory|Virtual memory|Memory buffers|Cached memory|Swap space)$", 4, ",", 1],
[5, 4, "^(MMCSS|gupdate|SysmonLog|clr_optimization_v2.0.50727_32|clr_optimization_v4.0.30319_32)$", 4, ","
[6, 5, "^(automatic|automatic delayed)$", 3, ",", 1],
[7, 2, "^Software Loopback Interface", 4, ",", 1],
[8, 2, "^(In)?[Ll]oop[Bb]ack[0-9._]*$", 4, ",", 1],
[9, 2, "^NULL[0-9.]*$", 4, ",", 1],
[10, 2, "^[Ll]o[0-9.]*$", 4, ",", 1],
[11, 2, "^[Ss]ystem$", 4, ",", 1],
[12, 2, "^Nu[0-9.]*$", 4, ",", 1]
]
},
"config": {
"fields": ["configid", "snmptrap_logging", "hk_history_global", "hk_history", "autoreg_tls_accept"],
"data": [
[1, 1, 0, "90d", 1]
]
},
"httptest": {
"fields": ["httptestid", "name", "delay", "agent", "authentication", "http_user", "http_password", "hostid
"data": []
},
"httptestitem": {
"fields": ["httptestitemid", "httptestid", "itemid", "type"],
"data": []
},
"httptest_field": {
"fields": ["httptest_fieldid", "httptestid", "type", "name", "value"],
"data": []
},
"httpstep": {
"fields": ["httpstepid", "httptestid", "name", "no", "url", "timeout", "posts", "required", "status_codes"
"data": []
},
"httpstepitem": {
"fields": ["httpstepitemid", "httpstepid", "itemid", "type"],
"data": []
},
"httpstep_field": {
"fields": ["httpstep_fieldid", "httpstepid", "type", "name", "value"],
"data": []
},
"config_autoreg_tls": {
"fields": ["autoreg_tlsid", "tls_psk_identity", "tls_psk"],
"data": [
[1, "", ""]
]
}
},
"macro.secrets": {
"AppID=zabbix_server&Query=Safe=passwordSafe;Object=zabbix": {
"Content": "738"
}
},
"config_revision": 2
}
proxy→server:
{
"response": "success",
"version": "7.0.0"
}
Data request
The proxy data request is used to obtain host interface availability, historical, discovery and autoregistration data from the proxy. This request is sent every ProxyDataFrequency (server configuration parameter) seconds.
server→proxy:
request string 'proxy data'
proxy→server:
session string Data session token.
interface availability array (optional) Array of interface availability data objects.
interfaceid number Interface identifier.
available number Interface availability:
0, INTERFACE_AVAILABLE_UNKNOWN - unknown
1, INTERFACE_AVAILABLE_TRUE - available
2, INTERFACE_AVAILABLE_FALSE - unavailable
error string Interface error message or empty string.
history data array (optional) Array of history data objects.
itemid number Item identifier.
clock number Item value timestamp (seconds).
ns number Item value timestamp (nanoseconds).
value string (optional) Item value.
id number Value identifier (ascending counter, unique within one data session).
timestamp number (optional) Timestamp of log type items.
source string (optional) Eventlog item source value.
severity number (optional) Eventlog item severity value.
eventid number (optional) Eventlog item eventid value.
state string (optional) Item state:
0, ITEM_STATE_NORMAL
1, ITEM_STATE_NOTSUPPORTED
lastlogsize number (optional) Last log size of log type items.
mtime number (optional) Modification time of log type items.
discovery data array (optional) Array of discovery data objects.
clock number Discovery data timestamp.
druleid number Discovery rule identifier.
dcheckid number Discovery check identifier or null for discovery rule data.
type number Discovery check type:
status number Discovery object status:
0, DOBJECT_STATUS_UP - Service UP
1, DOBJECT_STATUS_DOWN - Service DOWN
auto registration array (optional) Array of autoregistration data objects.
clock number Autoregistration data timestamp.
host string Host name.
ip string (optional) Host IP address.
dns string (optional) Resolved DNS name from IP address.
port string (optional) Host port.
host_metadata string (optional) Host metadata sent by agent (based on HostMetadata or HostMetadataItem agent configuration parameter).
tasks array (optional) Array of tasks.
type number Task type:
0, ZBX_TM_TASK_PROCESS_REMOTE_COMMAND_RESULT - remote command result
status number Remote-command execution status:
Example:
server→proxy:
{
"request": "proxy data"
}
proxy→server:
{
"session": "12345678901234567890123456789012"
"interface availability": [
{
"interfaceid": 1,
"available": 1,
"error": ""
},
{
"interfaceid": 2,
"available": 2,
"error": "Get value from agent failed: cannot connect to [[127.0.0.1]:10049]: [111] Connection
},
{
"interfaceid": 3,
"available": 1,
"error": ""
},
{
"interfaceid": 4,
"available": 1,
"error": ""
}
],
"history data":[
{
"itemid":"12345",
"clock":1478609647,
"ns":332510044,
"value":"52956612",
"id": 1
},
{
"itemid":"12346",
"clock":1478609647,
"ns":330690279,
"state":1,
"value":"Cannot find information for this network interface in /proc/net/dev.",
"id": 2
}
],
"discovery data":[
{
"clock":1478608764,
"drule":2,
"dcheck":3,
"type":12,
"ip":"10.3.0.10",
"dns":"vdebian",
"status":1
},
{
"clock":1478608764,
"drule":2,
"dcheck":null,
"type":-1,
"ip":"10.3.0.10",
"dns":"vdebian",
"status":1
}
],
"auto registration":[
{
"clock":1478608371,
"host":"Logger1",
"ip":"10.3.0.1",
"dns":"localhost",
"port":"10050"
},
{
"clock":1478608381,
"host":"Logger2",
"ip":"10.3.0.2",
"dns":"localhost",
"port":"10050"
}
],
"tasks":[
{
"type": 0,
"status": 0,
"parent_taskid": 10
},
{
"type": 0,
"status": 1,
"error": "No permissions to execute task.",
"parent_taskid": 20
}
],
"version":"7.0.0"
}
server→proxy:
{
"response": "success",
"tasks":[
{
"type": 1,
"clock": 1478608371,
"ttl": 600,
"commandtype": 2,
"command": "restart_service1.sh",
"execute_on": 2,
"port": 80,
"authtype": 0,
"username": "userA",
"password": "password1",
"publickey": "MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCqGKukO1De7zhZj6+H0qtjTkVxwTCpvKe",
"privatekey": "lsuusFncCzWBQ7RKNUSesmQRMSGkVb1/3j+skZ6UtW+5u09lHNsj6tQ5QCqGKukO1De7zhd",
"parent_taskid": 10,
"hostid": 10070
},
{
"type": 1,
"clock": 1478608381,
"ttl": 600,
"commandtype": 1,
"command": "restart_service2.sh",
"execute_on": 0,
"authtype": 0,
"username": "",
"password": "",
"publickey": "",
"privatekey": "",
"parent_taskid": 20,
"hostid": 10084
}
]
}
Active proxy
Configuration request
The proxy config request is sent by active proxy to obtain proxy configuration data. This request is sent every ProxyConfigFrequency
(proxy configuration parameter) seconds.
proxy→server:
request string 'proxy config'
host string Proxy name.
version string Proxy version (<major>.<minor>.<build>).
session string Proxy configuration session token.
config_revision number Proxy configuration revision.
server→proxy:
full_sync number 1 - if full configuration data is sent, absent otherwise (optional).
data array Object of table data. Absent if configuration has not been changed (optional).
<table> object One or more objects with <table> data (optional, depending on changes).
fields array Array of field names.
- string Field name.
data array Array of rows.
- array Array of columns.
- string,number Column value with type depending on column type in database schema.
Example:
proxy→server:
{
"request": "proxy config",
"host": "Zabbix proxy",
"version":"7.0.0",
"session": "fd59a09ff4e9d1fb447de1f04599bcf6",
"config_revision": 0
}
server→proxy:
{
"full_sync": 1,
"data": {
"hosts": {
"fields": ["hostid", "host", "status", "ipmi_authtype", "ipmi_privilege", "ipmi_username", "ipmi_password"
"data": [
[10084, "Zabbix server", 0, -1, 2, "", "", "Zabbix server", 1, 1, "", "", "", ""]
]
},
"interface": {
"fields": ["interfaceid", "hostid", "main", "type", "useip", "ip", "dns", "port", "available"],
"data": [
[1, 10084, 1, 1, 1, "127.0.0.1", "", "10053", 1]
]
},
"interface_snmp": {
"fields": ["interfaceid", "version", "bulk", "community", "securityname", "securitylevel", "authpassphrase
"data": []
},
"host_inventory": {
"fields": ["hostid", "type", "type_full", "name", "alias", "os", "os_full", "os_short", "serialno_a", "ser
"data": [
[10084, "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "5
]
},
"items": {
"fields": ["itemid", "type", "snmp_oid", "hostid", "key_", "delay", "history", "status", "value_type", "tr
"data": [
[44161, 7, "", 10084, "agent.hostmetadata", "10s", "90d", 0, 1, "", "", "", "", 0, "", "", "", "", 0, null
[44162, 0, "", 10084, "agent.ping", "10s", "90d", 0, 3, "", "", "", "", 0, "", "", "", "", 0, 1, 0, "", nu
]
},
"item_rtdata": {
"fields": ["itemid", "lastlogsize", "mtime"],
"data": [
[44161, 0, 0],
[44162, 0, 0]
]
},
"item_preproc": {
"fields": ["item_preprocid", "itemid", "step", "type", "params", "error_handler", "error_handler_params"],
"data": []
},
"item_parameter": {
"fields": ["item_parameterid", "itemid", "name", "value"],
"data": []
},
"globalmacro": {
"fields": ["globalmacroid", "macro", "value", "type"],
"data": [
[2, "{$SNMP_COMMUNITY}", "public", 0]
]
},
"hosts_templates": {
"fields": ["hosttemplateid", "hostid", "templateid", "link_type"],
"data": []
},
"hostmacro": {
"fields": ["hostmacroid", "hostid", "macro", "value", "type", "automatic"],
"data": [
[5676, 10084, "{$M}", "AppID=zabbix_server&Query=Safe=passwordSafe;Object=zabbix:Content", 2, 0]
]
},
"drules": {
"fields": ["druleid", "name", "iprange", "delay"],
"data": [
[2, "Local network", "127.0.0.1", "10s"]
]
},
"dchecks": {
"fields": ["dcheckid", "druleid", "type", "key_", "snmp_community", "ports", "snmpv3_securityname", "snmpv
"data": [
[2, 2, 9, "system.uname", "", "10052", "", 0, "", "", 0, 0, 0, "", 1, 0]
]
},
"regexps": {
"fields": ["regexpid", "name"],
"data": [
[1, "File systems for discovery"],
[2, "Network interfaces for discovery"],
[3, "Storage devices for SNMP discovery"],
[4, "Windows service names for discovery"],
[5, "Windows service startup states for discovery"]
]
},
"expressions": {
"fields": ["expressionid", "regexpid", "expression", "expression_type", "exp_delimiter", "case_sensitive"]
"data": [
[1, 1, "^(btrfs|ext2|ext3|ext4|reiser|xfs|ffs|ufs|jfs|jfs2|vxfs|hfs|apfs|refs|ntfs|fat32|zfs)$", 3, ",", 0
[3, 3, "^(Physical memory|Virtual memory|Memory buffers|Cached memory|Swap space)$", 4, ",", 1],
[5, 4, "^(MMCSS|gupdate|SysmonLog|clr_optimization_v2.0.50727_32|clr_optimization_v4.0.30319_32)$", 4, ","
[6, 5, "^(automatic|automatic delayed)$", 3, ",", 1],
[7, 2, "^Software Loopback Interface", 4, ",", 1],
[8, 2, "^(In)?[Ll]oop[Bb]ack[0-9._]*$", 4, ",", 1],
[9, 2, "^NULL[0-9.]*$", 4, ",", 1],
[10, 2, "^[Ll]o[0-9.]*$", 4, ",", 1],
[11, 2, "^[Ss]ystem$", 4, ",", 1],
[12, 2, "^Nu[0-9.]*$", 4, ",", 1]
]
},
"config": {
"fields": ["configid", "snmptrap_logging", "hk_history_global", "hk_history", "autoreg_tls_accept"],
"data": [
[1, 1, 0, "90d", 1]
]
},
"httptest": {
"fields": ["httptestid", "name", "delay", "agent", "authentication", "http_user", "http_password", "hostid
"data": []
},
"httptestitem": {
"fields": ["httptestitemid", "httptestid", "itemid", "type"],
"data": []
},
"httptest_field": {
"fields": ["httptest_fieldid", "httptestid", "type", "name", "value"],
"data": []
},
"httpstep": {
"fields": ["httpstepid", "httptestid", "name", "no", "url", "timeout", "posts", "required", "status_codes"
"data": []
},
"httpstepitem": {
"fields": ["httpstepitemid", "httpstepid", "itemid", "type"],
"data": []
},
"httpstep_field": {
"fields": ["httpstep_fieldid", "httpstepid", "type", "name", "value"],
"data": []
},
"config_autoreg_tls": {
"fields": ["autoreg_tlsid", "tls_psk_identity", "tls_psk"],
"data": [
[1, "", ""]
]
}
},
"macro.secrets": {
"AppID=zabbix_server&Query=Safe=passwordSafe;Object=zabbix": {
"Content": "738"
}
},
"config_revision": 2
}
Data request
The proxy data request is sent by proxy to provide host interface availability, history, discovery and autoregistration data. This
request is sent every DataSenderFrequency (proxy configuration parameter) seconds. Note that active proxy will still poll Zabbix
server every second for remote command tasks (with an empty proxy data request).
proxy→server:
request string 'proxy data'
host string Proxy name.
session string Data session token.
interface availability array (optional) Array of interface availability data objects.
interfaceid number Interface identifier.
available number Interface availability:
0, INTERFACE_AVAILABLE_UNKNOWN - unknown
1, INTERFACE_AVAILABLE_TRUE - available
2, INTERFACE_AVAILABLE_FALSE - unavailable
error string Interface error message or empty string.
history data array (optional) Array of history data objects.
itemid number Item identifier.
clock number Item value timestamp (seconds).
ns number Item value timestamp (nanoseconds).
value string (optional) Item value.
id number Value identifier (ascending counter, unique within one data session).
timestamp number (optional) Timestamp of log type items.
source string (optional) Eventlog item source value.
severity number (optional) Eventlog item severity value.
eventid number (optional) Eventlog item eventid value.
state string (optional) Item state:
0, ITEM_STATE_NORMAL
1, ITEM_STATE_NOTSUPPORTED
lastlogsize number (optional) Last log size of log type items.
mtime number (optional) Modification time of log type items.
discovery data array (optional) Array of discovery data objects.
clock number Discovery data timestamp.
druleid number Discovery rule identifier.
dcheckid number Discovery check identifier or null for discovery rule data.
type number Discovery check type:
status number Discovery object status:
0, DOBJECT_STATUS_UP - Service UP
1, DOBJECT_STATUS_DOWN - Service DOWN
auto registration array (optional) Array of autoregistration data objects.
clock number Autoregistration data timestamp.
host string Host name.
ip string (optional) Host IP address.
dns string (optional) Resolved DNS name from IP address.
port string (optional) Host port.
host_metadata string (optional) Host metadata sent by agent (based on HostMetadata or HostMetadataItem agent configuration parameter).
tasks array (optional) Array of tasks.
type number Task type:
0, ZBX_TM_TASK_PROCESS_REMOTE_COMMAND_RESULT - remote command result
status number Remote-command execution status:
server→proxy:
upload string Upload control.
Possible values:
enabled - normal operation
disabled - server is not accepting data (possibly due to internal cache over limit)
tasks array (optional) Array of tasks.
type number Task type:
Example:
proxy→server:
{
"request": "proxy data",
"host": "Zabbix proxy",
"session": "818cdd1b537bdc5e50c09ed4969235b6",
"interface availability": [{
"interfaceid": 1,
"available": 1,
"error": ""
}],
"history data": [{
"id": 1114,
"itemid": 44162,
"clock": 1665730632,
"ns": 798953105,
"value": "1"
}, {
"id": 1115,
"itemid": 44161,
"clock": 1665730633,
"ns": 811684663,
"value": "58"
}],
"auto registration": [{
"clock": 1665730633,
"host": "Zabbix server",
"ip": "127.0.0.1",
"dns": "localhost",
"port": "10053",
"host_metadata": "58",
"tls_accepted": 1
}],
"discovery data": [{
"clock": 1665732232,
"drule": 2,
"dcheck": 2,
"ip": "127.0.0.1",
"dns": "localhost",
"port": 10052,
"status": 1
}, {
"clock": 1665732232,
"drule": 2,
"dcheck": null,
"ip": "127.0.0.1",
"dns": "localhost",
"status": 1
}],
"host data": [{
"hostid": 10084,
"active_status": 1
}],
"tasks": [{
"type": 3,
"clock": 1665730985,
"ttl": 0,
"status": -1,
"info": "Remote commands are not enabled",
"parent_taskid": 3
}],
"version": "7.0.0",
"clock": 1665730643,
"ns": 65389964
}
server→proxy:
{
"upload": "enabled",
"response": "success",
"tasks": [{
"type": 2,
"clock": 1665730986,
"ttl": 600,
"commandtype": 0,
"command": "ping -c 3 127.0.0.1; case $? in [01]) true;; *) false;; esac",
"execute_on": 2,
"port": 0,
"authtype": 0,
"username": "",
"password": "",
"publickey": "",
"privatekey": "",
"alertid": 0,
"parent_taskid": 4,
"hostid": 10084
}]
}
Please refer to Passive and active agent checks page for more information on Zabbix agent and Zabbix agent 2 protocols.
Common data
Log request
A request sent by a plugin to write a log message into the agent log file.
Example:
{"id":0,"type":1,"severity":3,"message":"message"}
Register request
A request sent by the agent during the agent startup phase to obtain the metrics provided by a plugin and register them.
Example:
{"id":1,"type":2,"version":"1.0"}
Register response
Examples:
{"id":2,"type":3,"error":"error message"}
Start request
The request doesn’t have specific parameters, it only contains common data parameters.
Example:
{"id":3,"type":4}
Terminate request
The request doesn’t have specific parameters, it only contains common data parameters.
Example:
{"id":3,"type":5}
Export request
Example:
{"id":4,"type":6,"key":"test.key","parameters":["foo","bar"]}
Export response
Parameters specific to export responses:
Examples:
{"id":5,"type":7,"value":"response"}
or
{"id":5,"type":7,"error":"error message"}
Configure request
Example:
{"id":6,"type":8,"global_options":{...},"private_options":{...}}
Validate request
Example:
{"id":7,"type":9,"private_options":{...}}
Validate response
Example:
{"id":8,"type":10}
or
{"id":8,"type":10,"error":"error message"}
Overview
Zabbix server and Zabbix proxy use a JSON-based communication protocol for receiving data from Zabbix sender. Data can be
received with the help of a trapper item, or an HTTP agent item with trapping enabled.
Request and response messages must begin with header and data length.
{
"request": "sender data",
"data": [
{
"host": "<hostname>",
"key": "trap",
"value": "test value"
}
]
}
{
"response": "success",
"info": "processed: 1; failed: 0; total: 1; seconds spent: 0.060753"
}
Alternatively, Zabbix sender can send a request with a timestamp and nanoseconds.
{
"request": "sender data",
"data": [
{
"host": "<hostname>",
"key": "trap",
"value": "test value",
"clock": 1516710794,
"ns": 592397170
},
{
"host": "<hostname>",
"key": "trap",
"value": "test value",
"clock": 1516710795,
"ns": 192399456
}
],
"clock": 1516712029,
"ns": 873386094
}
Zabbix server response
{
"response": "success",
"info": "processed: 2; failed: 0; total: 2; seconds spent: 0.060904"
}
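For reference, values like those above can also be submitted with the zabbix_sender utility rather than by crafting the JSON manually (the server address and host name below are placeholders):
zabbix_sender -z 192.0.2.1 -s "<hostname>" -k trap -o "test value"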
6 Header
Overview
The header is present in all request and response messages between Zabbix components. It is required to determine the message length and whether the message is compressed and/or uses the large packet format.
Zabbix communications protocol has 1GB packet size limit per connection. The limit of 1GB is applied to both the received packet
data length and the uncompressed data length.
When sending configuration to Zabbix proxy, the packet size limit is increased to 4GB to allow syncing large configurations. When
data length before compression exceeds 4GB, Zabbix server automatically starts using the large packet format (0x04 flag) which
increases the packet size limit to 16GB.
Note that while a large packet format can be used for sending any data, currently only the Zabbix proxy configuration syncer can
handle packets that are larger than 1GB.
Structure
The header consists of four fields. All numbers in the header are formatted as little-endian.
Field | Size | Size (large packet) | Description
<PROTOCOL> | 4 | 4 | "ZBXD" or 5A 42 58 44
<FLAGS> | 1 | 1 | Protocol flags:
0x01 - Zabbix communications protocol
0x02 - compression
0x04 - large packet
<DATALEN> | 4 | 8 | Data length.
<RESERVED> | 4 | 8 | When compression is used (0x02 flag) - the length of uncompressed data.
When compression is not used - 00 00 00 00.
Contains item timeout value in passive check request.
Examples
Here are some code snippets showing how to add Zabbix protocol header to the data you want to send in order to obtain the packet
you should send to Zabbix so that it is interpreted correctly. These code snippets assume that the data is not larger than 1GB, thus
the large packet format is not used.
Python
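A minimal Python sketch of prepending the header to an uncompressed payload (the function and variable names are arbitrary and not part of the protocol):
import struct

def zbx_create_packet(data: bytes) -> bytes:
    # 4-byte protocol signature, 1-byte flags (0x01 - Zabbix communications protocol),
    # then two little-endian 32-bit fields: data length and reserved (0 when no compression is used)
    return b"ZBXD" + struct.pack("<BII", 0x01, len(data), 0) + data

packet = zbx_create_packet(b'{"request":"active checks","host":"Zabbix server"}')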
Perl
sub zbx_create_header($;$)
{
	my $plain_data_size = shift;
	my $compressed_data_size = shift;
	my $protocol = "ZBXD";
	my $flags = 0x01;
	my $datalen;
	my $reserved;

	if (!defined($compressed_data_size))
	{
		$datalen = $plain_data_size;
		$reserved = 0;
	}
	else
	{
		$flags |= 0x02;
		$datalen = $compressed_data_size;
		$reserved = $plain_data_size;
	}

	# assemble the header: protocol signature, flags byte and two little-endian 32-bit fields
	return $protocol . chr($flags) . pack("V", $datalen) . pack("V", $reserved);
}
7 Newline-delimited JSON export protocol
This section presents details of the export protocol in a newline-delimited JSON format, used in:
• trigger events
• item values
• trends (export to files only)
All files have a .ndjson extension. Each line of the export file is a JSON object.
Trigger events
clock number Number of seconds since Epoch to the moment when problem
was detected (integer part).
ns number Number of nanoseconds to be added to clock to get a precise
problem detection time.
value number 1 (always).
eventid number Problem event ID.
name string Problem event name.
severity number Problem event severity (0 - Not classified, 1 - Information, 2 -
Warning, 3 - Average, 4 - High, 5 - Disaster).
hosts array List of hosts involved in the trigger expression; there should be
at least one element in array.
- object
host string Host name.
name string Visible host name.
groups array List of host groups of all hosts involved in the trigger expression;
there should be at least one element in array.
- string Host group name.
tags array List of problem tags (can be empty).
- object
tag string Tag name.
value string Tag value (can be empty).
clock number Number of seconds since Epoch to the moment when problem was
resolved (integer part).
ns number Number of nanoseconds to be added to clock to get a precise
problem resolution time.
value number 0 (always).
eventid number Recovery event ID.
p_eventid number Problem event ID.
Examples
Problem recovery:
{"clock":1519304345,"ns":987654321,"value":0,"eventid":43,"p_eventid":42}
Problem (multiple problem event generation):
{"clock":1519304286,"ns":123456789,"value":1,"eventid":43,"name":"Either Zabbix agent is unreachable on Ho
{"clock":1519304346,"ns":987654321,"value":0,"eventid":44,"p_eventid":43}
{"clock":1519304346,"ns":987654321,"value":0,"eventid":44,"p_eventid":42}
Item values
Examples
Trends
Examples
5 Items
1 vm.memory.size parameters
Overview
This section provides some parameter details for the vm.memory.size[<mode>] agent item.
Parameters
• slab - total amount of memory used by the kernel to cache data structures for its own use
• total - total physical memory available
• used - used memory, calculated differently depending on the platform (see the table below)
• wired - memory that is marked to always stay in RAM. It is never moved to disk.
Warning:
Some of these parameters are platform-specific and might not be available on your platform. See Zabbix agent items for
details.
Attention:
The sum of vm.memory.size[used] and vm.memory.size[available] does not necessarily equal total. For instance, on
FreeBSD:
* Active, inactive, wired, cached memories are considered used, because they store some useful information.
* At the same time inactive, cached, free memories are considered available, because these kinds of memories can be
given instantly to processes that request more memory.
So inactive memory is both used and available simultaneously. Because of this, the vm.memory.size[used] item is
designed for informational purposes only, while vm.memory.size[available] is designed to be used in triggers.
See also
2 Passive and active agent checks
Overview
This section provides details on passive and active checks performed by Zabbix agent and Zabbix agent 2.
Zabbix uses a JSON-based communication protocol for communicating with the agents.
Zabbix agent and Zabbix agent 2 protocols have been unified since Zabbix 7.0. The difference between Zabbix agent and Zabbix
agent 2 requests/responses is expressed by the ”variant” tag value.
Passive checks
A passive check is a simple data request. Zabbix server or proxy asks for some data (for example, CPU load) and Zabbix agent
sends back the result to the server.
Passive checks are executed asynchronously - it is not required to receive the response to one request before other checks are
started. DNS resolving is asynchronous as well.
Server request
For definition of header and data length please refer to protocol details.
{
"request": "passive checks",
"data": [
{
"key": "agent.version",
"timeout": 3
}
]
}
Agent response
{
"version": "7.0.0",
"variant": 2,
"data": [
{
"value": "7.0.0"
}
]
}
To make sure that Zabbix server or proxy can work with agents from pre-7.0 versions, which have plaintext protocol, a failover to
the old protocol is implemented.
Passive checks are performed using the JSON protocol (7.0 and later) after restart or when the interface configuration is changed.
If no valid JSON is received in response (agent sent ”ZBX_NOTSUPPORTED”), Zabbix will cache the interface as old protocol and
retry the check by sending only the item key.
Note that every hour Zabbix server/proxy will again try working with the new protocol with all interfaces, falling back to the old
protocol if required.
Active checks
Active checks require more complex processing. The agent must first retrieve from the server/proxy a list of items and/or remote
commands for independent processing.
The servers/proxies to get the active checks from are listed in the ’ServerActive’ parameter of the agent configuration file. The
frequency of asking for these checks is set by the ’RefreshActiveChecks’ parameter in the same configuration file. However, if
refreshing active checks fails, it is retried after hardcoded 60 seconds.
Note:
Since Zabbix 6.4 the agent (in active mode) no longer receives from the server/proxy a full copy of the configuration once
every two minutes (default). Instead, in order to decrease network traffic and resources usage, an incremental configu-
ration sync is performed every 5 seconds (default) upon which the server/proxy provides a full copy of the configuration
only if the agent has not yet received it, or something has changed in host configuration, global macros or global regular
expressions.
The agent then periodically sends the new values to the server(s). If the agent received any remote commands to execute, the
execution result will also be sent. Note that remote command execution on an active agent is supported since Zabbix agent 7.0.
Note:
If an agent is behind the firewall you might consider using only Active checks because in this case you wouldn’t need to
modify the firewall to allow initial incoming connections.
Agent request
The active checks request is used to obtain the active checks to be processed by agent. This request is sent by the agent upon
start and then with RefreshActiveChecks intervals.
{
"request": "active checks",
"host": "Zabbix server",
"host_metadata": "mysql,nginx",
"hostinterface": "zabbix.server.lan",
"ip": "159.168.1.1",
"port": 12050,
"version": "7.0.0",
"variant": 2,
"config_revision": 1,
"session": "e3dcbd9ace2c9694e1d7bbd030eeef6e"
}
Server response
The active checks response is sent by the server back to agent after processing the active checks request.
{
"response": "success",
"config_revision": 2,
"data": [
{
"key": "system.uptime",
"itemid": 1234,
"delay": "10s",
"lastlogsize": 0,
"mtime": 0
},
{
"key": "agent.version",
"itemid": 5678,
"delay": "10m",
"lastlogsize": 0,
"mtime": 0,
"timeout": "30s"
}
],
"commands": [
{
"command": "df -h --output=source,size / | awk 'NR>1 {print $2}'",
"id": 1324,
"wait": 1
}
]
}
Attention:
Note that (sensitive) configuration data may become available to parties having access to the Zabbix server trapper
port when using an active check. This is possible because anyone may pretend to be an active agent and request item
configuration data; authentication does not take place unless you use encryption options.
Agent sends
The agent data request contains the gathered item values and the values for executed remote commands (if any).
{
"request": "agent data",
"data": [
{
"id": 1,
"itemid": 5678,
"value": "7.0.0",
"clock": 1712830783,
"ns": 76808644
},
{
"id": 2,
"itemid": 1234,
"value": "69672",
"clock": 1712830783,
"ns": 77053975
}
],
"commands": [
{
"id": 1324,
"value": "16G"
}
],
"session": "1234456akdsjhfoui",
"host": "Zabbix server",
"version": "7.0.0",
"variant": 2
}
A virtual ID is assigned to each value. Value ID is a simple ascending counter, unique within one data session (identified by the
session token). This ID is used to discard duplicate values that might be sent in poor connectivity environments.
Server response
The agent data response is sent by the server back to the agent after processing the agent data request.
{
"response": "success",
"info": "processed: 2; failed: 0; total: 2; seconds spent: 0.003534"
}
Attention:
If sending of some values fails on the server (for example, because host or item has been disabled or deleted), agent will
not retry sending of those values.
For example:
Note how in the example above the not supported status for vfs.fs.size[/nono] is indicated by the ”state” value of 1 and the error
message in ”value” property.
Attention:
Error message will be trimmed to 2048 symbols on server side.
Heartbeat message
The heartbeat message is sent by an active agent to Zabbix server/proxy every HeartbeatFrequency seconds (configured in the
Zabbix agent/ agent 2 configuration file).
{
"request": "active check heartbeat",
"host": "Zabbix server",
"heartbeat_freq": 60,
"version": "7.0.0",
"variant": 2
}
Note:
Zabbix will take up to 16 MB of XML Base64-encoded data, but a single decoded value should be no longer than 64 KB
otherwise it will be truncated to 64 KB while decoding.
3 Minimum permission level for Windows agent items
Overview
When monitoring systems using an agent, a good practice is to obtain metrics from the host on which the agent is installed. To
use the principle of least privilege, it is necessary to determine what metrics are obtained from the agent.
The table in this document allows you to select the minimum rights for guaranteed correct operation of Zabbix agent.
If a different user is selected for the agent to work, rather than ’LocalSystem’, then for the operation of agent as a Windows service,
the new user must have the rights ”Log on as a service” from ”Local Policy→User Rights Assignment” and the right to create, write
and delete the Zabbix agent log file. An Active Directory user must be added to the Performance Monitor Users group.
Note:
When working with the rights of an agent based on the ”minimum technically acceptable” group, prior provision of rights
to objects for monitoring is required.
4 Encoding of returned values
Zabbix server expects every returned text value in the UTF8 encoding. This is related to any type of checks: Zabbix agent, SSH,
Telnet, etc.
Different monitored systems/devices and checks can return non-ASCII characters in the value. For such cases, almost all possible
zabbix keys contain an additional item key parameter - <encoding>. This key parameter is optional but it should be specified if
the returned value is not in the UTF8 encoding and it contains non-ASCII characters. Otherwise the result can be unexpected and
unpredictable.
MySQL
If a value contains a non-ASCII character in non UTF8 encoding - this character and the following will be discarded when the
database stores this value. No warning messages will be written to the zabbix_server.log.
Relevant for at least MySQL version 5.1.61
PostgreSQL
If a value contains a non-ASCII character in non UTF8 encoding - this will lead to a failed SQL query (PGRES_FATAL_ERROR:ERROR
invalid byte sequence for encoding) and data will not be stored. An appropriate warning message will be written to the zab-
bix_server.log.
Relevant for at least PostgreSQL version 9.1.3
5 Large file support
Large file support, often abbreviated to LFS, is the term applied to the ability to work with files larger than 2 GB on 32-bit operating
systems. Support for large files affects at least log file monitoring and all vfs.file.* items. Large file support depends on the
capabilities of a system at Zabbix compilation time, but is completely disabled on a 32-bit Solaris due to its incompatibility with
procfs and swapctl.
6 Sensor
Each sensor chip gets its own directory in the sysfs /sys/devices tree. To find all sensor chips, it is easier to follow the device
symlinks from /sys/class/hwmon/hwmon*, where * is a real number (0,1,2,...).
The sensor readings are located either in /sys/class/hwmon/hwmon*/ directory for virtual devices, or in /sys/class/hwmon/hwmon*/device
directory for non-virtual devices. A file, called name, located inside hwmon* or hwmon*/device directories contains the name of
the chip, which corresponds to the name of the kernel driver used by the sensor chip.
There is only one sensor reading value per file. The common scheme for naming the files that contain sensor readings inside any
of the directories mentioned above is: <type><number>_<item>, where
• type - for sensor chips is ”in” (voltage), ”temp” (temperature), ”fan” (fan), etc.,
• item - ”input” (measured value), ”max” (high threshold), ”min” (low threshold), etc.,
• number - always used for elements that can be present more than once (usually starts from 1, except for voltages which
start from 0). If files do not refer to a specific element they have a simple name with no number.
The information regarding sensors available on the host can be acquired using sensor-detect and sensors tools (lm-sensors
package: http://lm-sensors.org/). Sensors-detect helps to determine which modules are necessary for available sensors. When
modules are loaded the sensors program can be used to show the readings of all sensor chips. The labeling of sensor readings,
used by this program, can be different from the common naming scheme (<type><number>_<item> ):
• if there is a file called <type><number>_label, then the label inside this file will be used instead of <type><number><item>
name;
• if there is no <type><number>_label file, then the program searches inside the /etc/sensors.conf (could be also
/etc/sensors3.conf, or different) for the name substitution.
This labeling allows user to determine what kind of hardware is used. If there is neither <type><number>_label file nor label
inside the configuration file the type of hardware can be determined by the name attribute (hwmon*/device/name). The actual
names of sensors, which zabbix_agent accepts, can be obtained by running sensors program with -u parameter (sensors -u).
In sensor program the available sensors are separated by the bus type (ISA adapter, PCI adapter, SPI adapter, Virtual device, ACPI
interface, HID adapter).
On Linux 2.4:
On Linux 2.6+:
• device - device name (non regular expression). The device name could be the actual name of the device (e.g 0000:00:18.3)
or the name acquired using sensors program (e.g. k8temp-pci-00c3). It is up to the user to choose which name to use;
• sensor - sensor name (non regular expression);
• mode - possible values: avg, max, min (if this parameter is omitted, device and sensor are treated verbatim).
Example key:
sensor[k8temp-pci-00c3,temp,max] or sensor[0000:00:18.3,temp1]
sensor[smsc47b397-isa-0880,in,avg] or sensor[smsc47b397.2176,in1]
Sensor labels, as printed by the sensors command, cannot always be used directly because the naming of labels may be different
for each sensor chip vendor. For example, sensors output might contain the following lines:
$ sensors
in0: +2.24 V (min = +0.00 V, max = +3.32 V)
Vcore: +1.15 V (min = +0.00 V, max = +2.99 V)
+3.3V: +3.30 V (min = +2.97 V, max = +3.63 V)
+12V: +13.00 V (min = +0.00 V, max = +15.94 V)
M/B Temp: +30.0°C (low = -127.0°C, high = +127.0°C)
Out of these, only one label may be used directly:
$ zabbix_get -s 127.0.0.1 -k sensor[lm85-i2c-0-2e,Vcore]
ZBX_NOTSUPPORTED
To find out the actual sensor name, which can be used by Zabbix to retrieve the sensor readings, run sensors -u. In the output, the
following may be observed:
$ sensors -u
...
Vcore:
in1_input: 1.15
in1_min: 0.00
in1_max: 2.99
in1_alarm: 0.00
...
+12V:
in4_input: 13.00
in4_min: 0.00
in4_max: 15.94
in4_alarm: 0.00
...
So Vcore should be queried as in1, and +12V should be queried as in4. (According to specification these are voltages on chip pins and generally speaking may need scaling.)
7 Notes on memtype parameter in proc.mem and proc.num items
Overview
The memtype parameter is supported on Linux, AIX, FreeBSD, and Solaris platforms.
Three common values of ’memtype’ are supported on all of these platforms: pmem, rss and vsize. Additionally, platform-specific
’memtype’ values are supported on some platforms.
AIX
Supported value | Description | Source in procentry64 structure | Tries to be compatible with
vsize (1) | Virtual memory size | pi_size |
pmem | Percentage of real memory | pi_prm | ps -o pmem
rss | Resident set size | pi_trss + pi_drss | ps -o rssize
size | Size of process (code + data) | pi_dvm | "ps gvw" SIZE column
dsize | Data size | pi_dsize |
tsize | Text (code) size | pi_tsize | "ps gvw" TSIZ column
sdsize | Data size from shared library | pi_sdsize |
drss | Data resident set size | pi_drss |
trss | Text resident set size | pi_trss |
1. When choosing parameters for proc.mem[] item key on AIX, try to specify narrow process selection criteria. Otherwise there
is a risk of getting unwanted processes counted into proc.mem[] result.
Example:
$ zabbix_agentd -t proc.mem[,,,NonExistingProcess,rss]
proc.mem[,,,NonExistingProcess,rss] [u|2879488]
This example shows how specifying only command line (regular expression to match) parameter results in Zabbix agent self-
accounting - probably not what you want.
2. Do not use ”ps -ef” to browse processes - it shows only non-kernel processes. Use ”ps -Af” to see all processes which will be
seen by Zabbix agent.
3. Let's go through an example of how Zabbix agent proc.mem[] selects processes, using 'topasrec'.
proc.mem[<name>,<user>,<mode>,<cmdline>,<memtype>]
The 1st criterion is a process name (argument <name>). In our example Zabbix agent will see it as ’topasrec’. In order to
match, you need to either specify ’topasrec’ or to leave it empty. The 2nd criterion is a user name (argument <user>). To
match, you need to either specify ’root’ or to leave it empty. The 3rd criterion used in process selection is an argument <cmd-
line>. Zabbix agent will see its value as ’/usr/bin/topasrec -L -s 300 -R 1 -r 6 -o /var/perf/daily/ -ypersistent=1 -O type=bin -
ystart_time=04:08:54,Mar16,2023’. To match, you need to either specify a regular expression which matches this string or to
leave it empty.
Arguments <mode> and <memtype> are applied after using the three criteria mentioned above.
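Under the assumptions above, for example, either of the following keys would match the topasrec process while keeping the selection criteria narrow (illustrative keys, not taken from the original page):
proc.mem[topasrec,root,,,rss]
proc.mem[,root,,topasrec -L,rss]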
FreeBSD
Supported value | Description | Source in kinfo_proc structure | Tries to be compatible with
Linux
1. Not all ’memtype’ values are supported by older Linux kernels. For example, Linux 2.4 kernels do not support hwm, pin,
peak, pte and swap values.
2. We have noticed that self-monitoring of the Zabbix agent active check process with proc.mem[...,...,...,...,data]
shows a value that is 4 kB larger than reported by VmData line in the agent’s /proc/<pid>/status file. At the time of self-
measurement the agent’s data segment increases by 4 kB and then returns to the previous size.
Solaris
Footnotes
(1) Default value.
Some programs modify their command line as a way of displaying their current activity; this activity can be seen by running the ps and top commands. Examples of such programs include PostgreSQL, Sendmail, and Zabbix.
Let’s see an example from Linux. Let’s assume we want to monitor a number of Zabbix agent processes.
Why does a simple renaming of the executable to a longer name lead to quite a different result?
Zabbix agent starts by checking the process name. The /proc/<pid>/status file is opened and the Name line is checked. In our case the Name lines are:
$ grep Name /proc/{6715,6716,6717,6718,6719,6720}/status
/proc/6715/status:Name: zabbix_agentd_3
/proc/6716/status:Name: zabbix_agentd_3
/proc/6717/status:Name: zabbix_agentd_3
/proc/6718/status:Name: zabbix_agentd_3
/proc/6719/status:Name: zabbix_agentd_3
/proc/6720/status:Name: zabbix_agentd_3
The process name in status file is truncated to 15 characters.
A similar result can be seen with ps command:
$ ps -u zabbix
PID TTY TIME CMD
...
6715 ? 00:00:00 zabbix_agentd_3
6716 ? 00:00:01 zabbix_agentd_3
6717 ? 00:00:00 zabbix_agentd_3
6718 ? 00:00:00 zabbix_agentd_3
6719 ? 00:00:00 zabbix_agentd_3
6720 ? 00:00:00 zabbix_agentd_3
...
Obviously, that is not equal to our proc.num[] name parameter value zabbix_agentd_30. Having failed to match the process name from the status file, the Zabbix agent turns to the /proc/<pid>/cmdline file.
How the agent sees the "cmdline" file can be illustrated by running the following command:
$ for i in 6715 6716 6717 6718 6719 6720; do cat /proc/$i/cmdline | awk '{gsub(/\x0/,"<NUL>"); print};'; done
sbin/zabbix_agentd_30<NUL>-c<NUL>/home/zabbix/ZBXNEXT-1078/zabbix_agentd.conf<NUL>
sbin/zabbix_agentd_30: collector [idle 1 sec]<NUL><NUL><NUL><NUL><NUL><NUL><NUL><NUL><NUL><NUL><NUL><NUL><
sbin/zabbix_agentd_30: listener #1 [waiting for connection]<NUL><NUL><NUL><NUL><NUL><NUL><NUL><NUL><NUL><N
sbin/zabbix_agentd_30: listener #2 [waiting for connection]<NUL><NUL><NUL><NUL><NUL><NUL><NUL><NUL><NUL><N
sbin/zabbix_agentd_30: listener #3 [waiting for connection]<NUL><NUL><NUL><NUL><NUL><NUL><NUL><NUL><NUL><N
sbin/zabbix_agentd_30: active checks #1 [idle 1 sec]<NUL><NUL><NUL><NUL><NUL><NUL><NUL><NUL><NUL><NUL><NUL
/proc/<pid>/cmdline files in our case contain invisible, non-printable null bytes, used to terminate strings in C language. The
null bytes are shown as ”<NUL>” in this example.
Zabbix agent checks ”cmdline” for the main process and takes a zabbix_agentd_30, which matches our name parameter value
zabbix_agentd_30. So, the main process is counted by item proc.num[zabbix_agentd_30,zabbix].
When checking the next process, the agent takes zabbix_agentd_30: collector [idle 1 sec] from the cmdline file, and it does not match our name parameter zabbix_agentd_30. So, only the main process, which does not modify its command line, gets counted. The other agent processes modify their command lines and are ignored.
This example shows that the name parameter cannot be used in proc.mem[] and proc.num[] for selecting processes in this
case.
Note:
For proc.get[] item, when Zabbix agent checks ”cmdline” for the process name, it will only use part of the name starting
from the last slash and until the first space or colon sign. Process name received from cmdline file will only be used if its
beginning completely matches the shortened process name in the status file. The algorithm is the same for both process
name in the filter and in the JSON output.
Using cmdline parameter with a proper regular expression produces a correct result:
$ zabbix_get -s localhost -k 'proc.num[,zabbix,,zabbix_agentd_30[ :]]'
6
Be careful when using proc.get[], proc.mem[] and proc.num[] items for monitoring programs which modify their command
lines.
Before putting name and cmdline parameters into proc.get[], proc.mem[] and proc.num[] items, you may want to test the
parameters using proc.num[] item and ps command.
Linux kernel threads
Threads cannot be selected with the cmdline parameter in proc.get[], proc.mem[] and proc.num[] items.
Let's take one of the kernel threads as an example:
Be careful when using proc.mem[] and proc.num[] items if the program name happens to match one of the threads.
Before putting parameters into proc.mem[] and proc.num[] items, you may want to test the parameters using the proc.num[] item and the ps command.
Implementation of net.tcp.service and net.udp.service checks is detailed on this page for various services specified in the service
parameter.
ftp
Creates a TCP connection and expects the first 4 characters of the response to be ”220 ”, then sends ”QUIT\r\n”. Default port 21
is used if not specified.
http
Creates a TCP connection without expecting and sending anything. Default port 80 is used if not specified.
https
Uses (and only works with) libcurl, does not verify the authenticity of the certificate, does not verify the host name in the SSL
certificate, only fetches the response header (HEAD request). Default port 443 is used if not specified.
imap
Creates a TCP connection and expects the first 4 characters of the response to be ”* OK”, then sends ”a1 LOGOUT\r\n”. Default
port 143 is used if not specified.
ldap
Opens a connection to an LDAP server and performs an LDAP search operation with filter set to (objectClass=*). Expects successful
retrieval of the first attribute of the first entry. Default port 389 is used if not specified.
nntp
Creates a TCP connection and expects the first 3 characters of the response to be ”200” or ”201”, then sends ”QUIT\r\n”. Default
port 119 is used if not specified.
pop
Creates a TCP connection and expects the first 3 characters of the response to be ”+OK”, then sends ”QUIT\r\n”. Default port 110
is used if not specified.
smtp
Creates a TCP connection and expects the first 3 characters of the response to be ”220”, followed by a space, the line ending or a
dash. The lines containing a dash belong to a multiline response and the response will be re-read until a line without the dash is
received. Then sends ”QUIT\r\n”. Default port 25 is used if not specified.
ssh
Creates a TCP connection. If the connection has been established, both sides exchange an identification string (SSH-major.minor-
XXXX), where major and minor are protocol versions and XXXX is a string. Zabbix checks if the string matching the specification
is found and then sends back the string ”SSH-major.minor-zabbix_agent\r\n” or ”0\n” on mismatch. Default port 22 is used if not
specified.
tcp
Creates a TCP connection without expecting and sending anything. Unlike the other checks requires the port parameter to be
specified.
telnet
Creates a TCP connection and expects a login prompt (’:’ at the end). Default port 23 is used if not specified.
ntp
Sends an SNTP packet over UDP and validates the response according to RFC 4330, section 5. Default port 123 is used if not
specified.
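For reference, these checks are used through item keys of the form net.tcp.service[service,<ip>,<port>] and net.udp.service[service,<ip>,<port>]; a few illustrative keys (the non-default port is an arbitrary example):
net.tcp.service[ftp,,45]    #check FTP on port 45
net.tcp.service[ssh]        #check SSH on the default port 22
net.udp.service[ntp]        #check NTP on the default port 123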
10 proc.get parameters
Overview
The item proc.get[<name>,<user>,<cmdline>,<mode>] is supported on Linux, Windows, FreeBSD, OpenBSD, and NetBSD.
The list of process parameters returned by the item varies depending on the operating system and the 'mode' argument value.
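For example, a key such as the following (the process name and user are assumptions) would return one object per zabbix_agentd process:
proc.get[zabbix_agentd,zabbix,,process]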
Linux
The following process parameters are returned on Linux for each mode:
mode=process:
group: group (real) the process runs under
uid: user ID
gid: ID of the group the process runs under
vsize: virtual memory size
pmem: percentage of real memory
rss: resident set size
data: size of data segment
exe: size of code segment
hwm: peak resident set size
lck: size of locked memory
lib: size of shared libraries
peak: peak virtual memory size
pin: size of pinned pages
pte: size of page table entries
size: size of process code + data + stack segments
stk: size of stack segment
swap: size of swap space used
cputime_user: total CPU seconds (user)
cputime_system: total CPU seconds (system)
state: process state (transparently retrieved from procfs, long form)
ctx_switches: number of context switches
threads: number of threads
page_faults: number of page faults
pss: proportional set size memory

mode=thread:
uid: user ID
gid: ID of the group the process runs under
tid: thread ID
tname: thread name
cputime_user: total CPU seconds (user)
cputime_system: total CPU seconds (system)
state: thread state
ctx_switches: number of context switches
page_faults: number of page faults

mode=summary:
data: size of data segment
exe: size of code segment
lib: size of shared libraries
lck: size of locked memory
pin: size of pinned pages
pte: size of page table entries
size: size of process code + data + stack segments
stk: size of stack segment
swap: size of swap space used
cputime_user: total CPU seconds (user)
cputime_system: total CPU seconds (system)
ctx_switches: number of context switches
threads: number of threads
page_faults: number of page faults
pss: proportional set size memory
BSD-based OS
The following process parameters are returned on FreeBSD, OpenBSD, and NetBSD for each mode:
mode=process:
gid: ID of the group the process runs under
vsize: virtual memory size
pmem: percentage of real memory (FreeBSD only)
rss: resident set size
size: size of process (code + data + stack)
tsize: text (code) size
dsize: data size
ssize: stack size
cputime_user: total CPU seconds (user)
cputime_system: total CPU seconds (system)
state: process state (disk sleep/running/sleeping/tracing stop/zombie/other)
ctx_switches: number of context switches
threads: number of threads (not supported for NetBSD)
page_faults: number of page faults
fds: number of file descriptors (OpenBSD only)
swap: size of swap space used
io_read_op: number of times the system had to perform input
io_write_op: number of times the system had to perform output

mode=thread:
tid: thread ID
tname: thread name
cputime_user: total CPU seconds (user)
cputime_system: total CPU seconds (system)
state: thread state
ctx_switches: number of context switches
io_read_op: number of times the system had to perform input
io_write_op: number of times the system had to perform output

mode=summary:
cputime_user: total CPU seconds (user)
cputime_system: total CPU seconds (system)
ctx_switches: number of context switches
threads: number of threads (not supported for NetBSD)
stk: size of stack segment
page_faults: number of page faults
fds: number of file descriptors (OpenBSD only)
swap: size of swap space used
io_read_op: number of times the system had to perform input
io_write_op: number of times the system had to perform output
Windows
The following process parameters are returned on Windows for each mode:
mode=process mode=thread mode=summary
Overview
Several configuration parameters define how Zabbix server should behave when an agent check (Zabbix, SNMP, IPMI, JMX) fails
and a host interface becomes unreachable.
Unreachable interface
A host interface is treated as unreachable after a failed check (network error, timeout) by Zabbix, SNMP, IPMI or JMX agents. Note
that Zabbix agent active checks do not influence interface availability in any way.
From that moment UnreachableDelay defines how often an interface is rechecked using one of the items (including LLD rules) in
this unreachability situation and such rechecks will be performed already by unreachable pollers (or IPMI pollers for IPMI checks).
By default it is 15 seconds before the next check.
Attention:
Checks performed by asynchronous pollers are not moved to unreachable pollers.
In the Zabbix server log, unreachability is indicated by messages like these:
Zabbix agent item "system.cpu.load[percpu,avg1]" on host "New host" failed: first network error, wait for
Zabbix agent item "system.cpu.load[percpu,avg15]" on host "New host" failed: another network error, wait f
Note that the exact item that failed is indicated and the item type (Zabbix agent).
Note:
The Timeout parameter will also affect how early an interface is rechecked during unreachability. If the Timeout is 20
seconds and UnreachableDelay 30 seconds, the next check will be in 50 seconds after the first attempt.
The UnreachablePeriod parameter defines how long the unreachability period is in total. By default UnreachablePeriod is 45
seconds. UnreachablePeriod should be several times bigger than UnreachableDelay, so that an interface is rechecked more than
once before an interface becomes unavailable.
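For illustration, these parameters are set in the server configuration file; a sketch with example values (UnreachableDelay, UnreachablePeriod and UnavailableDelay shown at their defaults, Timeout matching the example in the note above):
# zabbix_server.conf (illustrative values)
Timeout=20
UnreachableDelay=15
UnreachablePeriod=45
UnavailableDelay=60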
When the unreachability period is over, the interface is polled again (with decreased priority for the item that turned the interface into the unreachable state). If the unreachable interface reappears, the monitoring returns to normal automatically.
Note:
Once interface becomes available, the host does not poll all its items immediately for two reasons:
• It might overload the host.
• The interface restore time is not always matching planned item polling schedule time.
So, after the interface becomes available, items are not polled immediately, but they are getting rescheduled to their next
polling round.
Unavailable interface
After the UnreachablePeriod ends and the interface has not reappeared, the interface is treated as unavailable.
temporarily disabling Zabbix agent checks on host "New host": interface unavailable
and in the frontend the host availability icon goes from green/gray to yellow/red (the unreachable interface details can be seen in
the hint box that is displayed when a mouse is positioned on the host availability icon):
The UnavailableDelay parameter defines how often an interface is checked during interface unavailability.
By default it is 60 seconds (so in this case ”temporarily disabling”, from the log message above, will mean disabling checks for
one minute).
When the connection to the interface is restored, the monitoring returns to normal automatically, too:
enabling Zabbix agent checks on host "New host": interface became available
Overview
It is possible to make some internal metrics of Zabbix server and proxy accessible remotely by another Zabbix instance or a third-
party tool. This can be useful so that supporters/service providers can monitor their client Zabbix servers/proxies remotely or, in
organizations where Zabbix is not the main monitoring tool, that Zabbix internal metrics can be monitored by a third-party system
in an umbrella-monitoring setup.
Zabbix internal stats are exposed to a configurable set of addresses listed in the 'StatsAllowedIP' server/proxy parameter. Requests will be accepted only from these addresses.
Items
To configure querying of internal stats on another Zabbix instance, you may use two items:
• zabbix[stats,<ip>,<port>] internal item - for direct remote queries of Zabbix server/proxy. <ip> and <port> are used
to identify the target instance.
• zabbix.stats[<ip>,<port>] agent item - for agent-based remote queries of Zabbix server/proxy. <ip> and <port> are
used to identify the target instance.
The following diagram illustrates the use of either item depending on the context.
• Server → external Zabbix instance (zabbix[stats,<ip>,<port>])
To make sure that the target instance allows querying it by the external instance, list the address of the external instance in the
’StatsAllowedIP’ parameter on the target instance.
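For example (the addresses and port are placeholders), if an external instance at 192.0.2.10 should query a target server listening on 192.0.2.1:10051, the target server's configuration file would include:
StatsAllowedIP=192.0.2.10
and the external instance would collect the metrics with an item key such as:
zabbix[stats,192.0.2.1,10051]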
Exposed metrics
The stats items gather the statistics in bulk and return a JSON, which is the basis for dependent items to get their data from. The
following internal metrics are returned by either of the two items:
• zabbix[boottime]
• zabbix[hosts]
• zabbix[items]
• zabbix[items_unsupported]
• zabbix[preprocessing_queue] (server only)
• zabbix[process,<type>,<mode>,<state>] (only process type based statistics)
• zabbix[rcache,<cache>,<mode>]
• zabbix[requiredperformance]
• zabbix[triggers] (server only)
• zabbix[uptime]
• zabbix[vcache,buffer,<mode>] (server only)
• zabbix[vcache,cache,<parameter>]
• zabbix[version]
• zabbix[vmware,buffer,<mode>]
• zabbix[wcache,<cache>,<mode>] (’trends’ cache type server only)
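For instance, a dependent item whose master item is one of the stats items above could extract the server uptime with a JSONPath preprocessing step such as this (an illustrative path based on the JSON shown further below):
$.data.uptime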
Templates
Templates are available for remote monitoring of Zabbix server or proxy internal metrics from an external instance:
Note that in order to use a template for remote monitoring of multiple external instances, a separate host is required for each monitored external instance.
Trapper process
Receiving internal metric requests from an external Zabbix instance is handled by the trapper process that validates the request,
gathers the metrics, creates the JSON data buffer and sends the prepared JSON back, for example, from server:
{
"response": "success",
"data": {
"boottime": N,
"uptime": N,
"hosts": N,
"items": N,
"items_unsupported": N,
"preprocessing_queue": N,
"process": {
"alert manager": {
"busy": {
"avg": N,
"max": N,
"min": N
},
"idle": {
"avg": N,
"max": N,
"min": N
},
"count": N
},
...
},
"queue": N,
"rcache": {
"total": N,
"free": N,
"pfree": N,
"used": N,
"pused": N
},
"requiredperformance": N,
"triggers": N,
"uptime": N,
"vcache": {
"buffer": {
"total": N,
"free": N,
"pfree": N,
"used": N,
"pused": N
},
"cache": {
"requests": N,
"hits": N,
"misses": N,
"mode": N
}
},
"vmware": {
"total": N,
"free": N,
"pfree": N,
"used": N,
"pused": N
},
"version": "N",
"wcache": {
"values": {
"all": N,
"float": N,
"uint": N,
"str": N,
"log": N,
"text": N,
"not supported": N
},
"history": {
"pfree": N,
"free": N,
"total": N,
"used": N,
"pused": N
},
"index": {
"pfree": N,
"free": N,
"total": N,
"used": N,
"pused": N
},
"trend": {
"pfree": N,
"free": N,
"total": N,
"used": N,
"pused": N
}
}
}
}
There are also two other items specifically intended for remotely querying internal queue stats on another Zabbix instance:
• zabbix[stats,<ip>,<port>,queue,<from>,<to>] internal item - for direct internal queue queries to remote Zabbix
server/proxy
• zabbix.stats[<ip>,<port>,queue,<from>,<to>] agent item - for agent-based internal queue queries to remote
Zabbix server/proxy
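For example (the address, port and time limits are placeholders), the following key would return the number of items on the remote instance that have been delayed by at least 5 minutes but less than 1 hour:
zabbix[stats,192.0.2.1,10051,queue,5m,1h]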
Overview
Kerberos authentication can be used in web monitoring and HTTP items in Zabbix.
This section describes an example of configuring Kerberos with Zabbix server to perform web monitoring of www.example.com
with user ’zabbix’.
Steps
Step 1
Install the Kerberos client packages. For Debian/Ubuntu:
apt install krb5-user
Step 2
Configure the Kerberos configuration file:
cat /etc/krb5.conf
[libdefaults]
default_realm = EXAMPLE.COM
#### The following krb5.conf variables are only for MIT Kerberos.
kdc_timesync = 1
ccache_type = 4
forwardable = true
proxiable = true
[realms]
EXAMPLE.COM = {
}
[domain_realm]
.example.com=EXAMPLE.COM
example.com=EXAMPLE.COM
Step 3
Create a Kerberos ticket for user zabbix. Run the following command as user zabbix:
kinit zabbix
Attention:
It is important to run the above command as user zabbix. If you run it as root the authentication will not work.
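Optionally, the ticket can be verified by listing the ticket cache with klist, again run as user zabbix (an illustrative check using the standard Kerberos client tools):
klist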
Step 4
Create a web scenario or HTTP agent item with Kerberos authentication type.
14 modbus.get parameters
Overview
endpoint
Description: Protocol and address of the endpoint, defined as protocol://connection_string. Possible protocol values: rtu, ascii (Agent 2 only), tcp. Connection string format: with tcp - address:port; with serial line (rtu, ascii) - port_name:speed:params, where 'speed' is 1200, 9600 etc. and 'params' is data bits (5, 6, 7 or 8), parity (n, e or o for none/even/odd), stop bits (1 or 2).
Defaults: protocol: none; rtu/ascii protocol: port_name: none, speed: 115200, params: 8n1; tcp protocol: address: none, port: 502
Examples: tcp://192.168.6.1:511, tcp://192.168.6.2, tcp://[::1]:511, tcp://::1, tcp://localhost:511, tcp://localhost, rtu://COM1:9600:8n, ascii://COM2:1200:7o2, rtu://ttyS0:9600, ascii://ttyS1

slave id
Description: Modbus address of the device it is intended for (1 to 247), see MODBUS Messaging Implementation Guide (page 23).
Defaults: serial: 1; tcp: 255 (0xFF)
Example: 2

function
Description: 1 - Read Coil, 2 - Read Discrete Input, 3 - Read Holding Registers, 4 - Read Input Registers.

address
Description: Address of the first registry, coil or input. If 'function' is empty, then 'address' should be in the range for: Coil - 00001 - 09999, Discrete input - 10001 - 19999, Input register - 30001 - 39999, Holding register - 40001 - 49999.
Defaults: empty function: 9999; non-empty function: 0
Example: 00001

endianness
Limitations: for 1 bit - be; for 8 bits - be, le; for 16 bits - be, le.

offset
Description: Number of registers, starting from 'address', the result of which will be discarded.
Default: 0
Example: 4
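Putting the parameters together, a key reading three holding registers (function 3) starting at address 40001 from a TCP-connected device with slave id 1 might look like this (the endpoint address is an assumption):
modbus.get[tcp://192.168.6.1,1,3,40001,3]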
Overview
The VMware performance counter path has the group/counter[rollup] format where:
• group - the performance counter group, for example cpu
• counter - the performance counter name, for example usagemhz
• rollup - the performance counter rollup type, for example average
So the above example would give the following counter path: cpu/usagemhz[average]
The performance counter group descriptions, counter names and rollup types can be found in VMware documentation.
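Such a counter path can then be used, for example, in a hypervisor performance counter item; a sketch of a key (the {$VMWARE.HV.UUID} macro is a placeholder that would have to exist on the host):
vmware.hv.perfcounter[{$VMWARE.URL},{$VMWARE.HV.UUID},"cpu/usagemhz[average]"]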
It is possible to obtain internal names and create custom performance counter names by using script item in Zabbix.
Configuration
1. Create disabled Script item on the main VMware host (where the eventlog[] item is present) with the following parameters:
• Name: VMware metrics
• Type: Script
• Key: vmware.metrics
• Type of information: Text
• Script: copy and paste the script provided below
• Timeout: 10
• History: Do not store
• Enabled: unmarked
Script
try {
Zabbix.log(4, 'vmware metrics script');
throw 'Response code: '+req.getStatus();
}
resp = req.post("{$VMWARE.URL}",logout);
if (req.getStatus() != 200) {
throw 'Response code: '+req.getStatus();
}
} catch (error) {
Zabbix.log(4, 'vmware call failed : '+error);
result = {};
}
return result;
Once the item is configured, press the Test button, then press Get value. Copy the received XML to any XML formatter and find the desired metric.
<PerfCounterInfo xsi:type="PerfCounterInfo">
<key>6</key>
<nameInfo>
<label>Usage in MHz</label>
<summary>CPU usage in megahertz during the interval</summary>
<key>usagemhz</key>
</nameInfo>
<groupInfo>
<label>CPU</label>
<summary>CPU</summary>
<key>cpu</key>
</groupInfo>
<unitInfo>
<label>MHz</label>
<summary>Megahertz</summary>
<key>megaHertz</key>
</unitInfo>
<rollupType>average</rollupType>
<statsType>rate</statsType>
<level>1</level>
<perDeviceLevel>3</perDeviceLevel>
</PerfCounterInfo>
Use XPath to extract the counter path from the received XML.
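For the fragment above, one workable expression (assuming the XML is processed without namespace prefixes) builds the path from the group key, the counter key and the rollup type of the entry with key 6:
concat(//PerfCounterInfo[key=6]/groupInfo/key, "/", //PerfCounterInfo[key=6]/nameInfo/key, "[", //PerfCounterInfo[key=6]/rollupType, "]")
which evaluates to cpu/usagemhz[average].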
Overview
This section provides return value details for the system.sw.packages.get Zabbix agent item.
Details
The output of this item is an array of objects, one per installed package, each containing keys such as name, manager, version, size, arch, buildtime, and installtime (see the example below).
Example:
[
{
"name": "util-linux-core",
"manager": "rpm",
"version": "2.37.4-3.el9",
"size": 1296335,
"arch": "x86_64",
"buildtime": {
"timestamp" : 1653552239,
"value" : "Sep 20 01:39:40 2021 UTC"
},
"installtime": {
"timestamp" : 1660780885,
"value" : "Aug 18 00:01:25 2022 UTC"
}
},
{
"name": "xfonts-base",
"manager": "dpkg",
"version": "1:1.0.5",
"size": 7337984,
"arch": "all",
"buildtime": {
"timestamp": 0,
"value": ""
},
"installtime": {
"timestamp": 0,
"value": ""
}
}
]
Overview
This section provides return value details for the net.dns.get Zabbix agent 2 item.
Details
The output of this item is an object containing DNS record information based on the parameters provided in the item key.
For example, the net.dns.get[,example.com] item may return the following JSON of a refused query:
{
"flags": [
"RA"
],
"query_time": "0.00",
"question_section": [
{
"qclass": "IN",
"qname": "example.com.",
"qtype": "SOA"
}
],
"response_code": "REFUSED",
"zbx_error_code": 0
}
By specifying the IP address of the DNS server, the net.dns.get[192.0.2.0,example.com] item may return the following
JSON:
{
"answer_section": [
{
"class": "IN",
"name": "example.com.",
"rdata": {
"expire": 1209600,
"mbox": "noc.dns.example.org.",
"minttl": 3600,
"ns": "ns.example.org.",
"refresh": 7200,
"retry": 3600,
"serial": 2022091378
},
"rdlength": 44,
"ttl": 1205,
"type": "SOA"
}
],
"flags": [
"RA"
],
"query_time": "0.02",
"question_section": [
{
"qclass": "IN",
"qname": "example.com.",
"qtype": "SOA"
}
],
"response_code": "NOERROR",
"zbx_error_code": 0
}

Depending on the query parameters, the returned JSON may contain more sections; for example, a query for all record types (qtype "ANY") may return:
{
"additional_section": [
{
"extended_rcode": 32768,
"name": ".",
"rdata": {
"options": [
{
"code": 0,
"nsid": "67 70 64 6e 73 2d 6c 70 70"
}
]
},
"rdlength": 13,
"type": "OPT",
"udp_payload": 512
}
],
"answer_section": [
{
"class": "IN",
"name": "example.com.",
"rdata": {
"a": "192.0.2.0"
},
"rdlength": 4,
"ttl": 19308,
"type": "A"
},
{
"class": "IN",
"name": "example.com.",
"rdata": {
"algorithm": 13,
"expiration": 1704715951,
"inception": 1702910624,
"key_tag": 21021,
"labels": 2,
"orig_ttl": 86400,
"signature": "HVBOBcJJQy0S08J3f8kviPj8UkEUj7wmyiMyQqPSWgQIY9SCEJ5plq6KuxJmtAek1txZWXDo+6tp
"signer_name": "example.com.",
"type_covered": "A"
},
"rdlength": 95,
"ttl": 19308,
"type": "RRSIG"
}
],
"flags": [
"RD",
"RA",
"AD",
"CD"
],
"query_time": "0.05",
"question_section": [
{
"qclass": "IN",
"qname": "example.com.",
"qtype": "ANY"
}
],
"response_code": "NOERROR",
"zbx_error_code": 0
}
See also
6 Supported functions
Function group | Functions
Aggregate functions | avg, bucket_percentile, count, histogram_quantile, item_count, kurtosis, mad, max, min, skewness, stddevpop, stddevsamp, sum, sumofsquares, varpop, varsamp
Foreach functions | avg_foreach, bucket_rate_foreach, count_foreach, exists_foreach, last_foreach, max_foreach, min_foreach, sum_foreach
1 Aggregate functions
Except where stated otherwise, all functions listed here are supported in:
• Trigger expressions
• Calculated items
The functions are listed without additional information. Click on the function to see the full details.
Function Description
avg The average value of an item within the defined evaluation period.
bucket_percentile Calculates the percentile from the buckets of a histogram.
count The count of values in an array returned by a foreach function.
histogram_quantile Calculates the φ-quantile from the buckets of a histogram.
item_count The count of existing items in configuration that match the filter criteria.
kurtosis The ”tailedness” of the probability distribution in collected values within the defined evaluation
period.
mad The median absolute deviation in collected values within the defined evaluation period.
max The highest value of an item within the defined evaluation period.
min The lowest value of an item within the defined evaluation period.
skewness The asymmetry of the probability distribution in collected values within the defined evaluation
period.
stddevpop The population standard deviation in collected values within the defined evaluation period.
stddevsamp The sample standard deviation in collected values within the defined evaluation period.
sum The sum of collected values within the defined evaluation period.
sumofsquares The sum of squares in collected values within the defined evaluation period.
varpop The population variance of collected values within the defined evaluation period.
varsamp The sample variance of collected values within the defined evaluation period.
Common parameters
• /host/key is a common mandatory first parameter for the functions referencing the host item history
• (sec|#num)<:time shift> is a common second parameter for the functions referencing the host item history, where:
– sec - maximum evaluation period in seconds (time suffixes can be used), or
– #num - maximum evaluation range in latest collected values (if preceded by a hash mark)
– time shift (optional) allows to move the evaluation point back in time. See more details on specifying time shift.
Function details
avg(/host/key,(sec|#num)<:time shift>)
The average value of an item within the defined evaluation period. Supported value types: Float, Integer. Supported foreach functions: avg_foreach, count_foreach, exists_foreach, last_foreach, max_foreach, min_foreach, sum_foreach.
Time shift is useful when there is a need to compare the current average value with the average value some time ago.
Examples:
avg(/host/key,1h) #the average value for the last hour until now
avg(/host/key,1h:now-1d) #the average value for an hour from 25 hours ago to 24 hours ago from now
avg(/host/key,#5) #the average value of the five latest values
avg(/host/key,#5:now-1d) #the average value of the five latest values excluding the values received in the
bucket_percentile(item filter,time period,percentage)
Parameters:
Comments:
count(func_foreach,<operator>,<pattern>)
The count of values in an array returned by a foreach function. Supported foreach functions: avg_foreach, count_foreach, exists_foreach, last_foreach, max_foreach, min_foreach, sum_foreach.
Parameters:
• func_foreach - foreach function for which the number of returned values should be counted (with supported arguments).
See foreach functions for details.
• item filter - see item filter;
• time period - see time period;
• operator (must be double-quoted). Supported operators: eq - equal, ne - not equal, gt - greater, ge - greater or equal, lt - less, le - less or equal, like - matches if contains pattern (case-sensitive), bitand - bitwise AND, regexp - case-sensitive match of the regular expression given in pattern, iregexp - case-insensitive match of the regular expression given in pattern.
• pattern - the required pattern (string arguments must be double-quoted); supported if operator is specified in the third
parameter.
Comments:
• Using count() with a history-related foreach function (max_foreach, avg_foreach, etc.) may lead to performance implica-
tions, whereas using exists_foreach(), which works only with configuration data, will not have such effect.
• Optional parameters operator or pattern can’t be left empty after a comma, only fully omitted.
• With bitand as the third parameter, the fourth pattern parameter can be specified as two numbers, separated by ’/’:
number_to_compare_with/mask. count() calculates ”bitwise AND” from the value and the mask and compares the result
to number_to_compare_with. If the result of "bitwise AND" is equal to number_to_compare_with, the value is counted. If number_to_compare_with and mask are equal, only the mask need be specified (without '/').
• With regexp or iregexp as the third parameter, the fourth pattern parameter can be an ordinary or global (starting with
’@’) regular expression. In case of global regular expressions case sensitivity is inherited from global regular expression
settings. For the purpose of regexp matching, float values will always be represented with 4 decimal digits after ’.’. Also
note that for large numbers difference in decimal (stored in database) and binary (used by Zabbix server) representation
may affect the 4th decimal digit.
Examples:
count(max_foreach(/*/net.if.in[*],1h)) #the number of net.if.in items that received data in the last hour
count(last_foreach(/*/vfs.fs.dependent.size[*,pused]),"gt",95) #the number of file systems with over 95% o
histogram_quantile(quantile,bucket1,value1,bucket2,value2,...)
Calculates the φ-quantile from the buckets of a histogram. Supported foreach function: bucket_rate_foreach.
Parameters:
• quantile - 0 ≤ φ ≤ 1;
• bucketN, valueN - manually entered pairs (>=2) of parameters or the response of bucket_rate_foreach.
Comments:
Examples:
histogram_quantile(0.75,1.0,last(/host/rate_bucket[1.0]),"+Inf",last(/host/rate_bucket[Inf]))
histogram_quantile(0.5,bucket_rate_foreach(//item_key,30s))
item_count(item filter)
The count of existing items in configuration that match the filter criteria. Supported value type: Integer.
Parameter:
• item filter - criteria for item selection, allows referencing by host group, host, item key, and tags. Wildcards are supported.
See item filter for more details.
Comments:
Examples:
item_count(/*/agent.ping?[group="Host group 1"]) #the number of hosts with the *agent.ping* item in the "H
kurtosis(/host/key,(sec|#num)<:time shift>)
The ”tailedness” of the probability distribution in collected values within the defined evaluation period. See also: Kurtosis. Supported value types: Float, Integer. Supported foreach function: last_foreach.
Example:
mad(/host/key,(sec|#num)<:time shift>)
The median absolute deviation in collected values within the defined evaluation period. See also: Median absolute deviation. Supported value types: Float, Integer. Supported foreach function: last_foreach.
Example:
mad(/host/key,1h) #median absolute deviation for the last hour until now
max(/host/key,(sec|#num)<:time shift>)
The highest value of an item within the defined evaluation period. Supported value types: Float, Integer. Supported foreach functions: avg_foreach, count_foreach, exists_foreach, last_foreach, max_foreach, min_foreach, sum_foreach.
Example:
max(/host/key,1h) - min(/host/key,1h) #calculate the difference between the maximum and minimum values wit
min(/host/key,(sec|#num)<:time shift>)
The lowest value of an item within the defined evaluation period. Supported value types: Float, Integer. Supported foreach functions: avg_foreach, count_foreach, exists_foreach, last_foreach, max_foreach, min_foreach, sum_foreach.
Example:
max(/host/key,1h) - min(/host/key,1h) #calculate the difference between the maximum and minimum values wit
skewness(/host/key,(sec|#num)<:time shift>)
The asymmetry of the probability distribution in collected values within the defined evaluation period. See also: Skewness. Supported value types: Float, Integer. Supported foreach function: last_foreach.
Example:
stddevpop(/host/key,(sec|#num)<:time shift>)
The population standard deviation in collected values within the defined evaluation period. See also: Standard deviation. Supported value types: Float, Integer. Supported foreach function: last_foreach.
Example:
stddevpop(/host/key,1h) #the population standard deviation for the last hour until now
stddevsamp(/host/key,(sec|#num)<:time shift>)
The sample standard deviation in collected values within the defined evaluation period. See also: Standard deviation. Supported value types: Float, Integer. Supported foreach function: last_foreach.
At least two data values are required for this function to work.
Example:
stddevsamp(/host/key,1h) #the sample standard deviation for the last hour until now
sum(/host/key,(sec|#num)<:time shift>)
The sum of collected values within the defined evaluation period. Supported value types: Float, Integer. Supported foreach functions: avg_foreach, count_foreach, exists_foreach, last_foreach, max_foreach, min_foreach, sum_foreach.
Example:
sum(/host/key,1h) #the sum of values for the last hour until now
sumofsquares(/host/key,(sec|#num)<:time shift>)
The sum of squares in collected values within the defined evaluation period. Supported value types: Float, Integer. Supported foreach function: last_foreach.
Example:
sumofsquares(/host/key,1h) #the sum of squares for the last hour until now
varpop(/host/key,(sec|#num)<:time shift>)
The population variance of collected values within the defined evaluation period. See also: Variance. Supported value types: Float, Integer. Supported foreach function: last_foreach.
Example:
varpop(/host/key,1h) #the population variance for the last hour until now
varsamp(/host/key,(sec|#num)<:time shift>)
The sample variance of collected values within the defined evaluation period. See also: Variance. Supported value types: Float, Integer. Supported foreach function: last_foreach.
At least two data values are required for this function to work.
Example:
varsamp(/host/key,1h) #the sample variance for the last hour until now
See all supported functions.
1 Foreach functions
Overview
Foreach functions are used in aggregate calculations to return one aggregate value for each item that is selected by the used item
filter. An array of values is returned.
For example, the avg_foreach function will return an array of values, where each value is the average history value of the selected
item, during the time interval that is specified.
The item filter is part of the syntax used by foreach functions. The use of wildcards is supported in the item filter, thus the required
items can be selected quite flexibly.
Supported functions
Function Description
Function syntax
Foreach functions support two common parameters: item filter (see details below) and time period:
foreach_function(item filter,time period)
For example:
avg_foreach(/*/mysql.qps?[group="MySQL Servers"],5m)
will return the five-minute average of each ’mysql.qps’ item in the MySQL server group.
/host/key[parameters]?[conditions]
consists of four parts, where:
Wildcard usage
• Wildcard can be used to replace the host name, item key or an individual item key parameter.
• Either the host or item key must be specified without wildcard. So /host/* and /*/key are valid filters, but /*/* is invalid.
• Wildcard cannot be used for a part of host name, item key, item key parameter.
• Wildcard does not match more than a single item key parameter, so a wildcard must be specified for each parameter separately (i.e. key[abc,*,*]).
Conditions expression
• operands:
– group - host group
– tag - item tag
– "<text>" - string constant, with the \ escape character to escape " and \
• case-sensitive string comparison operators: =, <>
• logical operators: and, or, not
• grouping with parentheses: ( )
Quotation of string constants is mandatory. Only case-sensitive full string comparison is supported.
Warning:
When specifying tags in the filter (i.e. tag="tagname:value"), the colon ”:” is used as a delimiter. Everything after it is
considered the tag value. Thus it is currently not supported to specify a tag name containing ”:” in it.
Examples
A complex filter may be used, referencing the item key, host group and tags, as illustrated by the examples:
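For instance, filters along these lines are possible (the host groups, tags and item keys are assumptions):
avg_foreach(/*/net.if.in[*]?[group="Linux servers" and tag="component:network"],5m)
last_foreach(/*/vfs.fs.size[*,pused]?[(group="MySQL servers" or group="Linux servers") and tag="disk:usage"])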
All referenced items must exist and collect data. Only enabled items on enabled hosts are included in the calculations. Items in
the unsupported state are not included.
Attention:
If the item key of a referenced item is changed, the filter must be updated manually.
Specifying a parent host group includes the parent group and all nested host groups with their items.
Time period
The second parameter allows to specify the time period for aggregation. The time period can only be expressed as time, the
amount of values (prefixed with #) is not supported.
Supported unit symbols can be used in this parameter for convenience, for example, ’5m’ (five minutes) instead of ’300s’ (300
seconds) or ’1d’ (one day) instead of ’86400’ (86400 seconds).
For the last_foreach function time period is an optional parameter (supported since Zabbix 7.0), which can be omitted:
last_foreach(/*/key?[group="host group"])
Time period is not supported with the exists_foreach function.
Additional parameters
bucket_rate_foreach
count_foreach
Third and fourth optional parameters are supported by the count_foreach function:
• operator is the conditional operator for item values (must be double-quoted). Supported operators: eq - equal, ne - not equal, gt - greater, ge - greater or equal, lt - less, le - less or equal, like - matches if contains pattern (case-sensitive), bitand - bitwise AND, regexp - case-sensitive match of the regular expression given in pattern, iregexp - case-insensitive match of the regular expression given in pattern.
• pattern is the required pattern (string arguments must be double-quoted); supported if operator is specified in the third
parameter.
Comments:
• Optional parameters operator or pattern can’t be left empty after a comma, only fully omitted.
• With bitand as the third parameter, the fourth pattern parameter can be specified as two numbers, separated by ’/’:
number_to_compare_with/mask. count() calculates ”bitwise AND” from the value and the mask and compares the result
to number_to_compare_with. If the result of "bitwise AND" is equal to number_to_compare_with, the value is counted. If number_to_compare_with and mask are equal, only the mask need be specified (without '/').
• With regexp or iregexp as the third parameter, the fourth pattern parameter can be an ordinary or global (starting with
’@’) regular expression. In case of global regular expressions case sensitivity is inherited from global regular expression
settings. For the purpose of regexp matching, float values will always be represented with 4 decimal digits after ’.’. Also
note that for large numbers difference in decimal (stored in database) and binary (used by Zabbix server) representation
may affect the 4th decimal digit.
See aggregate calculations for more details and examples on using foreach functions.
The following table illustrates how each function behaves in cases of limited availability of host/item and history data.
Function | Disabled host | Unavailable host with data | Unavailable host without data | Disabled item | Unsupported item | Data retrieval error (SQL)
2 Bitwise functions
• Trigger expressions
• Calculated items
The functions are listed without additional information. Click on the function to see the full details.
Function Description
Function details
bitand(value,mask)
The value of ”bitwise AND” of an item value and mask. Supported value types: Integer.
Parameter:
Although the comparison is done in a bitwise manner, all the values must be supplied and are returned in decimal. For example,
checking for the 3rd bit is done by comparing to 4, not 100.
Examples:
bitand(last(/host/key),12)=8 or bitand(last(/host/key),12)=4 #3rd or 4th bit set, but not both at the same
bitand(last(/host/key),20)=16 #3rd bit not set and 5th bit set
bitlshift(value,bits to shift)
The bitwise shift left of an item value. Supported value types: Integer.
Parameter:
Although the comparison is done in a bitwise manner, all the values must be supplied and are returned in decimal. For example,
checking for the 3rd bit is done by comparing to 4, not 100.
bitnot(value)
The value of ”bitwise NOT” of an item value. Supported value types: Integer.
Parameter:
Although the comparison is done in a bitwise manner, all the values must be supplied and are returned in decimal. For example,
checking for the 3rd bit is done by comparing to 4, not 100.
bitor(value,mask)
The value of ”bitwise OR” of an item value and mask. Supported value types: Integer.
Parameter:
Although the comparison is done in a bitwise manner, all the values must be supplied and are returned in decimal. For example,
checking for the 3rd bit is done by comparing to 4, not 100.
bitrshift(value,bits to shift)
The bitwise shift right of an item value. Supported value types: Integer.
Parameter:
• value - the value to check;
• bits to shift (mandatory) - the number of bits to shift.
Although the comparison is done in a bitwise manner, all the values must be supplied and are returned in decimal. For example,
checking for the 3rd bit is done by comparing to 4, not 100.
bitxor(value,mask)
The value of ”bitwise exclusive OR” of an item value and mask. Supported value types: Integer.
Parameter:
Although the comparison is done in a bitwise manner, all the values must be supplied and are returned in decimal. For example,
checking for the 3rd bit is done by comparing to 4, not 100.
3 Date and time functions
• Trigger expressions
• Calculated items
Attention:
Date and time functions cannot be used in the expression alone; at least one non-time-based function referencing the host
item must be present in the expression.
The functions are listed without additional information. Click on the function to see the full details.
Function Description
Function details
date
Example:
date()<20220101
dayofmonth
Example:
dayofmonth()=1
dayofweek
dayofweek()<6
Example (only weekend):
dayofweek()>5
now
The number of seconds since the Epoch (00:00:00 UTC, January 1, 1970).
Example:
now()<1640998800
time
time()<060000
See all supported functions.
4 History functions
• Trigger expressions
• Calculated items
The functions are listed without additional information. Click on the function to see the full details.
Function Description
change The amount of difference between the previous and latest value.
changecount The number of changes between adjacent values within the defined evaluation period.
count The number of values within the defined evaluation period.
countunique The number of unique values within the defined evaluation period.
find Find a value match within the defined evaluation period.
first The first (the oldest) value within the defined evaluation period.
fuzzytime Check how much the passive agent time differs from the Zabbix server/proxy time.
last The most recent value.
logeventid Check if the event ID of the last log entry matches a regular expression.
logseverity The log severity of the last log entry.
logsource Check if log source of the last log entry matches a regular expression.
monodec Check if there has been a monotonous decrease in values.
monoinc Check if there has been a monotonous increase in values.
nodata Check for no data received.
percentile The P-th percentile of a period, where P (percentage) is specified by the third parameter.
rate The per-second average rate of the increase in a monotonically increasing counter within the
defined time period.
Common parameters
• /host/key is a common mandatory first parameter for the functions referencing the host item history
• (sec|#num)<:time shift> is a common second parameter for the functions referencing the host item history, where:
– sec - maximum evaluation period in seconds (time suffixes can be used), or
– #num - maximum evaluation range in latest collected values (if preceded by a hash mark)
– time shift (optional) allows to move the evaluation point back in time. See more details on specifying time shift.
Function details
change(/host/key)
The amount of difference between the previous and latest value. Supported value types: Float, Integer, String, Text, Log. For strings returns: 0 - values are equal; 1 - values differ.
Comments:
• Numeric difference will be calculated, as seen with these incoming example values ('previous' and 'latest' value = difference): '1' and '5' = +4; '3' and '1' = -2; '0' and '-2.5' = -2.5.
• See also: abs for comparison.
Examples:
change(/host/key)>10
changecount(/host/key,(sec|#num)<:time shift>,<mode>)
The number of changes between adjacent values within the defined evaluation period. Supported value types: Float, Integer, String, Text, Log.
Parameters:
Examples:
changecount(/host/key,1w) #the number of value changes for the last week until now
changecount(/host/key,#10,"inc") #the number of value increases (relative to the adjacent value) among the
changecount(/host/key,24h,"dec") #the number of value decreases (relative to the adjacent value) for the l
count(/host/key,(sec|#num)<:time shift>,<operator>,<pattern>)
The number of values within the defined evaluation period. Supported value types: Float, Integer, String, Text, Log.
Parameters:
Comments:
Examples:
countunique(/host/key,(sec|#num)<:time shift>,<operator>,<pattern>)
The number of unique values within the defined evaluation period. Supported value types: Float, Integer, String, Text, Log.
Parameters:
• operator (must be double-quoted). Supported operators: eq - equal (default for integer, float), ne - not equal, gt - greater, ge - greater or equal, lt - less, le - less or equal, like (default for string, text, log) - matches if contains pattern (case-sensitive), bitand - bitwise AND, regexp - case-sensitive match of the regular expression given in pattern, iregexp - case-insensitive match of the regular expression given in pattern.
• pattern - the required pattern (string arguments must be double-quoted).
Comments:
Examples:
countunique(/host/key,10m) #the number of unique values for the last 10 minutes until now
countunique(/host/key,10m,"like","error") #the number of unique values for the last 10 minutes until now t
countunique(/host/key,10m,,12) #the number of unique values for the last 10 minutes until now that equal '
countunique(/host/key,10m,"gt",12) #the number of unique values for the last 10 minutes until now that are
countunique(/host/key,#10,"gt",12) #the number of unique values within the last 10 values until now that a
countunique(/host/key,10m:now-1d,"gt",12) #the number of unique values between 24 hours and 10 minutes and
countunique(/host/key,10m,"bitand","6/7") #the number of unique values for the last 10 minutes until now h
countunique(/host/key,10m:now-1d) #the number of unique values between 24 hours and 10 minutes and 24 hour
find(/host/key,(sec|#num)<:time shift>,<operator>,<pattern>)
Find a value match within the defined evaluation period. Supported value types: Float, Integer, String, Text, Log. Returns: 1 - found; 0 - otherwise.
Parameters:
• If more than one value is processed, ’1’ is returned if there is at least one matching value;
• like is not supported as operator for integer values;
• like and bitand are not supported as operators for float values;
• For string, text, and log values only eq, ne, like, regexp and iregexp operators are supported;
• With regexp or iregexp as operator, the fourth pattern parameter can be an ordinary or global (starting with ’@’) regular
expression. In case of global regular expressions case sensitivity is inherited from the global regular expression settings.
Example:
find(/host/key,10m,"like","error") #find a value that contains 'error' within the last 10 minutes until no
first(/host/key,sec<:time shift>)
The first (the oldest) value within the defined evaluation period. Supported value types: Float, Integer, String, Text, Log.
Parameters:
See also last().
Example:
first(/host/key,1h) #retrieve the oldest value within the last hour until now
fuzzytime(/host/key,sec)
Check how much the passive agent time differs from the Zabbix server/proxy time. Supported value types: Float, Integer. Returns: 1 - difference between the passive item value (as timestamp) and the Zabbix server/proxy timestamp (the clock of value collection) is less than or equal to T seconds; 0 - otherwise.
Parameters:
Comments:
• Usually used with the ’system.localtime’ item to check that local time is in sync with the local time of Zabbix server. Note
that ’system.localtime’ must be configured as a passive check.
• Can be used also with the vfs.file.time[/path/file,modify] key to check that the file did not get updates for long
time;
• This function is not recommended for use in complex trigger expressions (with multiple items involved), because it may cause
unexpected results (time difference will be measured with the most recent metric), e.g. in fuzzytime(/Host/system.localtime,60
or last(/Host/trap)<>0.
Example:
last(/host/key,<#num<:time shift>>)
The most recent value. Supported value types: Float, Integer, String, Text, Log.
Parameters:
Comments:
• Take note that a hash-tagged time period (#N) works differently here than with many other functions. For example: last()
is always equal to last(#1); last(#3) - the third most recent value (not three latest values);
• Zabbix does not guarantee the exact order of values if more than two values exist within one second in history;
• See also first().
Example:
logeventid(/host/key,<#num<:time shift>>,<pattern>)
Check if the event ID of the last log entry matches a regular expression. Supported value types: Log. Returns: 0 - does not match; 1 - matches.
Parameters:
logseverity(/host/key,<#num<:time shift>>)
Log severity of the last log entry. Supported value types: Log. Returns: 0 - default severity; N - severity (integer, useful for Windows event logs: 1 - Information, 2 - Warning, 4 - Error, 7 - Failure Audit, 8 - Success Audit, 9 - Critical, 10 - Verbose).
Parameters:
Zabbix takes log severity from the Information field of Windows event log.
logsource(/host/key,<#num<:time shift>>,<pattern>)
Check if the log source of the last log entry matches a regular expression. Supported value types: Log. Returns: 0 - does not match; 1 - matches.
Parameters:
Example:
logsource(/host/key,,"VMware Server")
monodec(/host/key,(sec|#num)<:time shift>,<mode>)
Check if there has been a monotonous decrease in values. Supported value types: Integer. Returns: 1 - if all elements in the time period continuously decrease; 0 - otherwise.
Parameters:
Example:
monoinc(/host/key,(sec|#num)<:time shift>,<mode>)
Check if there has been a monotonous increase in values. Supported value types: Integer. Returns: 1 - if all elements in the time period continuously increase; 0 - otherwise.
Parameters:
Example:
monoinc(/Host1/system.localtime,#3,"strict")=0 #check if the system local time has been increasing consist
nodata(/host/key,sec,<mode>)
Check for no data received. Supported value types: Integer, Float, Character, Text, Log. Returns: 1 - if no data received during the defined period of time; 0 - otherwise.
Parameters:
Comments:
• The 'nodata' triggers monitored by proxy are, by default, sensitive to proxy availability - if the proxy becomes unavailable, the 'nodata' triggers will not fire immediately after a restored connection, but will skip the data for the delayed period. Note that for passive proxies suppression is activated if the connection is restored more than 15 seconds and no less than 2 × ProxyUpdateFrequency seconds later. For active proxies suppression is activated if the connection is restored more than 15 seconds later. To turn off sensitivity to proxy availability, use the third parameter, e.g.: nodata(/host/key,5m,"strict"); in this case the function will fire as soon as the evaluation period (five minutes) without data has passed.
• This function will display an error if, within the period of the 1st parameter:
- there's no data and Zabbix server was restarted
- there's no data and maintenance was completed
- there's no data and the item was added or re-enabled
• Errors are displayed in the Info column in trigger configuration;
• This function may not work properly if there are time differences between Zabbix server, proxy and agent. See also: Time
synchronization requirement.
percentile(/host/key,(sec|#num)<:time shift>,percentage)
The P-th percentile of a period, where P (percentage) is specified by the third parameter. Supported value types: Float, Integer.
Parameters:
rate(/host/key,sec<:time shift>)
The per-second average rate of the increase in a monotonically increasing counter within the defined time period. Supported value types: Float, Integer.
Parameters:
Example:
rate(/host/key,30s) #if the monotonic increase over 30 seconds is 20, this function will return 0.67.
See all supported functions.
5 Trend functions
Trend functions, in contrast to history functions, use trend data for calculations.
Trends store hourly aggregate values. Trend functions use these hourly averages, and thus are useful for long-term analysis.
Trend function results are cached so multiple calls to the same function with the same parameters fetch info from the database
only once. The trend function cache is controlled by the TrendFunctionCacheSize server parameter.
Triggers that reference trend functions only are evaluated once per the smallest time period in the expression. For instance, a
trigger like
• Trigger expressions
• Calculated items
The functions are listed without additional information. Click on the function to see the full details.
Function Description
baselinedev Returns the number of deviations (by stddevpop algorithm) between the last data period and the
same data periods in preceding seasons.
baselinewma Calculates the baseline by averaging data from the same timeframe in multiple equal time
periods (’seasons’) using the weighted moving average algorithm.
trendavg The average of trend values within the defined time period.
trendcount The number of successfully retrieved trend values within the defined time period.
trendmax The maximum in trend values within the defined time period.
trendmin The minimum in trend values within the defined time period.
trendstl Returns the rate of anomalies during the detection period - a decimal value between 0 and 1 that
is ((the number of anomaly values)/(total number of values)).
trendsum The sum of trend values within the defined time period.
Common parameters
Function details
Returns the number of deviations (by stddevpop algorithm) between the last data period and the same data periods in preceding seasons.
Parameters:
Examples:
Calculates the baseline by averaging data from the same timeframe in multiple equal time periods ('seasons') using the weighted moving average algorithm.
Parameters:
Examples:
baselinewma(/host/key,1h:now/h,"d",3) #calculating the baseline based on the last full hour within a 3-day
baselinewma(/host/key,2h:now/h,"d",3) #calculating the baseline based on the last two hours within a 3-day
baselinewma(/host/key,1d:now/d,"M",4) #calculating the baseline based on the same day of month as 'yesterd
trendavg(/host/key,time period:time shift)
Parameters:
Examples:
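An illustrative example (placeholder item key), following the same pattern as the trendcount() examples below:
trendavg(/host/key,1h:now/h) #the average value for the previous hour (e.g. 12:00-13:00)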
The number of successfully retrieved trend values within the defined time period.
Parameters:
Examples:
trendcount(/host/key,1h:now/h) #the value count for the previous hour (e.g. 12:00-13:00)
trendcount(/host/key,1h:now/h-1h) #the value count for two hours ago (11:00-12:00)
trendcount(/host/key,1h:now/h-2h) #the value count for three hours ago (10:00-11:00)
trendcount(/host/key,1M:now/M-1y) #the value count for the previous month a year ago
trendmax(/host/key,time period:time shift)
Parameters:
Examples:
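An illustrative example (placeholder item key):
trendmax(/host/key,1h:now/h) #the maximum value for the previous hour (e.g. 12:00-13:00)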
Parameters:
Examples:
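For trendmin(), an illustrative example (placeholder item key):
trendmin(/host/key,1h:now/h) #the minimum value for the previous hour (e.g. 12:00-13:00)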
Returns the rate of anomalies during the detection period - a decimal value between 0 and 1 that is ((the number of anomaly
values)/(total number of values)).
Parameters:
Examples:
trendstl(/host/key,100h:now/h,10h,2h) #analyse the last 100 hours of trend data, find the anomaly rate for
trendstl(/host/key,100h:now/h-10h,100h,2h,2.1,"mad") #analyse the period of 100 hours of trend data, up to
trendstl(/host/key,100d:now/d-1d,10d,1d,4,,10) #analyse 100 days of trend data up to a day ago, find the a
trendstl(/host/key,1M:now/M-1y,1d,2h,,"stddevsamp") #analyse the previous month a year ago, find the anoma
trendsum(/host/key,time period:time shift)
Parameters:
Examples:
trendsum(/host/key,1M:now/M-1y) #the sum for the previous month a year ago
See all supported functions.
6 Mathematical functions
• Trigger expressions
• Calculated items
Mathematical functions are supported with float and integer value types, unless stated otherwise.
The functions are listed without additional information. Click on the function to see the full details.
Function Description
Function details
The absolute value of a value.<br> Supported value types: Float, Integer, String, Text, Log.<br> For strings returns: 0 - the values
are equal; 1 - the values differ.
Parameter:
• value - the value to check
The absolute numeric difference will be calculated, as seen with these incoming example values (’previous’ and ’latest’ value =
absolute difference): ’1’ and ’5’ = 4; ’3’ and ’1’ = 2; ’0’ and ’-2.5’ = 2.5
Example:
abs(last(/host/key))>10
acos(value)
Parameter:
The value must be between -1 and 1. For example, the arccosine of a value ’-0.5’ will be ’2.0943951’.
Example:
acos(last(/host/key))
asin(value)
Parameter:
The value must be between -1 and 1. For example, the arcsine of a value ’-0.5’ will be ’-0.523598776’.
Example:
asin(last(/host/key))
atan(value)
Parameter:
The value can be any real number. For example, the arctangent of a value ’1’ will be ’0.785398163’.
Example:
atan(last(/host/key))
atan2(value,abscissa)
The arctangent of the ordinate (value) and abscissa coordinates specified as an angle, expressed in radians.
Parameter:
For example, the arctangent of the ordinate and abscissa coordinates of a value ’1’ will be ’2.21429744’.
Example:
atan2(last(/host/key),2)
avg(<value1>,<value2>,...)
Parameter:
• valueX - the value returned by another function that is working with item history.
Example:
avg(avg(/host/key),avg(/host2/key2))
cbrt(value)
Parameter:
• value - the value to check
For example, the cube root of ’64’ will be ’4’, of ’63’ will be ’3.97905721’.
Example:
cbrt(last(/host/key))
ceil(value)
Parameter:
Example:
ceil(last(/host/key))
cos(value)
Parameter:
Example:
cos(last(/host/key))
cosh(value)
The hyperbolic cosine of a value. Returns the value as a real number, not as scientific notation.
Parameter:
Example:
cosh(last(/host/key))
cot(value)
Parameter:
Example:
cot(last(/host/key))
degrees(value)
Parameter:
Example:
degrees(last(/host/key))
e
Example:
e()
exp(value)
Parameter:
Example:
exp(last(/host/key))
expm1(value)
Parameter:
For example, Euler’s number at a power of a value ’2’ minus 1 will be ’6.38905609893065’.
Example:
expm1(last(/host/key))
floor(value)
Parameter:
For example, ’2.6’ will be rounded down to ’2’. See also ceil().
Example:
floor(last(/host/key))
log(value)
Parameter:
Example:
log(last(/host/key))
log10(value)
Parameter:
Example:
log10(last(/host/key))
max(<value1>,<value2>,...)
Parameter:
• valueX - the value returned by another function that is working with item history.
Example:
max(avg(/host/key),avg(/host2/key2))
min(<value1>,<value2>,...)
Parameter:
• valueX - the value returned by another function that is working with item history.
Example:
min(avg(/host/key),avg(/host2/key2))
mod(value,denominator)
Parameter:
For example, division remainder of a value ’5’ with division denominator ’2’ will be ’1’.
Example:
mod(last(/host/key),2)
pi
Example:
pi()
power(value,power value)
Parameter:
Example:
power(last(/host/key),3)
radians(value)
Parameter:
Example:
radians(last(/host/key))
rand
Return a random integer value. The number is pseudo-random, generated using time as the seed (sufficient for mathematical purposes, but
not for cryptography).
Example:
rand()
round(value,decimal places)
Parameter:
For example, a value ’2.5482’ rounded to 2 decimal places will be ’2.55’.
Example:
round(last(/host/key),2)
signum(value)
Returns ’-1’ if a value is negative, ’0’ if a value is zero, ’1’ if a value is positive.
Parameter:
Example:
signum(last(/host/key))
sin(value)
Parameter:
Example:
sin(last(/host/key))
sinh(value)
The hyperbolic sine of a value.
Parameter:
Example:
sinh(last(/host/key))
sqrt(value)
The square root of a value.<br> This function will fail with a negative value.
Parameter:
Example:
sqrt(last(/host/key))
sum(<value1>,<value2>,...)
Parameter:
• valueX - the value returned by another function that is working with item history.
Example:
sum(avg(/host/key),avg(/host2/key2))
tan(value)
Parameter:
Example:
tan(last(/host/key))
truncate(value,decimal places)
Parameter:
Example:
truncate(last(/host/key),2)
See all supported functions.
7 Operator functions
• Trigger expressions
• Calculated items
The functions are listed without additional information. Click on the function to see the full details.
Function Description
Function details
between(value,min,max)
Check if the value belongs to the given range. Supported value types: Integer, Float. Returns: 1 - in range; 0 - otherwise.
Parameter:
Example:
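An illustrative expression (placeholder item key and range):
between(last(/host/key),1,10)=1 #trigger if the latest value is within the 1-10 range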
in(value,value1,value2,...)
Check if the value is equal to at least one of the listed values. Supported value types: Integer, Float, Character, Text, Log. Returns: 1 - if equal; 0 - otherwise.
Parameter:
The value is compared to the listed values as numbers, if all of these values can be converted to numeric; otherwise compared as
strings.
Example:
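An illustrative expression (placeholder item key and values):
in(last(/host/key),5,10)=1 #trigger if the latest value is equal to 5 or 10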
8 Prediction functions
• Trigger expressions
• Calculated items
The functions are listed without additional information. Click on the function to see the full details.
Function Description
forecast The future value, max, min, delta or avg of the item.
timeleft The time in seconds needed for an item to reach the specified threshold.
Common parameters
• /host/key is a common mandatory first parameter for the functions referencing the host item history
• (sec|#num)<:time shift> is a common second parameter for the functions referencing the host item history, where:
– sec - maximum evaluation period in seconds (time suffixes can be used), or
– #num - maximum evaluation range in latest collected values (if preceded by a hash mark)
– time shift (optional) allows to move the evaluation point back in time. See more details on specifying time shift.
Function details
The future value, max, min, delta or avg of the item.<br> Supported value types: Float, Integer.
Parameters:
• If the value to return is larger than 1.7976931348623158E+308 or less than -1.7976931348623158E+308, the return value
is cropped to 1.7976931348623158E+308 or -1.7976931348623158E+308 correspondingly;
• Becomes unsupported only if misused in the expression (wrong item type, invalid parameters), otherwise returns -1 in case
of errors;
• See also additional information on predictive trigger functions.
Examples:
forecast(/host/key,#10,1h) #forecast the item value in one hour based on the last 10 values
forecast(/host/key,1h,30m) #forecast the item value in 30 minutes based on the last hour data
forecast(/host/key,1h:now-1d,12h) #forecast the item value in 12 hours based on one hour one day ago
forecast(/host/key,1h,10m,"exponential") #forecast the item value in 10 minutes based on the last hour dat
forecast(/host/key,1h,2h,"polynomial3","max") #forecast the maximum value the item can reach in the next t
forecast(/host/key,#2,-20m) #estimate the item value 20 minutes ago based on the last two values (this can
timeleft(/host/key,(sec|#num)<:time shift>,threshold,<fit>)
The time in seconds needed for an item to reach the specified threshold.<br> Supported value types: Float, Integer.
Parameters:
• threshold - the value to reach (unit suffixes can be used);
• fit (optional; must be double-quoted) - see forecast().
Comments:
• If the value to return is larger than 1.7976931348623158E+308, the return value is cropped to 1.7976931348623158E+308;
• Returns 1.7976931348623158E+308 if the threshold cannot be reached;
• Becomes unsupported only if misused in the expression (wrong item type, invalid parameters), otherwise returns -1 in case
of errors;
• See also additional information on predictive trigger functions.
Examples:
timeleft(/host/key,#10,0) #the time until the item value reaches zero based on the last 10 values
timeleft(/host/key,1h,100) #the time until the item value reaches 100 based on the last hour data
timeleft(/host/key,1h:now-1d,100) #the time until the item value reaches 100 based on one hour one day ago
timeleft(/host/key,1h,200,"polynomial2") #the time until the item value reaches 200 based on the last hour
See all supported functions.
9 String functions
• Trigger expressions
• Calculated items
The functions are listed without additional information. Click on the function to see the full details.
Function Description
Function details
The ASCII code of the leftmost character of the value.<br> Supported value types: String, Text, Log.
Parameter:
For example, a value like ’Abc’ will return ’65’ (ASCII code for ’A’).
Example:
ascii(last(/host/key))
bitlength(value)
The length of value in bits.<br> Supported value types: String, Text, Log, Integer.
Parameter:
Example:
bitlength(last(/host/key))
bytelength(value)
The length of value in bytes.<br> Supported value types: String, Text, Log, Integer.
Parameter:
Example:
bytelength(last(/host/key))
char(value)
Return the character by interpreting the value as ASCII code.<br> Supported value types: Integer.
Parameter:
The value must be in the 0-255 range. For example, a value like ’65’ (interpreted as ASCII code) will return ’A’.
Example:
char(last(/host/key))
concat(<value1>,<value2>,...)
The string resulting from concatenating the referenced item values or constant values.<br> Supported value types: String, Text,
Log, Float, Integer.
Parameter:
• valueX - the value returned by one of the history functions or a constant value (string, integer, or float number). Must
contain at least two parameters.
For example, a value like ’Zab’ concatenated to ’bix’ (the constant string) will return ’Zabbix’.
Examples:
concat(last(/host/key),"bix")
concat("1 min: ",last(/host/system.cpu.load[all,avg1]),", 15 min: ",last(/host/system.cpu.load[all,avg15])
insert(value,start,length,replacement)
Insert specified characters or spaces into the character string beginning at the specified position in the string.<br> Supported
value types: String, Text, Log.
Parameters:
For example, a value like ’Zabbbix’ will be replaced by ’Zabbix’ if ’bb’ (starting position 3, positions to replace 2) is replaced by ’b’.
Example:
insert(last(/host/key),3,2,"b")
jsonpath(value,path,<default>)
Return the JSONPath result.<br> Supported value types: String, Text, Log.
Parameters:
• value - the value to check;<br>
• path - the path (must be quoted);<br>
• default - the optional fallback value if the JSONPath query returns no data. Note that on other errors failure is returned (e.g.
”unsupported construct”).
Example:
jsonpath(last(/host/proc.get[zabbix_agentd,,,summary]),"$..size")
left(value,count)
Return the leftmost characters of the value.<br> Supported value types: String, Text, Log.
Parameter:
For example, you may return ’Zab’ from ’Zabbix’ by specifying 3 leftmost characters to return. See also right().
Example:
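An illustrative expression (placeholder item key):
left(last(/host/key),3)="Zab" #check whether the three leftmost characters of the latest value are "Zab"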
The length of value in characters.<br> Supported value types: String, Text, Log.
Parameter:
Examples:
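An illustrative expression (placeholder item key):
length(last(/host/key))=0 #check that the latest value is an empty string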
Remove specified characters from the beginning of string.<br> Supported value types: String, Text, Log.
Parameter:
Whitespace is left-trimmed by default (if no optional characters are specified). See also: rtrim(), trim().
Examples:
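An illustrative expression (placeholder item key):
ltrim(last(/host/key)) #remove whitespace from the beginning of the latest value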
Return a substring of N characters beginning at the character position specified by ’start’.<br> Supported value types: String,
Text, Log.
Parameter:
For example, it is possible to return ’abbi’ from a value like ’Zabbix’ if the starting position is 2 and the number of positions to return is 4.
Example:
mid(last(/host/key),2,4)="abbi"
repeat(value,count)
Parameter:
• count - the number of times to repeat.
Example:
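An illustrative expression (placeholder item key):
repeat(last(/host/key),2) #repeat the latest value twice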
Find the pattern in the value and replace with replacement. All occurrences of the pattern will be replaced.<br> Supported value
types: String, Text, Log.
Parameter:
Example:
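An illustrative expression (placeholder item key and substrings):
replace(last(/host/key),"ib","ix") #replace all occurrences of "ib" with "ix" in the latest value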
Return the rightmost characters of the value.<br> Supported value types: String, Text, Log.
Parameter:
For example, you may return ’bix’ from ’Zabbix’ by specifying 3 rightmost characters to return. See also left().
Example:
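An illustrative expression (placeholder item key):
right(last(/host/key),3)="bix" #check whether the three rightmost characters of the latest value are "bix"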
Remove specified characters from the end of string.<br> Supported value types: String, Text, Log.
Parameter:
Whitespace is right-trimmed by default (if no optional characters are specified). See also: ltrim(), trim().
Examples:
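An illustrative expression (placeholder item key):
rtrim(last(/host/key)) #remove whitespace from the end of the latest value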
Remove specified characters from the beginning and end of string.<br> Supported value types: String, Text, Log.
Parameter:
Whitespace is trimmed from both sides by default (if no optional characters are specified). See also: ltrim(), rtrim().
Examples:
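An illustrative expression (placeholder item key):
trim(last(/host/key)) #remove whitespace from both ends of the latest value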
Return the XML XPath result.<br> Supported value types: String, Text, Log.
Parameters:
Example:
xmlxpath(last(/host/xml_result),"/response/error/status")
See all supported functions.
7 Macros
It is possible to use out-of-the-box Supported macros and User macros supported by location.
1 Supported macros
Overview
This page contains a complete list of built-in macros supported by Zabbix, grouped by application area.
Note:
To view all macros supported in a specific location, paste the location name (for example, ”map URL”) into your browser’s
page search (accessible by pressing CTRL+F) and step through the matches.
Note:
To customize macro values (for example, shorten or extract specific substrings), you can use macro functions.
Actions
Macro Supported in Description
Discovery
{DISCOVERY.DEVICE.IPADDRESS}
→ Discovery notifications and commands IP address of the discovered device.
Available always, does not depend on host being
added.
{DISCOVERY.DEVICE.DNS}
→ Discovery notifications and commands DNS name of the discovered device.
Available always, does not depend on host being
added.
{DISCOVERY.DEVICE.STATUS}
→ Discovery notifications and commands Status of the discovered device: can be either UP or
DOWN.
{DISCOVERY.DEVICE.UPTIME}
→ Discovery notifications and commands Time since the last change of discovery status for a
particular device, with precision down to a second.
For example: 1h 29m 01s.
For devices with status DOWN, this is the period of
their downtime.
{DISCOVERY.RULE.NAME}
→ Discovery notifications and commands Name of the discovery rule that discovered the
presence or absence of the device or service.
{DISCOVERY.SERVICE.NAME}
→ Discovery notifications and commands Name of the service that was discovered.
For example: HTTP.
{DISCOVERY.SERVICE.PORT}
→ Discovery notifications and commands Port of the service that was discovered.
For example: 80.
{DISCOVERY.SERVICE.STATUS}
→ Discovery notifications and commands Status of the discovered service: can be either UP or
DOWN.
{DISCOVERY.SERVICE.UPTIME}
→ Discovery notifications and commands Time since the last change of discovery status for a
particular service, with precision down to a second.
For example: 1h 29m 01s.
For services with status DOWN, this is the period of
their downtime.
Events
{EVENT.ACK.STATUS}
→ Trigger-based notifications and commands Acknowledgment status of the event (Yes/No).
→ Problem update notifications and commands
→ Manual event action scripts
{EVENT.AGE} → Trigger-based notifications and commands Age of the event that triggered an action, with
→ Problem update notifications and commands precision down to a second.
→ Service-based notifications and commands Useful in escalated messages.
→ Service update notifications and commands
→ Service recovery notifications and commands
→ Discovery notifications and commands
→ Autoregistration notifications and commands
→ Internal notifications
→ Manual event action scripts
{EVENT.DATE} → Trigger-based notifications and commands Date of the event that triggered an action.
→ Problem update notifications and commands
→ Service-based notifications and commands
→ Service update notifications and commands
→ Service recovery notifications and commands
→ Discovery notifications and commands
→ Autoregistration notifications and commands
→ Internal notifications
→ Manual event action scripts
{EVENT.DURATION}
→ Trigger-based notifications and commands Duration of the event (time difference between
→ Problem update notifications and commands problem and recovery events), with precision down
→ Service-based notifications and commands to a second.
→ Service update notifications and commands Useful in problem recovery messages.
→ Service recovery notifications and commands
→ Internal notifications
→ Manual event action scripts
{EVENT.ID} → Trigger-based notifications and commands Numeric ID of the event that triggered an action.
→ Problem update notifications and commands
→ Service-based notifications and commands
→ Service update notifications and commands
→ Service recovery notifications and commands
→ Discovery notifications and commands
→ Autoregistration notifications and commands
→ Internal notifications
→ Trigger URLs
→ Manual event action scripts
{EVENT.NAME} → Trigger-based notifications and commands Name of the problem event that triggered an action.
→ Problem update notifications and commands
→ Service-based notifications and commands
→ Service update notifications and commands
→ Service recovery notifications and commands
→ Internal notifications
→ Manual event action scripts
{EVENT.NSEVERITY}
→ Trigger-based notifications and commands Numeric value of the event severity. Possible values:
→ Problem update notifications and commands 0 - Not classified, 1 - Information, 2 - Warning, 3 -
→ Service-based notifications and commands Average, 4 - High, 5 - Disaster.
→ Service update notifications and commands
→ Service recovery notifications and commands
→ Manual event action scripts
{EVENT.OBJECT}→ Trigger-based notifications and commands Numeric value of the event object. Possible values:
→ Problem update notifications and commands 0 - Trigger, 1 - Discovered host, 2 - Discovered
→ Service-based notifications and commands service, 3 - Autoregistration, 4 - Item, 5 - Low-level
→ Service update notifications and commands discovery rule.
→ Service recovery notifications and commands
→ Discovery notifications and commands
→ Autoregistration notifications and commands
→ Internal notifications
→ Manual event action scripts
{EVENT.OPDATA}→ Trigger-based notifications and commands Operational data of the underlying trigger of a
→ Problem update notifications and commands problem.
→ Manual event action scripts
{EVENT.RECOVERY.DATE}
→ Problem recovery notifications and commands Date of the recovery event.
→ Problem update notifications and commands (if recovery took place)
→ Service recovery notifications and commands
→ Manual event action scripts (if recovery took place)
{EVENT.RECOVERY.ID}
→ Problem recovery notifications and commands Numeric ID of the recovery event.
→ Problem update notifications and commands (if recovery took place)
→ Service recovery notifications and commands
→ Manual event action scripts (if recovery took place)
{EVENT.RECOVERY.NAME}
→ Problem recovery notifications and commands Name of the recovery event.
→ Problem update notifications and commands (if recovery took place)
→ Service recovery notifications and commands
→ Manual event action scripts (if recovery took place)
{EVENT.RECOVERY.STATUS}
→ Problem recovery notifications and commands Verbal value of the recovery event.
→ Problem update notifications and commands (if recovery took place)
→ Service recovery notifications and commands
→ Manual event action scripts (if recovery took place)
{EVENT.RECOVERY.TAGS}
→ Problem recovery notifications and commands A comma separated list of recovery event tags. Expanded to an empty string if no tags exist.
→ Problem update notifications and commands (if recovery took place)
→ Service recovery notifications and commands
→ Internal notifications
→ Manual event action scripts (if recovery took place)
{EVENT.RECOVERY.TAGSJSON}
→ Problem recovery notifications and commands A JSON array containing event tag objects. Expanded to an empty array if no tags exist.
→ Problem update notifications and commands (if recovery took place)
→ Service recovery notifications and commands
→ Internal notifications
→ Manual event action scripts (if recovery took place)
{EVENT.RECOVERY.TIME}
→ Problem recovery notifications and commands Time of the recovery event.
→ Problem update notifications and commands (if recovery took place)
→ Service recovery notifications and commands
→ Manual event action scripts (if recovery took place)
{EVENT.RECOVERY.VALUE}
→ Problem recovery notifications and commands Numeric value of the recovery event.
→ Problem update notifications and commands (if recovery took place)
→ Service recovery notifications and commands
→ Manual event action scripts (if recovery took place)
{EVENT.SEVERITY}
→ Trigger-based notifications and commands Name of the event severity.
→ Problem update notifications and commands
→ Service-based notifications and commands
→ Service update notifications and commands
→ Service recovery notifications and commands
→ Manual event action scripts
{EVENT.SOURCE}
→ Trigger-based notifications and commands Numeric value of the event source. Possible values:
→ Problem update notifications and commands 0 - Trigger, 1 - Discovery, 2 - Autoregistration, 3 -
→ Service-based notifications and commands Internal, 4 - Service.
→ Service update notifications and commands
→ Service recovery notifications and commands
→ Discovery notifications and commands
→ Autoregistration notifications and commands
→ Internal notifications
→ Manual event action scripts
{EVENT.STATUS}→ Trigger-based notifications and commands Verbal value of the event that triggered an action.
→ Problem update notifications and commands
→ Service-based notifications and commands
→ Service update notifications and commands
→ Service recovery notifications and commands
→ Internal notifications
→ Manual event action scripts
{EVENT.TAGS} → Trigger-based notifications and commands A comma separated list of event tags. Expanded to
→ Problem update notifications and commands an empty string if no tags exist.
→ Service-based notifications and commands
→ Service update notifications and commands
→ Service recovery notifications and commands
→ Internal notifications
→ Manual event action scripts
{EVENT.TAGSJSON}
→ Trigger-based notifications and commands A JSON array containing event tag objects. Expanded
→ Problem update notifications and commands to an empty array if no tags exist.
→ Service-based notifications and commands
→ Service update notifications and commands
→ Service recovery notifications and commands
→ Internal notifications
→ Manual event action scripts
{EVENT.TAGS.<tag name>}
→ Trigger-based notifications and commands Event tag value referenced by the tag name.
→ Problem update notifications and commands A tag name containing non-alphanumeric characters
→ Service-based notifications and commands (including non-English multibyte-UTF characters)
→ Service update notifications and commands should be double quoted. Quotes and backslashes
→ Service recovery notifications and commands inside a quoted tag name must be escaped with a
→ Internal notifications backslash.
→ Webhook media type URL names and URLs
→ Manual event action scripts
{EVENT.TIME} → Trigger-based notifications and commands Time of the event that triggered an action.
→ Problem update notifications and commands
→ Service-based notifications and commands
→ Service update notifications and commands
→ Service recovery notifications and commands
→ Discovery notifications and commands
→ Autoregistration notifications and commands
→ Internal notifications
→ Manual event action scripts
{EVENT.UPDATE.ACTION}
→ Problem update notifications and commands Human-readable name of the action(s) performed
during problem update.
Resolves to the following values: acknowledged,
commented, changed severity from (original
severity) to (updated severity) and closed
(depending on how many actions are performed in
one update).
{EVENT.UPDATE.DATE}
→ Problem update notifications and commands Date of event update (acknowledgment, etc).
→ Service update notifications and commands Deprecated name: {ACK.DATE}
{EVENT.UPDATE.HISTORY}
→ Trigger-based notifications and commands Log of problem updates (acknowledgments, etc).
→ Problem update notifications and commands Deprecated name: {EVENT.ACK.HISTORY}
→ Manual event action scripts
{EVENT.UPDATE.MESSAGE}
→ Problem update notifications and commands Problem update message.
Deprecated name: {ACK.MESSAGE}
{EVENT.UPDATE.NSEVERITY}
→ Service update notifications and commands Numeric value of the new event severity set during
problem update operation.
{EVENT.UPDATE.SEVERITY}
→ Service update notifications and commands Name of the new event severity set during problem
update operation.
{EVENT.UPDATE.STATUS}
→ Trigger-based notifications and commands Numeric value of the problem update status.
→ Problem update notifications and commands Possible values: 0 - Webhook was called because of
→ Manual event action scripts problem/recovery event, 1 - Update operation.
{EVENT.UPDATE.TIME}
→ Problem update notifications and commands Time of event update (acknowledgment, etc).
→ Service update notifications and commands Deprecated name: {ACK.TIME}
{EVENT.VALUE} → Trigger-based notifications and commands Numeric value of the event that triggered an action
→ Problem update notifications and commands (1 for problem, 0 for recovering).
→ Service-based notifications and commands
→ Service update notifications and commands
→ Service recovery notifications and commands
→ Internal notifications
→ Manual event action scripts
{EVENT.CAUSE.*} macros are used in the context of a symptom event, for example, in notifications; they return information about
the cause event.
The {EVENT.SYMPTOMS} macro is used in the context of the cause event and returns information about symptom events.
{EVENT.CAUSE.ACK.STATUS}
→ Trigger-based notifications and commands Acknowledgment status of the cause event (Yes/No).
→ Problem update notifications and commands
→ Manual event action scripts
{EVENT.CAUSE.AGE}
→ Trigger-based notifications and commands Age of the cause event, with precision down to a
→ Problem update notifications and commands second.
→ Manual event action scripts Useful in escalated messages.
{EVENT.CAUSE.DATE}
→ Trigger-based notifications and commands Date of the cause event.
→ Problem update notifications and commands
→ Manual event action scripts
{EVENT.CAUSE.DURATION}
→ Trigger-based notifications and commands Duration of the cause event (time difference
→ Problem update notifications and commands between problem and recovery events), with
→ Manual event action scripts precision down to a second.
Useful in problem recovery messages.
{EVENT.CAUSE.ID}
→ Trigger-based notifications and commands Numeric ID of the cause event.
→ Problem update notifications and commands
→ Manual event action scripts
{EVENT.CAUSE.NAME}
→ Trigger-based notifications and commands Name of the cause problem event.
→ Problem update notifications and commands
→ Manual event action scripts
{EVENT.CAUSE.NSEVERITY}
→ Trigger-based notifications and commands Numeric value of the cause event severity.
→ Problem update notifications and commands Possible values: 0 - Not classified, 1 - Information, 2 -
→ Manual event action scripts Warning, 3 - Average, 4 - High, 5 - Disaster.
{EVENT.CAUSE.OBJECT}
→ Trigger-based notifications and commands Numeric value of the cause event object.
→ Problem update notifications and commands Possible values: 0 - Trigger, 1 - Discovered host, 2 -
→ Manual event action scripts Discovered service, 3 - Autoregistration, 4 - Item, 5 -
Low-level discovery rule.
{EVENT.CAUSE.OPDATA}
→ Trigger-based notifications and commands Operational data of the underlying trigger of the
→ Problem update notifications and commands cause problem.
→ Manual event action scripts
{EVENT.CAUSE.SEVERITY}
→ Trigger-based notifications and commands Name of the cause event severity.
→ Problem update notifications and commands
→ Manual event action scripts
{EVENT.CAUSE.SOURCE}
→ Trigger-based notifications and commands Numeric value of the cause event source.
→ Problem update notifications and commands Possible values: 0 - Trigger, 1 - Discovery, 2 -
→ Manual event action scripts Autoregistration, 3 - Internal.
{EVENT.CAUSE.STATUS}
→ Trigger-based notifications and commands Verbal value of the cause event.
→ Problem update notifications and commands
→ Manual event action scripts
{EVENT.CAUSE.TAGS}
→ Trigger-based notifications and commands A comma separated list of cause event tags.
→ Problem update notifications and commands Expanded to an empty string if no tags exist.
→ Manual event action scripts
{EVENT.CAUSE.TAGSJSON}
→ Trigger-based notifications and commands A JSON array containing cause event tag objects.
→ Problem update notifications and commands Expanded to an empty array if no tags exist.
→ Manual event action scripts
{EVENT.CAUSE.TAGS.<tag name>}
→ Trigger-based notifications and commands Cause event tag value referenced by the tag name.
→ Problem update notifications and commands A tag name containing non-alphanumeric characters
→ Manual event action scripts (including non-English multibyte-UTF characters)
should be double quoted. Quotes and backslashes
inside a quoted tag name must be escaped with a
backslash.
{EVENT.CAUSE.TIME}
→ Trigger-based notifications and commands Time of the cause event.
→ Problem update notifications and commands
→ Manual event action scripts
{EVENT.CAUSE.UPDATE.HISTORY}
→ Trigger-based notifications and commands Log of cause problem updates (acknowledgments,
→ Problem update notifications and commands etc).
→ Manual event action scripts
{EVENT.CAUSE.VALUE}
→ Trigger-based notifications and commands Numeric value of the cause event (1 for problem, 0
→ Problem update notifications and commands for recovering).
→ Manual event action scripts
{EVENT.SYMPTOMS}
→ Trigger-based notifications and commands The list of symptom events.
→ Problem update notifications and commands Includes the following details: host name, event
→ Manual event action scripts name, severity, age, service tags and values.
Functions
{FUNCTION.VALUE<1-9>}
→ Trigger-based notifications and commands Results of the Nth item-based function in the trigger
→ Problem update notifications and commands expression at the time of the event.
→ Manual event action scripts Only functions with /host/key as the first parameter
→ Event names are counted. See indexed macros.
{FUNCTION.RECOVERY.VALUE<1-9>}
→ Problem recovery notifications and commands Results of the Nth item-based function in the
→ Problem update notifications and commands recovery expression at the time of the event.
→ Manual event action scripts Only functions with /host/key as the first parameter
are counted. See indexed macros.
Hosts
{HOST.CONN} → Trigger-based notifications and commands Host IP address or DNS name, depending on host
2
→ Problem update notifications and commands settings .
→ Internal notifications
→ Map element labels, map URL names and values May be used with a numeric index as
1
→ Item key parameters {HOST.CONN<1-9>} to point to the first, second,
→ Host interface IP/DNS third, etc. host in a trigger expression. See indexed
→ Trapper item ”Allowed hosts” field macros.
→ Database monitoring additional parameters
→ SSH and Telnet scripts
→ JMX item endpoint field
4
→ Web monitoring
→ Low-level discovery rule filter regular expressions
→ URL field of dynamic URL dashboard widget
→ Trigger names, event names, operational data and
descriptions
→ Trigger URLs
→ Tag names and values
→ Script-type and Browser-type item, item prototype
and discovery rule parameter names and values
→ HTTP agent type item, item prototype and
discovery rule fields:
URL, Query fields, Request body, Headers, SSL
certificate file, SSL key file, Allowed hosts.
→ Manual host action scripts (including confirmation
text)
→ Manual event action scripts (including
confirmation text)
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{HOST.DESCRIPTION}
→ Trigger-based notifications and commands Host description.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Map element labels {HOST.DESCRIPTION<1-9>} to point to the first,
→ Manual event action scripts second, third, etc. host in a trigger expression. See
→ Description parameter in Item value and Gauge indexed macros.
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
2
{HOST.DNS} → Trigger-based notifications and commands Host DNS name .
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Map element labels, map URL names and values {HOST.DNS<1-9>} to point to the first, second, third,
1
→ Item key parameters etc. host in a trigger expression. See indexed
→ Host interface IP/DNS macros.
→ Trapper item ”Allowed hosts” field
→ Database monitoring additional parameters
→ SSH and Telnet scripts
→ JMX item endpoint field
4
→ Web monitoring
→ Low-level discovery rule filter regular expressions
→ URL field of dynamic URL dashboard widget
→ Trigger names, event names, operational data and
descriptions
→ Trigger URLs
→ Tag names and values
→ Script-type and Browser-type item, item prototype
and discovery rule parameter names and values
→ HTTP agent type item, item prototype and
discovery rule fields:
URL, Query fields, Request body, Headers, SSL
certificate file, SSL key file, Allowed hosts.
→ Manual host action scripts (including confirmation
text)
→ Manual event action scripts (including
confirmation text)
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
2
{HOST.IP} → Trigger-based notifications and commands Host IP address .
→ Problem update notifications and commands
→ Autoregistration notifications and commands This macro may be used with a numeric index e.g.
→ Internal notifications {HOST.IP<1-9>} to point to the first, second, third,
→ Map element labels, map URL names and values etc. host in a trigger expression. See indexed
1
→ Item key parameters macros.
→ Host interface IP/DNS
→ Trapper item ”Allowed hosts” field {IPADDRESS<1-9>} is deprecated.
→ Database monitoring additional parameters
→ SSH and Telnet scripts
→ JMX item endpoint field
4
→ Web monitoring
→ Low-level discovery rule filter regular expressions
→ URL field of dynamic URL dashboard widget
→ Trigger names, event names, operational data and
descriptions
→ Trigger URLs
→ Tag names and values
→ Script-type and Browser-type item, item prototype
and discovery rule parameter names and values
→ HTTP agent type item, item prototype and
discovery rule fields:
URL, Query fields, Request body, Headers, SSL
certificate file, SSL key file, Allowed hosts.
→ Manual host action scripts (including confirmation
text)
→ Manual event action scripts (including
confirmation text)
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{HOST.METADATA}
→ Autoregistration notifications and commands Host metadata.
Used only for active agent autoregistration.
{HOST.TARGET.NAME}
→ Trigger-based commands Visible name of the target host.
→ Problem update commands
→ Discovery commands
→ Autoregistration commands
Host groups
{HOSTGROUP.ID}
→ Map element labels, map URL names and values Host group ID.
Host inventory
{INVENTORY.ALIAS}
→ Trigger-based notifications and commands Alias field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.ALIAS<1-9>} to point to the first,
→ Map element labels, map URL names and values second, third, etc. host in a trigger expression. See
6
→ Script-type and Browser-type items indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.ASSET.TAG}
→ Trigger-based notifications and commands Asset tag field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.ASSET.TAG<1-9>} to point to the first,
→ Map element labels, map URL names and values second, third, etc. host in a trigger expression. See
6
→ Script-type and Browser-type items indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.CHASSIS}
→ Trigger-based notifications and commands Chassis field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.CHASSIS<1-9>} to point to the first,
→ Map element labels, map URL names and values second, third, etc. host in a trigger expression. See
6
→ Script-type and Browser-type items indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.CONTACT}
→ Trigger-based notifications and commands Contact field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.CONTACT<1-9>} to point to the first,
→ Map element labels, map URL names and values second, third, etc. host in a trigger expression. See
6
→ Script-type and Browser-type items indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts {PROFILE.CONTACT<1-9>} is deprecated.
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.CONTRACT.NUMBER}
→ Trigger-based notifications and commands Contract number field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.CONTRACT.NUMBER<1-9>} to point to
→ Map element labels, map URL names and values the first, second, third, etc. host in a trigger
6
→ Script-type and Browser-type items expression. See indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.DEPLOYMENT.STATUS}
→ Trigger-based notifications and commands Deployment status field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.DEPLOYMENT.STATUS<1-9>} to point to
→ Map element labels, map URL names and values the first, second, third, etc. host in a trigger
6
→ Script-type and Browser-type items expression. See indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.HARDWARE}
→ Trigger-based notifications and commands Hardware field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.HARDWARE<1-9>} to point to the first,
→ Map element labels, map URL names and values second, third, etc. host in a trigger expression. See
6
→ Script-type and Browser-type items indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts {PROFILE.HARDWARE<1-9>} is deprecated.
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.HARDWARE.FULL}
→ Trigger-based notifications and commands Hardware (Full details) field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.HARDWARE.FULL<1-9>} to point to the
→ Map element labels, map URL names and values first, second, third, etc. host in a trigger expression.
6
→ Script-type and Browser-type items See indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.HOST.NETMASK}
→ Trigger-based notifications and commands Host subnet mask field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.HOST.NETMASK<1-9>} to point to the
→ Map element labels, map URL names and values first, second, third, etc. host in a trigger expression.
6
→ Script-type and Browser-type items See indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.HOST.NETWORKS}
→ Trigger-based notifications and commands Host networks field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.HOST.NETWORKS<1-9>} to point to the
→ Map element labels, map URL names and values first, second, third, etc. host in a trigger expression.
6
→ Script-type and Browser-type items See indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.HOST.ROUTER}
→ Trigger-based notifications and commands Host router field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.HOST.ROUTER<1-9>} to point to the
→ Map element labels, map URL names and values first, second, third, etc. host in a trigger expression.
6
→ Script-type and Browser-type items See indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.HW.ARCH}
→ Trigger-based notifications and commands Hardware architecture field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.HW.ARCH<1-9>} to point to the first,
→ Map element labels, map URL names and values second, third, etc. host in a trigger expression. See
6
→ Script-type and Browser-type items indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.HW.DATE.DECOMM}
→ Trigger-based notifications and commands Date hardware decommissioned field in host
→ Problem update notifications and commands inventory.
→ Internal notifications
→ Tag names and values This macro may be used with a numeric index e.g.
→ Map element labels, map URL names and values {INVENTORY.HW.DATE.DECOMM<1-9>} to point to
6
→ Script-type and Browser-type items the first, second, third, etc. host in a trigger
6
→ Manual host action scripts expression. See indexed macros.
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.HW.DATE.EXPIRY}
→ Trigger-based notifications and commands Date hardware maintenance expires field in host
→ Problem update notifications and commands inventory.
→ Internal notifications
→ Tag names and values This macro may be used with a numeric index e.g.
→ Map element labels, map URL names and values {INVENTORY.HW.DATE.EXPIRY<1-9>} to point to the
6
→ Script-type and Browser-type items first, second, third, etc. host in a trigger expression.
6
→ Manual host action scripts See indexed macros.
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.HW.DATE.INSTALL}
→ Trigger-based notifications and commands Date hardware installed field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.HW.DATE.INSTALL<1-9>} to point to the
→ Map element labels, map URL names and values first, second, third, etc. host in a trigger expression.
6
→ Script-type and Browser-type items See indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.HW.DATE.PURCHASE}
→ Trigger-based notifications and commands Date hardware purchased field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.HW.DATE.PURCHASE<1-9>} to point to
→ Map element labels, map URL names and values the first, second, third, etc. host in a trigger
6
→ Script-type and Browser-type items expression. See indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.INSTALLER.NAME}
→ Trigger-based notifications and commands Installer name field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.INSTALLER.NAME<1-9>} to point to the
→ Map element labels, map URL names and values first, second, third, etc. host in a trigger expression.
6
→ Script-type and Browser-type items See indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.LOCATION}
→ Trigger-based notifications and commands Location field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.LOCATION<1-9>} to point to the first,
→ Map element labels, map URL names and values second, third, etc. host in a trigger expression. See
6
→ Script-type and Browser-type items indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts {PROFILE.LOCATION<1-9>} is deprecated.
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.LOCATION.LAT}
→ Trigger-based notifications and commands Location latitude field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.LOCATION.LAT<1-9>} to point to the
→ Map element labels, map URL names and values first, second, third, etc. host in a trigger expression.
6
→ Script-type and Browser-type items See indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.LOCATION.LON}
→ Trigger-based notifications and commands Location longitude field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.LOCATION.LON<1-9>} to point to the
→ Map element labels, map URL names and values first, second, third, etc. host in a trigger expression.
6
→ Script-type and Browser-type items See indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.MACADDRESS.A}
→ Trigger-based notifications and commands MAC address A field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.MACADDRESS.A<1-9>} to point to the
→ Map element labels, map URL names and values first, second, third, etc. host in a trigger expression.
6
→ Script-type and Browser-type items See indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts {PROFILE.MACADDRESS<1-9>} is deprecated.
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.MACADDRESS.B}
→ Trigger-based notifications and commands MAC address B field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.MACADDRESS.B<1-9>} to point to the
→ Map element labels, map URL names and values first, second, third, etc. host in a trigger expression.
6
→ Script-type and Browser-type items See indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.MODEL}
→ Trigger-based notifications and commands Model field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.MODEL<1-9>} to point to the first,
→ Map element labels, map URL names and values second, third, etc. host in a trigger expression. See
6
→ Script-type and Browser-type items indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.NAME}
→ Trigger-based notifications and commands Name field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.NAME<1-9>} to point to the first,
→ Map element labels, map URL names and values second, third, etc. host in a trigger expression. See
6
→ Script-type and Browser-type items indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts {PROFILE.NAME<1-9>} is deprecated.
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.NOTES}
→ Trigger-based notifications and commands Notes field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.NOTES<1-9>} to point to the first,
→ Map element labels, map URL names and values second, third, etc. host in a trigger expression. See
6
→ Script-type and Browser-type items indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts {PROFILE.NOTES<1-9>} is deprecated.
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.OOB.IP}
→ Trigger-based notifications and commands OOB IP address field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.OOB.IP<1-9>} to point to the first,
→ Map element labels, map URL names and values second, third, etc. host in a trigger expression. See
6
→ Script-type and Browser-type items indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.OOB.NETMASK}
→ Trigger-based notifications and commands OOB subnet mask field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.OOB.NETMASK<1-9>} to point to the
→ Map element labels, map URL names and values first, second, third, etc. host in a trigger expression.
6
→ Script-type and Browser-type items See indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.OOB.ROUTER}
→ Trigger-based notifications and commands OOB router field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.OOB.ROUTER<1-9>} to point to the
→ Map element labels, map URL names and values first, second, third, etc. host in a trigger expression.
6
→ Script-type and Browser-type items See indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.POC.PRIMARY.NAME}
→ Trigger-based notifications and commands Primary POC name field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.POC.PRIMARY.NAME<1-9>} to point to
→ Map element labels, map URL names and values the first, second, third, etc. host in a trigger
6
→ Script-type and Browser-type items expression. See indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.POC.PRIMARY.NOTES}
→ Trigger-based notifications and commands Primary POC notes field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.POC.PRIMARY.NOTES<1-9>} to point to
→ Map element labels, map URL names and values the first, second, third, etc. host in a trigger
6
→ Script-type and Browser-type items expression. See indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.POC.PRIMARY.PHONE.A}
→ Trigger-based notifications and commands Primary POC phone A field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.POC.PRIMARY.PHONE.A<1-9>} to point
→ Map element labels, map URL names and values to the first, second, third, etc. host in a trigger
6
→ Script-type and Browser-type items expression. See indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.POC.PRIMARY
→ Trigger-based
.PHONE.B}
notifications and commands Primary POC phone B field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.POC.PRIMARY.PHONE.B<1-9>} to point
→ Map element labels, map URL names and values to the first, second, third, etc. host in a trigger
6
→ Script-type and Browser-type items expression. See indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.POC.PRIMARY
→ Trigger-based
.SCREEN}
notifications and commands Primary POC screen name field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.POC.PRIMARY.SCREEN<1-9>} to point to
→ Map element labels, map URL names and values the first, second, third, etc. host in a trigger
6
→ Script-type and Browser-type items expression. See indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
1924
Macro Supported in Description
{INVENTORY.POC.SECONDARY
→ Trigger-based
.CELL}
notifications and commands Secondary POC cell field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.POC.SECONDARY.CELL<1-9>} to point
→ Map element labels, map URL names and values to the first, second, third, etc. host in a trigger
6
→ Script-type and Browser-type items expression. See indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.POC.SECONDARY
→ Trigger-based
.EMAIL}
notifications and commands Secondary POC email field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.POC.SECONDARY.EMAIL<1-9>} to point
→ Map element labels, map URL names and values to the first, second, third, etc. host in a trigger
6
→ Script-type and Browser-type items expression. See indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.POC.SECONDARY
→ Trigger-based
.NAME}
notifications and commands Secondary POC name field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.POC.SECONDARY.NAME<1-9>} to point
→ Map element labels, map URL names and values to the first, second, third, etc. host in a trigger
6
→ Script-type and Browser-type items expression. See indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.POC.SECONDARY
→ Trigger-based
.NOTES}
notifications and commands Secondary POC notes field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.POC.SECONDARY.NOTES<1-9>} to point
→ Map element labels, map URL names and values to the first, second, third, etc. host in a trigger
6
→ Script-type and Browser-type items expression. See indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.POC.SECONDARY
→ Trigger-based
.PHONE.A}
notifications and commands Secondary POC phone A field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.POC.SECONDARY.PHONE.A<1-9>} to
→ Map element labels, map URL names and values point to the first, second, third, etc. host in a trigger
6
→ Script-type and Browser-type items expression. See indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
1925
Macro Supported in Description
{INVENTORY.POC.SECONDARY
→ Trigger-based
.PHONE.B}
notifications and commands Secondary POC phone B field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.POC.SECONDARY.PHONE.B<1-9>} to
→ Map element labels, map URL names and values point to the first, second, third, etc. host in a trigger
6
→ Script-type and Browser-type items expression. See indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.POC.SECONDARY
→ Trigger-based
.SCREEN}
notifications and commands Secondary POC screen name field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.POC.SECONDARY.SCREEN<1-9>} to
→ Map element labels, map URL names and values point to the first, second, third, etc. host in a trigger
6
→ Script-type and Browser-type items expression. See indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.SERIALNO.A}
→ Trigger-based notifications and commands Serial number A field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.SERIALNO.A<1-9>} to point to the first,
→ Map element labels, map URL names and values second, third, etc. host in a trigger expression. See
6
→ Script-type and Browser-type items indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts {PROFILE.SERIALNO<1-9>} is deprecated.
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.SERIALNO.B}
→ Trigger-based notifications and commands Serial number B field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.SERIALNO.B<1-9>} to point to the first,
→ Map element labels, map URL names and values second, third, etc. host in a trigger expression. See
6
→ Script-type and Browser-type items indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.SITE.ADDRESS.A}
→ Trigger-based notifications and commands Site address A field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.SITE.ADDRESS.A<1-9>} to point to the
→ Map element labels, map URL names and values first, second, third, etc. host in a trigger expression.
6
→ Script-type and Browser-type items See indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
1926
Macro Supported in Description
{INVENTORY.SITE.ADDRESS.B}
→ Trigger-based notifications and commands Site address B field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.SITE.ADDRESS.B<1-9>} to point to the
→ Map element labels, map URL names and values first, second, third, etc. host in a trigger expression.
6
→ Script-type and Browser-type items See indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.SITE.ADDRESS.C}
→ Trigger-based notifications and commands Site address C field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.SITE.ADDRESS.C<1-9>} to point to the
→ Map element labels, map URL names and values first, second, third, etc. host in a trigger expression.
6
→ Script-type and Browser-type items See indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.SITE.CITY}
→ Trigger-based notifications and commands Site city field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.SITE.CITY<1-9>} to point to the first,
→ Map element labels, map URL names and values second, third, etc. host in a trigger expression. See
6
→ Script-type and Browser-type items indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.SITE.COUNTRY}
→ Trigger-based notifications and commands Site country field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.SITE.COUNTRY<1-9>} to point to the
→ Map element labels, map URL names and values first, second, third, etc. host in a trigger expression.
6
→ Script-type and Browser-type items See indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.SITE.NOTES}
→ Trigger-based notifications and commands Site notes field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.SITE.NOTES<1-9>} to point to the first,
→ Map element labels, map URL names and values second, third, etc. host in a trigger expression. See
6
→ Script-type and Browser-type items indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
1927
Macro Supported in Description
{INVENTORY.SITE.RACK}
→ Trigger-based notifications and commands Site rack location field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.SITE.RACK<1-9>} to point to the first,
→ Map element labels, map URL names and values second, third, etc. host in a trigger expression. See
6
→ Script-type and Browser-type items indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.SITE.STATE}
→ Trigger-based notifications and commands Site state/province field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.SITE.STATE<1-9>} to point to the first,
→ Map element labels, map URL names and values second, third, etc. host in a trigger expression. See
6
→ Script-type and Browser-type items indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.SITE.ZIP}
→ Trigger-based notifications and commands Site ZIP/postal field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.SITE.ZIP<1-9>} to point to the first,
→ Map element labels, map URL names and values second, third, etc. host in a trigger expression. See
6
→ Script-type and Browser-type items indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.SOFTWARE}
→ Trigger-based notifications and commands Software field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.SOFTWARE<1-9>} to point to the first,
→ Map element labels, map URL names and values second, third, etc. host in a trigger expression. See
6
→ Script-type and Browser-type items indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts {PROFILE.SOFTWARE<1-9>} is deprecated.
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.SOFTWARE.APP.A}
→ Trigger-based notifications and commands Software application A field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.SOFTWARE.APP.A<1-9>} to point to the
→ Map element labels, map URL names and values first, second, third, etc. host in a trigger expression.
6
→ Script-type and Browser-type items See indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
1928
Macro Supported in Description
{INVENTORY.SOFTWARE.APP.B}
→ Trigger-based notifications and commands Software application B field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.SOFTWARE.APP.B<1-9>} to point to the
→ Map element labels, map URL names and values first, second, third, etc. host in a trigger expression.
6
→ Script-type and Browser-type items See indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.SOFTWARE.APP.C}
→ Trigger-based notifications and commands Software application C field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.SOFTWARE.APP.C<1-9>} to point to the
→ Map element labels, map URL names and values first, second, third, etc. host in a trigger expression.
6
→ Script-type and Browser-type items See indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.SOFTWARE.APP.D}
→ Trigger-based notifications and commands Software application D field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.SOFTWARE.APP.D<1-9>} to point to the
→ Map element labels, map URL names and values first, second, third, etc. host in a trigger expression.
6
→ Script-type and Browser-type items See indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.SOFTWARE.APP.E}
→ Trigger-based notifications and commands Software application E field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.SOFTWARE.APP.E<1-9>} to point to the
→ Map element labels, map URL names and values first, second, third, etc. host in a trigger expression.
6
→ Script-type and Browser-type items See indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.SOFTWARE.FULL}
→ Trigger-based notifications and commands Software (Full details) field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.SOFTWARE.FULL<1-9>} to point to the
→ Map element labels, map URL names and values first, second, third, etc. host in a trigger expression.
6
→ Script-type and Browser-type items See indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
1929
Macro Supported in Description
{INVENTORY.TAG}
→ Trigger-based notifications and commands Tag field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.TAG<1-9>} to point to the first, second,
→ Map element labels, map URL names and values third, etc. host in a trigger expression. See indexed
6
→ Script-type and Browser-type items macros.
6
→ Manual host action scripts
→ Manual event action scripts {PROFILE.TAG<1-9>} is deprecated.
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.TYPE}
→ Trigger-based notifications and commands Type field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.TYPE<1-9>} to point to the first,
→ Map element labels, map URL names and values second, third, etc. host in a trigger expression. See
6
→ Script-type and Browser-type items indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts {PROFILE.DEVICETYPE<1-9>} is deprecated.
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.TYPE.FULL}
→ Trigger-based notifications and commands Type (Full details) field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.TYPE.FULL<1-9>} to point to the first,
→ Map element labels, map URL names and values second, third, etc. host in a trigger expression. See
6
→ Script-type and Browser-type items indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.URL.A}
→ Trigger-based notifications and commands URL A field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.URL.A<1-9>} to point to the first,
→ Map element labels, map URL names and values second, third, etc. host in a trigger expression. See
6
→ Script-type and Browser-type items indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.URL.B}
→ Trigger-based notifications and commands URL B field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.URL.B<1-9>} to point to the first,
→ Map element labels, map URL names and values second, third, etc. host in a trigger expression. See
6
→ Script-type and Browser-type items indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
1930
Macro Supported in Description
{INVENTORY.URL.C}
→ Trigger-based notifications and commands URL C field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.URL.C<1-9>} to point to the first,
→ Map element labels, map URL names and values second, third, etc. host in a trigger expression. See
6
→ Script-type and Browser-type items indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
{INVENTORY.VENDOR}
→ Trigger-based notifications and commands Vendor field in host inventory.
→ Problem update notifications and commands
→ Internal notifications This macro may be used with a numeric index e.g.
→ Tag names and values {INVENTORY.VENDOR<1-9>} to point to the first,
→ Map element labels, map URL names and values second, third, etc. host in a trigger expression. See
6
→ Script-type and Browser-type items indexed macros.
6
→ Manual host action scripts
→ Manual event action scripts
→ Description parameter in Item value and Gauge
widget
→ Primary/Secondary label Text parameter in
Honeycomb widget
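As a hedged illustration (hypothetical host and inventory values), several of these inventory macros could be combined in a trigger-based notification message template:
Problem on {HOST.NAME1}
Location: rack {INVENTORY.SITE.RACK1}, {INVENTORY.SITE.CITY1}
Contact: {INVENTORY.POC.PRIMARY.NAME1} ({INVENTORY.POC.PRIMARY.PHONE.A1})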
Items

{ITEM.DESCRIPTION} - Description of the Nth item in the trigger expression that caused a notification. May be used with a numeric index, e.g. {ITEM.DESCRIPTION<1-9>}, to point to the first, second, third, etc. host in a trigger expression (see indexed macros).
Supported in: trigger-based notifications and commands; problem update notifications and commands; internal notifications; manual event action scripts; Description parameter in Item value and Gauge widget; Primary/Secondary label Text parameter in Honeycomb widget.

{ITEM.DESCRIPTION.ORIG} - Description (with macros unresolved) of the Nth item in the trigger expression that caused a notification. May be used with a numeric index, e.g. {ITEM.DESCRIPTION.ORIG<1-9>} (see indexed macros).
Supported in: the same locations as {ITEM.DESCRIPTION}.

{ITEM.ID} - Numeric ID of the Nth item in the trigger expression that caused a notification. May be used with a numeric index, e.g. {ITEM.ID<1-9>} (see indexed macros).
Supported in: trigger-based notifications and commands; problem update notifications and commands; internal notifications; Script-type and Browser-type item, item prototype and discovery rule parameter names and values; HTTP agent type item, item prototype and discovery rule fields (URL, query fields, request body, headers, proxy, SSL certificate file, SSL key file); manual event action scripts; Description parameter in Item value and Gauge widget; Primary/Secondary label Text parameter in Honeycomb widget.

{ITEM.KEY} - Key of the Nth item in the trigger expression that caused a notification. May be used with a numeric index, e.g. {ITEM.KEY<1-9>} (see indexed macros). {TRIGGER.KEY} is deprecated. Macro functions are not supported for this macro if it is used as a placeholder in the first parameter of a history function, for example, last(/{HOST.HOST}/{ITEM.KEY}).
Supported in: the same locations as {ITEM.ID}.

{ITEM.KEY.ORIG} - Original key (with macros not expanded) of the Nth item in the trigger expression that caused a notification (4). May be used with a numeric index, e.g. {ITEM.KEY.ORIG<1-9>} (see indexed macros).
Supported in: trigger-based notifications and commands; problem update notifications and commands; internal notifications; Script-type and Browser-type item, item prototype and discovery rule parameter names and values; HTTP agent type item, item prototype and discovery rule fields (URL, Query fields, Request body, Headers, Proxy, SSL certificate file, SSL key file, Allowed hosts); manual event action scripts; Description parameter in Item value and Gauge widget; Primary/Secondary label Text parameter in Honeycomb widget.

{ITEM.LASTVALUE} - The latest value of the Nth item in the trigger expression that caused a notification. It will resolve to *UNKNOWN* in the frontend if the latest history value has been collected more than the Max history display period time ago (set in the Administration → General menu section). When used in the problem name, the macro will not resolve to the latest item value when viewing problem events; instead, it will keep the item value from the time when the problem happened. It is an alias to last(/{HOST.HOST}/{ITEM.KEY}).
Supported in: trigger-based notifications and commands; problem update notifications and commands; trigger names, event names, operational data and descriptions; tag names and values; trigger URLs; manual event action scripts; Description parameter in Item value and Gauge widget; Primary/Secondary label Text parameter in Honeycomb widget.
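As a hedged illustration (hypothetical trigger-based notification message template), the item macros above are commonly combined along these lines:
{ITEM.NAME1} on {HOST.NAME1}: {ITEM.LASTVALUE1}
(item key: {ITEM.KEY1})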
Each {ITEM.LOG.*} macro below is supported in: trigger-based notifications and commands; problem update notifications and commands; trigger names, operational data and descriptions; trigger URLs; event tags and values; manual event action scripts; Description parameter in Item value and Gauge widget; Primary/Secondary label Text parameter in Honeycomb widget. Each may be used with a numeric index, e.g. {ITEM.LOG.DATE<1-9>}, to point to the first, second, third, etc. host in a trigger expression (see indexed macros).

{ITEM.LOG.DATE} - Date of the log item event.
{ITEM.LOG.EVENTID} - ID of the event in the event log. For Windows event log monitoring only.
{ITEM.LOG.NSEVERITY} - Numeric severity of the event in the event log. For Windows event log monitoring only.
{ITEM.LOG.SEVERITY} - Verbal severity of the event in the event log. For Windows event log monitoring only.
{ITEM.LOG.SOURCE} - Source of the event in the event log. For Windows event log monitoring only.
{ITEM.LOG.TIME} - Time of the log item event.

{ITEM.NAME} - Name of the item with all macros resolved. May be used with a numeric index, e.g. {ITEM.NAME<1-9>} (see indexed macros).
Supported in: trigger-based notifications and commands; problem update notifications and commands; internal notifications; manual event action scripts; Description parameter in Item value and Gauge widget; Primary/Secondary label Text parameter in Honeycomb widget.

{ITEM.NAME.ORIG} - Resolves to the original name (i.e. without macros resolved) of the item. May be used with a numeric index, e.g. {ITEM.NAME.ORIG<1-9>} (see indexed macros).
Supported in: the same locations as {ITEM.NAME}.

{ITEM.STATE} - The latest state of the Nth item in the trigger expression that caused a notification. Possible values: Not supported and Normal. May be used with a numeric index, e.g. {ITEM.STATE<1-9>} (see indexed macros).
Supported in: item-based internal notifications; Description parameter in Item value and Gauge widget; Primary/Secondary label Text parameter in Honeycomb widget.

{ITEM.STATE.ERROR} - Error message with details why an item became unsupported.
Supported in: item-based internal notifications.
Each {LLDRULE.*} macro below is supported in LLD-rule based internal notifications.

{LLDRULE.DESCRIPTION} - Description of the low-level discovery rule which caused a notification.
{LLDRULE.DESCRIPTION.ORIG} - Description (with macros unresolved) of the low-level discovery rule which caused a notification.
{LLDRULE.ID} - Numeric ID of the low-level discovery rule which caused a notification.
{LLDRULE.KEY} - Key of the low-level discovery rule which caused a notification.
{LLDRULE.KEY.ORIG} - Original key (with macros not expanded) of the low-level discovery rule which caused a notification.
{LLDRULE.NAME} - Name of the low-level discovery rule (with macros resolved) that caused a notification.
{LLDRULE.NAME.ORIG} - Original name (i.e. without macros resolved) of the low-level discovery rule that caused a notification.
{LLDRULE.STATE} - The latest state of the low-level discovery rule. Possible values: Not supported and Normal.
{LLDRULE.STATE.ERROR} - Error message with details why an LLD rule became unsupported.
Maps
{MAP.ID} - Network map ID.
Supported in: map element labels, map URL names and values.
{MAP.NAME} - Network map name.
Supported in: map element labels, map URL names and values; Text field in map shapes.

Proxies
{PROXY.DESCRIPTION} - Description of the proxy. Resolves to either:
1) proxy of the Nth item in the trigger expression (in trigger-based notifications). You may use indexed macros here.
2) proxy which executed discovery (in discovery notifications). Use {PROXY.DESCRIPTION} here, without indexing.
3) proxy to which an active agent registered (in autoregistration notifications). Use {PROXY.DESCRIPTION} here, without indexing.
Supported in: trigger-based notifications and commands; problem update notifications and commands; discovery notifications and commands; autoregistration notifications and commands; internal notifications; manual event action scripts.

Scripts
{MANUALINPUT} - Manual input value specified by the user at script execution time.
Supported in: manual host action scripts, confirmation text, and the URL field in URL scripts; manual event action scripts, confirmation text, and the URL field in URL scripts.
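A hedged sketch of where this macro could appear, assuming a hypothetical URL-type manual host action script that prompts the user for a search string:
URL: https://kb.example.com/search?query={MANUALINPUT}
Confirmation text: Search the knowledge base for "{MANUALINPUT}"?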
Services
Each {SERVICE.*} macro below is supported in service-based notifications and commands and in service update notifications and commands.

{SERVICE.DESCRIPTION} - Description of the service (with macros resolved).
{SERVICE.NAME} - Name of the service (with macros resolved).
{SERVICE.ROOTCAUSE} - List of trigger problem events that caused a service to fail, sorted by severity and host name. Includes the following details: host name, event name, severity, age, service tags and values.
{SERVICE.TAGS} - A comma separated list of service event tags. Service event tags can be defined in the service configuration section Tags. Expanded to an empty string if no tags exist.
{SERVICE.TAGSJSON} - A JSON array containing service event tag objects. Service event tags can be defined in the service configuration section Tags. Expanded to an empty array if no tags exist.
{SERVICE.TAGS.<tag name>} - Service event tag value referenced by the tag name. Service event tags can be defined in the service configuration section Tags. A tag name containing non-alphanumeric characters (including non-English multibyte-UTF characters) should be double quoted. Quotes and backslashes inside a quoted tag name must be escaped with a backslash.
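For example (hypothetical tag names), a plain tag name can be referenced directly, while a tag name containing spaces must be double quoted:
{SERVICE.TAGS.component}
{SERVICE.TAGS."Jira ID"}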
Triggers
{TRIGGER.DESCRIPTION} - Trigger description. All macros supported in a trigger description will be expanded if {TRIGGER.DESCRIPTION} is used in notification text. {TRIGGER.COMMENT} is deprecated.
Supported in: trigger-based notifications and commands; problem update notifications and commands; trigger-based internal notifications; manual event action scripts.

{TRIGGER.EXPRESSION.EXPLAIN} - Partially evaluated trigger expression. Item-based functions are evaluated and replaced by the results at the time of event generation, whereas all other functions are displayed as written in the expression. Can be used for debugging trigger expressions.
Supported in: trigger-based notifications and commands; problem update notifications and commands; manual event action scripts; event names.

{TRIGGER.EXPRESSION.RECOVERY.EXPLAIN} - Partially evaluated trigger recovery expression. Item-based functions are evaluated and replaced by the results at the time of event generation, whereas all other functions are displayed as written in the expression. Can be used for debugging trigger recovery expressions.
Supported in: problem recovery notifications and commands; problem update notifications and commands; manual event action scripts.
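A hedged illustration (hypothetical host, item and values): for a trigger whose expression combines an item-based function with a non-item function, {TRIGGER.EXPRESSION.EXPLAIN} in a notification might resolve as follows, with the last() result replaced by the value collected at event generation time and time() shown as written:
Expression: last(/db-01/system.cpu.load)>5 and time()<235959
{TRIGGER.EXPRESSION.EXPLAIN}: 7.42>5 and time()<235959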
{TRIGGER.EVENTS.ACK} - Number of acknowledged events for a map element in maps, or for the trigger which generated the current event in notifications.
Supported in: trigger-based notifications and commands; problem update notifications and commands; map element labels; manual event action scripts.

{TRIGGER.EVENTS.PROBLEM.ACK} - Number of acknowledged PROBLEM events for all triggers disregarding their state.
Supported in: trigger-based notifications and commands; problem update notifications and commands; map element labels; manual event action scripts.

{TRIGGER.EVENTS.PROBLEM.UNACK} - Number of unacknowledged PROBLEM events for all triggers disregarding their state.
Supported in: trigger-based notifications and commands; problem update notifications and commands; map element labels; manual event action scripts.

{TRIGGER.EVENTS.UNACK} - Number of unacknowledged events for a map element in maps, or for the trigger which generated the current event in notifications.
Supported in: trigger-based notifications and commands; problem update notifications and commands; map element labels; manual event action scripts.

{TRIGGER.HOSTGROUP.NAME} - A sorted (by SQL query), comma-space separated list of host groups in which the trigger is defined.
Supported in: trigger-based notifications and commands; problem update notifications and commands; trigger-based internal notifications; manual event action scripts.

{TRIGGER.PROBLEM.EVENTS.PROBLEM.ACK} - Number of acknowledged PROBLEM events for triggers in PROBLEM state.
Supported in: map element labels.

{TRIGGER.PROBLEM.EVENTS.PROBLEM.UNACK} - Number of unacknowledged PROBLEM events for triggers in PROBLEM state.
Supported in: map element labels.

{TRIGGER.EXPRESSION} - Trigger expression.
Supported in: trigger-based notifications and commands; problem update notifications and commands; trigger-based internal notifications; manual event action scripts.

{TRIGGER.EXPRESSION.RECOVERY} - Trigger recovery expression if OK event generation in the trigger configuration is set to 'Recovery expression'; otherwise an empty string is returned.
Supported in: trigger-based notifications and commands; problem update notifications and commands; trigger-based internal notifications; manual event action scripts.

{TRIGGER.ID} - Numeric trigger ID which triggered this action. Supported in trigger tag values.
Supported in: trigger-based notifications and commands; problem update notifications and commands; trigger-based internal notifications; map element labels, map URL names and values; trigger URLs; trigger tag value; manual event action scripts.

{TRIGGER.NAME} - Name of the trigger (with macros resolved). Note that since 4.0.0 {EVENT.NAME} can be used in actions to display the triggered event/problem name with macros resolved.
Supported in: trigger-based notifications and commands; problem update notifications and commands; trigger-based internal notifications; manual event action scripts.

{TRIGGER.NAME.ORIG} - Original name of the trigger (i.e. without macros resolved).
Supported in: trigger-based notifications and commands; problem update notifications and commands; trigger-based internal notifications; manual event action scripts.

{TRIGGER.NSEVERITY} - Numerical trigger severity. Possible values: 0 - Not classified, 1 - Information, 2 - Warning, 3 - Average, 4 - High, 5 - Disaster.
Supported in: trigger-based notifications and commands; problem update notifications and commands; trigger-based internal notifications; manual event action scripts.

{TRIGGER.SEVERITY} - Trigger severity name. Can be defined in Administration → General → Trigger displaying options.
Supported in: trigger-based notifications and commands; problem update notifications and commands; trigger-based internal notifications; manual event action scripts.

{TRIGGER.STATE} - The latest state of the trigger. Possible values: Unknown and Normal.
Supported in: trigger-based internal notifications.

{TRIGGER.STATE.ERROR} - Error message with details why a trigger became unsupported.
Supported in: trigger-based internal notifications.

{TRIGGER.TEMPLATE.NAME} - A sorted (by SQL query), comma-space separated list of templates in which the trigger is defined, or *UNKNOWN* if the trigger is defined in a host.
Supported in: trigger-based notifications and commands; problem update notifications and commands; trigger-based internal notifications; manual event action scripts.

{TRIGGER.URL} - Trigger URL.
Supported in: trigger-based notifications and commands; problem update notifications and commands; trigger-based internal notifications; manual event action scripts.

{TRIGGER.URL.NAME} - The label for the trigger URL.
Supported in: trigger-based notifications and commands; problem update notifications and commands; trigger-based internal notifications; manual event action scripts.

{TRIGGER.VALUE} - Current trigger numeric value: 0 - trigger is in OK state, 1 - trigger is in PROBLEM state.
Supported in: trigger-based notifications and commands; problem update notifications and commands; trigger expressions; manual event action scripts.

{TRIGGERS.UNACK} - Number of unacknowledged triggers for a map element, disregarding trigger state. A trigger is considered to be unacknowledged if at least one of its PROBLEM events is unacknowledged.
Supported in: map element labels.

{TRIGGERS.PROBLEM.UNACK} - Number of unacknowledged PROBLEM triggers for a map element. A trigger is considered to be unacknowledged if at least one of its PROBLEM events is unacknowledged.
Supported in: map element labels.

{TRIGGERS.ACK} - Number of acknowledged triggers for a map element, disregarding trigger state. A trigger is considered to be acknowledged if all of its PROBLEM events are acknowledged.
Supported in: map element labels.

{TRIGGERS.PROBLEM.ACK} - Number of acknowledged PROBLEM triggers for a map element. A trigger is considered to be acknowledged if all of its PROBLEM events are acknowledged.
Supported in: map element labels.
Users
{USER.FULLNAME} - Name, surname and username of the user who added the event acknowledgment or started the script.
Supported in: problem update notifications and commands; manual host action scripts (including confirmation text); manual event action scripts (including confirmation text).

{USER.NAME} - Name of the user who started the script.
Supported in: manual host action scripts (including confirmation text); manual event action scripts (including confirmation text).

{USER.SURNAME} - Surname of the user who started the script.
Supported in: manual host action scripts (including confirmation text); manual event action scripts (including confirmation text).

{USER.USERNAME} - Username of the user who started the script. {USER.ALIAS}, supported before Zabbix 5.4.0, is now deprecated.
Supported in: manual host action scripts (including confirmation text); manual event action scripts (including confirmation text).
Footnotes
1. The {HOST.*} macros supported in item key parameters will resolve to the interface that is selected for the item. When used in items without interfaces they will resolve to either the Zabbix agent, SNMP, JMX or IPMI interface of the host in this order of priority, or to ’UNKNOWN’ if the host does not have any interface.
2. In global scripts, interface IP/DNS fields and web scenarios the macro will resolve to the main agent interface; however, if it is not present, the main SNMP interface will be used. If SNMP is also not present, the main JMX interface will be used. If JMX is not present either, the main IPMI interface will be used. If the host does not have any interface, the macro resolves to ’UNKNOWN’.
3. Only the avg, last, max and min functions, with seconds as parameter, are supported in this macro in map labels.
4. {HOST.*} macros are supported in web scenario Variables, Headers, SSL certificate file and SSL key file fields and in scenario step URL, Post, Headers and Required string fields. Since Zabbix 5.2.2, {HOST.*} macros are no longer supported in web scenario Name and web scenario step Name fields.
5. Only the avg, last, max and min functions, with seconds as parameter, are supported within this macro in graph names. The {HOST.HOST<1-9>} macro can be used as host within the macro. For example:
last(/Cisco switch/ifAlias[{#SNMPINDEX}])
last(/{HOST.HOST}/ifAlias[{#SNMPINDEX}])
6. Supported in Script-type and Browser-type items and manual host action scripts for Zabbix server and Zabbix proxy.
Indexed macros
The indexed macro syntax of {MACRO<1-9>} works only in the context of trigger expressions. It can be used to reference hosts
or functions in the order in which they appear in the expression. Macros like {HOST.IP1}, {HOST.IP2}, {HOST.IP3} will resolve to the
IP of the first, second, and third host in the trigger expression (providing the trigger expression contains those hosts). Macros like
{FUNCTION.VALUE1}, {FUNCTION.VALUE2}, {FUNCTION.VALUE3} will resolve to the value of the first, second, and third item-based
function in the trigger expression at the time of the event (providing the trigger expression contains those functions).
Additionally the {HOST.HOST<1-9>} macro is also supported within the {?func(/host/key,param)} expression macro in
graph names. For example, {?func(/{HOST.HOST2}/key,param)} in the graph name will refer to the host of the second
item in the graph.
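As a hedged sketch (hypothetical hosts and items), a trigger defined on two hosts could use indexed macros, for example, in its name or in trigger-based notifications:
Expression: last(/web-01/net.if.in[eth0])>50M or last(/web-02/net.if.in[eth0])>50M
Name: High inbound traffic on {HOST.NAME1} ({HOST.IP1}) or {HOST.NAME2} ({HOST.IP2})
Here {HOST.IP1} and {HOST.IP2} resolve to the IP addresses of web-01 and web-02 respectively, while {FUNCTION.VALUE1} and {FUNCTION.VALUE2} would resolve to the two last() results at the time of the event.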
Warning:
Indexed macros will not resolve in any other context, except the two cases mentioned here. For other contexts, use macros without index (i.e. {HOST.HOST}, {HOST.IP}, etc.) instead.
Overview
This section contains a list of locations where user-definable macros are supported.
Note:
Only global-level user macros are supported for Actions, Network discovery, Proxies and all locations listed under Other
locations section of this page. In the mentioned locations, host-level and template-level macros will not be resolved.
Note:
To customize macro values (for example, shorten or extract specific substrings), you can use macro functions.
Actions
Location - Multiple macros/mix with text (1)

Hosts/host prototypes
In a host and host prototype configuration, user macros can be used in the following fields:
Location - Multiple macros/mix with text (1)

Items/item prototypes
In an item or an item prototype configuration, user macros can be used in the following fields:
Item name - yes
Item key parameters - yes
Update interval - no
Custom intervals - no
Timeout (available for supported item types) - no
Store up to (for history and trends) - no
Description - yes
Calculated item:
  Formula - yes
Database monitor:
  Username - yes
  Password - yes
  SQL query - yes
HTTP agent:
  URL (3) - yes
  Query fields - yes
  Request body - yes
  Headers (names and values) - yes
  Required status codes - yes
  HTTP proxy - yes
  HTTP authentication username - yes
  HTTP authentication password - yes
  SSL certificate file - yes
  SSL key file - yes
  SSL key password - yes
  Allowed hosts - yes
JMX agent:
  JMX endpoint - yes
Script item:
  Parameter names and values - yes
Browser item:
  Parameter names and values - yes
SNMP agent:
  SNMP OID - yes
SSH agent:
  Username - yes
  Public key file - yes
Low-level discovery
In a low-level discovery rule, user macros can be used in the following fields:
Location - Multiple macros/mix with text (1)
Filters:
  Regular expression - yes
Overrides:
  Filters: regular expression - yes
  Operations: update interval (for item prototypes) - no
  Operations: history storage period (for item prototypes) - no
  Operations: trend storage period (for item prototypes) - no
Network discovery
In a network discovery rule, user macros can be used in the following fields:
Location - Multiple macros/mix with text (1)
Update interval - no
SNMP v1, v2:
  SNMP community - yes
  SNMP OID - yes
SNMP v3:
  Context name - yes
  Security name - yes
  Authentication passphrase - yes
  Privacy passphrase - yes
  SNMP OID - yes
Proxies
Location - Multiple macros/mix with text (1)

Proxy groups
In a proxy group configuration, user macros can be used in the following fields:
Location - Multiple macros/mix with text (1)
Failover period - no
Minimum number of proxies - no
Templates
Location - Multiple macros/mix with text (1)
Tags (2):
  Tag names - yes
  Tag values - yes
Triggers
Location - Multiple macros/mix with text (1)
Name - yes
Operational data - yes
Expression - yes (only in constants and function parameters; secret macros are not supported)
Tag for matching - yes
Menu entry name - yes
Menu entry URL (3) - yes
Description - yes
Tags (2):
  Tag names - yes
  Tag values - yes
Web scenario
In a web scenario configuration, user macros can be used in the following fields:
Location - Multiple macros/mix with text (1)
Name - yes
Update interval - no
Agent - yes
HTTP proxy - yes
Variables (values only) - yes
Headers (names and values) - yes
Steps:
  Name - yes
  URL (3) - yes
  Variables (values only) - yes
  Headers (names and values) - yes
  Timeout - no
  Required string - yes
  Required status codes - no
Authentication:
  User - yes
  Password - yes
SSL certificate - yes
SSL key file - yes
SSL key password - yes
Tags (2):
  Tag names - yes
  Tag values - yes
Other locations
In addition to the locations listed here, user macros can be used in the following fields:
Global scripts (URL, script, SSH, Telnet, IPMI), including confirmation text - yes
Webhooks:
  JavaScript script - no
  JavaScript script parameter name - no
  JavaScript script parameter value - yes
Dashboards:
  Description parameter in Item value and Gauge dashboard widget - yes
  Primary/Secondary label Text parameter in Honeycomb dashboard widget - yes
  URL parameter in URL dashboard widget (3) - yes
Users → Users → Media:
  When active - no
Administration → General → GUI:
  Working time - no
Administration → General → Timeouts:
  Timeouts for item types - no
Administration → General → Connectors:
  URL - yes
  Username - yes
  Password - yes
  Bearer token - yes
  Timeout - no
  HTTP proxy - yes
  SSL certificate file - yes
  SSL key file - yes
  SSL key password - yes
Alerts → Media types → Message templates:
  Subject - yes
  Message - yes
Alerts → Media types → Script:
  Script parameters - yes
Alerts → Media types → Media type:
  Username and Password fields for the Email media type (when Authentication is set to ”Username and password”; secret macros recommended) - yes
For a complete list of all macros supported in Zabbix, see supported macros.
Footnotes
1. If multiple macros in a field or macros mixed with text are not supported for the location, a single macro has to fill the whole field.
2. Macros used in tag names and values are resolved only during the event generation process.
3. URLs that contain a secret macro will not work, as the macro in them will be resolved as ”******”.
8 Unit symbols
Overview
Working with large values such as ”86400”, ”104857600”, or ”1000000” can be challenging and can lead to errors. Therefore,
Zabbix supports unit symbols (suffixes) that function as value multipliers.
The use of suffixes can simplify, for example, the configuration of trigger expressions, making them easier to understand and
maintain.
Trigger expressions without suffixes:
last(/host/system.uptime)<86400
avg(/host/system.cpu.load,600s)<10
last(/host/vm.memory.size[available])<20971520
Trigger expressions with suffixes:
last(/host/system.uptime)<1d
avg(/host/system.cpu.load,10m)<10
last(/host/vm.memory.size[available])<20M
Suffixes can also simplify the configuration of other entities - item keys, widgets, etc. To see if a configuration field supports
suffixes, always see the relevant page for the entity being configured.
Time suffixes
Supported time suffixes include: s - seconds; m - minutes; h - hours; d - days; w - weeks.
Note:
Time suffixes support only integer numbers. For example, ”1h” is supported, but ”1,5h” or ”1.5h” is not; use ”90m” instead.
Memory suffixes
Supported memory (size) suffixes:
• K - kilobyte
• M - megabyte
• G - gigabyte
• T - terabyte
Other uses
Unit symbols are also used for a human-readable representation of data in Zabbix frontend.
Zabbix server and frontend support the following unit symbols (suffixes):
• K - kilo
• M - mega
• G - giga
• T - tera
• P - peta (frontend only)
• E - exa (frontend only)
• Z - zetta (frontend only)
• Y - yotta (frontend only)
Note:
When displaying item values in bytes (B) or bytes per second (Bps), a base 2 conversion is applied (1K = 1024B); otherwise,
a base 10 conversion is applied (1K = 1000).
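For example (illustrative values):
2048 in an item with units B - displayed as 2 KB (base 2, 1K = 1024)
2000 in an item with no units - displayed as 2 K (base 10, 1K = 1000)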
9 Time period syntax
Overview
A time period has the following format:
d-d,hh:mm-hh:mm
where the symbols stand for the following:
d - day of the week (1 - Monday, 2 - Tuesday, ..., 7 - Sunday)
hh - hours (00-24)
mm - minutes (00-59)
You can specify more than one time period using a semicolon (;) separator:
d-d,hh:mm-hh:mm;d-d,hh:mm-hh:mm...
Leaving the time period empty equals 1-7,00:00-24:00, which is the default value.
Attention:
The upper limit of a time period is not included. Thus, if you specify 09:00-18:00 the last second included in the time period
is 17:59:59.
Examples
Working hours. Monday - Friday from 9:00 till 18:00:
1-5,09:00-18:00
Working hours plus weekend. Monday - Friday from 9:00 till 18:00 and Saturday, Sunday from 10:00 till 16:00:
1-5,09:00-18:00;6-7,10:00-16:00
10 Command execution
Zabbix uses common functionality for external checks, user parameters, system.run items, custom alert scripts, remote commands
and global scripts.
Execution steps
Note:
By default, all scripts in Zabbix are executed using the sh shell, and it is not possible to modify the default shell. To utilize
a different shell, you can employ a workaround: create a script file and invoke that script during command execution.
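A minimal sketch of that workaround, assuming a hypothetical wrapper script /usr/local/bin/check_disks.sh and that bash is installed on the monitored host:
#!/bin/bash
# /usr/local/bin/check_disks.sh - runs the logic under bash instead of the default sh;
# prints mount points that are more than 90% full
df -P | awk 'NR>1 {gsub("%","",$5); if ($5+0 > 90) print $6}'
The item, user parameter or remote command then simply invokes the wrapper, for example as an agent user parameter:
UserParameter=custom.disks.full,/usr/local/bin/check_disks.sh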
Attention:
Zabbix assumes that a command/script has done processing when the initial child process has exited AND no other process
is still keeping the output handle/file descriptor open. When processing is done, ALL created processes are terminated.
All double quotes and backslashes in the command are escaped with backslashes and the command is enclosed in double quotes (this applies only to custom alert scripts, remote commands and user scripts executed on Zabbix server and Zabbix proxy).
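As an illustration of this rule, a configured command such as
echo "low disk space"
would be escaped and wrapped roughly as
"echo \"low disk space\""
before being handed to the shell.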
Exit code checking:
• Any exit code that is different from 0 is considered as execution failure.
• Contents of standard error and standard output for failed executions are collected and available in the frontend (where the execution result is displayed).
• An additional log entry is created for remote commands on Zabbix server to save script execution output; it can be enabled using the LogRemoteCommands agent parameter.
Possible command execution results:
• contents of standard error and standard output for failed executions (if any);
• ”Process exited with code: N.” (for empty output, and exit code not equal to 0);
• ”Process killed by signal: N.” (for a process terminated by a signal, on Linux only);
• ”Process terminated unexpectedly.” (for a process terminated for unknown reasons).
See also
• External checks
• User parameters
• system.run items
• Custom alert scripts
• Remote commands
• Global scripts
11 Version compatibility
Supported agents
To be compatible with Zabbix 7.0, Zabbix agent must not be older than version 1.4 and must not be newer than 7.0.
You may need to review the configuration of older agents as some parameters have changed, for example, parameters related to
logging for versions before 3.0.
To take full advantage of the latest functionality, metrics, improved performance and reduced memory usage, use the latest
supported agent.
• On 32-bit Windows XP, do not use Zabbix agents newer than 6.0.x;
• On Windows XP/Server 2003, do not use agent templates that are newer than Zabbix 4.0.x. The newer templates use English
performance counters, which are only supported since Windows Vista/Server 2008.
Supported agents 2
Older Zabbix agents 2 from version 4.4 onwards are compatible with Zabbix 7.0; Zabbix agent 2 must not be newer than 7.0.
Note that when using Zabbix agent 2 versions 4.4 and 5.0, the default interval of 10 minutes is used for refreshing unsupported
items.
To take full advantage of the latest functionality, metrics, improved performance and reduced memory usage, use the latest
supported agent 2.
Supported Zabbix proxies
To be fully compatible with Zabbix 7.0, the proxies must be of the same major version; thus only Zabbix 7.0.x proxies are fully
compatible with Zabbix 7.0.x server. However, outdated proxies are also supported, although only partially.
A proxy falls into one of the following compatibility groups:
• Current (proxy major version matches the server major version).
• Outdated (proxy major version is older than the server version, but not older than the server previous LTS release version).
• Unsupported (proxy version is older than server previous LTS release version or proxy version is newer than server major
version).
Examples:
Server version | Current proxy version | Outdated proxy version | Unsupported proxy version
6.4 | 6.4 | 6.0, 6.2 | Older than 6.0; newer than 6.4
7.0 | 7.0 | 6.0, 6.2, 6.4 | Older than 6.0; newer than 7.0
7.2 | 7.2 | 7.0 | Older than 7.0; newer than 7.2
Supported XML files
XML files not older than version 1.8 are supported for import in Zabbix 7.0.
Attention:
In the XML export format, trigger dependencies are stored by name only. If there are several triggers with the same name
(for example, having different severities and expressions) that have a dependency defined between them, it is not possible
to import them. Such dependencies must be manually removed from the XML file and re-added after import.
If Zabbix detects that the backend database is not accessible, it will send a notification message and continue the attempts to
connect to the database. For some database engines, specific error codes are recognized.
MySQL
• CR_CONN_HOST_ERROR
• CR_SERVER_GONE_ERROR
• CR_CONNECTION_ERROR
• CR_SERVER_LOST
• CR_UNKNOWN_HOST
• ER_SERVER_SHUTDOWN
• ER_ACCESS_DENIED_ERROR
• ER_ILLEGAL_GRANT_FOR_TABLE
• ER_TABLEACCESS_DENIED_ERROR
• ER_UNKNOWN_ERROR
Overview
In a Windows environment applications can send data to Zabbix server/proxy by using the Zabbix sender dynamic link library
(zabbix_sender.dll) instead of having to launch an external process (zabbix_sender.exe).
zabbix_sender.h and zabbix_sender.lib are required for compiling user applications with zabbix_sender.dll.
Getting it
The precompiled Zabbix agent for Windows can be downloaded from the Zabbix download page. When choosing download options,
make sure to select ”No encryption” under Encryption and ”Archive” under Packaging, then download Zabbix agent (not Zabbix
agent 2).
The zabbix_sender.h, zabbix_sender.lib and zabbix_sender.dll files will be inside the downloaded ZIP archive in the bin\dev direc-
tory. Unzip the files where you need them.
If Zabbix is compiled from sources instead, the dynamic link library with the development files will be located in the bin\winXX\dev
directory. To use it, include the zabbix_sender.h header file and link with the zabbix_sender.lib library.
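A minimal sketch of calling the library, assuming the interface declared in zabbix_sender.h (the host name, item key, value and
server address below are placeholders; see the referenced example and the header file for the authoritative declarations):
#include <stdio.h>
#include <stdlib.h>

#include "zabbix_sender.h"

int main(void)
{
    /* one value: monitored host name (as configured in Zabbix), item key, item value */
    zabbix_sender_value_t value = {"Windows host", "trap.test", "42"};
    zabbix_sender_info_t  info;
    char *result = NULL;
    int   response;

    /* send the value to the trapper port of the Zabbix server/proxy */
    if (-1 == zabbix_sender_send_values("127.0.0.1", 10051, NULL, &value, 1, &result))
    {
        printf("sending failed: %s\n", result);
    }
    else if (0 == zabbix_sender_parse_result(result, &response, &info))
    {
        printf("sending %s\n", 0 == response ? "succeeded" : "failed");

        if (-1 != info.total)
            printf("  %d of %d values failed\n", info.failed, info.total);
    }

    /* free the buffer allocated for the server response */
    zabbix_sender_free_result(result);

    return EXIT_SUCCESS;
}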
See also
• example of a simple Zabbix sender utility implemented with Zabbix sender dynamic link library to illustrate the library usage;
• zabbix_sender.h file for the interface functions of the Zabbix sender dynamic link library. This file contains documentation
explaining the purpose of each interface function, its arguments, and return value.
Overview
In Zabbix 6.0, service monitoring functionality has been reworked significantly (see What’s new in Zabbix 6.0.0 for the
list of changes).
This page describes how services and SLAs, defined in earlier Zabbix versions, are changed during an upgrade to Zabbix 6.0 or
newer.
Services
In older Zabbix versions, services had two types of dependencies: soft and hard. After an upgrade, all dependencies
will become equal.
If a service ”Child service” has been previously linked to ”Parent service 1” via hard dependency and additionally ”Parent service
2” via soft dependency, after an upgrade the ”Child service” will have two parent services ”Parent service 1” and ”Parent service
2”.
Trigger-based mapping between problems and services has been replaced by tag-based mapping. In Zabbix 6.0 and newer, the
service configuration form has a new parameter Problem tags, which allows specifying one or multiple tag name and value pairs for
problem matching. Triggers that have been linked to a service will get a new tag ServiceLink with the value <trigger ID>:<trigger
name> (the tag value will be truncated to 32 characters). Linked services will get a ServiceLink problem tag with the same value.
Status calculation rules
The ’Status calculation algorithm’ will be upgraded using the following rules:
SLAs
Previously, SLA targets had to be defined for each service separately. Since Zabbix 6.0, SLA has become a separate entity,
which contains information about service schedule, expected service level objective (SLO) and downtime periods to exclude from
the calculation. Once configured, an SLA can be assigned to multiple services through service tags.
During an upgrade:
• Identical SLAs defined for individual services will be grouped, and one SLA will be created per group.
• Each affected service will get a special tag SLA:<ID> and the same tag will be specified in the Service tags parameter of
the corresponding SLA.
• Service creation time, a new metric in SLA reports, will be set to 01/01/2000 00:00 for existing services.
16 Other issues
We recommend creating the zabbix user as a system user, that is, without the ability to log in. Some users ignore this recommendation
and use the same account to log in (e.g., using SSH) to the host running Zabbix. This might crash the Zabbix daemon on logout,
because by default systemd removes the IPC objects (semaphores, shared memory) that the running daemon still needs. The
relevant systemd-logind option is described as follows:
RemoveIPC=
Controls whether System V and POSIX IPC objects belonging to the user shall be removed when the
user fully logs out. Takes a boolean argument. If enabled, the user may not consume IPC resources
after the last of the user's sessions terminated. This covers System V semaphores, shared memory
and message queues, as well as POSIX shared memory and message queues. Note that IPC objects of the
root user and other system users are excluded from the effect of this setting. Defaults to "yes".
There are 2 solutions to this problem:
1. (recommended) Stop using zabbix account for anything else than Zabbix processes, create a dedicated account for other
things.
2. (not recommended) Set RemoveIPC=no in /etc/systemd/logind.conf (as shown below) and reboot the system. Note that RemoveIPC
is a system-wide parameter; changing it will affect the whole system.
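A minimal sketch of that change (not recommended; the setting affects the whole system):
# /etc/systemd/logind.conf
[Login]
# keep System V/POSIX IPC objects (semaphores, shared memory) when the user fully logs out
RemoveIPC=no
A reboot is required for the change to take effect.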
If the Zabbix frontend runs behind a proxy server, the cookie path in the proxy configuration file needs to be rewritten in order to
match the reverse-proxied path; see the examples below. If the cookie path is not rewritten, users may experience authorization
issues when trying to log in to the Zabbix frontend.
Example configuration for nginx
# ..
location / {
# ..
proxy_cookie_path /zabbix /;
proxy_pass http://192.168.0.94/zabbix/;
# ..
Example configuration for Apache
# ..
ProxyPass "/" http://host/zabbix/
ProxyPassReverse "/" http://host/zabbix/
ProxyPassReverseCookiePath /zabbix /
ProxyPassReverseCookieDomain host zabbix.example.com
# ..
17 Zabbix agent vs Zabbix agent 2 comparison
This section describes the differences between the Zabbix agent and the Zabbix agent 2.
See also:
18 Escaping examples
Overview
This page provides examples of correct escaping when using regular expressions in various contexts.
Note:
When using the trigger expression constructor, correct escaping in regular expressions is added automatically.
Examples