
G00252566

Hype Cycle for IT Operations Management, 2013


Published: 23 July 2013
Analyst(s): Patricia Adams, Milind Govekar

IT operations management is a mature market overall; however, several disruptive and hyped technologies are emerging in the production environment. Infrastructure and operations leaders should use this Hype Cycle to set expectations when investing in these technologies.
Table of Contents
Analysis
    What You Need to Know
    The Hype Cycle
        Retired
        Hype Cycle Overview
        Peak of Inflated Expectations
        Trough of Disillusionment
        Slope of Enlightenment
        Plateau of Productivity
    The Priority Matrix
    Off the Hype Cycle
    On the Rise
        ValueOps
        Business Value Dashboard
        Business Productivity Teams
        IT Operations Gamification
        Software License Optimization Tools
        IT Operations Analytics
        IT Service Support Management Tools
        Social IT Management
        DevOps
        Service Billing
        Application Release Automation
    At the Peak
        IT Workload Automation Broker Tools
        IT Financial Management Tools
        Cloud Management Platforms
        Capacity-Planning and Management Tools
        IT Service Catalog Tools
    Sliding Into the Trough
        Enterprise Application Stores
        COBIT
        IT Process Automation Tools
        Application Performance Monitoring
        IT Service View CMDB
        Real-Time Infrastructure
        Workspace Virtualization
        IT Service Dependency Mapping
        Business Service Management Tools
        Network Configuration and Change Management Tools
        Configuration Auditing
        IT Management Process Maturity
        ITIL
        Server Provisioning and Configuration Management
        IT Asset Management Tools
        Service-Level Reporting Tools
    Climbing the Slope
        Hosted Virtual Desktops
        IT Event Correlation and Analysis Tools
        PC Application Virtualization
        Network Performance Monitoring Tools
        Mobile Device Management
    Entering the Plateau
        Client Management Tools
        Infrastructure Monitoring
        Network Fault Monitoring Tools
        Job-Scheduling Tools
Appendices
    Hype Cycle Phases, Benefit Ratings and Maturity Levels
Recommended Reading

List of Tables
Table 1. Hype Cycle Phases
Table 2. Benefit Ratings
Table 3. Maturity Levels

List of Figures
Figure 1. Hype Cycle for IT Operations Management, 2013
Figure 2. Priority Matrix for IT Operations Management, 2013
Figure 3. Hype Cycle for IT Operations Management, 2012

Analysis
What You Need to Know
This document was revised on 26 July 2013. For more information, see the Corrections page.

Continuous change is coming at IT organizations from all angles, and infrastructure and operations (I&O) organizations need to remain relevant by adapting to increased change velocity as a way of life. The business has control over more IT decisions than ever before, and it is influencing CEOs to consider alternatives that are faster, more agile and more responsive, so that the business can accelerate its pace while facing cost pressures in an economically uncertain environment.

Managing IT like a business has always been a goal, but it is even more pressing now. If IT operations does not meet financial and timeliness expectations, then services will move to the cloud, or infrastructure as a service (IaaS) or platform as a service (PaaS) will be implemented. IT operations departments are under pressure from the consumerization of IT to simplify their IT environments and the consumption of the services they provide. Furthermore, as enterprises increase their adoption of dynamic technologies, such as cloud, social, mobility and hosted virtual desktops (HVDs), that enable flexible styles of computing, IT operations must become more proactive to deliver on the promises and opportunities that these technologies and processes are creating. Well-run IT operations organizations take a service management view across their technology silos, and strive for excellence and continuous improvement.


The struggle here is that organizations have not matured. After reviewing IT data from more than 750 companies, we have seen little progress. Although new technologies and approaches to solving problems have become available, the people and process investments are not being made. Fundamentally, it comes down to investing in staffing to deliver on the process side. Organizations can purchase tools to automate, but they need the people to implement them. They can design and build the ideal processes, but they need the staff to implement and adhere to those processes. Thus, managing IT operations like a business requires a strong combination of business management, organization, processes and tools.

This journey toward business partnership, while managing cost, needs to be managed through a methodical, step-by-step approach. Gartner's ITScore for I&O (ITSIO) is a maturity model that has been developed to provide this guidance. According to current ITSIO maturity assessment data, many organizations are not mature enough to take full advantage of dynamic new technologies, and the average organization is self-assessed at a maturity score of 2.29. Knowing where an organization is from a people, process, technology and business management perspective can help I&O leaders understand and define their departments' readiness for the technologies listed in the Technology Trigger stage, or for more-mature technologies that have reached the Plateau of Productivity in this Hype Cycle. Knowing the organization's ability to handle risk is also imperative in this evaluation: Type A organizations are usually technology leaders and are better able to manage the introduction of adolescent or emerging technologies, whereas Type C organizations tend to be followers that wait for technologies to near the plateau. Type B enterprises fall between the extremes.

From a business perspective, the expectation is that IT operations should be able to deliver services with high quality, agility, low fixed costs and minimal risk. The promise of new technology to deliver on these business expectations continues; therefore, so does the hype associated with IT operations technology used to ensure quality of service (QoS), nimbleness and a high level of customer satisfaction. Making the right choices about this technology and investing wisely is imperative. This Hype Cycle provides information and advice on the most important IT operations tools, technologies and process frameworks, as well as their level of visibility and market adoption. Use it to review your IT operations portfolio, and to update expectations for future investments, relative to your organization's desire to innovate and its willingness to assume risk.

The Hype Cycle


Technology has moved at a rapid pace during the past 20 years, but its cycles of innovation threaten to slow as commoditization gains a foothold. Within the data center infrastructure, major changes are underway with vertically integrated systems (with or without stack software) that package servers, storage and networks together, while maintaining a small energy footprint. Traditional vendors are inhibiting progress to a certain degree to avoid commoditization and to protect their margins. Although many technologies are evolving toward commodity status (hence the retirement of IT Service Desk Tools from the Hype Cycle this year, and the introduction of IT Service and Support Management [ITSSM] Tools in 2012), selecting, architecting, integrating, managing and supporting these new technologies is a significant challenge. Moreover, new technologies, such as IT Financial Management (ITFM) and Software License Optimization tools and services, emerge continuously.


With continued cost pressures, organizations are separating into camps: those that are aggressively investing in IT to gain business advantage, and those in a "wait and see" mode. Whichever stance they take, organizations are squeezing more efficiency from their infrastructures through automation, as evidenced in the ITScore survey results, and trying to keep up with the pace of new technology.

With consumer expectations driving everything toward mobility, applications are being rearchitected and modernization efforts are being driven (or forced). CEOs, line of business (LOB) leaders and marketing executives are realizing that the key to their growth is customer empowerment in mobile and context. As everything becomes "smart," resulting in a large postpurchase market for applications combined with cloud services and widespread developer support, IT will be challenged to look holistically at a wider range of devices it may support. With devices ranging from projectors to video teleconferencing equipment all having Internet access, either directly or through an attached PC, IT will have to manage, count and patch them. Desk phones, smartphones and voice over Internet Protocol (VoIP) devices are differentiated only by their shape, not the functionality they provide.

Trying to balance productivity while making efficiency gains can be a challenge for any organization that is looking to manage risk; however, this is critical if your goal is for IT operations to become a partner with the business. Simply put, IT operations needs to work smarter to have time to innovate and transform the business. In addition, shadow IT is "up" for most IT organizations, as LOBs and marketing seek to acquire IT services faster than the shared services organization can respond. If procurement is a bottleneck, or strategy and planning require endless meetings, the business will look for options outside IT. A Gartner survey in 2012 showed that 28% of IT organizations believe that shadow IT accounts for at least 25% of the IT budget. Examples of shadow IT include software as a service (SaaS), PaaS and IaaS acquired by lines of business, and Hadoop clusters, which are often implemented outside the governance of the shared services organization.

Running IT operations like a business requires investments in automation that will drive growth and support transformation projects. However, cost optimization continues to be a primary concern for many IT leaders, with an increasing spotlight on IT financial management. With a limited budget, access to IT operations analytics can facilitate making decisions quickly in a dynamic environment, thereby enabling more effective planning and better use of virtualization by leveraging cloud management platforms and DevOps. Many technologies and processes are clustered in the Trough of Disillusionment, because organizations are only beginning to grasp how to apply them to the problems they face.

With this technology context in mind, I&O leaders are also facing budgetary and staffing risks. Most IT organizations don't know the unit cost to deliver a service; therefore, they make uninformed decisions on sourcing. When CEOs ask why they shouldn't cloudsource all of IT, it's difficult to provide the financial data needed to support an argument about capital expenditures (capex) versus operating expenditures (opex).
To address this hole in the market, ITFM tools have been added to the Hype Cycle in 2013 to supplement IT service billing and IT asset management data. With the right business skills and metrics from a business value dashboard, IT can perform an effective analysis and help make the right decisions on technology and sourcing investments. With factual data and analysis, IT leaders will be able to better communicate the value they provide to the business, and give it a reason to continue the relationship.
A key challenge facing I&O leaders, one that has a material impact on their ability to mature, is the implementation of process. Organizations have the technology, but without competent staff to implement it, that technology can become "shelfware." Without versatilists who have both business and technical skills, rather than domain-specific skill sets, I&O can't implement the technology required for automation. Creating a culture of innovation inside your I&O organization will help ensure that talented team members remain, rather than moving on to other organizations.

Every year in the IT operations management (ITOM) Hype Cycle, we introduce several new concepts and retire a few to reflect the changes occurring in the marketplace. The changes were driven by both vendors and clients, based on how they thought about technology, or by the market moving away from a particular nomenclature due to an evolution in the technology. Many of the technologies have an underlying commonality: they provide automation that reduces costs, or visibility that likewise leads to cost reductions. This year, we've added five new technology- and nontechnology-related profiles to the Hype Cycle:

- ValueOps
- Business Value Dashboard
- Software License Optimization Tools
- Enterprise Application Stores
- IT Management Process Maturity

ValueOps, at the Innovation Trigger stage, has been added to help IT operations organizations manage rapidly changing and complex IT environments by leveraging Gartner's Pace-Layered Application Strategy to focus operations practices on the needs of the business. In addition to having the right focus, IT operations needs to be able to quantify the business value of I&O performance to support business and IT leaders in making key decisions. This capability is provided by Business Value Dashboards, also at the Innovation Trigger stage of this Hype Cycle. IT Asset Management (ITAM) Tools have been used by IT operations for a long time; however, Software License Optimization Tools have emerged to help ITAM correlate software license entitlement with installed software, which is not an easy task. These tools are likewise at the Innovation Trigger stage.

Retired
This year, several technologies were retired from the Hype Cycle, because they were subsumed by other technologies, customers and vendors changed the way they thought about them, or they reached the Plateau of Productivity.

IT Change Management (ITCM) Tools were once a stand-alone market, but all of the IT service support management (ITSSM) vendors now offer native ITCM capabilities. It no longer made sense to continue ITCM as a separate technology, because no vendors continue to sell stand-alone ITCM tools.

PC Application Streaming was once a stand-alone functional toolset. It has also been retired, because it has been absorbed into PC Application Virtualization. Companies looking for app streaming should now look to the application virtualization vendors for this functionality.

IT Service Desk Tools has also been retired as a separate technology profile due to commoditization. Although enterprises that are not looking for a comprehensive ITSSM suite still exist, and some are unable to fully utilize all of the functionality, incident and problem management capabilities are now positioned within the ITSSM technology profile.

Hype Cycle Overview


To deliver effective and efficient services, I&O organizations need improved process maturity. Gartner's ITSIO enables organizations to self-assess this maturity to deliver business value. IT Management Process Maturity is positioned in the Trough of Disillusionment, because process management is the lowest-maturity discipline, which suggests that, although most organizations have worked on incident and change management to some extent, most have not achieved consistent process alignment, nor moved forward into end-to-end service management and integrated IT management processes.

In the Innovation Trigger stage, there are 11 technology profiles, composed of both technologies and approaches to managing technology. This is the point at which most new technologies enter the Hype Cycle. Four of the profiles will take 10 years to reach the Plateau of Productivity:

- ValueOps
- Business Value Dashboards
- Software License Optimization Tools
- ITSSM Tools

Although organizations may need these, they may not be able to take full advantage of the value they enable. What is most noteworthy is the speed with which we expect three technology areas to accelerate to mainstream adoption by 2018:

- IT Operations Analytics
- Service Billing
- Application Release Automation

IT Operations Analytics is the approach of using complex-event processing (CEP), statistical pattern discovery (SPD), unstructured text file search (UTFS), behavior learning engines (BLEs), topology mapping and analysis (TMA), and multidimensional database analysis (MDA) to provide insight into root causes, to speed the resolution of IT system performance problems, and to assess relative impact when multiple causes are involved.
These tools are sold stand-alone, and the functionality can also be embedded in other toolsets. Service cost analysis and the prediction of performance-affecting events continue a modest move up the curve. Meanwhile, Social IT Management has seen more progress, as organizations become aware of how to take advantage of internal messaging systems (e.g., Facebook-style) to communicate with users and collect data about outages or problems; many ITSSM tools now embed this functionality in their core products. Service Billing has moved slightly, progressing toward the peak, because it provides resource usage data, and may offer service-pricing options across infrastructure components that can be used to calculate costs for chargeback and aggregate the details to display the cost of a service.
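
As a simplified illustration of the statistical pattern discovery these tools apply, the following Python sketch flags metric samples that deviate sharply from recent history. It is a toy example, not any vendor's algorithm; the window size and threshold are arbitrary assumptions:

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(samples, window=30, threshold=3.0):
    """Flag samples deviating more than `threshold` standard deviations
    from the mean of the preceding `window` samples. A toy stand-in for
    the statistical pattern discovery (SPD) embedded in IT operations
    analytics tools."""
    history = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(samples):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                anomalies.append((i, value))
        history.append(value)
    return anomalies

# Steady response times with one simulated performance-affecting event.
latencies = [100 + (i % 5) for i in range(60)]
latencies[40] = 480
print(detect_anomalies(latencies))  # -> [(40, 480)]
```

A real product would correlate many such signals (events, logs, topology) to isolate the probable root cause; this sketch shows only the single-metric building block.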

Peak of Inflated Expectations


As technologies reach the Peak of Inflated Expectations, changes are happening that reflect the goal of running IT operations like a business. Five technology profiles are placed at this stage.

ITFM is becoming important beyond just chargeback. It provides the IT cost data and analytics that best support strategic decision making by collecting cost-related data from a heterogeneous and complex IT environment, along with the ability to build a cost model with cost allocation and reporting capabilities. This will support showback and cost transparency. The costing data is critical, as more organizations attempt to reach the maturity level needed to implement IT service catalog tools and populate them with actual cost data. The automated process workflow for ordering and delivering IT services will increase IT operations efficiency by reducing errors in service delivery, identifying process bottlenecks and uncovering opportunities for efficiency improvements.

Cloud Management Platforms technology has moved rapidly to reach the Peak of Inflated Expectations, and it will become mainstream in two to five years, as the high visibility of these tools is matched by the high expectations that IT operations departments have for them to manage private, public and hybrid cloud environments based on policy-driven automation.

We have seen growing visibility and deployment interest in Capacity-Planning and Management Tools. These products are increasingly being used for standard data center consolidation activities, as well as the related planning and management of virtual and cloud infrastructures. These tools are also used to match workload requirements to the most appropriate resources in a physical, virtual or cloud data center, and they provide real-time visualization of capacity in a data center to help optimize workloads and associated resources.

IT Workload Automation Broker Tools will move rapidly through this Hype Cycle to reach the Plateau of Productivity, as the expectations for these tools have begun to match their capabilities. Increasingly, we are seeing interoperability between IT Workload Automation Broker Tools and IT Process Automation Tools.
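
The cost model at the heart of ITFM tooling can be illustrated with a minimal Python sketch: allocate a shared cost pool to services in proportion to a consumption driver, then derive a unit cost suitable for showback or a service catalog. The service names, driver and figures are invented for illustration, not a prescribed model:

```python
# Illustrative showback calculation: allocate a shared infrastructure
# cost pool to services by a consumption driver (VMs), then derive a
# unit cost per service. All figures are assumed.

shared_pool = 90_000.0  # monthly cost of shared infrastructure

services = {
    # service: (direct monthly cost, VMs consumed, units delivered)
    "email":     (12_000.0, 120, 4_000),   # units = mailboxes
    "erp":       (30_000.0, 200, 1_500),   # units = named users
    "analytics": (8_000.0,   80,   300),   # units = reports/month
}

total_vms = sum(vms for _, vms, _ in services.values())

for name, (direct, vms, units) in services.items():
    allocated = shared_pool * vms / total_vms  # driver-based allocation
    total = direct + allocated
    print(f"{name:10s} total=${total:>9,.2f}  unit cost=${total / units:,.2f}")
```

Real ITFM tools add multistep allocation rules, general ledger integration and reporting; the point here is only the driver-based allocation that makes unit costs, and therefore sourcing comparisons, possible.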


Trough of Disillusionment
More than a third of the technology profiles in this Hype Cycle are located in the Trough of Disillusionment, which is the second-largest section in terms of overall spacing on the Hype Cycle. Most of them are two to 10 years away from reaching the plateau, and their value to the business varies greatly. Many of the tools located in this area of the Hype Cycle have not fully delivered the benefits expected by the users that implemented them. There could be many reasons that these technologies or frameworks have slowed their market penetration, such as implementation time, time to value or mismatched expectations. Technologies such as Service-Level Reporting Tools, Network Configuration and Change Management, ITAM, and Server Provisioning and Configuration Management are poised to advance during the next two to five years.

Enterprise Application Stores, on the edge of the Trough of Disillusionment, offer an approach similar to public stores (such as Apple's App Store), but are private and implemented on internal servers or delivered through private clouds. IT organizations' demand for private application stores keeps growing as mobility adoption rapidly takes place. Although most providers today offer basic enterprise app store functionality (either as software or an as-a-service product), only a few provide a comprehensive solution for all scenarios.

IT Process Automation Tools are a key focus for IT organizations looking to improve IT operations efficiencies and provide a means to track and measure process execution. They reduce the human factor and associated risks by automating safe, repeatable processes, and they increase IT operations efficiencies by integrating and leveraging the IT management tools needed to support operations processes across IT domains.

On the 2012 IT Operations Hype Cycle, the area that provided a major impact, and that has had significant movement on this Hype Cycle, is Workspace Virtualization, which almost bypassed the Peak of Inflated Expectations in one year. Some of the technologies that were on the 2012 Hype Cycle will now take longer to reach the plateau on the 2013 IT Operations Hype Cycle:

- IT Service Dependency Mapping
- Business Service Management Tools
- HVDs
- Network Performance Monitoring Tools

This is due to underestimation of the time and effort needed to implement and gain value from these tools. This was also the year in which IT Service Dependency Mapping leapfrogged ahead of IT Service View CMDB.


Slope of Enlightenment
Within the Slope of Enlightenment, which covers the largest spacing on the Hype Cycle, there are six technology profiles. Most will be mainstream in less than five years. As we approach the Plateau of Productivity, three technologies are poised to become mainstream in less than two years:

- PC Application Virtualization
- Mobile Device Management
- Client Management Tools

HVDs and mobile computing have experienced huge demand, due to the impact that mobility has had on so many aspects of IT operations. However, this year, the time frame for HVDs to reach mainstream adoption was moved to a two- to five-year horizon, and server-based computing (SBC) has grown consistently. In relative terms, HVD movement on the 2013 IT Operations Hype Cycle has been modest. The value proposition of these tools is achieved by making desktops more personalized and enhancing performance. Importantly, they can reduce I&O costs by reducing the number of servers, the amount of storage, and the number of images that organizations must use to provide users with a personalized desktop.

Plateau of Productivity
The plateau contains the last three technology profiles:

- Infrastructure Monitoring
- Network Fault Monitoring Tools
- Job-Scheduling Tools

These tools are ubiquitous and mature, and are approaching the end of the Hype Cycle. They provide proven value to organizations, so the investment decision making is straightforward. This Hype Cycle should benefit most adoption profiles (early adopters of technology, mainstream, etc.). For example, enterprises that are leading adopters of technology should begin testing technologies that are still early in the Hype Cycle, whereas risk-averse clients may delay adoption. The earlier a technology is positioned on the Hype Cycle, the higher the expectations and marketing hype; therefore, manage down your expectations, and implement specific plans to mitigate any risks from using that technology. Three important considerations for using this Hype Cycle:

- Creating a business case for new technologies driven by ROI is important for organizations with a low tolerance for risk. Highly innovative organizations that have increased their IT operations budgets are likely to gain a competitive advantage from a technology's benefits.
- Innovative technologies often come from smaller vendors with questionable viability. These vendors are likely to be acquired, exit the market or go out of business, so plan carefully.
- Although budget constraints are simultaneously easing and tightening, depending on the organization's business, organizations should consider the risks they're willing to take with new, unproven technologies, as well as the timing of their adoption. Weigh risks against needs and the technology's potential benefits.
Figure 1 depicts technologies on the Hype Cycle for IT Operations Management, 2013.


Figure 1. Hype Cycle for IT Operations Management, 2013

[Figure omitted: the Hype Cycle curve plots expectations (y-axis) against time (x-axis) through the Innovation Trigger, Peak of Inflated Expectations, Trough of Disillusionment, Slope of Enlightenment and Plateau of Productivity phases, with each technology profile marked by its years to plateau: less than 2 years, 2 to 5 years, 5 to 10 years, more than 10 years, or obsolete before plateau. As of July 2013.]

Source: Gartner (July 2013)


The Priority Matrix


The Priority Matrix maps each technology's and framework's time to maturity on a grid, in an easy-to-read format that answers two high-priority questions:

- How much value will an organization receive from a technology?
- When will the technology be mature enough to provide this value?

In the longer term, the truly transformative impact of technology can be delivered by interlocking its adoption with people and process frameworks. This is apparent on this Hype Cycle from profiles such as IT Management Process Maturity, ITIL, ValueOps and Business Value Dashboards, which have little to do with technology. Some technologies that have a high business impact have a short time to plateau (for example, Enterprise Application Stores and Mobile Device Management). This also reflects the rapid change in the adoption and value of technologies such as mobile, and the quest for management tools that provide value quickly. To further assess your readiness for technology adoption, consider using Gartner's ITSIO maturity assessment to understand the current maturity level of your organization and to chart a course for continuous improvement. Some of the technologies on this Hype Cycle take a longer time to mature, but have a low direct impact (for example, Social IT Management).

Virtualization and the cloud continue to broaden service delivery transparency for IT operations. Business users are demanding more agility and transparency for the services they receive and the associated financial impact, but they also want improved service levels that provide acceptable availability and performance. They are looking to understand the fixed and variable costs that form the basis of the services they want and receive, and they are also looking at data security, increased service agility and responsiveness. Many business customers have circumvented IT to acquire public cloud services. This has caused IT organizations to respond by investing in private cloud services for some of their most-used and highly standardized sets of services. Technologies such as Cloud Management Platforms, Application Release Automation and ITFM will help IT operations address some of these challenges by providing higher benefits in the medium term.

Investment in all ITOM technologies should be considered based on their cost, as part of a suite or stand-alone. The advantages of implementing ITOM technologies continue to include lowering the total cost of ownership (TCO) of managing a complex IT environment, improving QoS, lowering business risk and accelerating service delivery, as is the case with cloud services.


Figure 2. Priority Matrix for IT Operations Management, 2013

(Rows: benefit; columns: years to mainstream adoption.)

benefit: transformational
    5 to 10 years: Business Productivity Teams; DevOps; IT Management Process Maturity; ITIL; Real-Time Infrastructure
    more than 10 years: Business Value Dashboard; ValueOps

benefit: high
    less than 2 years: Enterprise Application Stores; IT Workload Automation Broker Tools; Mobile Device Management
    2 to 5 years: Application Release Automation; Cloud Management Platforms; Configuration Auditing; Hosted Virtual Desktops; IT Financial Management Tools; IT Operations Analytics; IT Process Automation Tools; Server Provisioning and Configuration Management
    5 to 10 years: Application Performance Monitoring; Capacity-Planning and Management Tools; IT Service Catalog Tools; IT Service Dependency Mapping; IT Service View CMDB
    more than 10 years: Business Service Management Tools

benefit: moderate
    less than 2 years: Client Management Tools; Network Fault Monitoring Tools; PC Application Virtualization
    2 to 5 years: IT Asset Management Tools; IT Event Correlation and Analysis Tools; Network Configuration and Change Management Tools; Network Performance Monitoring Tools; Service Billing; Service-Level Reporting Tools; Workspace Virtualization
    5 to 10 years: COBIT; IT Operations Gamification
    more than 10 years: IT Service Support Management Tools; Software License Optimization Tools

benefit: low
    less than 2 years: Infrastructure Monitoring; Job-Scheduling Tools
    5 to 10 years: Social IT Management

As of July 2013
Source: Gartner (July 2013)

Off the Hype Cycle


As part of the annual process, we evaluate whether technologies should be retired from the Hype Cycle and remove those that no longer belong. This year, we deleted:

- IT Service Desk
- IT Change Management
- PC Application Streaming

On the Rise
ValueOps
Analysis By: George Spafford; Jeffrey M. Brooks; Ian Head

Definition: ValueOps is a perspective for IT operations that leverages Gartner's Pace-Layered Application Strategy to focus operations practices on the needs of the business. Using ValueOps, I&O leaders can implement a holistic set of frameworks, methodologies and pragmatic guidance to achieve the right balance of operational risk and agility.

Position and Adoption Speed Justification: Business processes change at varying rates of speed. At the same time, there is a trade-off in the supporting IT processes between the need for enablement and speed, and the need for risk mitigation that can slow down implementation. This core conflict creates a great deal of stress in IT, and it must be surfaced and addressed. For example, a change management process that follows a one-size-fits-all approach will either be too slow for groups wishing to innovate rapidly, or will not sufficiently mitigate risks for services with substantial governance, risk and compliance (GRC) requirements. Rather than a one-size-fits-all approach, a tiered system that recognizes the different needs of each service class will result in better trade-offs between speed and risk. This approach was set forth in Gartner's Pace-Layered Application Strategy for development, to enable innovation, manage risks and control costs; however, it is only in the last year that the applicability of this approach to IT operations has been recognized. The three pace-layered tiers are as follows:

- Systems of record (SORs) support business processes that change relatively infrequently and are risk-averse. This means that I&O processes must have the proper controls in place to mitigate risks to a level that's acceptable to the business.
- Systems of differentiation (SODs) change relatively more frequently, and there is a greater level of risk tolerance.
- Systems of innovation (SOIs) change very rapidly, and the business must be willing to tolerate a greater degree of operational risk. Systems of innovation have a need for speed and, therefore, use more agile development practices, including DevOps.
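
To make the tiering concrete, the following minimal Python sketch shows how a change management policy could be looked up per pace layer instead of applied one-size-fits-all. This is an illustration, not a Gartner-prescribed or vendor implementation; the policy fields, values and layer names are assumptions:

```python
from dataclasses import dataclass

@dataclass
class ChangePolicy:
    approvals_required: int    # sign-offs needed before release
    test_gate_required: bool   # must pass an automated test gate
    max_deploys_per_week: int  # deployment cadence ceiling

# One policy per pace layer: controls tighten as risk aversion grows.
PACE_LAYER_POLICIES = {
    "system_of_record":          ChangePolicy(3, True, 1),
    "system_of_differentiation": ChangePolicy(2, True, 5),
    "system_of_innovation":      ChangePolicy(1, True, 25),
}

def change_policy_for(pace_layer: str) -> ChangePolicy:
    """Look up the change policy for a service's pace layer, rather
    than applying one-size-fits-all change management."""
    return PACE_LAYER_POLICIES[pace_layer]

# A system of record gets heavyweight controls; a system of
# innovation gets a lightweight, high-cadence policy.
print(change_policy_for("system_of_record"))
print(change_policy_for("system_of_innovation"))
```

The design point is that the tier, and therefore the policy trade-off between speed and risk, is a business decision encoded in data, not hard-wired into a single process.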

User Advice: Review the pace of business change, and tier the supporting IT services accordingly, using Gartner's ValueOps research for guidance. For each tier, review the design of processes, supporting technology and human factors to ensure that they are appropriately designed to support both speed and risk mitigation, in alignment with the needs of the business. This balance must be a business decision, not one made by IT alone.

Business Impact: By tiering the IT services provided, and revising processes to reflect the differing speed and risk mitigation requirements of the business, IT will be better positioned to address diverse customer needs while retaining operational efficiency (for example, by applying external compliance requirements only where needed). This, in turn, will enable the business to be more competitive, and will increase satisfaction with IT.

Benefit Rating: Transformational

Market Penetration: 1% to 5% of target audience

Maturity: Embryonic

Recommended Reading:
"Increase I&O Effectiveness With the ValueOps Perspective"
"Use a ValueOps Perspective to Balance Risk and Agility in IT Operations"
"Use ValueOps to Align Incident and Service Request Management With Changing Business Priorities"
"Five Steps Toward a Faster, Better, Cheaper I&O"
"How I&O Can Manage Change at the Pace of the Business"

Business Value Dashboard


Analysis By: Colin Fletcher; Jeffrey M. Brooks

Definition: Business value dashboard (BVD) initiatives are used to quantify the business value of infrastructure and operations (I&O) performance in a way that is relevant to supporting business and IT leaders' decisions. BVDs are composed of audience-specific sets of business value metrics (themselves combinations of prioritized business objectives, business performance measures and I&O performance measures), delivered through a variety of presentation and reporting mechanisms.

Position and Adoption Speed Justification: BVDs are a relatively new, transformational concept addressing the long-established imperative for I&O organizations to clearly define and communicate the business value they deliver. The continued rise of cost optimization, business fluency and external service provider management (both competitive and cooperative) expectations is putting significant pressure on I&O teams to transition from traditional I&O-focused executive dashboards for performance quantification to BVDs; so much pressure, in fact, that Gartner currently estimates that, by 2017, 40% of I&O organizations will replace their executive dashboards with new BVDs. Despite this building pressure, the fact that BVDs represent a transformation cannot be overstated, both in the size of their potential positive impact and in the effort required to succeed. Adoption speed will continue to be tempered by BVD initiative maturity requirements; difficulty in securing adequate investment; and struggles to move away from skill, process and cultural investments built around traditional paradigms, performance and productivity measures.

Tooling is mechanistically mature (BVDs can be built using reporting and presentation capabilities from any number of IT operations management [ITOM] tools); however, appropriate predefined content is still nascent.

User Advice: BVDs hold liberating promise for I&O teams willing to break down faulty assumptions and take the following pragmatic, stepwise approach to start clearly demonstrating their positive impact on business priorities:

- Step 1: Obtain key IT and LOB executive support for an incremental BVD initiative.
- Step 2: I&O leaders should collaborate with LOB partners to determine the right questions to ask and the right way to provide the answers.
- Step 3: Determine and document the data and analytical techniques required to deliver BVD metrics.
- Step 4: Beta test manual BVDs with a subset of the intended audience, incorporating feedback.
- Step 5: Opportunistically select, implement and evolve tooling.
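
As an illustration of Step 3, the following minimal Python sketch shows one way a business value metric might combine a business performance measure (revenue per hour for a channel) with an I&O performance measure (unplanned downtime). The services, figures and the metric itself are invented assumptions that LOB partners would need to validate, not a prescribed BVD design:

```python
# Illustrative BVD metric: revenue put at risk by unplanned downtime.
# Pairs a business measure (revenue/hour) with an I&O measure
# (downtime hours) per business service; all figures are assumed.

business_services = [
    # (service, revenue per hour, unplanned downtime hours this month)
    ("online ordering", 25_000.0, 1.5),
    ("in-store POS",    40_000.0, 0.25),
    ("customer portal",  5_000.0, 4.0),
]

for service, revenue_per_hour, downtime_hours in business_services:
    at_risk = revenue_per_hour * downtime_hours
    print(f"{service:16s} revenue at risk: ${at_risk:,.0f}")

total = sum(r * d for _, r, d in business_services)
print(f"{'TOTAL':16s} revenue at risk: ${total:,.0f}")
```

Note how the same I&O measure (downtime) carries very different business weight per service, which is precisely the contextualization an executive dashboard built on raw availability percentages lacks.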

As the steps illustrate, the BVD construction process depends on collaboration and broad support to measure the impact of I&O services solely as defined by its audience, not by what data is easily available or what I&O teams use for technical optimization. It is also critical that BVD initiatives be treated as parts of, not substitutes for, larger strategic business alignment efforts.

Business Impact: BVDs have the potential to positively impact all business services and processes across all verticals, due to their foundation in the fundamental business concepts of growing revenue (or fulfilling the mission, in the case of nonprofit entities), reducing costs and mitigating risk. At the heart of BVD initiatives are business and IT leaders collaborating to redefine the performance measurement of I&O, which will result in potentially dramatic shifts of both tactical and strategic investments across all dimensions of I&O (people, process and technology). This BVD-initiative-fostered collaboration (in addition to that of similar alignment initiatives) has the potential to generate the insights needed to discover new ways of doing business and new business opportunities.

Benefit Rating: Transformational

Market Penetration: Less than 1% of target audience

Maturity: Emerging

Sample Vendors: Apptio; BMC Software; CA Technologies; Execview; HP; IBM; Mirror42; Plexent; PureShare; Texas Digital; Westbury; Xtraction Solutions

Recommended Reading:
"I&O Value Takes Center Stage With the Business Value Dashboard"


"First Steps in Building an I&O Business Value Dashboard"

Business Productivity Teams


Analysis By: Jarod Greene; John Rivard

Definition: Business productivity teams (BPTs) are a departure from traditional IT service desk teams, which react to user productivity issues. BPTs proactively promote productivity by enabling users to be self-sufficient, in part because they are instilled with a broad understanding of business processes, so they can quickly identify solutions for business issues.

Position and Adoption Speed Justification: The traditional IT service desk is not sustainable in the era of the personal cloud and personal productivity enablement. The costs of IT service desk support have increased yearly, while performance against traditional key performance indicators (KPIs) has decreased. The original value proposition of the IT service desk was formed around detect-and-fix approaches to service support, but the business is now looking to move faster than the IT support model can sustain. Since their inception, IT service desks have been expense reduction targets that justify their spending with operational metrics not aligned with business objectives. Furthermore, the ability to support the needs of the business in real time requires an IT maturity level that most organizations have not reached. Thus, business users will look to personal cloud models for new levels of functionality that IT is not in a position to deliver.

Fifty percent of business users' perception of IT is derived from experiences with the IT service desk, which inhibits user productivity when it does not support bring your own computer (BYOC) or bring your own device (BYOD) programs. This reinforces the idea that IT is failing to meet the needs of the business. If IT objectives do not shift to personal productivity enablement, then the value proposition of the traditional IT service desk will continue to diminish.

BPTs will use a business-value-based justification for each performance metric that strategically links the relevant IT service-critical success factors of the IT organization with the goals of the business. In conjunction with IT product managers, BPTs will provide an open communication channel that will enable IT to determine the goals of the business. Reported metrics will focus on improved productivity, correlated with increases in business outcomes for each business process in which overall IT service value and customer satisfaction have improved.

User Advice: BPTs should be formed only by enterprises that are on the path to becoming service-aligned and/or business-aligned IT organizations. Organizations at lower maturity levels should focus their efforts and resources on traditional service support management. Mature IT organizations (Level 3 ITScore or above) should strategize to transform their reactive IT service desks into proactive BPTs; however, this represents only about 10% of all infrastructure and operations (I&O) organizations. Planning to replace traditional IT service desks with BPTs should align with I&O maturity road maps for continual improvement.

BPTs require personnel who have business understanding, technical domain knowledge and customer service skills.

BPTs demonstrate value to the business as a group of technical advisors capable of working with users to identify solutions for business issues. IT organizations must also leverage mobility and social media to enable BPTs to directly engage with the business on a regular basis. IT organizations can look to customer support models, such as Apple's Genius Bar or Best Buy's Geek Squad, which focus on customer productivity enablement through education and promotion, rather than reactive support. Test BPTs using a subset of IT service desk resources aimed at supporting a targeted business area (such as VIP or executive support) or specific applications and services.

No single BPT approach is ideal for all enterprises. Gartner advises organizations to customize the model to their specific needs and characteristics. We also recommend that a low-risk proof of concept (POC) pilot be used to refine the BPT model prior to wider-scale deployment.

Business Impact: As business user expectations and service complexity increase, CIOs and I&O leaders will be evaluated on their abilities to foster a self-service culture through the use of BPTs, which have the potential to drive innovation and revolutionize the business model for IT. Traditional IT service desks maintain productivity by reactively addressing application and system failures. BPTs use proactive approaches, including leveraging mobility and social collaboration for enhanced business user interaction. The most common form of a BPT has been the formalization of a user walk-in contact channel, staffed by Level 1 IT service desk and some Level 2 desktop support personnel. Less emphasis has been put on the strategic objectives of the channel; therefore, there is less focus on business impact metrics, such as increases in IT-business engagement and productivity levels.

BPTs collaborate at the grassroots level to identify user challenges and opportunities, mine user and system data to identify productivity enhancement opportunities, provide training, and promote IT services. From these enhanced interactions, BPTs will be better positioned to identify system enhancements for future releases and to align business requirements to services. To this extent, the objectives of the IT organization can be developed and become better aligned with those of the business. For example, BPTs can help business users make more-informed personal productivity decisions, and can minimize the level of routine support by providing teaching-type support in favor of a flag-and-fix approach.

The potentially larger business impact is as an innovation driver. If BPTs work in conjunction with business relationship managers, they can better understand user sentiment and pain points, and can be the feedback loop needed to steer effective change with a significant impact on service delivery. For example, BPTs can take user sentiment to development teams as suggestions and enhancements that provide productivity gains that pay for themselves many times over. The ramifications of not creating a BPT are likely to exasperate the business, which will increasingly look to consumerization and shadow IT to achieve productivity.

Benefit Rating: Transformational

Market Penetration: 1% to 5% of target audience

Maturity: Adolescent

Recommended Reading: "Reinvent Your IT Frontline Capabilities" "Best Practices for Conducting the Business Productivity Team Proof-of-Concept Plan" "Four Keys to a Successful Business Productivity Team Implementation" "IT Service Desks Must Modernize User Experiences or Get Out of the Way"

IT Operations Gamification
Analysis By: Jarod Greene

Definition: Gamification is the application of the design techniques and game mechanics found in games to nongame contexts. Gamification leverages feedback loops that reinforce desired behaviors and encourage good habits by turning progress into rewards. Applied to IT operations, gamification can make IT organizational change activities more enjoyable, and can benefit IT operations initiatives by garnering a deeper level of committed participant engagement.

Position and Adoption Speed Justification: Gamification is emerging as a trend in I&O, used to develop skills and change behaviors among IT staff and business end users. IT organizations have used simple contests to engage staff, boost morale and promote participation in tasks deemed undesirable. Such contests have been manually administered and overly focused on extrinsic rewards; therefore, they have typically produced suboptimal results and undesired negative outcomes. Gamification shifts the focus to intrinsic rewards (e.g., status, merit and achievement) that are provided to users through real-time analytics and data visualization techniques to reinforce or modify behavior. For example, gamification can provide a real-time feedback loop to collect context data (which analyst, technician or engineer did what, when, and in support of which user or service) and can provision the feedback mechanism (e.g., a point, a badge, a trophy). Automated and accelerated feedback loops provide the mechanism for IT personnel to understand how they compare with their peers in support of the IT organization's goals and objectives, while also providing intrinsic rewards (e.g., status and recognition) as motivation, rather than monetary rewards.

Gamification as an engagement accelerator is being evaluated by a small minority of organizations, which are exploring use cases specific to incident, problem, change and configuration management processes, as well as driving users to lower-cost IT contact channels, fostering peer-to-peer support, promoting user and IT staff training, and encouraging collaboration and mentoring. In polling conducted in December 2012, 32% of organizations described their view of gamification as an opportunity, but remained skeptical; 45% saw value, but would wait to implement; and 5% were already using game mechanics in their environments. Furthermore, ITSM vendors are beginning to provide layers of gamification within their core solutions, ranging from affirmation of task orders within structured workflows to real-time dashboard capabilities that provide performance metrics for individuals and teams. Ultimately, gamification will become part of well-designed software, as opposed to a feature or function.
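
The feedback loop described above can be sketched in a few lines of Python. This is an illustrative toy, not any vendor's scheme; the event types, point values and badge thresholds are assumptions:

```python
from collections import defaultdict

# Assumed point values for desired behaviors: weighted to reinforce
# knowledge sharing and peer support over raw ticket counts.
POINTS = {"incident_resolved": 10, "kb_article_published": 25, "peer_assist": 15}
BADGES = [(100, "Bronze Resolver"), (250, "Silver Mentor"), (500, "Gold Guru")]

scores = defaultdict(int)
awarded = defaultdict(set)

def record_event(technician: str, event_type: str) -> None:
    """Feedback loop: convert a context event (who did what) into
    points and, when a threshold is crossed, provision a badge."""
    scores[technician] += POINTS[event_type]
    for threshold, badge in BADGES:
        if scores[technician] >= threshold and badge not in awarded[technician]:
            awarded[technician].add(badge)
            print(f"{technician} earned the '{badge}' badge!")

for _ in range(8):
    record_event("alice", "incident_resolved")
record_event("alice", "kb_article_published")  # 105 points -> Bronze
```

The design choice worth noting is that the rewards are intrinsic markers of status and achievement driven by real activity data, in line with the analysis above, rather than monetary prizes bolted onto a manual contest.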
User Advice: IT organizations should not look to gamification to resolve issues stemming from a lack of standardized processes or low business user confidence. Gamification can provide a powerful mechanism to motivate behavior, but organizations must first have a clear understanding of the behaviors they are trying to change, modify or reinforce. IT organizations considering gamification should start with small, simple game mechanics and involve IT staff in gamification efforts to leverage their knowledge and gain participation. Rewards should be intrinsic and spread across multiple IT organizational areas to sustain attention and engagement.

Business Impact: During any transformation effort, 80% of the work involves persuading people to change the way they do their jobs and making the new practices stick. Gamification can create a motivational environment if clear performance expectations are set so that performance can be appropriately rewarded and corrected. Because gamification fosters higher levels of IT staff and business user engagement in desired behaviors, IT organizations can more efficiently and effectively deliver solutions to the business, reduce onboarding cycles, increase skill levels, promote productivity and boost morale without significantly overhauling or modifying IT operations processes. IT operations gamification can also serve as a pilot for larger business efforts, providing useful test data for future initiatives.

Benefit Rating: Moderate

Market Penetration: 5% to 20% of target audience

Maturity: Emerging

Sample Vendors: Axios Systems; Badgeville; Bunchball; Capita; CA Technologies; ServiceNow

Recommended Reading: "Engaging Your IT Service Desk With Gamification" "Driving Engagement of Social IT Support With Gamification" "Improving IT Service Desk Performance With Gamification"

Software License Optimization Tools


Analysis By: Patricia Adams

Definition: Correlating software license entitlement with installed software is not an easy task. To address this market niche, vendors specializing in software license optimization provide a level of detail about product use rights, vendor stock-keeping units (SKUs) and contract terms sufficient to determine license position. Because some complex licenses are based on the platform (such as cores, processors and virtualization policies), it is also necessary to know hardware attributes. Software license optimization tools are a supplement to an IT asset management (ITAM) program.

Position and Adoption Speed Justification: The software license optimization vendors that specialize in license management are beginning to make inroads into large organizations that may
have implemented ITAM programs but are still not getting the data they need to ensure compliance. This level of detailed data is often not found in an ITAM repository. As a result, these organizations are looking to a limited group of specialty vendors to assist with license optimization and entitlement management, especially for complex server-based license types that are not easily counted, such as those from IBM, Oracle, Symantec and SAP. As OEM vendors move toward licensing models that support virtualization and software in public clouds, complexity will only increase, and site licenses or enterprise licenses will not always be the most cost-effective option for customers. These tools offer some relief to organizations suffering from an extensive number of audits by vendors that are evolving their license models to maximize revenue or to respond to new delivery channels (such as Adobe and Microsoft), or whose models make it complex to count and track installs for software with reassignment rights. As pirated or "cracked" software becomes prevalent on the resale market, it becomes even more critical for organizations to have tight controls, processes and tools in place to accurately understand their license positions.

User Advice: To achieve the fastest ROI from a software license optimization tool and gain visibility into all software, organizations should ensure that the discovered inventory data is accurate and that sources are reliable. The success of the software license optimization tool is premised on the accuracy of this data. If an existing ITAM repository tool or configuration management database (CMDB) is in place, ensure that out-of-the-box integrations are available with the software license optimization tool. To ensure this supplementary toolset is successful, integrate it with your existing ITAM or CMDB tool so that discovery data can be seamlessly exchanged and normalized against other discovery sources. Gartner recommends selecting a software license optimization vendor based on its ability to support licensing models across the entire software portfolio, not just the vendors that are problems today. The vendor's software SKU/product identification library will also require constant updating as software vendors are acquired or release new versions. Without strong, foundational discovery information, these tools will not be accurate, so set a high standard for data quality.

Business Impact: Regardless of whether an organization has an existing ITAM repository, it will likely benefit from the detailed information in a software license optimization tool. Gartner expects that this market will continue as a subset of the overarching ITAM market, and acquisitions will likely occur as enterprise-class vendors look to plug holes in their product functionality.

Benefit Rating: Moderate

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Sample Vendors: Aspera; Eracent; Flexera Software

Recommended Reading: "Software License Optimization Vendor Overview" "IT Asset Management MarketScope 2013"
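To make the core calculation concrete, the sketch below reconciles discovered installs against entitlements for one core-based license metric. The records, the per-core factor and the single-product scope are illustrative assumptions; real tools also apply product use rights, SKU libraries and virtualization policies.

    # Minimal sketch of an effective license position (ELP) calculation,
    # assuming hypothetical inventory and entitlement data.
    installs = [                      # discovered inventory: (product, host core count)
        ("dbms_enterprise", 16),
        ("dbms_enterprise", 8),
    ]
    entitlements = {"dbms_enterprise": 20}   # licenses owned, in core-license units
    core_factor = {"dbms_enterprise": 0.5}   # licenses required per physical core

    consumed = sum(cores * core_factor[product] for product, cores in installs)
    position = entitlements["dbms_enterprise"] - consumed
    print("consumed", consumed, "licenses; position", position)
    # A negative position flags a compliance gap before a vendor audit does.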


IT Operations Analytics
Analysis By: Will Cappelli; Colin Fletcher; Jonah Kowall

Definition: IT operations analytics (ITOA) technologies are primarily used to discover complex patterns in high volumes of often "noisy" IT system availability and performance data. Providing a real inference capability not generally found in traditional tools, ITOA uses the coordinated deployment of five capabilities: complex operations event processing (COEP), statistical pattern discovery and recognition (SPDR), unstructured text indexing, search and inference (UTISI), topological analysis (TA), and multidimensional database search and analysis (MDSA).

Position and Adoption Speed Justification: The amount of data available to IT operations teams for performance and availability management has increased by an order of magnitude over the last five years. This is attributable both to an increase in the number of data types that can be monitored and to the number of devices and logical elements that require monitoring. At the same time, increased demand for system adaptability, and for systems capable of computations that are essentially resource-consumptive (NP-complete), means systems must have a complex internal structure; i.e., they must be composed of a large number of moving parts that move in relative independence from one another. This, however, means that it is not possible to infer the behavior of the whole from the behavior of any individual part. This combination of increasing data volume, variety and velocity and increasing system complexity is driving the demand for well-defined and segregated ITOA platforms and services.

ITOA systems tend to be used by IT operations teams for five different purposes. First, the models, structures or patterns are regarded as descriptions (topological, mathematical or verbal, depending on the particular analytics technology deployed) of the IT infrastructure or application stack being monitored. This description is then used to correct or extend the outputs of other discovery-oriented tools to improve the fidelity of information used in operational tasks (e.g., service dependency maps, application runtime architecture topologies and network topologies). Second, and to date this has been the most successful use case, the models, structures and pattern descriptions help the user pinpoint fine-grained and previously unknown root causes of overall system behavior pathologies. Third, when multiple root causes are known, the selected analytics system's output is used to determine and rank their relative impact, so that resources can be devoted to correcting the fault in the most time-efficient and cost-effective way possible. Fourth, and this is probably the most widely publicized use case, since the selected output is effectively a "scientific hypothesis" describing the mechanisms that cause system behavior, it can be used to predict future system states and the impact of those states on performance. Fifth, as a result of the analytics system's ability to formulate plans for action, it can determine how to resolve problems or, at least, direct the results of its inferences to the most appropriate individuals or communities in the enterprise for problem resolution.

ITOA platforms have their counterparts in the business intelligence (BI) and security information and event management (SIEM) markets. Gartner does expect some technology sharing and even convergence with BI and SIEM technologies, particularly the latter.
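Of the five capabilities, SPDR is the easiest to illustrate. The sketch below flags statistically unusual samples in a noisy latency stream using a rolling z-score; the window, threshold and synthetic data are illustrative assumptions, and commercial ITOA platforms combine such detection with the other four capabilities.

    # Minimal sketch of statistical pattern discovery over noisy metric data.
    import statistics

    def anomalies(samples, window=20, threshold=3.0):
        """Flag samples that deviate sharply from the recent rolling baseline."""
        flagged = []
        for i in range(window, len(samples)):
            recent = samples[i - window:i]
            mu = statistics.mean(recent)
            sigma = statistics.pstdev(recent)
            if sigma and abs(samples[i] - mu) / sigma > threshold:
                flagged.append((i, samples[i]))
        return flagged

    # Synthetic latency stream (ms) with one injected spike at the end.
    latency_ms = [100 + (i % 7) for i in range(60)] + [480]
    print(anomalies(latency_ms))  # -> [(60, 480)]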


User Advice: Users at ITScore Level 3 or above in IT operations maturity should investigate where and how they can use ITOA tools to run their operations more efficiently, provide detailed trend analysis of issues, and deliver highly visible and improved customer service. These systems may take the form of stand-alone platforms, independent of technology or process domain, or they may be tightly coupled with a specific technology domain (e.g., applications) or process (e.g., configuration management). It is also important to keep in mind that most of the tools available today take a siloed approach to ITOA, reflected in the fact that most support only one, two or three of the five ITOA capabilities. IT operations teams can use these tools to make adjustments and improvements to the IT operational services they deliver to their business customers. For example, they can delay the acquisition of hardware by showing the cost of unused capacity, or help consolidate and rationalize applications based on utilization and cost data. The ability of ITOA tools to integrate data from multiple vendor sources and process large amounts of real-time data is improving but is still limited. However, the ITOA tools that have emerged from specific IT operations areas have the potential to extend their capabilities more broadly. Most of these tools rely on manual intervention to identify the data sources, understand the business problem they are trying to solve, and build expertise in the tools for interpreting events and providing automated actions or recommendations. Investments in these tools are likely to be disruptive for customers, particularly as newer, innovative vendors get acquired. This means that the product must have significant value for the customer today to mitigate the risk of acquisition and subsequent disruptions to product advancements or changes to product strategy. A critical requirement for choosing a tool is understanding the data sources with which it can integrate, the amount of manual effort required to run analytics and the training needs of the IT staff. Users should also be aware that these tools have solved, and will continue to emerge to solve, specific IT operations problems, e.g., workload automation analytics, application performance monitoring (APM), performance and capacity analytics, and root cause analysis (RCA).

Business Impact: ITOA tools will provide CIOs and senior IT operations managers with a source of operational and business data. The importance of this source will increase dramatically over the next five years, as more and more business processes become essentially digitized and, as a consequence, generate data in their execution that is directly captured, aggregated and analyzed by ITOA platforms.

Benefit Rating: High

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Sample Vendors: AccelOps; Appnomic Systems; Apptio; Bay Dynamics; BMC; Evolven; Hagrid Solutions; HP; IBM; Loggly; Moogsoft; Nastel Technologies; Netuitive; OpTier; Prelert; Savision; SAS; SL; Splunk; Sumerian; Sumo Logic; Teleran; Terma Labs; VMware; Xpolog


"How to Fit Technologies to Use Cases for ITOA Systems" "Will IT Operations Analytics Platforms Replace APM Suites?" "IT Operations Analytics Technology Requires Planning and Training"

IT Service Support Management Tools


Analysis By: Jarod Greene; Jeffrey M. Brooks

Definition: IT service support management (ITSSM) tools offer tightly integrated processes and functions that correlate with the activities of the broader IT support organization. ITSSM tools can leverage a business view of IT services, enabling the IT support organization to better prioritize and quickly resolve or escalate issues and problems, and improve root cause isolation.

Position and Adoption Speed Justification: Having evolved from the IT service desk market, ITSSM tools are a segment of the IT market that IT service support organizations can adopt to support the business. ITSSM tool functionality extends beyond traditional IT service desk tools to address the changing dynamics of IT service support. The ITSSM tool market is still focused on IT service support, but with more emphasis on improving root cause isolation and providing higher levels of business user satisfaction. Using this business view, IT support organizations can manage incidents, problems and service requests throughout their life cycles at more efficient and effective rates. ITSSM tools enable organizations to automate the workflow of infrastructure and operations (I&O) processes familiar from frameworks such as ITIL. Processes included in ITSSM tools are incident, problem, change, release governance and request management. These tools provide modules that enable business end users to find knowledge to support or resolve computing-related issues, or to request an IT service via an IT self-service module. At higher maturity levels, IT organizations are deriving greater value from solutions that offer tighter integration of functions across process modules (e.g., incident, problem and change management), where those processes are implemented and well-established. Because only a few organizations have reached ITScore for I&O (ITSIO) Maturity Level 3, the adoption of ITSSM tools will be slow.

User Advice: IT organizations that plan to reach ITSIO Maturity Level 3, 4 or 5 during the next three years should replace IT service desk tools with ITSSM tools, if they have not already done so. Organizations need to understand that the ITSSM tool market alters the stratification of tools common to the IT service desk market. Functional differentiators for tool selection in the IT service desk stack commonly include:

Out-of-the-box best practices
Ease of use (graphical user interfaces [GUIs], upgrades, configurations and process configurations)
Ease of integration with infrastructure components

ITSSM tools have foundational functionality that has been available for some time but has not been fully exploited to integrate people, process and technology perspectives.


assessments (see "ITScore for Infrastructure and Operations") will better position organizations to develop road maps to address resource gaps during the completion of these integrations. In evaluating ITSSM tools, reference the organization's I&O road map to know how and when functional modules will be implemented and integrated, because the acquisition of an ITSSM solution to leverage only incident and change management modules will result in higher costs (for licenses, maintenance and professional services) and lower utilization. ITSSM tools are differentiated from IT service desk tools, because they add additional functionality:

A fully functional mobile device interface, including incident update and resolution, authorization approvals, and access to reporting and metrics, as well as awareness of the mobile device's location and camera.
End-to-end visualization of hierarchical and peer-to-peer relationships of the configuration items that deliver IT services.
Social capabilities that enable collaboration around a shared purpose.
Advanced reporting capabilities focused on business value beyond the traditional productivity measures. Advanced reporting includes:

Specific reports tied to common critical success factors and key performance indicators (KPIs) for an IT service desk.
Multidimensional charts that show how related metrics affect each other.
A business value dashboard that shows the impact of critical success factors (financial or otherwise).

Process governance and reinforcement capabilities to ensure that desired behaviors and business outcomes are delivered by the teams using the ITSSM tools.

Business Impact: In the past, organizations that needed IT service desk tools could choose from several vendors supplying robust suites of products that promised deep integration of functional process modules. However, most organizations now find it difficult to move beyond simple IT service desk functionality for incident, problem and change management. The end result is that organizations implement service management processes largely manually, without product integration. ITSSM tools promise to improve this by capitalizing on the ITIL concept that processes integrate by exchanging information. The automated exchange of information among processes within the tool enables users to be more efficient (a minimal sketch of this record-linking idea follows the list below). The ITSSM tool market presents IT organizations with a set of solutions that focuses on the integration of process modules well beyond the simple IT service desk. The market looks at tools that provide integrated functionality across several types of IT processes and functions:

Service desk: incident management, problem management, request management
Engineering/administration: change management, configuration management, release governance
End user: self-service, request management
Administrative: SLA management, reporting, inventory/configuration repository
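The sketch below illustrates, under assumed record shapes, how integrated process modules exchange information: an incident escalates to a problem, and the problem raises a change, with cross-references preserved for traceability. It is illustrative only, not any vendor's data model.

    # Minimal sketch of ITIL-style process integration via linked records.
    from dataclasses import dataclass, field

    @dataclass
    class Record:
        rid: str                      # e.g., an INC/PRB/CHG identifier
        state: str = "open"
        links: list = field(default_factory=list)

    def escalate(incident, problem):
        """Root cause unknown: hand the incident to problem management."""
        incident.links.append(problem.rid)
        problem.links.append(incident.rid)

    def remediate(problem, change):
        """Known error needs a fix: raise a change with full traceability."""
        problem.links.append(change.rid)
        change.links.append(problem.rid)

    inc, prb, chg = Record("INC1001"), Record("PRB2001"), Record("CHG3001")
    escalate(inc, prb)
    remediate(prb, chg)
    print(inc.links, prb.links, chg.links)  # INC -> PRB -> CHG audit trail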

Benefit Rating: Moderate

Market Penetration: 5% to 20% of target audience

Maturity: Adolescent

Sample Vendors: Axios Systems; BMC Software; CA Technologies; Cherwell Software; EasyVista; FrontRange; Hornbill; HP; IBM; LANDesk; ServiceNow

Recommended Reading: "Criteria for Developing the 2013 Magic Quadrant for IT Service Support Management Tools" "Magic Quadrant for IT Service Support Management Tools" "IT Service Desk Tool Acquisitions Must Be Based on Infrastructure and Operations Maturity" "How to Decide Whether SaaS ITSSM Tools Make Sense for Your Organization" "How to Decipher IT Service Support Management Tool Pricing and Packaging"

Social IT Management
Analysis By: Jarod Greene

Definition: Social IT management (ITM) involves the use of social collaboration processes and tools in support of infrastructure and operations (I&O) objectives. Common social ITM use cases include the use of social communities to foster peer-to-peer (P2P) IT service support, better capture of out-of-band collaboration among IT staff members and the use of social media to promote the value of the IT organization to the business.

Position and Adoption Speed Justification: I&O organizations are demonstrating an increasingly strong interest in applying social collaboration processes and tools in support of their objectives. Fifty-four percent of I&O organizations are either well into social ITM initiatives or planning one during the next six months. Social collaboration provides opportunities for pooling contributions, locating expertise, cultivating interest, leveraging relationships and coordinating rapidly, which I&O organizations recognize can provide significant benefits. Thus far, the challenge for I&O organizations is to develop a more mature social ITM strategy that helps them gather and identify resources and KPIs, which will reinforce the value of the program to the IT organization and the business users it supports.

User Advice: Social ITM initiatives should begin with a dialogue with the business to determine whether the program will be meaningful and relevant to achieving business objectives. Through this dialogue, I&O organizations can also understand what social collaboration efforts have been initiated and determine business communication and collaboration trends and work patterns.


If a social strategy is already in place, I&O can reinforce those efforts, while leveraging established policies, processes, practices and technologies. One of the primary reasons social ITM initiatives fail is uncoordinated social strategies; a dialogue with the business works to mitigate that risk. I&O organizations must then pinpoint social IT use cases. The following are common, but should not limit the scope of potential options (a brief sketch of the first follows the list):

P2P IT service support: Social collaboration tools provide better and faster connections in pursuit of information, to better leverage internal systems and to solve frequent technology-related issues. Crowdsourcing via social software provides the means for users to become more productive in using business technologies and gives the IT support team opportunities to manage the community, to better understand what's important to users and how best to resolve issues.
Collaborative operations management: Social collaboration tools facilitate the capture of information among IT staff that would not typically be captured via traditional communication methods. The unstructured processes and activities that occur in many IT operations organizations represent a potentially rich repository of organizational knowledge that has been difficult to collect using traditional IT service management (ITSM) products. This capability will become increasingly important in the emerging DevOps arena, as development and operations begin to work more closely to coordinate planning and build, test and release activities.
Social business relationship management (BRM): Social software can provide the means for the IT organization to foster a two-way dialogue with the business. Typically, IT organizations unidirectionally inform the business of planned and unplanned outages, releases and new services via email or through an intranet portal. This type of communication is often disregarded or ignored. Social media enables dynamic communications whereby users can generate conversations within these notifications to understand the specific impact of the message in a forum open to the wider community of business end users. End users can follow the IT organization's announcements and services, as well as the configuration items that are important to them, through social media tools.
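As a simple illustration of the P2P support use case, the sketch below harvests accepted community answers into draft knowledge articles, turning ad hoc interactions into reusable assets. The thread fields are hypothetical; real social ITM suites also track sentiment, usage patterns and influencers.

    # Minimal sketch: promote accepted community answers into knowledge articles.
    def promote_accepted_answers(threads):
        """Yield knowledge-article drafts from threads with an accepted answer."""
        for t in threads:
            if t.get("accepted_answer"):
                yield {"title": t["question"],
                       "body": t["accepted_answer"],
                       "source": "community thread %d" % t["id"]}

    threads = [{"id": 42,
                "question": "VPN drops on wake from sleep",
                "accepted_answer": "Disable IPv6 on the adapter, then reconnect."}]
    for article in promote_accepted_answers(threads):
        print(article["title"], "->", article["source"])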

A social IT strategy should be aligned with the goals, objectives and values of the organization. Failure to develop this strategy will cause end users and IT staff to create their own, often-conflicting social collaboration tools and processes, as well as solutions that lack the design and delivery principles associated with mass collaboration. Mass collaboration is what differentiates social media from other collaboration technologies and practices (such as email, shared directories, knowledge management systems and Web content management systems), so it is important to design social ITM initiatives around its principles. Applying these principles to social ITM initiatives will increase the chances of maintaining an effective mass collaboration environment. IT organizations should plan on working with management and users to create social ITM strategies with a heavy emphasis on people and processes, before identifying technical requirements. Depending on which use case is of the highest priority, organizations should identify tools and solutions that provide the required functionality, rather than letting tools dictate the approach to social media.
Popular social collaboration tools are abundant, so I&O organizations planning social ITM initiatives must match solutions to use cases, identify what is technically feasible and estimate costs for deployment. Many ITSM vendors are beginning to layer social collaboration on top of their toolsets. However, additional options exist: leveraging social collaboration tools already in use by the business, or using public social collaboration tools, with the goal of engaging users in the social communities they most frequently visit. Success criteria for social ITM initiatives must make everything that matters measurable (or visible), and business cases need to lay out the benefits that social collaboration provides to the organization in terms of improving the ability to manage scale, improving employee morale, and increasing engagement and retention. Because most ITSM vendor approaches to social ITM are immature, the inability to measure and manage context (user sentiment, usage patterns, influencers, etc.) has been a deterrent to many initiatives. Third-party social analytics tools can bridge this gap, as well as identify stealth IT or inefficient IT operations processes that might affect the delivery of IT services.

Business Impact: Applied appropriately, social ITM presents the opportunity for the I&O organization to achieve its goals in supporting the business, including demonstrating higher levels of business value. Through improved collaboration, social ITM can help increase business user productivity, by enabling users to surface solutions that the IT organization is not aware of, and by better promoting how IT services can and should be used. Social ITM has also proved effective in removing some of the silos that exist between IT operations domains, and in fostering collaboration on a common platform around a shared purpose. It can convert ad hoc interactions and other forms of out-of-band communication among IT operations personnel into reusable assets, which can be leveraged in support of unstructured IT operations work patterns and improve the I&O organization's ability to support a highly available infrastructure.

Benefit Rating: Low

Market Penetration: 1% to 5% of target audience

Maturity: Adolescent

Sample Vendors: Axios Systems; BMC Software; CA Technologies; Cherwell Software; Hornbill; ITInvolve; ServiceNow

Recommended Reading: "How to Get Started With Social IT Management" "Driving Engagement of Social IT Support With Gamification"

DevOps
Analysis By: Ronni J. Colville; Jim Duggan


Definition: The DevOps movement was born of the need to improve IT service delivery agility, and it found initial traction within many large public cloud services providers. Underpinning DevOps is the philosophy found in the Agile Manifesto, which emphasizes people (and culture) and seeks to improve collaboration between operations and development teams. DevOps implementers also attempt to make better use of technology, especially automation tools that can leverage an increasingly programmable and dynamic infrastructure from a life cycle perspective.

Position and Adoption Speed Justification: DevOps doesn't have a concrete set of mandates or standards, or a known framework (e.g., ITIL or Capability Maturity Model Integration [CMMI]), making it subject to a more liberal interpretation. It is primarily associated with continuous integration and delivery as a means of providing linkages across the application life cycle, from development to production, as it relates to the IT services being delivered. This can accelerate adoption, as well as potentially inhibit it. DevOps concepts are becoming more widespread, spreading within the cloud and in more traditional enterprise environments (the latter often in relation to customer-facing applications). The creation of DevOps (or environment management) teams is bringing development and operations staff together to manage, more consistently, an end-to-end view of an application or IT service. For some IT organizations, streamlining release deployments from development through to production is the first area of attention, as this is where some of the most acute service delivery pain exists.

Practices associated with DevOps include the creation of a common process for the developer and operations teams; the formation of teams to manage end-to-end provisioning and practices for promotion and release; a focus on high fidelity between the stage environments (development, test, staging and production); standard and automated practices for build or integration; higher levels of test automation and test coverage; automation of manual process steps and informal scripts; and more comprehensive simulation of production conditions throughout the application life cycle in the release process. Tools are emerging to replace custom scripting with consistent application or service models, improving deployment success through more predictable configurations. The adoption of these tools is not usually associated with development or production support staff per se, but rather with groups that straddle development and production, and is typically instantiated to address specific Web applications with a need for increased release velocity. Monitoring and other production-related tools are starting to come into focus to provide, in essence, closed-loop feedback capabilities. To facilitate and improve testing and continuous integration, tools that offer monitoring specific to testers and operations staff are also beginning to emerge. Another aspect of DevOps adoption that remains a significant challenge is the requirement for pluggability. Toolchains are critical to DevOps, because they enable the integration of function-specific automation from one part of the life cycle to another. Due to the lack of formality around how to implement or adopt DevOps, adoption is somewhat haphazard. Many aspire to reach the promised fluidity and agility, but few have done so.
While most of the initial adoption originated with developers, operations staff (application support, release managers and environment managers) have also begun various projects tagged as "DevOps." In addition to the absence of a specific playbook, understanding what types of applications are good candidates has also been a challenge.
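The continuous-delivery idea at the heart of DevOps can be reduced to a small sketch: every release, however small, moves through the same automated stages, so promotion is repeatable rather than scripted ad hoc. The stage functions below are placeholders under assumed names, not a real toolchain.

    # Minimal sketch of a continuous-delivery pipeline; stages are stubs.
    def build(artifact):
        print("build", artifact)
        return True

    def test(artifact):
        print("test", artifact)
        return True

    def deploy(artifact, env):
        print("deploy", artifact, "->", env)
        return True

    PIPELINE = [build, test,
                lambda a: deploy(a, "staging"),
                lambda a: deploy(a, "production")]

    def release(artifact):
        """Stop at the first failing stage; every run follows the same path."""
        return all(stage(artifact) for stage in PIPELINE)

    release("webapp-1.4.2")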
IT organizations that are leveraging pace-layering techniques can stratify and categorize applications, and find applications (and development and operations teams) that could be good targets for adoption. We expect this bifurcation (development focus and operations focus) to continue for the next two years; but, as more applications or IT services become agile-based or customer-focused, the adoption of DevOps (and associated tools) will quickly follow. DevOps does not preclude the use of other frameworks or methodologies, such as ITIL, and the potential exists to incorporate some of these best-practice approaches to enhance overall service delivery. Enterprises adopting a DevOps approach often begin with one process that can span development and operations. It is imperative to understand the business requirements for time to market and to evolve accordingly, using DevOps and other frameworks where appropriate. Release management, while not mature in adoption, is a pivotal starting point for many DevOps projects.

User Advice: DevOps hype is beginning to peak among the tool vendors, with the term being applied aggressively and claims outrunning demonstrated capabilities. Many vendors are adapting their existing portfolios and branding them DevOps to gain attention, and some vendors are acquiring smaller point solutions specifically developed for DevOps to boost their portfolios. We expect this to continue. IT organizations must establish key criteria that differentiate DevOps traits (strong toolchain integration, workflow, people and application orientation) from those of traditional management tools. Successful adoption of this approach will not be achieved by a tool purchase; it is contingent on a sometimes difficult shift in organizational philosophy. Because DevOps is not prescriptive, it will likely result in a variety of manifestations, making it more difficult to know whether one is actually "doing" DevOps. However, the lack of a formal process framework should not prevent IT organizations from developing their own repeatable processes to give them agility and control. Because DevOps is emerging in definition and practice, IT organizations should approach it as a set of guiding principles, not as process dogma. Select a project involving development and operations teams to test the fit of a DevOps-based approach in your enterprise; often, this is aligned with one application environment. If adopted, consider expanding DevOps to incorporate technical architecture. At a minimum, examine activities along the existing developer-to-operations continuum, and look for opportunities where the adoption of more-agile communication processes and patterns can improve production deployments.

Business Impact: DevOps is focused on improving business outcomes via the adoption of continuous improvement and incremental release principles taken from agile methodologies. While agility often equates to speed (and faster time to market), there is a somewhat paradoxical impact as well: smaller, more frequent updates to production can work to improve overall stability and control, thus reducing risk.

Benefit Rating: Transformational

Market Penetration: 5% to 20% of target audience

Maturity: Adolescent


Sample Vendors: Boundary; CFEngine; Circonus; Opscode; Puppet Labs; SaltStack

Recommended Reading: "Deconstructing DevOps" "DevOps Toolchains Work to Deliver Integratable IT Process Management" "Leveraging DevOps and Other Process Frameworks Requires Significant Investment in People and Process" "DevOps and Monitoring: New Tools for New Environments" "Catalysts Signal the Growth of DevOps" "Application Release Automation Is a Key to DevOps"

Service Billing
Analysis By: Milind Govekar

Definition: IT operations service-billing tools capture detailed usage records (similar to charging data records [CDRs] in the telecommunications industry); apply tariffs, prices and discounts; manage the process of rendering and delivering bills; and apply bill adjustments, with a view to maintaining customer accounts. They provide cost and pricing information for services, with some service cost and price modeling capabilities.

Position and Adoption Speed Justification: Service-billing tools differ from IT chargeback or IT financial management tools in that they use resource usage data to calculate the costs for chargeback and aggregate it for a service. Alternatively, they may offer service-pricing options (such as per employee or per transaction) independent of resource usage. When pricing is based on usage, these tools can gather resource-based data across various infrastructure components, including servers, networks, storage, databases and applications. Service-billing tools perform proportional allocation based on the amount of resources (including virtualized and cloud-based) allocated to and used by the service, for accounting and chargeback purposes. Service-billing costs are based on service definitions and include infrastructure and other resource (such as people) costs. As a result, these tools usually integrate with IT financial management tools and IT chargeback tools. They will be developed to work with service governors to set a billing policy that uses cost as a parameter, and to ensure that resource allocation is managed based on cost and service levels. Due to their business imperative, these tools have been deployed among service providers, in cloud environments and by IT organizations that use or deploy applications such as e-commerce applications.

User Advice: Although COTS tools exist in the market, many service providers (including cloud service providers) have custom-developed these tools for their own environments.


Service-billing tools take a life cycle approach to services, optimize service costs through underlying resource usage optimization across the entire life cycle, and provide granular cost allocation information, mainly for service pricing. Vendors and solutions in this area are being challenged by IT financial management vendors and solutions as the latter evolve to develop these capabilities. Thus, these tools have emerged from startups, as well as from IT financial management, asset management, e-commerce, telco billing and software stack vendors.

Business Impact: These tools are critical to running IT as a business by determining the financial effect of sharing IT and other resources in the context of services. They also feed billing data back to IT financial management tools and chargeback tools to help businesses understand the costs of IT and budget appropriately.

Benefit Rating: Moderate

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Sample Vendors: Apptio; Aria Systems; ComSci; IBM Tivoli; MetraTech; Transverse; VMware; Zuora
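The rating step these tools perform can be sketched in a few lines: usage records are totaled, a tariff is applied, and a discount adjusts the bill. The rate, discount rule and record shapes below are illustrative assumptions; production billing adds adjustments, account management and bill rendering.

    # Minimal sketch of rating usage records into a bill.
    RATE_PER_VCPU_HOUR = 0.06
    DISCOUNT_OVER = (1000, 0.10)   # 10% off usage beyond 1,000 vCPU-hours

    def rate(usage_records):
        """Return (gross, discount, net) charges for a set of usage records."""
        total_hours = sum(r["vcpu_hours"] for r in usage_records)
        threshold, pct = DISCOUNT_OVER
        gross = total_hours * RATE_PER_VCPU_HOUR
        discount = max(total_hours - threshold, 0) * RATE_PER_VCPU_HOUR * pct
        return gross, discount, gross - discount

    records = [{"service": "crm", "vcpu_hours": 800},
               {"service": "crm", "vcpu_hours": 400}]
    gross, disc, net = rate(records)
    print("gross %.2f, discount %.2f, net %.2f" % (gross, disc, net))
    # -> gross 72.00, discount 1.20, net 70.80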

Application Release Automation


Analysis By: Ronni J. Colville; Colin Fletcher

Definition: Application release automation (ARA) tools provide automation that enables best practices in moving related artifacts, applications, configurations and even data together across the application life cycle. To do so, ARA tools combine automation, environment modeling and workflow management capabilities to simultaneously improve the quality and velocity of application releases (a minimal sketch of the environment modeling idea follows the list below). These tools are a key part of enabling the DevOps goal of achieving continuous delivery with large numbers of rapid, small releases.

Position and Adoption Speed Justification: As with many processes, IT organizations are often very fragmented in their approach to application releases. In some cases, the process is led by operations, although it can also be (and increasingly is) managed from the development side of the organization, or run as a joint venture of the two groups, as in DevOps. This predictably results in fragmented tool acquisition, with different buyers looking at different tools, rather than comprehensive solutions, to solve similar challenges. The intent of these tools is fivefold:

Speed the time to market associated with agile development by reducing the time it takes to deploy and configure across all environments.
Eliminate the need to build and maintain custom scripts for application deployments and updates by standardizing and documenting the deployment processes across various environments.

Reduce configuration errors and downtime associated with individual releases within a single environment or across multiple environments.
Coordinate and automate releases among multiple people, groups and process steps that are typically maintained manually in spreadsheets, email or both.
Move the skill base from expensive, specialized script programmers to less costly resources.
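The model-driven approach described above can be sketched as one declarative release model applied to per-environment data, replacing per-environment scripts. The model fields and environment parameters are illustrative, not any vendor's schema.

    # Minimal sketch of model-driven application release automation.
    RELEASE_MODEL = {
        "artifact": "orders-api-2.3.0.jar",
        "steps": ["stop_service", "copy_artifact", "apply_config", "start_service"],
    }
    ENVIRONMENTS = {
        "test":       {"hosts": ["test01"]},
        "staging":    {"hosts": ["stg01", "stg02"]},
        "production": {"hosts": ["prd01", "prd02"]},
    }

    def deploy(env_name):
        """Run the same modeled steps everywhere; only environment data varies."""
        for host in ENVIRONMENTS[env_name]["hosts"]:
            for step in RELEASE_MODEL["steps"]:
                print("%s/%s: %s (%s)" % (env_name, host, step,
                                          RELEASE_MODEL["artifact"]))

    deploy("staging")  # promotion to production reuses the identical model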

Adoption and utilization of these tools are still emerging, but they continue to attract a significant amount of attention from large enterprises and enterprises with Web-facing applications. Although current use of the tools is typically limited to a small percentage of all applications in an enterprise application portfolio, this percentage is expected to increase in line with the continued adoption growth of agile development and Web and cloud application architectures. The largest competitors of these tools are in-house scripts and manual processes, which are ripe targets as cost and competitive pressures on IT organizations continue to increase. Additionally, the popularity of and market mind share around DevOps continue to grow and bring significant attention to improving release and deployment, or continuous deployment. This market momentum has also resulted in recent, significant acquisition and development activity among vendors, which suggests continued near- and long-term investment in the market. This focus also drives the appeal of treating application deployment like application code. Some organizations are modeling their application deployments after large cloud providers and leveraging tools that enable application support teams and system engineers to develop automated scripts, using newly commercial tools that have emerged from the open-source community.

User Advice: Keep in mind that processes for ARA are not, and are unlikely to become, highly standardized. Assess your application life cycle management maturity, specifically around your deployment processes, and seek a tool or tools that can help automate the implementation of these processes across multiple development and operations teams. Organizational and political issues remain significant and can't be addressed solely by a tool purchase. Additionally, the better you understand your current workflows for application release (especially if releases are done manually), the easier the transition will be to an automated workflow, which will increase time to value for the tools. Understand and use your specific requirements for applications and platforms to narrow the scope of evaluation targets, and to determine whether one tool or multiple tools from one or more vendors will be required. Although most vendors provide a combination of automation, environment modeling and workflow management, the strengths, scope (application, platform and version support) and packaging of these respective capabilities vary significantly across vendors. While we expect this gap to continue to shrink, it is important to understand current support and future road maps. Include integrations with existing development and IT operations management (ITOM) tooling in your product evaluation criteria, with an eye toward building out the niche use of these tools into your broader provisioning and configuration environment. Organizations that want to extend the application life cycle beyond development to production environments using a consistent application model should evaluate development tools with ARA features, or ARA point solutions that provide out-of-the-box integration with development tools.
Additionally, evaluate integration with existing or planned cloud infrastructures or CMP tools for ongoing application release automation capability.

Business Impact: To date, ARA tools have most significantly and positively affected business processes and services that must evolve rapidly to remain competitive and that often rely on agile-developed, Web-based applications. That said, ARA tools yield agility, cost and risk mitigation benefits for most application types by improving the quality (by reducing human error) and velocity of releases through increased consistency and standardization. Business agility is improved by reducing the time it takes to deploy and configure applications across multiple environments, thereby speeding the business's ability to react to market changes. Cost savings are realized through a significant reduction in the manual interactions required of often high-skill/high-cost staff. Risk is inherently mitigated by ARA tools' documentation and standardization of processes and configurations across multiple technology domains.

Benefit Rating: High

Market Penetration: 5% to 20% of target audience

Maturity: Adolescent

Sample Vendors: BMC Software; CA (Nolio); Electric Cloud; HP; IBM (UrbanCode); MidVision; Opscode; Puppet Labs; SaltStack; Serena Software; ServiceMesh; UC4 Software; VMware; XebiaLabs

Recommended Reading: "Cool Vendors in DevOps, 2013" "Know the Application Release Automation Vendor Landscape to Shortlist the Best Vendors for Your Organization" "Cool Vendors in DevOps, 2012" "Cool Vendors in Release Management, 2011" "From Development to Production: Integrating Change, Configuration and Release" "Application Release Automation Is a Key to DevOps" "Pursuing Smaller Infrastructure Releases" "DevOps Toolchains Work to Deliver Integratable IT Process Management" "Aligning Change to Configuration and Release Management" "How to Build a DevOps Release Team" "Best Practices in Change, Configuration and Release Management"

"Magic Quadrant for Application Life Cycle Management" "Are You Ready to Improve Release Velocity?"

At the Peak
IT Workload Automation Broker Tools
Analysis By: Biswajeet Mahapatra; Milind Govekar

Definition: IT workload automation broker (ITWAB) tools automate mixed workloads based on business policies, assigning and deassigning resources in an automated fashion to meet service-level objectives. These tools use architectural patterns that facilitate easy, standards-based integration to automate processing requirements across applications and infrastructure platforms, based on events, workloads, resources and schedules.

Position and Adoption Speed Justification: Some characteristics of ITWAB were defined in "IT Workload Automation Broker: Job Scheduler 2.0." ITWAB can emerge in vertical industry segments (such as insurance) where a set of standardized, risk model calculation processes is driven by a common definition of business policies. Alternatively, ITWAB is emerging in situations where decisions need to be made on the use and deployment of computing resources; for example, to ensure that processing workloads associated with business processes finish by a certain deadline. ITWAB may decide to use cloud-based computing resources, as needed, in addition to on-premises resources. Visibility, discovery and optimization of resource pools across the entire physical, virtual and cloud computing environment isn't yet possible; intermediate solutions based on targeted environments, such as server resource pools that use virtualization management tools, will emerge first. Some tools are integrated with configuration management databases (CMDBs) to maintain batch services, enabling better change and configuration management of the batch service and supporting reporting for compliance requirements. Integration with IT process automation (aka RBA) tools, data center automation tools and cloud computing management tools that provide end-to-end automation will also continue to evolve. These tools can also grow or shrink shared resource pools to manage workloads intelligently and dynamically. Furthermore, critical-path analysis capabilities are being adopted by many of these tools to identify jobs that may breach SLA requirements. Some of these tools use built-in or external workload automation analytics to manage workload optimization and performance. These tools will also start to develop improved automation governance capabilities from this year onward, giving IT operations better visibility of the various unmanaged scripts (shell, PHP, Windows, etc.) that exist in the IT environment. This will enable IT operations to move from an opportunistic automation environment to a systematic one.
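The deadline-driven brokering decision described above can be sketched as a small policy loop: if the projected finish time of the batch breaches the SLA deadline, the broker assigns additional (for example, cloud) capacity. The linear scaling model and the numbers are illustrative assumptions.

    # Minimal sketch of policy-based workload brokering against an SLA deadline.
    def broker(job_minutes, workers, deadline_minutes, max_workers=16):
        """Return the worker count needed to finish the batch by the deadline."""
        while workers < max_workers:
            projected = sum(job_minutes) / workers   # idealized linear scaling
            if projected <= deadline_minutes:
                break
            workers += 1   # burst: acquire another on-premises or cloud worker
        return workers

    jobs = [30, 45, 60, 90, 120]                     # tonight's batch (minutes)
    print(broker(jobs, workers=2, deadline_minutes=90))   # -> 4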

User Advice: Users should choose these tools instead of traditional job-scheduling tools when they need to manage their batch or non-real-time environment using policies. Users should be aware that not all of the desired ITWAB capabilities have been delivered yet. Tools that have developed automation capabilities, such as IT operations process automation (aka RBA), and/or are able to integrate with other IT operations tools should be used to implement end-to-end automation.

Business Impact: ITWAB tools will have a big impact on the dynamic management of batch SLAs, increasing batch throughput and decreasing planned downtime. They will play a role in end-to-end automation, and ITWAB tools will be required when implementing the service governor concept of real-time infrastructure (RTI).

Benefit Rating: High

Market Penetration: 20% to 50% of target audience

Maturity: Early mainstream

Sample Vendors: Advanced Systems Concepts; BMC Software; CA Technologies; Cisco (Tidal); Honico; IBM Tivoli; Orsyp; Redwood; Stonebranch; UC4 Software

Recommended Reading: "Magic Quadrant for Workload Automation" "IT Workload Automation Broker: Job Scheduler 2.0"

IT Financial Management Tools


Analysis By: Robert Naegle; Milind Govekar; Tapati Bandopadhyay

Definition: IT financial management (ITFM) tools are IT-owned and IT-managed tools that provide IT leaders with IT budgeting, project financial management, chargeback, cost optimization, performance metrics and benchmarking capabilities. ITFM tools provide the financial transparency around both cost and value needed to support strategic IT decision making, with dynamic reporting, robust analytics and multiple financial views.

Position and Adoption Speed Justification: ITFM tools (often referred to by vendors as IT business management tools) provide the means to manage the financial aspects of IT, or to "run IT like a business." These tools can collect cost-related data from a heterogeneous and complex IT environment, and can build cost models with cost allocation and reporting capabilities. Gartner has seen an increase in interest and in the adoption of these tools, mainly to support showback, chargeback and effective cost transparency. Interest in ITFM tools has emerged during the past five years as users have outgrown traditional spreadsheet approaches, and some of the current tools have evolved to provide end-to-end capabilities customized for IT finance and service functions.


The demand for these tools has grown due to increased interest in cost optimization, service-based costing, the increasing share of virtualization (shared infrastructure) in the production environment, interest in cloud computing service delivery models, and the need to provide greater IT cost, financial and value transparency. ITFM tools will continue to gain visibility and capability as the pressure increases on enterprise IT to run IT like a business. Furthermore, increased interest in cloud computing is putting additional pressure on IT to quickly justify external services sourcing and to provide transparency of costs, billing and chargeback information, further increasing demand for the tools. Most organizations are beginning to see the benefits of effective financial transparency and are using these tools to provide better budgeting, forecasting, cost optimization, performance metrics and benchmarking.

User Advice: Most IT organizations will need to transform themselves to become trusted service providers to the business. To do so, aligning IT operations to define and provide services (as opposed to managing technologies), understanding cost drivers in detail, and providing transparency of IT costs and value delivered will be key. Most corporate financial systems lack the granularity and flexibility IT operations require, while spreadsheets lack the required features, reporting and historical context. ITFM tools that are properly implemented and maintained are positioned to provide the business with improved cost transparency or showback in a multisourced (internal, outsourced, cloud) IT service environment; they also assist in allocating cost to the appropriate source and help establish cost as a key decision-making component. ITFM tools can help with this process, especially in showing where consumption drives higher or lower variable costs. As IT moves toward a shared-service delivery model and external sourcing in an increasingly complex computing environment, these tools will enable more responsible and accurate financial management of IT. However, users must be willing to invest in the processes and resources required, including dedicated IT financial management capabilities, to maximize the successful implementation of these tools.

Business Impact: ITFM tools mainly affect the IT organization's ability to provide cost transparency and perform accurate cost allocation, and they have an impact on the value of the services provided. When IT organizations move from being a cost center to a service mode of operations with a well-defined service catalog, it is imperative to associate cost with each service defined in the catalog, or at the portfolio level. Definition of the service and allocation of appropriate cost can happen only when proper costing methodologies are used, backed by effective ITFM tools. The tools enable the business and IT to manage and optimize the demand and supply of IT services. A major benefit of ITFM tools is that they enable enterprises to provide insights into IT costs and to fairly apportion IT service costs, if needed, based on differentiated levels of business unit service consumption. They also show how the IT organization contributes to business value.

Benefit Rating: High

Market Penetration: 1% to 5% of target audience

Maturity: Adolescent

Sample Vendors: Apptio; BMC Software; CloudCruiser; ComSci; HP; Nicus; UMT; VMware
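Consumption-based showback, the simplest of these cost models, can be sketched directly: a shared cost pool is apportioned to business units by measured usage. The figures and the single cost pool are illustrative assumptions; real ITFM tools layer many such models.

    # Minimal sketch of consumption-based showback.
    POOL = 90000.00                                    # monthly shared-cluster cost
    usage = {"sales": 450, "finance": 300, "hr": 150}  # VM-hours consumed

    total = sum(usage.values())
    showback = {bu: round(POOL * hours / total, 2) for bu, hours in usage.items()}
    print(showback)  # {'sales': 45000.0, 'finance': 30000.0, 'hr': 15000.0}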


Recommended Reading: "Best Practices in Implementing IT Financial Management Tools" "How to Use IT Financial Management to Validate I&O's Relevance to Business" "IT Financial Management Implementation Model Defines I&O Core Competencies" "Using IT Financial Management to Improve Business Outcomes" "IT Financial Management; CIO Desk Reference Chapter 23, Updated Q2 2012"

Cloud Management Platforms


Analysis By: Milind Govekar; Ronni J. Colville; Donna Scott

Definition: CMP tools provide specific functionality that addresses three key management layers (access management, service management and service optimization), enabling organizations to manage public, private and hybrid cloud services and resources.

Position and Adoption Speed Justification: A CMP must have the top three layers of a cloud services architecture (see "How to Build an Enterprise Cloud Service Architecture"), but may not include all functionality within each layer. The minimum capabilities to be considered a CMP include the entire access management layer; the service catalog, provisioning and showback/chargeback functionality of the service management layer; and the orchestration and abstractions/integrations functionality (for resource management and external service providers) in the service optimization layer. As the technologies and their uses mature, we expect additional functionality to be added to this minimum. For example, we are already seeing that most clients want to go beyond physical and virtual machine provisioning to enable provisioning of software and software stacks (internal PaaS). Those clients would also require their CMPs to have configuration management functionality, either embedded or integrated with a third-party product.

Access management tier:

- Self-service request interface
- Programmable interface
- Subscriber management
- Identity and access management

Service management tier:


- Vendor/contract/license management
- Service catalog
- Service model

- Service configuration management, including service provisioning
- Service-level management
- Service availability and performance management
- Service demand and capacity management
- Service financial management, including metering, showback and billing

Service optimization tier:

- Service governor (policy management and optimization engine)
- Orchestration
- Abstraction layer to external service providers and to the resource management tier (internal/external)
- Federation

In addition to the three key layers of a CMP solution, another important capability needs to be considered: integration with external management adapters (or the ability to leverage APIs). CMP solutions are not islands, nor will any IT organization have a single-vendor environment. Private and hybrid clouds often start as projects focused on improving provisioning cycles for development and test environments; we are now seeing a significant increase in IT organizations moving business (production) applications into these cloud infrastructures. When this happens, connecting the cloud to the traditional infrastructure and to existing management tools and processes becomes the next step. These management adapters enable integration with monitoring, performance, configuration, incident, problem and change management tooling. As these integrations are enabled, it becomes important to provide access to analytics tools that can show deeper metrics for trending.

In addition, for some IT organizations, private cloud has quickly shifted from IaaS to PaaS and, in some cases, desktop as a service (DaaS) for VDI environments. When this is the focus, CMP tools will need to be augmented with tools focused on service modeling and provisioning (middleware and database); where there is a DevOps focus, application release automation may also be required as an add-on. This maturing focus will also drive IT organizations to move from a VM or workload orientation to a services (multitier) focus. Not all CMP tools offer the blueprinting capability needed to define the policies, security, costing and SLAs that manage the life cycle of such a service.

IT organizations will continue to be challenged to assess CMP solutions that vary greatly in the depth and breadth of their cloud management platform architectures. Much of the fast-paced adoption has been due to disparate cloud projects that have no centralized architecture guiding selection. We expect this to continue for the next two to three years, which will keep the pace of CMP adoption growing. CMP consolidation will occur, and it may involve rip and replace or integration and federation of CMPs.
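
To make the blueprinting concept concrete, the sketch below (Python) models a multitier service blueprint as a plain data structure carrying scaling bounds, placement policy, SLA and a showback rate. Every field name and figure is hypothetical; actual CMPs each define their own blueprint schemas.

    # Hypothetical multitier service blueprint, illustrating the kind of
    # policies (scaling bounds, placement, cost, SLA) a CMP blueprint carries.
    # Field names and values are illustrative, not any vendor's schema.

    blueprint = {
        "service": "web-storefront",
        "tiers": [
            {"name": "web", "image": "rhel6-apache", "cpu": 2, "mem_gb": 4,
             "instances": {"min": 2, "max": 8}},          # elastic web tier
            {"name": "app", "image": "rhel6-jboss", "cpu": 4, "mem_gb": 8,
             "instances": {"min": 2, "max": 4}},
            {"name": "db", "image": "rhel6-oracle", "cpu": 8, "mem_gb": 32,
             "instances": {"min": 1, "max": 1}},          # not elastic
        ],
        "policies": {
            "placement": "private-cloud-first",  # burst to public if exhausted
            "sla_availability": 0.999,
            "showback_rate_per_cpu_hour": 0.05,
        },
    }

    def monthly_showback_estimate(bp, hours=730):
        """Rough monthly cost at the minimum footprint (illustrative only)."""
        rate = bp["policies"]["showback_rate_per_cpu_hour"]
        cpus = sum(t["cpu"] * t["instances"]["min"] for t in bp["tiers"])
        return cpus * rate * hours

    print(monthly_showback_estimate(blueprint))  # 730.0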


CMP solutions provide a mechanism to manage the virtual infrastructure, whether private, public or hybrid, and some are now adding integration or connectors to traditional infrastructure provisioning automation. Some also manage, monitor and control the physical infrastructure; these capabilities vary by vendor. The major focus of most enterprise implementations continues to be on demand, utilization and placement of VM workloads. The difficult work of managing any infrastructure (physical or virtual) is the day-to-day hygiene of managing the IT service more holistically. As a result, we are seeing some CMP vendors introduce adjacent solutions that offer deeper (more traditional) management capabilities (e.g., application performance monitoring, patch and compliance).

The CMP market is composed of vendors from a wide variety of market segments:

- Traditional (Big 4) ITOM (BMC Software, CA, HP and IBM)
- Infrastructure software stack (Citrix, Microsoft, Oracle, Red Hat and VMware)
- Point solutions (Adaptive Computing, Cloudbolt, Egenera, NetIQ, RightScale, ServiceMesh, Zimory, etc.)
- Open source (CloudStack, Eucalyptus, OpenStack)
- Fabric-based infrastructure (Cisco, Dell, HP, IBM, VCE)

User Advice: With the number of vendors continuing to grow and the high market volatility, IT organizations should consider that:

- Today's investments may need to be tactical, especially with smaller vendors that may exit the market or be acquired, or when investments are made before a complete cloud computing strategy is developed.
- No vendor provides a complete CMP solution. Therefore, for some requirements, IT organizations may need to augment, swap out or integrate additional cloud management or traditional management tools.

Getting value out of your CMP will heavily depend on the degree of standardization your infrastructure, software and services offer. Highly mature organizations implement a CMP in a relatively short time period (one to two years); less mature organizations may require three or more years to design effective standards and processes that are repeatable and automatable. New roles may be required, for example, development skills in the infrastructure and operations organization, financial management and capacity management. IT organizations must also centralize their cloud projects and develop an architecture that will support current and future requirements, or they run the risk of multiple disparate implementations.

Business Impact: Enterprises will require CMPs to maximize the value of cloud computing services, regardless of whether they're external (public), internal (private) or hybrid. This means increasing agility, managing and governing the consumption of cloud services, lowering the cost of service delivery, reducing the risks associated with these providers and potentially reducing lock-in to underlying software infrastructure.

Benefit Rating: High

Market Penetration: 5% to 20% of target audience

Maturity: Early mainstream

Sample Vendors: Abiquo; Adaptive Computing; BMC Software; CA Technologies; Citrix; Cloudbolt Software; CloudStack; Eucalyptus; HP; IBM; Microsoft; OpenStack; Red Hat; RightScale; ServiceMesh; VMware

Recommended Reading:
"How to Build an Enterprise Cloud Service Architecture"
"How the Cloud Management Platform Market Shakeout Will Affect Buying Decisions"

Capacity-Planning and Management Tools


Analysis By: Ian Head; Milind Govekar

Definition: These tools enable IT to plan, manage and optimize the use of IT infrastructure and application capacity for business and IT service life cycles and scenarios. They go beyond trending, providing "what if" scenario modeling based on business and IT data. These tools also provide a real-time view of the capacity of resources in a physical, virtual and cloud data center, along with guidance on matching workloads to resources to optimize the data center environment.

Position and Adoption Speed Justification: Since 2010, we have seen growing interest in and implementation of capacity-planning tools. These products are increasingly being used for standard data center consolidation activities, as well as the related planning and management of virtual and cloud infrastructures. They are also used to match workload requirements to the most appropriate resources in a physical, virtual or cloud data center, and they provide real-time visualization of capacity in a data center to help optimize workloads and associated resources. Some of these tools are embracing operational analytics functionality to provide performance and capacity information for structured and unstructured data and environments. These tools require skilled people, who may be part of a performance management group.

Capacity-planning tools provide value by enabling enterprises to build performance scenarios (models) that relate to business demand, often by asking what-if questions and assessing the impact of the scenarios on various infrastructure components. Capacity also has to be managed (capacity management) in real time in a production environment that includes on-premises or cloud IT resources in a physical or virtual environment. This includes assessing, in real time, the performance impact of moving workloads distressed by a lack of resources to another environment with more resources (defragmenting the production environment).


Capacity-planning tools help plan IT support for optimal performance of business processes, based on planned variations of demand. These tools are designed to help IT organizations achieve performance goals and plan budgets, while preventing the overprovisioning of infrastructure or the purchasing of excessive off-premises capacity. Thus, the technology has evolved from a purely planning perspective to provide real-time information dissemination and control of workloads to meet organizational performance objectives.

Increasingly, these technologies are being used to plan and manage capacity at the IT service and business service levels, where the tools permit an increased focus on the performance of the business process and the resulting business value. Although physical-infrastructure and primarily component-focused capacity-planning tools have been available for a long time, products that support increasingly dynamic environments are not yet fully mature. Although adequately trained personnel will still be at a premium, some of these products have evolved to the point where many of their functions can be performed competently by individuals outside performance engineering teams, and some capacity-planning tools require little human intervention at all. These tools are an aid to performance engineering teams, not an alternative to them.

User Advice: Capacity planning and management has become especially critical due to the increase in shared infrastructures and to enterprises devising strategies to implement hybrid IT environments, where the potential for resource contention may be greater. Users should invest in capacity-planning and management tools to lower costs and to manage the risks associated with performance degradation and capacity shortfalls. Although some tools are easier to use and implement than others, many can still require a high level of skill, so adequate training must be available to maximize the utility of these products. A cautionary example is virtual-server optimization tools, which have enormous potential but require skillful use to avoid unexpected performance degradation elsewhere in the infrastructure. Finally, determine the requirements of your infrastructure and application management environment: some organizations may only require support of virtual and cloud environments, while others will need to include support for what may still be a substantial legacy installed base.

Business Impact: Organizations in which critical business services rely heavily on IT services should use capacity-planning and management tools to ensure high performance and minimize the costs associated with "just in case" capacity headroom excesses. When more accurate infrastructure investment plans and forecasts are required, these tools are essential, but they are usually implemented successfully only by organizations with high IT service management maturity and a dedicated performance management group.

Benefit Rating: High

Market Penetration: 5% to 20% of target audience

Maturity: Early mainstream


Sample Vendors: BMC Software; CA Technologies; CiRBA; Dell (Quest Software); Opnet Technologies; Sumerian; TeamQuest; Veeam; VMTurbo; VMware

Recommended Reading:
"Govern the Infrastructure Capacity and Performance Planning Process With These 13 Key Tasks"
"How to Create and Manage an Infrastructure Capacity and Performance Plan"
"How to Build Best-Practice Infrastructure Capacity Plans"
"Toolkit: Business and IT Operations Data for the Performance Management and Capacity Planning Process"
"Toolkit: Server Performance Monitoring and Capacity Planning Tool RFI"
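
As a simple illustration of the "what if" scenario modeling described in this profile, the sketch below (Python) projects how many servers are needed to keep peak utilization under a target as demand grows. The workload figures, per-server throughput and utilization target are all invented; real tools model contention, queuing and component-level behavior in far more detail.

    # Minimal "what if" capacity scenario sketch (hypothetical numbers).
    # Projects how many servers are needed to keep peak utilization
    # below a target as business demand grows under several scenarios.

    import math

    def servers_needed(peak_tps, tps_per_server, target_util=0.6):
        """Servers required so peak utilization stays under target_util."""
        return math.ceil(peak_tps / (tps_per_server * target_util))

    current_peak_tps = 900        # observed peak transactions/second
    growth_scenarios = {"flat": 1.0, "plan": 1.25, "aggressive": 1.6}

    for name, factor in growth_scenarios.items():
        n = servers_needed(current_peak_tps * factor, tps_per_server=120)
        print(f"{name:10s} -> {n} servers")
    # flat       -> 13 servers
    # plan       -> 16 servers
    # aggressive -> 20 servers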

IT Service Catalog Tools


Analysis By: Jeffrey M. Brooks; Debra Curtis

Definition: IT service catalog tools simplify the documentation of orderable IT service offerings and the creation of an IT service request portal, so end users and business unit customers can easily submit IT service requests. The portal format includes space for easy-to-follow instructions on how to request services, details on service pricing, service-level commitments and escalation/exception-handling procedures. In addition, IT service catalog tools provide a process workflow engine to automate, manage and track service fulfillment.

Position and Adoption Speed Justification: As IT organizations adopt a business-oriented IT service management strategy, they seek greater efficiency in discovering, defining and documenting IT services; automating the processes for delivering IT services; and managing service demand and service financials. Although the 2011 update to ITIL put additional focus on IT service catalogs, the target market for the tools is IT organizations that have attained the service-aligned level (Level 4) of the ITScore maturity model for infrastructure and operations (I&O). Self-assessments indicate this to be substantially less than 5% of IT organizations, which slows adoption speed and lengthens the time to plateau. Some IT organizations at the proactive level of ITScore for I&O (Level 3) that are attempting to automate the fulfillment of requests that come into the service desk may also be candidates for a service catalog. Additionally, cloud projects provide another impetus for IT to investigate the concepts and tools for a service catalog, although, in this case, the catalog tends to focus exclusively on cloud provisioning requests.

IT organizations proceed through a number of maturity steps, likely first documenting their IT service catalog in a simple Microsoft Word document, then storing it in an Excel spreadsheet or a homegrown database. A typical second stage of maturity is a homegrown IT service catalog portal on the intranet, which is placed under change control. Finally, IT organizations mature to using commercial off-the-shelf IT service catalog tools to present an online self-service portal for customers to place orders.
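
The fulfillment workflow engine mentioned in the definition can be pictured with the minimal sketch below (Python): a catalog item carries price, SLA and an ordered list of fulfillment steps, and the engine runs and records them so a request can be tracked. The item, steps and prices are hypothetical, purely for illustration.

    # Minimal service request fulfillment workflow sketch (hypothetical).
    # A catalog item carries an ordered list of fulfillment steps; the
    # engine runs them and records status so requests can be tracked.

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class CatalogItem:
        name: str
        price: str
        sla: str
        steps: List[Callable[[dict], None]]   # fulfillment workflow

    def approve(req):    req["log"].append("manager approval recorded")
    def provision(req):  req["log"].append(f"provisioned {req['item']}")
    def notify(req):     req["log"].append("requester notified")

    catalog = {
        "standard-vm": CatalogItem(
            name="Standard Virtual Server",
            price="$90/month", sla="2 business days",
            steps=[approve, provision, notify]),
    }

    def submit_request(item_key, requester):
        req = {"item": item_key, "requester": requester, "log": []}
        for step in catalog[item_key].steps:    # run the workflow in order
            step(req)
        req["status"] = "fulfilled"
        return req

    print(submit_request("standard-vm", "jdoe"))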

User Advice: Enterprises should have mature IT service management processes, documented IT architecture standards and a defined IT service portfolio before embarking on an IT service catalog project. Some functions of emerging IT service catalog tools overlap with more-mature IT service desk tools, so there is a high potential for market consolidation and acquisition as IT service catalog features begin to blend with or disappear into other categories. The I&O organization should also recognize that subprojects may emerge in private or hybrid clouds, where service catalogs and request portals are inherent in the cloud management platform or other similar products.

Business Impact: IT service catalog tools are intended to improve the business users' customer experience and increase IT operations efficiency. IT service catalogs simplify the service request process for customers and improve customer satisfaction by presenting a single face of IT to the customer for all kinds of IT interactions, including incident logging, change requests, employee onboarding, service requests, cloud provisioning requests, project requests and new application requests. Once services are described in standardized, orderable IT service catalog offerings, repeatable process methodologies for service fulfillment can be documented and automated. This will reduce errors in service delivery, help identify process bottlenecks and uncover opportunities for efficiency improvements. Users of the IT service catalog may have different views and services available based on their profile and position in the organization. IT service catalog tools provide reporting and, sometimes, a real-time dashboard display of service demand and service fulfillment milestones for IT analysis and for customers to track their service requests. In some cases, IT service catalog tools include financial management capabilities that help the IT operations group analyze service costs and service profitability, and communicate prices for different IT service options, enabling business unit customers to make better investment decisions.

Benefit Rating: High

Market Penetration: 1% to 5% of target audience

Maturity: Adolescent

Sample Vendors: Biomni; BMC Software; CA Technologies; Cisco; HP; IBM Tivoli; Kinetic Data; PMG; SMT-X; USU

Recommended Reading:
"Critical Capabilities for IT Service Catalog"
"How to Make Selections With the IT Service Catalog Buyers Guide"
"ITSM Fundamentals: How to Construct an IT Service Catalog"
"An IT Service Catalog Is More Than Just Service Request Management"


Sliding Into the Trough


Enterprise Application Stores
Analysis By: Monica Basso; Ian Finley

Definition: Enterprise application stores support application discovery and downloads through a local storefront client or browser on a smart device or PC. Enterprise application stores are private, cloud-based or deployed on-premises, and they help organizations deploy applications for employees and partners.

Position and Adoption Speed Justification: Enterprise application stores offer a paradigm similar to public stores (such as Apple's App Store), but are private and implemented on internal servers or delivered through private clouds. Unlike consumer app stores, they offer selected applications that meet enterprise requirements. An increasing number of enterprise portals promote applications that employees should or are recommended to download, either by passing through to the store or from local storage. Private mobile application stores are critical for organizations with many mobile apps, in order to support easy discovery and distribution of applications to the mobile workforce and end customers, and to provide additional security controls and management capabilities.

Mobile device management (MDM) vendors, such as AirWatch, MobileIron, Citrix, Good Technology, SAP, Fiberlink, Symantec and BoxTone, provide corporate app store capabilities as part of their MDM offerings. Citrix, with its unified corporate app store for mobile, Web, SaaS and Windows applications, goes beyond mobile devices to support any endpoint client. Private or enterprise application store capabilities can also be found in offerings from mobile application management (MAM) vendors such as Partnerpedia and Apperian, and mobile application development platform (MADP) vendors such as SAP, Antenna and Kony.

IT organizations' demand for private application stores keeps growing as mobility adoption accelerates. We expect it will take less than two years to reach the plateau, and many organizations already use these stores as a standard for mobile application distribution. Factors that may limit broader adoption in the short term include lack of market maturity, costs and the viability of legacy business applications in new app stores. However, the pressure to implement safe enterprise app stores will grow as employees increasingly use personal mobile, Web and cloud apps at work, and as IT organizations understand the associated risks. Hence, we expect a growing number of organizations to implement enterprise app stores during the next few years. Although most providers offer basic enterprise app store functionality (either as software or as an as-a-service offering), few (for example, Citrix) provide a comprehensive solution for all scenarios. The current market is, in fact, quite immature, but will expand during the next few years. We expect that more players, including MDM, MAM and MADP vendors, will start offering integrated capabilities, as will new entrants such as system integrators and service providers that will launch enterprise mobility services that include outsourced enterprise application stores. Application stores for PC and desktop Web applications may take much longer to mature.

User Advice: Enterprises should evaluate private application stores to support enhanced application delivery and management on mobile and client computing devices for their mobile workforce. They can help improve the modularity, user experience, standards compliance, platform compatibility, provisioning, security and deployability of the application portfolio.

Business Impact: Private app stores can help to reduce security risks through better management of application and data assets. Employees are increasingly using mobile and cloud apps available in public stores, both on corporate and personal devices that store corporate data and are connected to corporate systems. Some public apps can pose security threats to the enterprise. Security leaders can reduce these threats by discouraging the use of unsafe applications and providing a safe enterprise alternative: a private app store that highlights safe public apps and corporate apps.

Private app stores can also help software asset managers lower administration overhead and drive cost accountability. An app store can help manage traditional software licensing models, SaaS subscriptions and other, more elastic on-demand cloud provisioning models by automating the capture of license, subscription and cost assignment data during check-out. More mature enterprises can use app store data to manage ongoing maintenance and support costs, and to drive better accountability through more sophisticated and accurate chargeback models.

Benefit Rating: High

Market Penetration: 5% to 20% of target audience

Maturity: Adolescent

Sample Vendors: AirWatch; BoxTone; Citrix; Embarcadero; Good Technology; MobileIron; Partnerpedia; SAP; Symantec; VMware; Vodafone; Zenprise

Recommended Reading:
"Magic Quadrant for Mobile Device Management Software"
"Enterprise App Stores Can Increase the ROI of the App Portfolio"
"Regain Control of Mobile Software Licensing With an Enterprise App Store"
"There's an App for That: The Growth of Enterprise Application Stores"
"Two Foundations of a Successful App Store"
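
The check-out data capture just described can be sketched as follows (Python). The license pool, cost figures and field names are all hypothetical; the point is simply that assigning a license and recording who pays for it happen in the same transaction.

    # Hypothetical sketch: capturing license and cost data at app check-out,
    # as an enterprise app store might do for software asset management.

    import datetime

    license_pool = {"acme-crm": {"type": "per-user", "owned": 50, "in_use": 48,
                                 "monthly_cost": 12.00}}
    assignments = []   # audit trail used later for chargeback

    def checkout(app_id, user, cost_center):
        pool = license_pool[app_id]
        if pool["in_use"] >= pool["owned"]:
            raise RuntimeError(f"No {app_id} licenses left; trigger procurement")
        pool["in_use"] += 1
        assignments.append({
            "app": app_id, "user": user, "cost_center": cost_center,
            "monthly_charge": pool["monthly_cost"],
            "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        })
        return f"{app_id} delivered to {user}"

    print(checkout("acme-crm", "jdoe", "CC-4410"))
    print(assignments[-1]["monthly_charge"])  # 12.0 charged back to CC-4410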

COBIT
Analysis By: Ian Head; Simon Mingay

Definition: COBIT, owned by ISACA, originated as an IT control framework; COBIT 5 has evolved into a broader IT governance and management framework whose purpose is to ensure that the enterprise's investment in IT enables the achievement of its goals. COBIT 4.1 was, and remains, used by many midsize to large organizations across a wide range of industries to implement controls to manage key risks or to meet an audit or compliance requirement. This profile considers COBIT from an IT operations perspective.

Position and Adoption Speed Justification: COBIT 5 is a major initiative by ISACA to bring many of its frameworks together into a single governance and management framework. There is very limited mapping between COBIT 4.1 and COBIT 5 and, most notably, COBIT 5 uses the concepts of governance and management practices rather than control objectives. COBIT is no longer an acronym for Control Objectives for Information and Related Technologies, but simply a brand name for the ISACA product. Organizations are being very cautious in their adoption of COBIT 5, released in April 2012, preferring to make use of the more established 4.1 until 5 has proved itself.

COBIT is having a slow, but steadily increasing, effect on IT operations, as IT operations organizations start to realize its benefits, such as more predictable operations. Some enterprises are adopting COBIT and issuing mandates for IT operations to comply with it. However, few operations leaders use it as a broad framework to manage and govern the creation of value, and in-depth use of COBIT within operations is limited. COBIT 5 has the potential to act as a unifying force in the management and governance of the IT organization and the wider business. As a control framework, COBIT is well-established, especially among auditors; while its indirect effect on IT operations can be significant, it is unlikely to be a frequent point of reference for IT operations management. As typical IT operations and other affected groups become more familiar with the implications of COBIT, and awareness and adoption increase, the framework will progress slowly along the Hype Cycle. Gartner again saw a small increase in client inquiry calls in 2012, and expects interest to increase as IT operations professionals increasingly understand how to leverage the framework to raise the maturity of service, process, risk and governance of IT.

User Advice: Even with the v.5 update and its integration of ISACA's many frameworks, the focus of this high-level framework is on what must be done, not how to do it. Therefore, IT operations management has typically used COBIT 4.1 as part of a mandated program in the IT organization and to provide guidance regarding the kind of controls needed to meet the program's requirements. Process engineers can, in turn, leverage other standards, such as ITIL, for additional design details to use pragmatically. Despite v.5's expansion, it still complements, rather than replaces, ITIL, and COBIT 5 has the potential to be the tool leaders use to identify business and IT needs and the most appropriate framework or standard to address those needs. Because COBIT 5 has adopted the ISO 15504 process maturity model and also incorporates COBIT 4.1, Val IT 2.0, Risk IT, the Business Model for Information Security (BMIS) and the Information Technology Assurance Framework (ITAF), COBIT 4.1 expertise will have limited applicability to COBIT 5. Consequently, a major training and familiarization exercise must be undertaken by organizations adopting COBIT 5 as a successor to COBIT 4.1; this is part of the reason for the slow adoption of 5.
IT operations managers who want to assess their management and governance to better mitigate risks and reduce variations, and who are aiming toward clearer business alignment of IT services, should use COBIT in conjunction with other frameworks, including ITIL and ISO 20000. Those IT operations managers who want to gain insight into what auditors will look for, or into the potential implications for compliance programs, should also take a closer look at COBIT, but adoption of COBIT 5 can only be successful if the wider enterprise embraces the framework. Any operations team facing a demand for wholesale implementation should push back and focus its application on areas where there are specific risks in the context of its operation. In particular, operations leaders should know whether a specific audit is being conducted against the COBIT 4.1 or the COBIT 5 framework, because there will be significant differences in approach. Successful adoption of COBIT 5 requires a concerted program of effort involving the audit team, IT operations and the other stakeholders to ensure all efforts are headed in the same direction.

COBIT 4.1 is still better positioned than ITIL for managing IT operations' governance and high-level risks; as such, enterprises that wish to put their IT service management program in the broader context of a management and governance framework should use COBIT. COBIT 5 extends its scope to the business drivers and stakeholder needs that cascade ultimately to the IT-related goals. The mappings and weightings of the needs to the IT goals are essential to the COBIT 5 view of the questions that IT must address if it is to be successful. COBIT's scope is the entire enterprise; therefore, IT operations managers can refer to this source if they believe the goals of the enterprise are not clearly communicated and cascaded to their own functional teams. Services and processes and their associated capabilities must now be focused on addressing the explicit goals of the enterprise, not simply on implementing a complete set of controls, unless each control relates to meeting a specific goal. If an organization were audited using COBIT 5, this may also highlight where business goals are not well-articulated or where the goals' implications are not cascaded down into IT operations goals. These cascading goals can serve as audit trails to justify IT activities, processes and services, and can help build business cases around each of them at the different levels of detail required. Each COBIT 5 process is part of a cascade that links directly to business goals to justify what it focuses on, how it plans to achieve its targets and how it can be measured (metrics).

An additional consideration is that service improvement programs that seek to leverage ITIL all too frequently set themselves up as bottom-up, tactical, process engineering exercises, lacking a strategic or business context. ITIL encourages and provides guidance for a more strategic approach, and COBIT can help in achieving that, particularly by drawing business stakeholders into the organizational change.

Business Impact: Although v.5 moves COBIT toward a broader management and governance framework, most users see it as a framework for effective governance and reducing risk. It affects all areas of managing the IT organization, including aspects of IT operations. Management should review how COBIT 5 can be used to enhance governance practices and help better manage risks, and thus improve performance. COBIT's usefulness has moved a long way beyond a simple audit tool. However, the lack of compatibility with earlier versions will necessitate an extensive training program for all those affected by the adoption of COBIT 5.

Benefit Rating: Moderate

Market Penetration: 20% to 50% of target audience

Maturity: Adolescent

Recommended Reading:
"Leveraging COBIT for Infrastructure and Operations"
"Understanding IT Controls and COBIT"
"Updates in COBIT 5 Aim for Greater Relevance to Wider Business Audience"
"Market Trends: IT Governance, Risk and Compliance Management, Worldwide, 2013"
"The Executive Guide to Managing Regulatory Change"
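
The goals cascade described above can be pictured as a simple traceability structure. The toy sketch below (Python) is illustrative only: the goal names, process names and metrics are invented and are not COBIT 5's actual goal tables.

    # Toy goals cascade (invented names and metrics, not COBIT's tables):
    # enterprise goal -> IT-related goals -> processes with metrics, so any
    # process can be traced back to the business goal that justifies it.

    cascade = {
        "Grow online revenue": {
            "Deliver IT services in line with business requirements": {
                "Manage availability and capacity": "unplanned downtime (hrs/mo)",
                "Manage changes": "% emergency changes",
            },
            "Optimize IT assets, resources and capabilities": {
                "Manage service requests": "mean fulfillment time (days)",
            },
        },
    }

    def audit_trail(process):
        """Trace a process up the cascade to the enterprise goal it serves."""
        for ent_goal, it_goals in cascade.items():
            for it_goal, processes in it_goals.items():
                if process in processes:
                    return f"{process} -> {it_goal} -> {ent_goal}"
        return f"{process}: no linkage; question whether it adds value"

    print(audit_trail("Manage changes"))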

IT Process Automation Tools


Analysis By: Ronni J. Colville; Aneel Lakhani

Definition: IT operations process automation (ITPA) tools automate across traditional, virtual and public cloud resources, and can be used to integrate and orchestrate multiple IT operations management tools. ITPA products have three key functional elements: a workflow design studio, an automation engine and an integration framework. ITPA tools can focus on a specific process (e.g., server provisioning), replacing or augmenting scripts and manual processes, or can apply to processes that span different domains.

Position and Adoption Speed Justification: Adoption of ITPA tools continues to grow as a key focus for IT organizations looking to improve IT operations efficiencies and to track and measure process execution. ITPA tools provide a mechanism to help IT organizations take manual or scripted processes and automate them, as well as a way to integrate disparate IT operations tool portfolios to improve process handoffs. We expect many IT organizations to have multiple ITPA tools, acquired to solve specific problems or as an embedded capability in a (larger) vendor enterprise agreement. As in many tool spaces, this may result in overlapping functionality and the use of multiple nonintegrating tools.

One key driver for ITPA tools (or the ITPA capability) is the uptick in private, public and hybrid cloud adoption. Several cloud management platform (CMP) vendors have integrated existing ITPA tooling as an orchestration layer within their CMP tools. Additionally, some IT organizations are leveraging ITPA tools that they already have in place for other automation initiatives (e.g., fault and event management or server provisioning) for basic cloud management provisioning activities, or as the glue tying together the various tools used for cloud projects. For example, IT organizations may be using a service desk tool for the service catalog or service requests, and calling the ITPA tool from the service desk tool to execute workflows that provision the appropriate resources (either on- or off-premises). A key enabler of these use cases is the ability to interact with and orchestrate Web services and APIs.


In addition to cloud projects, ITPA tools continue to be used to drive efficiencies in automating incident resolution and closed-loop change management. There has even been an increase in activity where ITPA tools are linked to workload automation and job scheduling tools to drive end-to-end batch processes and business process automation. ITPA tools drive lower-level automation, replacing manual tasks and scripts, which often cause outages due to misconfiguration or breakages in scripts. ITPA tools continue to be enhanced, particularly in the areas of scalability, performance, usability, and prepackaged content and connectors or adapters. Some have added advanced embedded decision-making logic in their workflows to allow automatic decisions on process execution. There are no signs that the adoption and visibility of these tools will diminish, as they continue to address some of today's key IT challenges, including reducing IT operational costs, automating virtual infrastructure, and supporting private, public and hybrid cloud initiatives. The three biggest inhibitors to more widespread adoption continue to be:

- The requirement for base content that is extensible. While many ITPA tools offer ample content, most IT organizations have unique requirements and have to modify the content to meet them. This still requires a potentially significant and deep skill set to build and maintain content. For content or connectors that are not out of the box or extensible, IT organizations must develop this content, then migrate it as new versions of the tool become available.
- A lack of knowledge of the tasks or activities being automated. Many organizations try to use these tools without the necessary process knowledge, and developing this process design often requires cross-domain expertise and coordination. IT organizations that don't have their processes and task workflows documented often take longer to succeed with these tools.
- The difficulty of replacing existing automation achieved through scripting, which ranges from slight to severe depending on the level of knowledge about how that automation was achieved. IT administrators have often used scripting as an opportunistic solution to any automation problem encountered, thereby creating an unmanaged and fragile environment that is not well-understood. Furthermore, replacement of the ITPA tool itself is difficult, as each tends to have its own semantics and conventions that are nontransferable.
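
To ground the three functional elements named in the definition (workflow design studio, automation engine, integration framework), here is a deliberately minimal sketch (Python). The adapter functions, workflow and ticket IDs are invented; a real ITPA product adds error handling, branching logic, scheduling and audited state.

    # Minimal, hypothetical ITPA-style sketch: a workflow is an ordered set
    # of steps, each delegating to an integration adapter; execution is
    # logged so the process is measurable and repeatable.

    import logging
    logging.basicConfig(level=logging.INFO, format="%(message)s")

    # Integration framework: adapters wrapping other management tools
    # (names and behavior invented for illustration).
    def open_ticket(ctx):     ctx["ticket"] = "INC-1042"
    def restart_service(ctx): ctx["restarted"] = ctx["service"]
    def verify_health(ctx):   ctx["healthy"] = True
    def close_ticket(ctx):    ctx["closed"] = ctx["ticket"]

    # A workflow a "design studio" might produce: incident remediation runbook.
    remediate_hung_service = [open_ticket, restart_service, verify_health,
                              close_ticket]

    def run(workflow, ctx):
        """Automation engine: execute steps in order, logging each handoff."""
        for step in workflow:
            step(ctx)
            logging.info("completed %-16s context=%s", step.__name__, ctx)
        return ctx

    run(remediate_hung_service, {"service": "order-api"})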

User Advice: ITPA tools that have a specific orientation (e.g., user provisioning or server provisioning) and provide a defined (out-of-the-box) process framework can aid in achieving rapid value. When used in this way, the tools are focused on a specific set of IT operations management processes. However, using a more general-purpose ITPA tool requires more mature, well-understood process workflows and specific skills to develop, build and maintain unique automation or integration connector content. Select ITPA tools with an understanding of your process maturity and the tool's framework orientation.

Clients should expect to see ITPA tools positioned and sold to augment and enhance current IT management products within a single vendor's product portfolio. However, when using ITPA tools to support a broader range of process needs that cross domains and multiple processes, or possibly multiple vendor tool portfolios, clients should develop and document their IT operations management processes before implementing the tools. IT operations managers who understand the challenges and benefits of using IT operations management tools should consider ITPA tools as a way to reduce risk where handoffs occur, or to improve efficiencies where multiple tool integrations can establish repeatable best-practice activities. This can only be achieved after the issues of complexity are removed through standardizing processes to improve repeatability and predictability. In addition, IT operations processes that cross different IT management domain areas will require organizational cooperation and support, and the establishment of process owners.

Business Impact: ITPA tools will have a significant effect on running IT operations as a business, even with cloud architectures extending to public resources, by providing consistent, measurable and repeatable services at better costs. They will reduce the human factor and associated risks by automating safe, repeatable processes, and will increase IT operations efficiencies by integrating and leveraging the IT management tools needed to support IT operations processes across IT domains.

Benefit Rating: High

Market Penetration: 20% to 50% of target audience

Maturity: Early mainstream

Sample Vendors: Appnomic Systems; BMC Software; CA Technologies; Cisco; gen-E; HP; Microsoft; NetIQ; Network Automation; UC4 Software; VMware

Recommended Reading:
"Best Practices for and Approaches to IT Operations Process Automation"
"Run Book Automation Reaches the Peak of Inflated Expectations"
"RBA 2.0: The Evolution of IT Operations Process Automation"
"The Future of IT Operations Management Suites"
"IT Operations Management Framework 2.0"

Application Performance Monitoring


Analysis By: Jonah Kowall; Will Cappelli

Definition: Gartner defines application performance monitoring (APM) as one or more software and/or hardware components that facilitate monitoring across five main functional dimensions: end-user experience monitoring (EUM), application topology discovery and visualization, user-defined transaction profiling, application component deep dive, and IT operations analytics.

Position and Adoption Speed Justification: Gartner has seen a rise in demand from clients investing in these tools, as most enterprises continue their transformations from purely infrastructure monitoring to application monitoring. In their journeys toward IT service management, APM tools provide value to multiple IT organizations through rapid isolation and root cause analysis (RCA) of problems, understanding user behavior and experience, and understanding the performance changes of an application's components. The increasing adoption of private and public cloud computing is stimulating the desire for more insight into application and user behavior.

This journey requires collaboration with, and coordination among, the application development teams and the IT infrastructure and operations teams, which don't always have the same priorities and goals. Enabling these teams to work together and share information is a key deliverable of APM solutions, which must often support the agile development and release processes that are the underpinnings of DevOps. This is critical to understanding the quality of releases versus previously deployed software in support of agile release schedules.

The demand for, and importance placed on, APM tools has increased significantly during the past several years, and will continue to do so as applications and infrastructures become more complex and dynamic and as additional layers of abstraction are introduced (for example, virtualization, SDN and API abstraction). These products are well-adopted, and are also maturing to handle the changes presented by public and private cloud; adoption is therefore high. Older APM technologies are being replaced by second- and third-generation technologies.
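
As a toy illustration of user-defined transaction profiling, the sketch below (Python) times a named business transaction and keeps simple latency statistics, the kind of raw measurement that feeds EUM dashboards and baselines. The transaction name and timings are invented; commercial agents instrument code paths automatically and far less crudely.

    # Hypothetical sketch of user-defined transaction profiling: a decorator
    # times named business transactions and keeps simple latency statistics.

    import time
    import functools
    import statistics
    from collections import defaultdict

    timings = defaultdict(list)   # transaction name -> response times (ms)

    def transaction(name):
        def wrap(fn):
            @functools.wraps(fn)
            def inner(*args, **kwargs):
                start = time.perf_counter()
                try:
                    return fn(*args, **kwargs)
                finally:
                    timings[name].append((time.perf_counter() - start) * 1000)
            return inner
        return wrap

    @transaction("checkout")
    def checkout(cart):
        time.sleep(0.02)          # stand-in for real work
        return f"order placed for {len(cart)} items"

    for _ in range(5):
        checkout(["widget"])

    ms = timings["checkout"]
    print(f"checkout: n={len(ms)} median={statistics.median(ms):.1f}ms "
          f"max={max(ms):.1f}ms")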

(RCA) of problems, understanding user behavior and experience, as well as understanding the performance changes of an application's components. The increasing adoption of private and public cloud computing is stimulating the desire for more insight into application and user behavior. This journey will require collaboration with, and coordination among, the application development teams and the IT infrastructure and operations teams, which don't always have the same priorities and goals. The need to work together and share information is a key delivery of APM solutions, and often requires to support agile development and release processes, which are the underpinnings of DevOps. This is critical to better understand the quality of releases versus previously deployed software in support of agile release schedules. The demand and importance placed on APM tools has increased significantly during the past several years, and will continue as applications and infrastructures become more complex, dynamic, and additional layers of abstraction are introduced (for example, virtualization, SDN, and API abstraction). These products are well-adopted, but also maturing to handle the changes presented by public and private cloud, therefore, the adoption is high. Older APM technologies are being replaced by second- and third-generation technologies. User Advice: Enterprises should use these tools to proactively measure application availability and performance. These technologies should be evaluated to monitor custom applications and packaged applications (e.g., SAP, Oracle Applications and Middleware, or vertically aligned applications), as well as support technologies, such as application servers, databases and data stores, messaging middleware and mainframe components. Converged APM products are creating easier deployments, and delivery models leveraging software as a service (SaaS) will continue to allow lower cost adoption of these technologies. This technology is particularly suited for highly complex applications and dynamic infrastructure environments. These technologies become even more important as enterprises adopt cloud infrastructure, and are critical when deploying in public cloud production environments. The requirement of integration to current monitoring systems, including server, network, virtual fabric and storage, is becoming more critical. Some products will integrate, while others will monitor these components separately. Enterprises should take into consideration that the complexity of the tools varies from very simple deployments to highly complex solutions requiring consulting engagements for those with complex and diverse application environments. Many organizations start with end-user experience monitoring tools to first get a view of end-user or application-level performance. These buying centers often find the most value in specific dimensions, such as EUM, application topology, transaction profiling, deep dive, and analytics. Business Impact: APM tools are critical in the problem-isolation process, thus shortening mean time to repair and improving service availability. They provide correlated performance data that business users can utilize to answer questions about service levels and user populations, often in the form of easily digestible dashboards. These tools are paramount to improving and understanding service quality as users interact with applications. They allow for multiple IT groups to share captured data and assist users with advanced analysis needs. 
Benefit Rating: High


Market Penetration: 20% to 50% of target audience

Maturity: Early mainstream

Sample Vendors: Absolute Performance; Apica; AppDynamics; AppEnsure; Appnomic; AppNeta; AppSense; Arcturus Technologies; ASG Software Solutions; Aternity; BlueStripe Software; BMC Software; Boundary; Catchpoint; CA Technologies; Cedexis; Compuware; Correlsense; Corvil; Crittercism; Dell (Quest Software); eG Innovations; Endace; ExtraHop Networks; Fluke Networks; HP; IBM; Idera; Inetco Systems; InfoVista; ITRS; JenniferSoft; Keynote Systems; Knoa Software; Lucierna; ManageEngine; Microsoft; Nastel Technologies; NetScout Systems; Network Instruments; Neustar; New Relic; Nexthink; OpTier; Oracle; Precise; Progress Software; Riverbed Networks; SL Corporation; SmartBear Software; Splunk; Sumo Logic; Triometric; Virtual Instruments

Recommended Reading:
"Magic Quadrant for Application Performance Monitoring"
"Use Synthetic Monitoring to Measure Availability and Real-User Monitoring for Performance"
"Criteria for the 2013 Magic Quadrant for Application Performance Monitoring"
"Will IT Operations Analytics Platforms Replace APM Suites?"
"Cool Vendors in Application Performance Monitoring, 2013"

IT Service View CMDB


Analysis By: Ronni J. Colville; Jarod Greene

Definition: An IT service view configuration management database (CMDB) is a repository with four functional characteristics: IT service modeling and mapping, integration/federation, reconciliation, and synchronization. It provides a consolidated configuration view of various sources of data (discovered, manually entered or documented), which are integrated and reconciled into a single IT service view.

Position and Adoption Speed Justification: For many years, adoption of an IT service view CMDB has been associated with progressing along an ITIL journey, which usually begins with a focus on problem, incident and change management and then evolves into an IT service view CMDB. However, adoption over the last five years has been tied more tightly to a broader set of projects (e.g., compliance, asset management, disaster recovery, data center consolidation and enterprise architecture gap analysis) undertaken to gain insight into and visibility of key peer-to-peer and hierarchical relationships in IT services. Foundational to a successful IT service view CMDB are mature change and configuration management processes. IT organizations that do not focus on governing changes and on tracking and maintaining accurate configurations will not be successful with an IT service view CMDB implementation.

New drivers continue to emerge, including leveraging the IT service view CMDB for service assurance correlation (event and incident correlation). As cloud projects continue to expand from private and public to hybrid, we expect mature IT organizations to extend their IT service view CMDBs to track and assist in assessing critical changes of IT services that traverse a variety of cloud resources, especially as the percentage of external sources increases from approximately 5% to 25% by 2015. Gartner has seen an increase in successful implementations, with up to 40% of midsize and more than 60% of large enterprises modeling and mapping at least three IT services, and oftentimes many more. Smaller organizations may not need an IT service view CMDB, since their focus is primarily on achieving a centralized view of all assets; additionally, they may not have the process maturity needed to implement one successfully. Successfully mapping IT services improves the ability to provide an accurate and trusted view of IT service configurations that can be used for a variety of projects, but most often to assess the impact of a change on components in IT services.

Although IT service view CMDB tools have been available since late 2005, the combination of maturing tools and maturing IT organization processes, as well as the realization that this type of project does not have a quick ROI, has drawn out the planning, architecting and tool selection time frame. IT service view CMDB implementations can take from three to five years to establish, but they have no end date, because they are ongoing projects in which new use cases for the data can add new data and new integrations. Even with longer time frames, IT organizations can achieve incremental benefits (for example, data center visibility where no prior documentation was accurate or trusted) throughout the ongoing implementation, and find that it provides quantifiable benefits to the organization.

ITIL V3 introduced a concept called a configuration management system (CMS). This can be an IT service view CMDB or a completely federated repository. The concept of a CMS offers a varied approach to consolidating a view of all the relative information pertaining to an IT service. CMS tooling is actually the same as an IT service view CMDB; because federation is still immature, the reality of a CMS is not yet technically viable. Without standards of any significance being adopted by IT service view CMDB vendors, CMSs and the management data repository vendors that supply federated information, IT organizations should continue to focus on IT service view CMDB implementations.

User Advice: Enterprises must have a clear understanding of the goals and objectives of their IT service view CMDB projects, and several key milestones must be accomplished for a successful implementation. Without specific goals, IT organizations will not be able to demonstrate ROI and will increase the likelihood of scope creep, in which excess data kept in the CMDB becomes obsolete. Enterprises lacking change and configuration management processes are likely to establish inventory data stores that don't represent real-time or near-real-time data records, and will have a difficult time maturing to a trusted IT service view CMDB. Establishing trusted data sources and associated reconciliation and normalization processes is a critical success factor. IT organizations must know what trusted data they have and what data will be needed to populate the IT service view CMDB models that will achieve their goals.
Only data that has ownership and a direct effect on a goal should be in IT service view CMDB configuration models; everything else should be federated (e.g., financial and contractual data should remain in the IT asset management repository, and incident tickets should remain with the IT service desk). If there are key sources that will need to be federated (e.g., various discovery tools, the IT asset repository, service assurance tools), these should be specifically evaluated as part of the proof of concept for the IT service view CMDB selection.

Business Impact: An IT service view CMDB affects nearly all areas of IT operations. It will benefit providers (of data) and subscribers (of IT service views). It is a foundation for improving the quality of service and improving service delivery. An IT service view CMDB implementation improves risk assessment of proposed changes and can assist with root cause analyses. It also facilitates a near-real-time business IT service view.

Benefit Rating: High

Market Penetration: 20% to 50% of target audience

Maturity: Mature mainstream

Sample Vendors: BMC Software; CA Technologies; HP; IBM (Tivoli); ServiceNow

Recommended Reading:
"Top Five CMDB and Configuration Management System Market Trends"
"Cloud Environments Need a CMDB and a CMS"
"IT Service View CMDB Vendor Landscape, 2012"
"What SMBs Should Know About CMDBs"
"IT Service Dependency Mapping Vendor Landscape, 2012"
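
To illustrate the reconciliation characteristic this profile describes, the sketch below (Python) merges configuration item (CI) records from three sources into a single record, letting higher-precedence sources win attribute by attribute. The precedence order, sources and attribute values are invented for illustration, not a product's default policy.

    # Hypothetical reconciliation sketch: merge CI records from multiple
    # sources into one record per CI, trusting sources in a defined order.

    # Source precedence: discovery beats the asset repository, which beats
    # manually entered data (an illustrative policy, not a product default).
    PRECEDENCE = {"discovery": 3, "asset_repo": 2, "manual": 1}

    records = [
        {"ci": "srv-web-01", "source": "manual",     "os": "RHEL 5", "owner": "ops"},
        {"ci": "srv-web-01", "source": "discovery",  "os": "RHEL 6"},
        {"ci": "srv-web-01", "source": "asset_repo", "owner": "e-commerce team"},
    ]

    def reconcile(records):
        """One reconciled record per CI; higher-precedence sources win per attribute."""
        out = {}
        for rec in sorted(records, key=lambda r: PRECEDENCE[r["source"]]):
            target = out.setdefault(rec["ci"], {})
            for attr, val in rec.items():
                if attr != "source":
                    target[attr] = val   # later (higher-precedence) writes win
        return out

    print(reconcile(records))
    # {'srv-web-01': {'ci': 'srv-web-01', 'os': 'RHEL 6', 'owner': 'e-commerce team'}}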

Real-Time Infrastructure
Analysis By: Donna Scott

Definition: Real-time infrastructure (RTI) is a shared IT infrastructure in which business policies and SLAs drive the dynamic allocation and optimization of IT resources, so that service levels are predictable and consistent despite unpredictable IT service demand. RTI provides the elasticity, functionality, and dynamic optimization and tuning of the runtime environment based on policies and priorities across private, public and hybrid cloud architectures. When resources are constrained, business policies determine how resources are allocated to meet business goals.

Position and Adoption Speed Justification: The technology and implementation practices are immature from the standpoint of architecting and automating an entire data center and its IT services for RTI. However, point solutions have emerged that optimize specific applications or environments, such as dynamically optimizing virtual servers (through the use of performance management metrics and virtual server live-migration technologies) and dynamically optimizing Java Platform, Enterprise Edition (Java EE)-based shared application environments that are designed to enable scale-out capacity increases. RTI is also emerging in cloud management solutions, initially for optimizing the placement of workloads or services at startup based on pre-established policies.

Many cloud management platform (CMP) vendors have enabled models or automation engines to achieve RTI (for example, through the implementation of logical service models with policies that define the minimum and maximum instances that can be deployed in a runtime environment). However, these vendors have not yet implemented all the analytical triggers and the deployment automation to make elasticity truly turnkey. Rather, IT organizations must still write custom code (for example, automation and orchestration logic) to achieve their overall dynamic optimization goals, such as scaling a website up/down or in/out to use optimal resources for increasing or decreasing service demand. Moreover, although virtualization is not required to architect for RTI, many CMP solutions support only virtualized environments, rather than offering more complex alternatives that require integration with physical resources. In addition, RTI may be architected for a specific application environment rather than as a generalized operations management offering. A lack of architecture and application development skills in the infrastructure and operations (I&O) organization hampers implementation of RTI in all but the most advanced organizations.

Organizations that pursue agile development for their Web environments will often implement RTI for these services in order to match increasing demand on their sites with an increasing supply of resources. In another RTI use case, enterprises are implementing shared disaster recovery data centers, whereby they dynamically reconfigure test/development environments to look like the production environment, both for disaster recovery testing and when disaster strikes. This type of architecture can typically achieve recovery time objectives in the range of one to four hours after a disaster is declared. Typically, implementation is not triggered automatically, but is manually initiated, with the automation prewritten.

Because of the advancement in server virtualization and cloud computing, RTI solutions are making progress in the market, especially for targeted use cases where enterprises write specific automation, such as scaling a website up/down and in/out. However, market penetration is low, primarily because of a lack of service modeling (inclusive of runtime policies and triggers for elasticity), standards, and strong service governors/policy engines in the market. For customers who want dynamic optimization that integrates multiple technologies and orchestrates analytics with actions, a great deal of integration and technical skill is required. Gartner believes that RTI will go through another round of hype in the market as vendors seize on the "software defined" terminology, which generally has the same connotation as RTI: automation and optimization. As in the past, we will see individual vendor progress, especially in "software stacks," but not in largely heterogeneous environments, because of the lack of standards and because vendors build such functionality to benefit their own platforms (and not their competitors' platforms).

User Advice: Surveys of Gartner clients indicate that the majority of IT organizations view RTI architectures as desirable for gaining agility, reducing costs and attaining higher IT service quality, and that about 20% of organizations have implemented RTI for some portion of their portfolios.

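To make the "custom code" point concrete, the following minimal Python sketch shows the kind of policy-driven scaling loop that organizations write by hand today: analytics supply a utilization signal, and a governor keeps the service between the minimum and maximum instances defined in its logical service model. All names, thresholds and callbacks here are invented for illustration; they are not any CMP vendor's API.

    from dataclasses import dataclass

    @dataclass
    class ElasticityPolicy:
        """Runtime policy taken from a logical service model (invented)."""
        min_instances: int = 2        # floor defined in the service model
        max_instances: int = 10       # ceiling defined in the service model
        scale_out_util: float = 0.75  # scale out above this average utilization
        scale_in_util: float = 0.30   # scale in below this average utilization

    def governor_cycle(policy, instances, avg_util, provision, deprovision):
        """One evaluation cycle of a hypothetical service governor: analytics
        (avg_util) trigger automation (provision/deprovision) while the policy
        keeps the service inside its modeled bounds."""
        if avg_util > policy.scale_out_util and instances < policy.max_instances:
            provision(1)                 # scale out: add one instance
            return instances + 1
        if avg_util < policy.scale_in_util and instances > policy.min_instances:
            deprovision(1)               # scale in: remove one instance
            return instances - 1
        return instances                 # demand is within tolerance

    if __name__ == "__main__":
        policy, instances = ElasticityPolicy(), 2
        for util in (0.85, 0.82, 0.20):  # simulated demand readings
            instances = governor_cycle(
                policy, instances, util,
                provision=lambda n: print(f"provisioning {n} instance(s)"),
                deprovision=lambda n: print(f"deprovisioning {n} instance(s)"))
            print(f"utilization={util:.0%} -> running {instances} instance(s)")

Everything a CMP does not automate out of the box, such as gathering the utilization signal, implementing the provisioning callbacks and handling failures, is precisely the integration work described above.
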
User Advice: Surveys of Gartner clients indicate that the majority of IT organizations view RTI architectures as desirable for gaining agility, reducing costs and attaining higher IT service quality, and that about 20% of organizations have implemented RTI for some portion of their portfolios. Overall progress is slow for internal deployments of RTI architectures because of many impediments, especially the lack of IT management process and technology maturity, but also because of organizational and cultural issues. RTI is also slow for public cloud services, where application developers may have to write to a specific and proprietary set of technologies to get dynamic elasticity. Gartner sees technology as a significant barrier to RTI, specifically in the areas of root cause analysis (which is required to determine what optimization actions to take), service governors (the runtime execution engines behind RTI analysis and actions) and integrated IT process/tool architectures and standards. However, RTI has taken a step forward in particular focused areas, such as:

- Dynamic and policy-based provisioning of development/testing/staging and production environments across private, public and hybrid cloud computing resources
- Optimally provisioned cloud services based on capacity and policies (for example, workload and service placement)
- Server virtualization and dynamic workload movement and optimization
- Reconfiguring capacity during failure or disaster events
- Service-oriented architecture (SOA) and Java EE environments for dynamic scaling of application instances
- Specific and customized automation that is written for specific use cases (for example, scaling up/down or out/in a website that has variable demand)

Many IT organizations that have been maturing their IT management processes and using IT process automation (also known as run book automation [RBA]) tools to integrate processes and tools to enable complex, automated actions are moving closer to RTI through these actions. IT organizations that desire RTI should focus on maturing their management processes using ITIL and maturity models (such as Gartner's ITScore for I&O Maturity Model), as well as their technology architectures (such as through standardization, consolidation and virtualization). They should also build a culture that is conducive to sharing the infrastructure, and should provide incentives such as reduced costs for shared infrastructures.

Gartner recommends that IT organizations move to at least Level 3 (proactive) on the ITScore for I&O Maturity Model in order to plan for and implement RTI; below that level, a lack of skills and processes derails success. Organizations should investigate and consider implementing RTI solutions early in the public or private cloud, or across data centers in a hybrid implementation, where they can add business value and solve a particular pain point, but should not embark on data-center-wide RTI initiatives.

Business Impact: RTI has three value propositions, which are expressed as business goals:

- Reduced costs that are achieved by better, more efficient resource use and by reduced IT operations (labor) costs
- Improved service levels that are achieved by the dynamic tuning of IT services
- Increased agility that is achieved by rapid provisioning of new services or resources and by scaling the capacity (up and down) of established services across both internally and externally sourced data centers

Benefit Rating: Transformational
Market Penetration: 5% to 20% of target audience
Maturity: Emerging
Sample Vendors: Adaptive Computing; Amazon; BMC Software; CA Technologies; IBM; Microsoft; Oracle; Red Hat; RightScale; ServiceMesh; Tibco Software; VMTurbo; VMware
Recommended Reading:
"Cool Vendors in Cloud Management, 2013"
"Cool Vendors in Cloud Management, 2012"
"Provisioning and Configuration Management for Private Cloud Computing and Real-Time Infrastructure"
"How to Build an Enterprise Cloud Service Architecture"

Workspace Virtualization
Analysis By: Terrence Cosgrove; Mark A. Margevicius; Federica Troni

Definition: Workspace virtualization tools manage user-specific configurations (profiles, settings, policies and, in some cases, applications) separately from the OS. This allows users to have a personalized workspace regardless of the platform they use (PC, terminal server or virtual desktop). These tools help maintain consistency and compliance across multiple computing platforms. Some products in this category extend application personalization for hosted virtual desktops (HVDs), and serve as an alternative to application virtualization for HVDs.

Position and Adoption Speed Justification: Workspace virtualization was once a niche technology, used primarily to maintain user profiles in server-based computing (SBC) environments. During the past three years, it has entered the mainstream, driven by the growth of SBC, HVDs and mobile computing. Organizations typically develop a need for workspace virtualization when they expand SBC and HVD environments and encounter performance and scalability issues with roaming user profiles. The profile management aspect of workspace virtualization is maturing. Many products can now maintain profiles reliably across SBC, physical and HVD environments, and, therefore, pricing for these products has declined during the past 18 months. Large organizations with a high degree of complexity (e.g., multiple application delivery and client computing architectures) may still require best-of-breed products in this space.

As HVD environments have grown, workspace virtualization vendors have added capabilities to address other personalization challenges beyond those associated with user profiles, such as application delivery (including user-installed applications), application privilege elevation and license management.

This area is less mature, and products still must improve scalability and some functional aspects to meet enterprise requirements. While some of the vendors in this space have developed products for mobile device management, mobile application management and file synchronization, workspace virtualization products are specific to Windows environments and are not applicable to non-Windows platforms (such as iOS and Android).

User Advice: Develop your requirements first, and then look to the market for tools. The most significant factors that affect product choices include:

- Platform support: Physical, HVDs and SBC, as well as back versions of Windows (i.e., XP and Server 2003).
- Complexity: Workspace virtualization tools vary in terms of complexity. In general, the most comprehensive products require substantial IT staffing to administer. Include your own staffing and administrative expertise in the evaluation.
- Profile management: Various events can trigger a profile capture (e.g., logoff and application close). More comprehensive coverage of profile triggers can reduce the instances of profile corruption and "last write wins" scenarios (see the sketch after this list).
- Application personalization: Some vendors in the market focus on persisting user-specific applications, while minimizing image complexity.

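The "last write wins" problem is easiest to see in miniature. In the hedged Python sketch below (the data structures are invented, not any product's profile format), two sessions log off in sequence; merging per-setting timestamps preserves the change from the earlier logoff instead of letting the later whole-profile save silently overwrite it.

    from datetime import datetime, timedelta

    def merge_profile(stored, captured):
        """Merge a captured settings snapshot into the stored profile.

        Each setting carries its own timestamp, so a concurrent session
        can only overwrite values it actually changed more recently,
        avoiding whole-profile "last write wins" corruption."""
        for key, (value, stamp) in captured.items():
            if key not in stored or stamp > stored[key][1]:
                stored[key] = (value, stamp)
        return stored

    if __name__ == "__main__":
        now = datetime.now()
        profile = {"wallpaper": ("blue", now)}
        # Session A changed the signature slightly later; session B changed
        # nothing but logs off afterward with a stale snapshot.
        session_a = {"signature": ("Regards, Pat", now + timedelta(seconds=5))}
        session_b = {"wallpaper": ("blue", now)}  # unchanged, older stamp
        for snapshot in (session_b, session_a):
            profile = merge_profile(profile, snapshot)
        print(profile)  # keeps both the wallpaper and the newer signature
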
Business Impact: Workspace virtualization tools can help improve the user experience by making the desktop more personalized and enhancing performance. They can also reduce infrastructure and operations costs by reducing the number of servers, storage and images that organizations must use to provide users with a personalized desktop. Workspace virtualization tools are critical to making user-centric computing work.

Benefit Rating: Moderate
Market Penetration: 5% to 20% of target audience
Maturity: Early mainstream
Sample Vendors: AppSense; Citrix; Liquidware Labs; Microsoft; RES Software; Scense; Unidesk; VMware

IT Service Dependency Mapping


Analysis By: Ronni J. Colville; Patricia Adams

Definition: IT service dependency mapping (SDM) tools discover, document and track relationships by leveraging blueprints or templates to map dependencies among infrastructure components (like servers, networks, storage and applications) in both physical and virtual environments to form an IT service view. The tools provide various methods to develop blueprints or templates for internally developed or custom applications. Key differentiators are breadth of blueprints, mainframe discovery, and depth of discovery across physical and virtual infrastructures.

Position and Adoption Speed Justification: Enterprises continue to struggle to maintain an accurate and up-to-date view of the system and application dependencies across the IT infrastructure components that make up IT services, usually relying on data manually entered into diagrams and spreadsheets that may not reflect a timely view of the environment. Many IT organizations have deployed discovery and inventory tools (often multiple tools in any given IT organization) that provide insight about individual components and basic parent/child information (software-to-hardware relationships), as well as license management, but these do not provide the peer-to-peer and hierarchical relationship information needed to understand how an IT service is configured. SDM tools continue to fall short in the area of homegrown or custom applications, not because of the tools themselves, but because they require multiple stakeholders (e.g., application development or support, system administrators, business liaisons) to work together. Although SDM tools provide functionality to develop the blueprints that depict the desired state or a known logical representation of an application or IT service, the task remains labor-intensive, which will slow enterprisewide adoption of the tools beyond their primary use of discovery.

The primary use case for these tools is still as a jump-start or companion to IT service view configuration management database (CMDB) projects. Manual modeling of IT services or applications was too labor-intensive and fraught with errors; leveraging IT SDM tools as a key source of relationship discovery provided a more reliable source and reduced (but did not eliminate) the need for manual efforts. An additional use case emerged in the past year specific to data center consolidations and disaster recovery. While the users of IT SDM tools are often the teams that are part of IT infrastructure and application support, these two newer use cases are driving new buyers.

While we moved the position of these tools along the Hype Cycle curve because of the challenges in adoption and broader use, we believe that movement may decelerate in the coming years based on another use case. With the expansive growth of virtual infrastructures and cloud projects, there is a need for IT SDM tools to help discover and track IT services throughout their life cycle of changes. While these tools are scan-based, and not real-time, the view of IT services that can be discovered will become more compelling with hybrid cloud deployments that include production applications (versus early adoption for development/test). IT organizations will need to understand where application components are being hosted. Cloud services providers (CSPs) will be driven to move workloads based on capacity and rightsizing exercises, while IT organizations will need to be concerned with maintaining availability and compliance. This kind of understanding and focus will require a Level 3 or Level 4 maturity, where change and configuration management are also achieved, as well as service orientation with a strong standards focus.

The vendor landscape is composed of a narrow set of suppliers that acquired point solution vendors, predominantly to complement their IT service view CMDB solutions, almost a decade ago. Some IT SDM solutions also have an embedded IT service view CMDB, or have some IT service view CMDB functionality (e.g., reconciliation). We have seen very little new vendor activity. Thus far, new tools have some capability for discovering relationships, but fall short in the depth and breadth of blueprints and types of relationships (e.g., for mainframes and virtualized infrastructures). While they don't compare competitively with the more mature tools, organizations with less complexity might find them sufficient.

To meet the demand of today's dynamic data center, IT SDM tools require expanded functionality for breadth and depth of discovery, such as a broad range of storage devices, virtual machines, mainframes and applications that cross into the public cloud. Although the tools are expensive, they can be easily justified by their ability to provide a discovered view of logical and physical relationships for applications and IT services because, without this information, IT cannot track relationship impacts for planned and unplanned changes. The adoption of these tools continues to increase because the number of new stakeholders (e.g., disaster recovery planners) and business drivers with growing use cases (e.g., enterprise architecture planning, data center consolidation and migration projects, growing virtual infrastructures) have increased. New requirements for hybrid cloud will likely take two to three years to mature, because most hybrid cloud activity is only now moving to production applications.

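Conceptually, what these tools maintain is a typed dependency graph, and change impact analysis is a traversal of it. The Python sketch below uses an invented miniature topology to show how hierarchical and peer-to-peer relationships let a tool answer "what is affected if this component changes?"; real SDM products layer discovery, blueprints and reconciliation on top of this idea.

    from collections import defaultdict, deque

    # Edges point from a component to the components that depend on it.
    # The topology is invented for illustration.
    dependents = defaultdict(list)
    for provider, consumer in [
        ("storage-array-1", "db-server-1"),    # hierarchical: runs-on
        ("db-server-1", "order-db"),
        ("order-db", "order-app"),             # peer-to-peer: talks-to
        ("vm-host-3", "order-app"),
        ("order-app", "Order Entry Service"),  # component-to-IT-service
    ]:
        dependents[provider].append(consumer)

    def impact_of_change(component):
        """Breadth-first traversal: everything downstream of a change."""
        affected, queue = set(), deque([component])
        while queue:
            for consumer in dependents[queue.popleft()]:
                if consumer not in affected:
                    affected.add(consumer)
                    queue.append(consumer)
        return affected

    # A planned change to the storage array affects the whole chain,
    # up to and including the business-facing IT service:
    print(impact_of_change("storage-array-1"))
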
User Advice: Evaluate IT SDM tools to address requirements for configuration discovery of IT infrastructure components and software, especially where there is a requirement for hierarchical and peer-to-peer relationship discovery. The tools should also be considered as precursors to IT service view CMDB initiatives. If the primary focus is to build out IT services or applications, be aware that if you select one tool, the vendor is likely to try to thrust its companion IT service view CMDB technology at you, especially if the CMDB is part of the underlying architecture of the discovery tool. If the IT SDM tool you select is different from the CMDB, ensure that the IT SDM vendor has an adapter to integrate and federate with the desired or purchased CMDB.

These tools can also be used to augment or supplement other initiatives, such as disaster recovery and data center consolidation, and other tasks that benefit from a near-real-time view of the relationships across a data center infrastructure (e.g., asset management). Although most of these tools aren't capable of action-oriented configuration functions (e.g., patch management), the discovery of relationships can be used for a variety of high-profile projects in which a near-real-time view of the relationships in a data center is required, including compliance reporting and enterprise architecture gap analysis. IT SDM tools can document what is installed and where, and can provide an audit trail of configuration changes.

If the use case for these tools is to gain visibility into your virtual or cloud infrastructure, ensure that the tool can discover and map virtual-to-virtual relationships (where IT services are within a single host or span hosts and data centers), as well as virtual-to-physical relationships (e.g., where the application might be virtualized, but the database might still be physical). If the virtual infrastructure includes public cloud resources, ensure that the IT SDM tool supports CSPs' APIs (e.g., Amazon).

Business Impact: IT SDM tools will have an effect on high-profile initiatives, such as IT service view CMDBs, by establishing a baseline configuration and helping populate the CMDB. These tools will also have a less glamorous, but significant, effect on the day-to-day requirements to improve configuration change control by enabling near-real-time change impact analysis, and by providing missing relationship data critical to disaster recovery initiatives.

The overall value of IT SDM tools will be to improve quality of service by providing a mechanism for understanding and analyzing the effect of changes to one component and its related components within a service. These tools enable a near-real-time view of relationships that previously would have been maintained manually, with extensive time delays for updates. The value is in the real-time view of the infrastructure, so that the effect of a change can be easily understood prior to release. This level of proactive change impact analysis can create a more stable IT environment, thereby reducing unplanned downtime for critical IT services, which will save money and ensure that support staff are allocated efficiently, rather than fixing preventable problems. Using dependency mapping tools in conjunction with tools that can make configuration-level changes, companies have experienced labor efficiencies that have enabled them to manage their environments more effectively and have improved the stability of their IT services.

Benefit Rating: High
Market Penetration: 5% to 20% of target audience
Maturity: Mature mainstream
Sample Vendors: BMC Software; CA Technologies; HP; IBM; Neebula; ServiceNow; VMware
Recommended Reading:
"Selection Criteria for IT Service Dependency Mapping Vendors"
"Toolkit: RFP/RFI for IT Service Dependency Mapping Tools"
"IT Service View CMDB Vendor Landscape, 2012"
"IT Service Dependency Mapping Vendor Landscape, 2012"
"Seven Steps to Select Configuration Item Data and Develop a CMDB Project That Delivers Business Value"

Business Service Management Tools


Analysis By: Colin Fletcher

Definition: Business service management (BSM) tools enable business-oriented prioritization of IT operations by supporting the construction of logical relationships between business priorities and the IT infrastructure and applications that support them. These constructs help define a real-time, end-to-end IT service model against which associated operational-status data is gathered, processed and provided via business-oriented dashboards that are used to support change impact planning, root cause analysis and other operational processes.

Position and Adoption Speed Justification: Most, if not all, organizations seek to dynamically focus IT operational resources on issues impacting the most important business functions at any given point in time. Very few, however, have been able to embrace and invest in the people, process and technology transformation required to succeed. The upstream, cross-functional nature of BSM often results in the uncovering of deficits in subordinate monitoring and operational capabilities. This typically causes significant discomfort and slows BSM initiatives, as multiple remedial efforts are spawned that must be addressed before any further progress can be made. Mature BSM-enabling technologies continue to evolve and improve and are increasingly appearing as capabilities in other toolsets (e.g., IT event correlation and analysis tools), leaving skill deficits (primarily business fluency) and organizational and process immaturity as the most significant, and perhaps most intractable, barriers to widespread adoption.

Faced with these difficult and often long-avoided challenges, many organizations have either slowed or stalled BSM initiatives in favor of smaller, more technology-focused efforts, such as application performance monitoring (APM) or infrastructure monitoring consolidation. There are real reasons to believe BSM initiatives will indeed traverse the Trough of Disillusionment, primarily due to external forces (increased competition from service providers whose very business depends on service alignment, IT recruitment of business-fluent staff, and disruptive technologies like virtualization and cloud that provide required dependency models as a matter of course); however, this journey will remain a long one.

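Stripped of product machinery, the service model described above is a tree whose leaves carry monitored status and whose root is a business service; the dashboard answer is a status rollup over the relationships. The Python sketch below illustrates a simple worst-status rollup on an invented model; commercial BSM tools add weighting, impact rules and business calendars on top.

    SEVERITY = {"ok": 0, "warning": 1, "critical": 2}

    # Invented service model: each entry lists the elements it depends on.
    model = {
        "Online Payments": ["payment-app", "payment-db", "edge-network"],
        "payment-app": ["app-vm-1", "app-vm-2"],
    }

    # Raw operational status from monitoring tools (leaves only).
    component_status = {
        "payment-db": "ok",
        "edge-network": "warning",
        "app-vm-1": "ok",
        "app-vm-2": "critical",
    }

    def rollup(node):
        """Roll the worst child status up toward the business service."""
        if node in component_status:
            return component_status[node]
        children = model.get(node, [])
        return max((rollup(child) for child in children),
                   key=lambda status: SEVERITY[status], default="ok")

    # A business-oriented dashboard needs only the top-level answer:
    print("Online Payments:", rollup("Online Payments"))  # -> critical
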
User Advice: IT organizations should investigate BSM once they have successfully developed significant operational maturity across all dimensions (people, process and technology), so that IT is operating in a service-aligned manner. In constructing BSM initiatives, take care to ensure a tightly focused, stepwise implementation that includes appropriately significant investments in organizational change and skill development, in addition to technology selection. The number of operational status and model data sources should be kept as small as possible, and plans to provide business service context to other toolsets should be treated as a separate effort, addressed only after successful implementation.

Business Impact: By enabling the business-aligned prioritization of IT operational efforts, all business services and processes across all verticals stand to realize:

- Productivity gains: via higher-quality service delivery and shorter mean time to repair
- Cost optimization: via a clearer understanding of the resources required to support a given service, and the ability to compare that cost with the service's value to the business
- Risk mitigation: via a clear definition of systems that require specific controls and assigned ownership from BSM initiatives

BSM initiatives also provide a level of transparency and business context to the mechanics of IT operations, typically through dashboards and reports, which fosters the level of collaboration between business and IT leaders needed to discover new business opportunities and the optimal way to design IT's support of them.

Benefit Rating: High
Market Penetration: 5% to 20% of target audience
Maturity: Adolescent

Sample Vendors: ASG; Blue Elephant; BMC Software; CA Technologies; Centerity; Centreon; Compuware; eMite; EMC; Firescope; GroundWork Open Source; HP; IBM (Tivoli); Interlink Software; Kratos Defense; Microsoft; Neebula; NetIQ; SL; Tango/04; USU; Zenoss; Zyrion
Recommended Reading:
"Toolkit: IT Operations Monitoring Assessment and Rationalization"
"Deploy a Multivendor Strategy for Availability and Performance Monitoring"
"Aligning ECA and BSM to the IT Infrastructure and Operations Maturity Model"
"Market Definition: Availability and Performance Monitoring"

Network Configuration and Change Management Tools


Analysis By: Vivek Bhalla; Jonah Kowall; Colin Fletcher

Definition: Network configuration and change management (NCCM) tools are focused on setup and configuration, patching, rollout and rollback, resource use and change history. These tools discover and document network device configurations; detect, audit and alert on changes; compare configurations with the policy or "gold standard" for that device; and deploy configuration updates to multivendor network devices.

Position and Adoption Speed Justification: NCCM remains primarily a labor-intensive, manual process that involves remote access (for example, via Telnet or Secure Shell [SSH]) to individual network devices and typing commands into vendor-specific, command-line interfaces. These activities are fraught with opportunities for human error, and alternative approaches, such as homegrown scripts that reduce retyping, are used to ease effort rather than to ensure accuracy and eliminate inconsistencies. Enterprise network managers do not often consider rigorous configuration and change management, compliance audits and disaster recovery (DR) rollback processes when executing network configuration alterations, even though these changes often are the root causes of network issues. However, corporate audit and compliance initiatives have forced a shift in this behavior. NCCM tool vendors are meeting these challenges by providing solutions that operate in multivendor environments, enable automated configuration management, bring more rigorous adherence to the change management process and provide compliance audit capabilities. The market has progressed to the point where many startups have been acquired, and new vendors have entered the market using various angles to differentiate themselves.

NCCM tools are nearing the Trough of Disillusionment, although not because of the tools, which work well and can deliver strong benefits to a network management team. The network configuration management discipline is held back by a lack of process maturity that has pushed network teams toward taking their own pragmatic approaches to resolving their organization's specific requirements. This has frequently resulted in a cultural reluctance to modify standard operating procedures that have evolved organically rather than systematically.

Network configuration management is frequently practiced by router experts who are the only individuals familiar with the arcane command-line interfaces of the various network devices. Without a sufficient level of process maturity, it is a challenge to transform this status quo, particularly when there is resistance to change from those who may feel their skills are being undervalued. A top-down effort from senior IT management and a change in personnel performance review metrics are needed to convince network managers of the business importance of documented network device configuration policies, rigorous change management procedures and tested DR capabilities.

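The audit step these tools automate is, at bottom, a comparison of each device's running configuration against the "gold standard" for its class. The Python sketch below illustrates that comparison with the standard library's difflib; the device name and configuration lines are invented, and a real tool would pull the running configuration over SSH or a vendor API rather than from an in-memory list.

    import difflib

    # Invented policy template and a device's running configuration.
    gold_standard = [
        "hostname EDGE-TEMPLATE",
        "ntp server 10.0.0.1",
        "logging host 10.0.0.50",
        "snmp-server community readonly RO",
    ]
    running_config = [
        "hostname edge-sw-17",
        "ntp server 10.0.0.1",
        "snmp-server community public RW",
    ]

    def audit(device, running, gold):
        """Report drift between the running config and the gold standard."""
        diff = difflib.unified_diff(gold, running, fromfile="gold-standard",
                                    tofile=device, lineterm="")
        return [line for line in diff
                if line.startswith(("+", "-"))
                and not line.startswith(("+++", "---"))
                and "hostname" not in line]  # hostnames are device-specific

    for finding in audit("edge-sw-17", running_config, gold_standard):
        print(finding)
    # Flags the missing logging host and the noncompliant SNMP community.
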
User Advice: Replace manual processes with automated NCCM tools to monitor and control network device configurations, to improve staff efficiency, reduce risk and enable the enforcement of compliance policies. Prior to investing in tools, establish standard network device configuration policies to reduce complexity and enable more-effective automated change. NCCM, although a discipline unto itself, must increasingly be considered part of the configuration and change management processes for an end-to-end IT service, and viewed as an enabler for the real-time infrastructure (RTI). New pressures are coming from cloud implementations, where policy-based network configuration updates must be made in lockstep with changes to other technologies, such as servers and storage, to initiate the end-to-end cloud service. This will require participation in strategic, companywide change management processes (which are usually implemented as part of an IT service support management toolset) and integration with configuration management tools for other technologies, such as servers and storage.

Network managers need to gain trust in automated tools before they let any product perform a corrective action without human oversight. With the cost minimization and service quality maximization promised by new, dynamically virtualized, cloud-based RTIs, automation is becoming a requirement, because humans will no longer be able to keep up with real-time configuration changes manually.

Business Impact: These tools provide an automated way to maintain network device configurations, offering an opportunity to lower costs, reduce the number of human errors and improve compliance with configuration policies.

Benefit Rating: Moderate
Market Penetration: 5% to 20% of target audience
Maturity: Adolescent
Sample Vendors: AlterPoint; BMC Software; CA Technologies; Cisco; Dell (Quest Software); Dorado Software; EMC; HP; IBM Tivoli; Ipswitch; ManageEngine; NetBrain; SolarWinds; Tail-f Systems; Uplogix; Zenoss
Recommended Reading:
"Take a Four-Step Network Configuration and Change Management Approach to Stem Disasters"
"MarketScope for Network Configuration and Change Management"

Configuration Auditing
Analysis By: Ronni J. Colville

Definition: Configuration auditing tools provide change detection and configuration assessment. Some can also provide reconciliation against approved change requests and/or can remediate to a desired state. Company-specific policies or industry-recognized security configuration assessment templates are used to maintain the fidelity of the system for auditing, hardening or improved availability. These tools mainly focus on requirements specific to servers or PCs, but some also address network components, applications, databases and virtual infrastructures.

Position and Adoption Speed Justification: Configuration auditing continues to be a top driver for adopting server provisioning and configuration automation in physical and virtual data center infrastructures. There is still a heightened awareness of security vulnerabilities and missing patches, as well as the requirement to provide documented change control for internal and external auditors. IT organizations establish policies that are translated into templates with specific configuration settings. Systems are then assessed against these company-specific policies (for example, the "golden image") or industry-recognized security configuration assessment templates (such as those of the U.S. National Institute of Standards and Technology and the Center for Internet Security). Some tools provide change detection in the form of file integrity monitoring (FIM), which can be used for PCI compliance and to support other policy templates (such as the United States Government Configuration Baseline [USGCB]). Exception reports can be generated, and some tools automatically return settings to their desired values, or block changes based on approvals or specific change windows. The reports generated by these tools are leveraged by a variety of stakeholders, including change managers, system administrators, internal auditors, external auditors and security staff.

With the frequency of changes being made across data centers and extending to public cloud resources, IT organizations can use configuration auditing tools as a mechanism to track and validate changes, and to enforce corporate standards. Although these tools focus on requirements specific to servers or PCs, some address network components, applications, databases and virtual infrastructures, including virtual machines (VMs). Access to public cloud resources is driven by visibility into their infrastructures, and IT organizations must understand how that access and visibility are enabled by cloud services providers (CSPs). CSPs will drive placement of VMs and applications based on capacity, which may fly in the face of the overall compliance requirements of the business. This conflict and responsibility will fall back to the IT organization; therefore, adoption of configuration auditing (both functionality and specific tooling) will become more important and more widespread.

Configuration auditing has two major drivers: external (regulatory compliance) and internal (improved availability). Technology implementation is gated by the organization's process maturity; prerequisites include the ability to define and implement configuration standards. Although a robust, formalized and broadly adopted change management process is desirable, configuration auditing

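Both core mechanisms named above, desired-state comparison and file integrity monitoring, are simple to express in code. The hedged Python sketch below (policy values and file paths are invented) shows a desired-state check of settings against a template alongside the hashing primitive underneath FIM; production tools add scheduling, reconciliation against approved change requests and remediation.

    import hashlib

    def audit_settings(actual, policy):
        """Desired-state check: report settings that deviate from policy."""
        return {key: (actual.get(key), expected)
                for key, expected in policy.items()
                if actual.get(key) != expected}

    def file_fingerprint(path):
        """FIM primitive: hash a file so later changes are detectable."""
        with open(path, "rb") as handle:
            return hashlib.sha256(handle.read()).hexdigest()

    if __name__ == "__main__":
        # Invented hardening template and collected server settings.
        policy = {"ssh_root_login": "no", "password_max_age_days": 90}
        actual = {"ssh_root_login": "yes", "password_max_age_days": 90}
        print("violations:", audit_settings(actual, policy))
        # -> violations: {'ssh_root_login': ('yes', 'no')}

        # FIM: compare a stored baseline hash with a fresh scan.
        baseline = file_fingerprint(__file__)
        if file_fingerprint(__file__) != baseline:
            print("integrity violation detected")
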
tools offer significant benefits for tracking configuration change activity without automating change reconciliation. Without the reconciliation requirement, other tools (e.g., operational configuration tools and security configuration assessment tools) can be considered for configuration auditing, which broadens the vendor landscape.

Configuration auditing tools are bought by those in operational system administration roles (e.g., system administrators and system engineers), and by security practitioners who need to assess security configuration standards. Security administrators often implement configuration auditing capabilities provided by various security products, while those in operational system administration roles tend to use the configuration audit functions within broader configuration management tools that also provide mitigation functions. The adoption of configuration auditing capabilities within broader operations and security tools will continue to accelerate, but point solutions will sometimes be purchased to address individual auditing and assessment needs. The breadth of platform coverage (e.g., servers, PCs and network devices) and policy support varies greatly among the tools, depending on whether they are security-oriented or operations-oriented. Thus, several tools may end up being purchased throughout an enterprise, based on the buying center and specific functional requirements.

User Advice: Develop sound configuration and change management practices before introducing configuration auditing technology in the organization. Greater benefits can be achieved if robust change management processes are also implemented, with the primary goal of becoming proactive (before the change occurs) versus reactive (tracking changes that violate policy, introduce risk and/or cause system outages). Process and technology deployment should focus on systems that are material to the compliance issue being resolved; however, broader functional requirements should also be evaluated, because many organizations can benefit from more than one area of focus, and often need to add new functions within 12 months.

Define the specific audit controls required before selecting configuration auditing technology, because each configuration auditing tool has a different focus and breadth (e.g., security regulation, system hardening, application consistency and OS consistency). IT system administrators, network administrators and system engineers should evaluate configuration auditing tools to maintain operational configuration standards, and to provide a reporting mechanism for change activity. Security officers should evaluate the security configuration assessment capabilities of incumbent security technologies to conduct a broad assessment of system hardening and security configuration compliance, independent of operational configuration auditing tools. Enterprise and cloud architects, as well as cloud administrators who are expanding their computing infrastructure to public cloud providers or creating hybrid cloud infrastructures, must insist on specific compliance policies and governance capabilities that meet company regulatory and availability requirements.

Business Impact: Not all regulations provide a clear definition of what constitutes compliance for IT operations and production support, so businesses must select reasonable and appropriate controls based on reasonably anticipated risks, and should build a case that their controls are appropriate for the situation. This is not a "one and done" exercise. Policies must be continually reviewed and revised to keep pace with updates to regulatory standards, and to ensure that new technologies introduced into the computing infrastructure have the appropriate level of compliance governance applied.

Reducing unauthorized change is part of a good control environment. Although configuration auditing has been handled individually in each IT domain, as enterprises begin to develop an IT service view, configuration reporting and remediation (as well as broader configuration management capabilities) will ensure reliable and predictable configuration changes, and will offer policy-based compliance with audit reporting.

Benefit Rating: High
Market Penetration: 20% to 50% of target audience
Maturity: Mature mainstream
Sample Vendors: BMC Software; IBM; NetIQ; Qualys; Symantec; Tripwire; VMware
Recommended Reading:
"Server Configuration Baselining and Auditing: Vendor Landscape"
"Market Trends and Dynamics for Server Provisioning and Configuration Management Tools"
"Security Configuration Management Capabilities in Security and Operations Tools"

IT Management Process Maturity


Analysis By: George Spafford; Ian Head; Tapati Bandopadhyay

Definition: Gartner's ITScore for infrastructure and operations (I&O) posits that four disciplines contribute to maturity: people management, process management, technology management and business management. Users conduct an online, questionnaire-based self-assessment that indicates maturity for each discipline on a 1-to-5 scale.

Position and Adoption Speed Justification: As part of IT organizations' efforts to align themselves more effectively with business needs, IT operations departments are being pressured to move from a component orientation (such as managing networks, servers, storage, databases and applications) to managing business-oriented, end-to-end IT services. Because IT operations departments have been segregated into technical, silo-oriented organizational structures and metrics, this transition is usually challenging. Over 800 organizations have completed Gartner's ITScore for I&O since 2012, with a mean score of 2.33. Process management is the lowest-maturity discipline, which suggests that, although most organizations have worked on incident and change management to some extent, most have not achieved consistent process alignment, nor moved forward into end-to-end service management and integrated IT management processes.

Gartner regards the proactive state (Level 3) as the start of maturity, and the ITScore statistics align with Gartner's other client interactions, suggesting that fewer than 10% of large-enterprise IT organizations have made the transition to the higher proactive, service-aligned or business partnership levels of the ITScore maturity model. To do so requires incorporating principles of service design, implementing end-to-end IT SLAs and managing IT service delivery to those SLAs. Thus, Gartner places IT management process maturity at a market penetration of 5% to 20% of the target audience.

Based on the challenges involved, Gartner estimates that, for most organizations, the transformation from the committed state (Level 2) of IT management process maturity through the proactive state (Level 3) to the service-aligned state (Level 4) will take at least three years, and may take five years or longer if commitment is limited. Significant cultural change is required, as well as strong leadership and vision. This realization has pushed the concept of IT management process maturity into the Trough of Disillusionment as the profound organizational change it requires becomes abundantly clear to many enterprises. However, the benefits of improved IT service quality, greater levels of agility, lower costs and reduced risks, especially risk associated with innovation, real-time infrastructure (RTI) and cloud computing, will require enterprises to mature and continually optimize their system of IT management processes. Organizations can be at varying levels of IT management process maturity in different service and technology domains. For positioning on this Hype Cycle, however, Gartner is assessing the aggregate measure of process maturity across the I&O organization, as needed to implement RTI architectures.

User Advice: Assess your position in the process discipline of the ITScore maturity model. Study the definitions and descriptions for each level, and set a goal for the level you must reach to support your business. At a minimum, you should set a goal of reaching the proactive level (Level 3); below this level, costs are too high, and service quality and agility are too low. Industry leaders will need to reach the service-aligned level (Level 4) of the model, where they can target sustainable success in provisioning and maintaining demand-based, cost-effective business services that are dynamically optimized during runtime execution.

All practitioners should define and document integrated and flexible IT management processes that can be instrumented and automated to create a service delivery chain, with a focus on improving the performance of the end-to-end IT service. Leverage industry-standard IT management process guidance, such as COBIT, the IT Infrastructure Library (ITIL) and International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) 20000, during process design and continual improvement efforts. Those seeking to control risk while raising agility should consult Gartner's research on ValueOps.

Improving IT management process maturity will affect your organizational design, because it generally requires a matrix management approach, with process managers whose responsibilities cross multiple component-oriented technology domains, such as servers, storage and networking. In addition to the ITScore for I&O maturity model, specific process maturity models provide a valuable diagnostic tool to help management continually improve I&O. Independent frameworks include the ITIL Service Management Process Maturity Framework and the ISO 15504 Information Technology Process Assessment family.

Establishing organizational change plans, including the necessary personnel performance metrics and rewards, will encourage IT operations management and staff to look beyond reactive "firefighting" modes and devote the time and training necessary to document repeatable processes, become proficient in their execution and achieve predictable service quality. Clearly, senior management support is essential where significant organizational change programs are undertaken.

"firefighting" modes and devote the time and training that are necessary to document repeatable processes, become proficient in their execution and achieve predictable service quality. Clearly, senior management support is essential where significant organizational change programs are undertaken. Business Impact: Moving to a higher level of IT management process maturity improves service quality and business value of IT services by enabling greater levels of organizational agility and lower labor costs (personnel costs are on average 40% of I&O operational expenditures), all of which positively affects the efficiency and effectiveness of IT operations, as well as increases IT's contribution to the business. More mature IT enables the business to take advantage of leading infrastructure technologies, including RTI and cloud computing architectures. Benefit Rating: Transformational Market Penetration: 5% to 20% of target audience Maturity: Adolescent Recommended Reading: "ITScore for Infrastructure and Operations" "ITScore for I&O Analysis: Take Action Now to Improve Your Organization's Maturity" "Infrastructure & Operations Maturity: How Do You Compare?" "Use a ValueOps Perspective to Balance Risk and Agility in IT Operations" "Successful ITIL and Service Management Projects Avoid These 10 Common Failings" "Five Steps Toward a Faster, Better, Cheaper I&O"

ITIL
Analysis By: Ian Head; Tapati Bandopadhyay; Simon Mingay

Definition: ITIL is an IT service management framework that provides guidance on the full life cycle of IT services. ITIL is part of a joint venture between the U.K. government and Capita. ITIL is structured as five core books: service strategy, service design, service transition, service operation and continual service improvement. Specific implementation guidance is not provided; the focus is a set of good practices that an organization should adapt to its needs.

Position and Adoption Speed Justification: ITIL has been evolving for more than 20 years. It is well-established as the de facto standard in service management, and shares many concepts and principles with the formal service management standard ISO/IEC 20000, although the alignment is not perfect, with differences reflecting the different origins and goals of the two bodies of work.

Gartner, Inc. | G00252566 This research note is restricted to the personal use of [email protected]

Page 71 of 98

This research note is restricted to the personal use of [email protected]

The current release, ITIL 2011, was the first update to the major version 3 (v.3) release in 2007. ITIL has the highest adoption rate of the related frameworks used within IT operations (e.g., COBIT, CMMI, MOF) and is mainstream today. Based on our polls, most organizations worldwide use the ITIL framework, but the number of organizations using additional approaches, such as continuous delivery and DevOps, is growing. Also, even after the significant improvement to service strategy in the 2011 update, ITIL is primarily used for guidance in service operation and transition. This unbalanced adoption is the reason penetration is shown as 20% to 50%.

The current version of ITIL covers the entire IT service life cycle. This includes service strategy, business relationship management, transition planning and support, and design coordination, as well as essential operational processes such as incident management and change management. ITIL advises on IT strategies to enable the business, processes for the design of IT services, their transition into production, ongoing operational support, and continual service improvement. In general, service transition and service operation are the most commonly used books and could arguably justify a position higher on the Plateau of Productivity. In contrast, service strategy has not gained momentum since the 2011 rewrite and, therefore, could be placed much earlier in the Hype Cycle. Integration, defined as the exchange of information, is a key focus, and ITIL 2011 provides much clearer guidance with respect to integration and the scope of different processes, such as change management and transition planning.

For nearly all IT organizations, ITIL can play a major role in operational process design, even where cloud, hybrid and Pace-Layered Application Strategy are embraced. ITIL will continue to serve as a source of guidance for those responsible for delivering IT services through their process and organization design and tool selection and implementation. Overall, we continue to see a tremendous span of adoption and maturity levels. Some organizations are just embarking or have stalled on their journey for a variety of reasons, whereas others are well on their way and pursuing continual improvement. Leaders are integrating ITIL with other approaches to improve service, with lean and DevOps being notable examples. In fact, a combination of process guidance from various sources tends to do a better job of addressing requirements than any framework in isolation.

User Advice: Leverage ITIL as guidance to accelerate the adoption of industry best practices, refined to meet the needs of your specific business goals. Some recent developments, such as the rise of agile methods and the Pace-Layered Application Strategy (see Gartner's ValueOps research), have yet to be explicitly reflected in the ITIL body of knowledge. While the core practices are sound, users currently need to look for additional inspiration in sources such as ValueOps, lean, DevOps and continuous integration if they are to keep up with changing operational needs. ITIL is helpful in putting IT service management into a strategic context and providing high-level guidance on reference processes and other factors in the service life cycle. To optimize service improvements, IT organizations must first define objectives and then pragmatically leverage ITIL during the design of their own unique processes. There is a large pool of ITIL-trained staff available, so this requirement should be part of the development and recruitment process.

Business Impact: ITIL provides a framework for the strategy, design, transition, operation and continual improvement of IT services, including the organization, processes, technology and management practices that underpin them. Most IT organizations need to start or continue the transition from their traditional technology and asset focus to a focus on services and service outcomes as described in this framework. IT service management is a critical discipline in achieving that change, and ITIL provides useful reference guidance for IT management. Service management professionals must also accept that ITIL is not a standard and, therefore, precise implementation instruction is not provided.

Benefit Rating: Transformational
Market Penetration: 20% to 50% of target audience
Maturity: Early mainstream
Recommended Reading:
"ITIL 2011 Service Strategy: An Important Missing Link Between IT and Business"
"Five Ways to Manage IT Service Transitions to Cloud, Leveraging ITIL Processes and ITOM Tools"
"Use Six Sigma With ITIL 2011 to Improve IT Operations Processes and Effectively Leverage the Cloud"
"How to Leverage ITIL 2011 and Avoid Three Common Cost Traps"
"Increase I&O Effectiveness With the ValueOps Perspective"
"Running IT Like a Business 2.0: The Service-Optimizing IT Delivery Model"

Server Provisioning and Configuration Management


Analysis By: Ronni J. Colville; Donna Scott

Definition: Server provisioning and configuration management tools manage the software configuration life cycle for physical and virtual servers. Some vendors offer functionality for the entire life cycle; others offer point solutions in one or two areas. The main categories are server provisioning (physical or virtual); application provisioning (binaries) and configuration management; patching; inventory; and configuration compliance.

Position and Adoption Speed Justification: Server provisioning and configuration management tools continue to expand their depth of function, as well as integrate with adjacent technologies, such as IT process automation tools and, most recently, cloud management platforms, for which they often provide a core capability for initial provisioning and virtual machine provisioning. Although the tools continue to progress, configuration policies, organizational structures (server platform team silos) and processes inside the typical enterprise are causing organizations to struggle with full life cycle adoption. As cloud initiatives progress and mature, there will be a "Day 2" requirement for configuration hygiene to manage configuration compliance and patching, which will bring these tools back into the picture. Virtual server provisioning also offers another option: cloning or copying the VM and making subsequent changes to personalize the clone (versus using a tool to manage the overall stack).

Initial private cloud initiatives were focused on infrastructure as a service (IaaS), in which thin and standard OS images were provisioned with the VM, but additional software was then layered on top via application provisioning and configuration management. More recent private cloud initiatives also include a focus on middleware and database provisioning (internal platform as a service [PaaS]), which typically uses application provisioning and configuration management to provide the software stack on top of the standard OS, because this method reduces the image sprawl that comes with the combinations and permutations of software stack builds.

Most large enterprises have adopted one or more of the vendors in this category, with varying degrees of success and life cycle deployment. As a result, we are seeing an uptick in the adoption of additional life cycle functionality by both midsize and large enterprises to solve specific problems (e.g., multiplatform provisioning and compliance-driven audits, including improved patch management). Another shift in the past year is the focus on DevOps, which appeals to organizations that want to build infrastructure via code, as their application development teams do. Tools that were previously open source and have shifted to commercial offerings are becoming an alternative to the "GUI-based" traditional tools in this category, especially for Linux. These tools offer a programmatic approach to provisioning and configuring software on top of physical and virtual servers; some offer bare-metal initial provisioning (and some can also address networking) with a scalable approach. Initially, they offered a different approach (pull versus push) compared with the traditional vendors in this category, but of late, most have also added push capability.

Cloud computing trends are encouraging standardization; for that reason, we believe penetration and broader adoption of these tools will increase more rapidly in the next two to five years. However, these tools are continuing to progress toward the Trough of Disillusionment, not so much because of the tools as because of IT organizations' inability to standardize and use the tools broadly across the groups supporting the entire software stack. There could be a rejuvenation of these tools in two years as cloud adoption matures and the subsequent need to manage inside the VM becomes a renewed priority.

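The programmatic, "infrastructure as code" approach described above reduces to declaring a desired state and letting an agent converge each server toward it idempotently. The Python sketch below imitates that convergence loop in miniature; the resource type and its checks are invented stand-ins, not the DSL of Puppet, Chef or any other tool named in this profile.

    class PackageResource:
        """Invented stand-in for a declarative 'package' resource."""
        def __init__(self, name, installed_db):
            self.name, self.db = name, installed_db

        def in_desired_state(self):
            return self.name in self.db

        def converge(self):
            print(f"installing package {self.name}")
            self.db.add(self.name)

    def apply_catalog(resources):
        """Idempotent apply: only touch resources that have drifted.

        Running it twice changes nothing the second time, which is what
        makes pull-based agents safe to run on a schedule."""
        for resource in resources:
            if not resource.in_desired_state():
                resource.converge()
            else:
                print(f"{resource.name}: already in desired state")

    if __name__ == "__main__":
        installed = {"openssl"}  # simulated package database
        catalog = [PackageResource(p, installed) for p in ("openssl", "nginx")]
        apply_catalog(catalog)   # installs nginx only
        apply_catalog(catalog)   # second run: no changes
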
User Advice: With an increase in the frequency and number of changes to servers and applications, IT organizations should emphasize the standardization of server stacks and processes to improve availability, as well as to succeed in using server provisioning and configuration management tools for physical and virtual servers. Besides improving quality, these tools can reduce the overall cost of managing and supporting patching, rapid deployments and VM policy enforcement, as well as provide a mechanism to monitor and enforce compliance.

Evaluation criteria should include capabilities focused on multiplatform physical and virtual provisioning, software deployment and installation, and continued configuration for ongoing maintenance, as well as auditing and reporting. The criteria should also include the capability to address the unique requirements of virtual servers and VM guests.

When IT standards have been put in place, we recommend that organizations implement these tools to automate manual tasks for repeatable, accurate and auditable configuration change control. The tools help organizations gain efficiencies in moving from a monolithic imaging strategy to a dynamic, layered approach to incremental changes. When evaluating products, organizations need to:

- Evaluate functionality across the life cycle, and not just the particular pain point at hand.
- Consider physical systems, physical hosts, and VM server provisioning and configuration management requirements together.
- Conduct rigorous testing to ensure that functionality is consistent across required platforms.
- Ensure that tools address a variety of compliance requirements.

If private clouds are a focus, it is also important to understand whether the cloud management platform can provide Day 2 server provisioning and configuration management capability, or whether a separate tool will be needed to supplement it. If the latter, integration or coexistence will be needed and should also be part of the evaluation.

Business Impact: Server provisioning and configuration management tools help IT operations automate many server-provisioning tasks, thereby lowering the cost of IT operations, enforcing standards, and increasing application availability and the speed of modifications to software and servers. They also provide a mechanism for enforcing security and operational policy compliance.

Benefit Rating: High
Market Penetration: 20% to 50% of target audience
Maturity: Early mainstream
Sample Vendors: BMC Software; CA Technologies; CFEngine; HP; IBM; Microsoft; Opscode; Puppet Labs; SaltStack; ScaleXtreme; Tripwire; VMware
Recommended Reading:
"Midsize Enterprises Should Use These Considerations to Select Server Provisioning and Configuration Tools"
"Server Provisioning Automation: Vendor Landscape"
"Provisioning and Configuration Management for Private Cloud Computing and Real-time Infrastructure"
"Server Configuration Baselining and Auditing: Vendor Landscape"
"Market Trends and Dynamics for Server Provisioning and Configuration Management Tools"
"The Patch Management Vendor Market Landscape, 2011"

Gartner, Inc. | G00252566 This research note is restricted to the personal use of [email protected]

Page 75 of 98

This research note is restricted to the personal use of [email protected]

"Cool Vendors in DevOps, 2012" "Cool Vendors in IT Operations Management, 2012" "Cool Vendors in DevOps, 2013"

IT Asset Management Tools


Analysis By: Patricia Adams

Definition: IT asset management (ITAM) is a centralized repository, essentially an information hub, that holds inventory, financial and contractual data that can then be used to manage the IT asset throughout its life cycle. ITAM depends on robust processes, with tools to automate manual processes. This data enables organizations to effectively manage IT assets, vendors, and a software and hardware asset portfolio from requisition through retirement, monitoring the asset's performance throughout its life cycle.

Position and Adoption Speed Justification: The ITAM discipline, when integrated with tools, is adopted during business cycles that reflect the degree of emphasis that enterprises put on controlling costs and on managing and optimizing the use of hardware and software. Both hardware asset management and software asset management are subprocesses of the holistic ITAM discipline. With an increased focus on software audits, configuration management databases (CMDBs), bring your own device (BYOD), managing virtualized software on servers and clients, developing IT service catalogs and tracking software license use in the cloud, ITAM initiatives will gain priority and acceptance in IT operations. ITAM data is necessary to understand the costs associated with a business service, and the resulting data is used to make decisions about standards and demand forecasts. Without this data, companies don't have accurate cost information on which to base decisions regarding service levels that vary by cost, or to implement chargeback/showback. Visibility into contract events is also critical when decisions are being made to extend life cycles or refresh faster. Software as a service (SaaS) delivery models are having an effect on market adoption growth rates, albeit a slow one. We expect ITAM market penetration, currently at 45%, to continue growing during the next five years.

User Advice: Many companies embark on ITAM initiatives in response to specific problems, such as impending software audits (or shortly after an audit), CMDB implementations, virtual software sprawl or OS migrations. Inventory and software usage tools, which feed into an ITAM repository, can assist with software license compliance and monitor the use of installed applications. Without ongoing visibility into performance metrics, companies will remain in a reactive position, never achieving a proactive position that diminishes the negative effects of an audit or provides the ability to see whether the environment is performing effectively. ITAM has a strong operational focus, with tight linkages to IT service management and end-user client management, creating efficiencies and making effective use of software and hardware assets. Tools are purchased either stand-alone or as part of a CMDB or IT service and support management (ITSSM) suite, depending on the customer focus.

ITAM data can easily identify opportunities for the accurate purchase of software licenses, for the efficient use of all installed software and for harvesting unused software, and it can help ensure that standards are in place to lower support costs and rationalize the application portfolio.

To gain value from an ITAM program, a combination of people, policies, processes and tools must be in place. Begin by focusing on the life cycle and process associated with the current problem, but chart a course aimed at resolving higher-level problems, such as building a service catalog or facilitating application rationalization processes. As process maturity improves, ITAM efforts will focus increasingly on financial and spending management related to controlling asset investments, and will provide integration with project and portfolio management and enterprise architecture. ITAM processes and best practices also play a role in how operational assets are managed. Companies should plan for this evolution in thinking.

Business Impact: As more enterprises implement an IT service management strategy, an understanding of the costs to deliver business IT services will become essential. It is a necessity to ensure that external vendor contracts are in place to deliver the specified service levels the business requires, especially as more software is put in the cloud. ITAM tools that have bidirectional data feeds into many systems, such as enterprise architecture, client configuration management, portfolio management and CMDBs, will assist with achieving a holistic view of the organization's assets. The ITAM discipline will realize value in organizations that undertake these projects as part of an ongoing strategy that continuously evaluates opportunities to achieve savings.

Benefit Rating: Moderate

Market Penetration: 20% to 50% of target audience

Maturity: Early mainstream

Sample Vendors: BMC Software; CA Technologies; EasyVista; HP; IBM; Provance Technologies; ServiceNow

Recommended Reading:
"MarketScope for the IT Asset Management Repository"
"How to Build PC ITAM Life Cycle Processes"
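
At its simplest, the license optimization opportunity described above is a reconciliation of contract entitlements against discovered installations and usage recency. The sketch below is illustrative only; the record fields, the 180-day staleness threshold and the sample data are hypothetical, not any ITAM product's schema.

```python
# Illustrative software license reconciliation: compare contract
# entitlements with discovered installations and usage recency.
# All field names, thresholds and data are hypothetical.
from collections import Counter

entitlements = {"VisioPro": 100, "AcmeCAD": 25}    # licenses owned
installs = ["VisioPro"] * 120 + ["AcmeCAD"] * 15   # from inventory discovery
days_since_use = {"VisioPro": 30, "AcmeCAD": 400}  # usage-monitoring data

installed = Counter(installs)
for product, owned in entitlements.items():
    deployed = installed[product]
    if deployed > owned:
        # Compliance gap: buy licenses or harvest surplus installs.
        print(f"{product}: over-deployed by {deployed - owned} licenses")
    elif days_since_use.get(product, 0) > 180:
        # Shelfware: installed but stale; candidate for harvesting.
        print(f"{product}: {deployed} installs unused for "
              f"{days_since_use[product]} days; harvest before renewal")
    else:
        print(f"{product}: compliant ({deployed}/{owned} deployed)")
```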

Service-Level Reporting Tools


Analysis By: Ian Head

Definition: Service-level reporting tools incorporate and aggregate multiple types of metrics from various management disciplines; provide a calendaring function to specify service hours, planned service uptime and scheduled maintenance periods; and compare measured results to the service-level targets agreed to between the IT operations organization and the business units to determine success or failure. At a minimum, they must incorporate service desk metrics along with IT infrastructure availability and performance metrics.

Position and Adoption Speed Justification: Just about every IT operations management software vendor offers basic reporting tools that they typically describe as service-level management. In general, these tools do not satisfy all the requirements in Gartner's definition of the service-level reporting category. Thus, the industry suffers from product ambiguity and market confusion, causing this category to remain positioned near the Trough of Disillusionment. Most IT operations management tools are tactical tools for specific domains (such as IT service desk, network monitoring or server administration) in which production statistics are collected for component- or process-oriented operational-level agreements, rather than true, service-aligned, business-oriented SLAs. Only the IT organizations that have attained the service-aligned level of Gartner's ITScore Maturity Model for IT infrastructure and operations (see "ITScore for Infrastructure and Operations") have the skills and expectations to demand end-to-end IT service management capabilities from their service-level reporting tools. ITScore self-assessments report that well under 10% of IT organizations are at the service-aligned level, which slows the adoption speed and lengthens the time to the Plateau of Productivity. Some cloud computing vendors have developed simplistic service displays for their infrastructures and applications, but they're not heterogeneous and do not include on-premises infrastructures and applications.

User Advice: Clients use many types of tools to piece together their service-level reports, including basic spreadsheets and PowerPoint presentations. Although service-level reporting tools can be used to track just service desk metrics or IT infrastructure component availability and performance metrics, they are most valuable when used by clients who have defined business-oriented IT services and SLAs with penalties and incentives. End-user response time metrics (including results from application performance monitoring [APM] tools) can enhance service-level reports, and are sometimes used as a "good enough" proxy for end-to-end IT service quality. Service-level reporting tools will increasingly have to deal with on-premises applications and infrastructure, as well as cater to off-premises cloud infrastructures and applications.

When evaluating a service-level reporting tool, ensure it can report on all of the various metrics needed to manage multiple SLAs. In addition to comparing measured historical results to service-level targets at the end of the reporting period, more-advanced service-level reporting tools will keep a running, up-to-the-minute total that displays real-time service-level results, and predicts when service levels will not be met. This forewarns IT operations staff of impending trouble (a minimal sketch of this calculation appears at the end of this profile).

Monitoring alone will not solve service-level problems. IT organizations need to focus on changing workplace cultures and behavior so that employees are measured, motivated and rewarded based on end-to-end IT service quality, as this affects business goals. Clients should choose SLA metrics wisely and move toward measures of business value so that this exercise provides action-oriented results.

Business Impact: SLAs help the IT organization demonstrate its value to the business.
Once IT and the business have agreed to IT service definitions and, thus, established a common nomenclature, service-level reporting tools are used as the primary communication vehicles to corroborate that IT service quality is in compliance with business customer requirements. Defining business-oriented IT services with associated SLAs, proactively measuring service levels and reporting on compliance can help IT organizations deliver more-consistent, predictable performance and maintain customer satisfaction with IT services.

By tracking service levels and analyzing historical service-level trends, IT organizations can use service-level reporting tools to predict and prevent problems before they affect business users.

Benefit Rating: Moderate

Market Penetration: 5% to 20% of target audience

Maturity: Adolescent

Sample Vendors: BMC Software; CA Technologies; Compuware; eMite; Grand Central Communications; GrandSLA; HP; IBM Tivoli; Interlink Software; NetIQ; VMware

Recommended Reading:
"The Challenges and Approaches of Establishing IT Infrastructure Monitoring SLAs in IT Operations"
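
The sketch below makes the calendaring and target-comparison behavior described above concrete: availability is measured over agreed service hours, with scheduled maintenance excluded. The service hours, maintenance window and 99.5% target are hypothetical examples, not values from any specific tool.

```python
# Minimal SLA attainment calculation: availability over agreed service
# hours, excluding scheduled maintenance. All windows and the target
# are hypothetical examples.
from datetime import datetime

def minutes(start: datetime, end: datetime) -> float:
    return (end - start).total_seconds() / 60.0

period_start = datetime(2013, 7, 1, 8, 0)   # agreed service hours begin
period_end   = datetime(2013, 7, 1, 18, 0)  # agreed service hours end
maintenance  = [(datetime(2013, 7, 1, 12, 0), datetime(2013, 7, 1, 12, 30))]
outages      = [(datetime(2013, 7, 1, 9, 0), datetime(2013, 7, 1, 9, 15))]
target = 99.5  # availability target (%), per the SLA

# Assume, for simplicity, that outages fall outside maintenance windows.
scheduled = minutes(period_start, period_end) - sum(minutes(s, e) for s, e in maintenance)
downtime = sum(minutes(s, e) for s, e in outages)
availability = 100.0 * (scheduled - downtime) / scheduled

print(f"Availability: {availability:.2f}% (target {target}%)")
print("SLA met" if availability >= target else "SLA breached")
```

A running version of the same calculation, updated as samples arrive, is what lets the more advanced tools predict a breach before the reporting period closes.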

Climbing the Slope


Hosted Virtual Desktops
Analysis By: Mark A. Margevicius; Ronni J. Colville; Terrence Cosgrove

Definition: A hosted virtual desktop (HVD) is a full, thick-client user environment run as a virtual machine (VM) on a server and accessed remotely. HVD implementations comprise server virtualization software to host desktop software (as a server workload), brokering/session management software to connect users to their desktop environments, and tools for managing the provisioning and maintenance (e.g., updates and patches) of the virtual desktop software stack.

Position and Adoption Speed Justification: An HVD involves the use of server virtualization to support the disaggregation of a thick-client desktop stack that can be accessed remotely by its user. By combining server virtualization software with a brokering/session manager that connects users to their desktop instances (that is, the OS, applications and data), enterprises can centralize and secure user data and applications, and manage personalized desktop instances centrally. Because only the presentation layer is sent to the accessing device, a thin-client terminal can be used. For most early adopters, the appeal of HVDs has been the ability to thin the accessing device without significant re-engineering at the application level (usually required for server-based computing).

While customers implementing HVDs cite many reasons for deployments, three important factors have contributed to the increased focus on HVD: the desire to implement new client computing capabilities in conjunction with Windows 7 migrations, the desire for bring your own device (BYOD) and device choice (particularly iPads), and the uptick in customers focused on security and compliance issues. During the past few years, the adoption of virtual infrastructures in enterprise data centers has increased, making HVDs easier to deploy. With this increase comes a level of maturity and an understanding of how to better utilize the technology. This awareness aids HVD implementations where desktop engineers and data center administrators work together.

Early adoption of this technology was hindered by several factors, including licensing compliance issues for the Windows client OS. This has since been resolved through Microsoft Windows Virtual Desktop Access (VDA) licensing offerings; however, the cost still inhibits adoption. This was only one aspect of the higher total cost of ownership (TCO) associated with implementing HVD on a broad scale. Although many IT organizations made significant progress in virtualizing their data center server infrastructures, HVD implementations required additional virtual capacity for servers and storage (above and beyond what was in place for physical-to-virtual migrations). Even with Microsoft's reduced license costs for the Windows OS, which enable an HVD image to be accessed from a primary and a secondary device with one license, other technical issues still hinder mainstream adoption.

Since late 2007, HVD deployments have grown steadily, reaching around 18 million users by the end of 1Q13. Because of the constraints, the broad applicability of HVDs has been limited to specific scenarios, primarily structured-task workers in call centers, kiosks, trading floors and secure remote access. About 50 million endpoints remain the target population of the total 700 million desktops. Throughout the second half of 2013 and into 2014, we expect general deployments to continue, albeit at a slower pace than in 2012. Inhibitors to general adoption involve the cost of the data center infrastructure required to host the desktop images (servers and storage in particular) and network constraints. Even with the increased adoption of virtual infrastructures, cost-justifying HVD implementations remains a challenge because of HVD and PC cost comparisons.

Some advancements in leveraging application virtualization make HVD less cumbersome by introducing the ability to layer applications. This makes managing the image and maintaining the HVD easier. Availability of the skills necessary to manage virtual desktops remains a challenge, as does deploying HVDs to mobile/offline users, despite the promises of offline VMs and advanced synchronization technologies. The virtual graphics processing units (GPUs) introduced in 2012 will eventually allow a broader audience, but will not have much impact until the end of 2013 and into 2014.

HVD marketing has promised diminishing marginal, per-user costs, due to the high level of standardization and automation required for successful implementations. However, this is currently only achievable for persistent users where images remain intact, a small use case of the overall user population. As other virtualization technologies mature (e.g., brokers and persistent personalization), this restraint will decrease. This will create a business case for organizations that adopt HVDs to expand their deployments, once the technology permits more users to be viably addressed. Enterprises that adopt HVDs aggressively will see later adopters achieve superior results for lower costs. However, these enterprises will need to migrate to new broker and complementary management software as products mature and standards emerge.

User Advice: Unless your organization has an urgent requirement to deploy HVDs immediately for securing the environment or centralizing data management, wait until late 2013 before initiating deployments for broader (mainstream) desktop user scenarios.
Through 2013 and 2014, all organizations should carefully assess the user types for which this technology is best-suited. Clients that make strategic HVD investments will gradually build institutional knowledge. These investments will allow them to refine technical architecture and organizational processes, and to grow internal IT staff expertise before IT is expected to support the technology on a larger scale through 2016.

Balance the benefits of centralized management against the additional overhead of infrastructure and resource costs. Customers should recognize that HVDs may resolve some management issues, but will not become panaceas for unmanaged desktops. In most cases, the promised TCO reductions will not be significant, and will require initial capital expenditures to achieve. The best-case scenarios for HVDs remain securing and centralizing data management, and serving structured-task users. Organizations must optimize desktop processes, IT staff responsibilities and best practices to fit HVDs, just as organizations did with traditional PCs. Leverage desktop management processes for the lessons learned.

The range of users and applications that can be viably addressed through HVDs will grow steadily through 2013. Although the user population is narrow, it will eventually include mobile/offline users. Organizations that deploy HVDs should plan for growing viability across their user populations, but should be wary of rolling out deployments too quickly. Employ diligence in testing to ensure a good fit of HVD capabilities with management infrastructure and processes, and integration with newer management techniques (such as application virtualization and software streaming). Visibility into future product road maps from suppliers is essential.

Business Impact: HVDs provide mechanisms for centralizing a thick-client desktop PC without re-engineering each application for centralized execution. This appeals to enterprises on the basis of manageability and data security.

Benefit Rating: High

Market Penetration: 1% to 5% of target audience

Maturity: Adolescent

Sample Vendors: Citrix; Dell; Desktone; Microsoft; Red Hat; Virtual Bridges; VMware

IT Event Correlation and Analysis Tools


Analysis By: Colin Fletcher

Definition: IT event correlation and analysis (ECA) tools support the processing of events and alarms from IT components; consolidate, filter and correlate events; notify the appropriate IT operations personnel of critical events; automate corrective actions when possible (directly or through integrations with trouble ticketing, CRM or other systems); and often serve as a "manager of managers" for IT operations teams.

Position and Adoption Speed Justification: ECA tools have widespread use as general-purpose event consoles monitoring multiple IT domains, such as servers (physical and virtual), networks and storage, and are becoming critical to processes that analyze the root causes of problems across increasingly diverse and rapidly changing environments. While the tools have reached a level of functional maturity and consistency across vendors, tool innovation (except the organic incorporation of IT operations analytics technologies in some tools) and adoption have slowed in recent years, as operations teams struggle to make the investments in skills, cultural and organizational changes, and monitoring coverage needed to fully realize ECA tools' potential value.

Integration of data-providing specialist monitoring tools and consuming upstream tools and processes (for example, business service management [BSM], IT service support management [ITSSM] and CRM, for which ECA tools often provide foundational technical service context), while becoming dramatically simpler in recent years, still slows implementations and remains a maintenance challenge for most. While vendors are rapidly incorporating advanced IT operations analytics technologies (either directly or through additional products) to enhance the tools' analytical prowess, simplification on all dimensions (implementation, UI and ongoing maintenance) remains the primary opportunity for meaningful innovation.

User Advice: ECA tools are functionally mature, but facilitate an often complex, cross-domain level of problem identification and diagnosis that requires a significant level of operational maturity (Gartner ITScore for I&O Level 3 Proactive) across people, process and technology to get the expected value. Irrespective of vendor integration strategies, organizations that have not yet achieved this level of maturity should instead focus first on increased standardization, tool consolidation and ensuring adequate monitoring coverage.

Writing and maintaining event correlation rules are nontrivial exercises. Enterprises should, at an absolute minimum, look for ECA tools that come with a good selection of predefined, out-of-the-box correlation rules that can be modified or updated. Better yet, look for ECA tools that automatically adjust rules based on trend analysis, behavioral learning, statistical pattern discovery or other advanced IT operations analytics technologies (see "How Enterprises Can Avoid Event Management Overload"). Lastly, as ECA tool value is directly related to the data ingested and provided to upstream tools and processes (such as BSM, configuration and ITSSM), enterprises should carefully scrutinize device and integration support coverage and policies to ensure they adequately address current and future needs.

Business Impact: ECA tools lower IT operational costs and improve the quality of experience by speeding reactive problem resolution (root cause analysis [RCA]) and adding a degree of proactive, preventive foresight across a heterogeneous IT infrastructure.

Benefit Rating: Moderate

Market Penetration: 20% to 50% of target audience

Maturity: Early mainstream

Sample Vendors: AccelOps; Augur Systems; BMC Software; Boundary; CA Technologies; Centerity; Dell (Quest Software); eG Innovations; EMC; GroundWork Open Source; HP; IBM Tivoli; Interlink Software; Kratos Defense; Microsoft; Moogsoft; NetIQ; RiverMuse; ScienceLogic; Tango/04; uptime software; Zenoss; Zyrion

Recommended Reading:
"Vendor Landscape for IT Event Correlation and Analysis"
"Toolkit: IT Event Correlation and Analysis RFP Template"
"Six Steps to Event and Network Fault Management Tool Integration and Device Support"
"How Enterprises Can Avoid Event Management Overload"
"Early Stage IT Operations Analytics Could Reduce IT Service Outage Minutes"

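To make the consolidation, deduplication and rule-based correlation described in the ECA profile above concrete, here is a minimal sketch. The event fields, the 300-second window and the single correlation rule are illustrative assumptions, far simpler than what commercial ECA tools or their analytics-driven successors do.

```python
# Minimal sketch of event deduplication plus one correlation rule.
# Event fields, the 300-second window and the rule are assumptions.
events = [
    {"source": "router1", "type": "link_down", "ts": 100},
    {"source": "router1", "type": "link_down", "ts": 130},  # duplicate
    {"source": "server7", "type": "unreachable", "ts": 145},
]

def deduplicate(events, window=300):
    """Collapse repeats of the same (source, type) within the window."""
    last_seen, unique = {}, []
    for ev in sorted(events, key=lambda e: e["ts"]):
        key = (ev["source"], ev["type"])
        if key not in last_seen or ev["ts"] - last_seen[key] > window:
            unique.append(ev)
        last_seen[key] = ev["ts"]
    return unique

def correlate(events, window=300):
    """Toy rule: an 'unreachable' soon after a 'link_down' is a symptom."""
    link_downs = [e["ts"] for e in events if e["type"] == "link_down"]
    for ev in events:
        ev["symptom"] = (ev["type"] == "unreachable" and
                         any(0 <= ev["ts"] - t <= window for t in link_downs))
    return events

for ev in correlate(deduplicate(events)):
    print(ev)  # router1 link_down once; server7 flagged as a symptom
```

Maintaining rules like correlate() by hand across a changing estate is exactly the burden that the self-adjusting, analytics-based tools mentioned above aim to remove.
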
PC Application Virtualization
Analysis By: Terrence Cosgrove

Definition: PC application virtualization is an application packaging and deployment technology that isolates applications from each other, and limits the degree to which they interact with the underlying OS. Application virtualization provides an alternative to traditional packaging and installation technologies, and is an important enabling technology for hosted virtual desktops (HVDs).

Position and Adoption Speed Justification: PC application virtualization can reduce the time it takes to deploy applications by reducing packaging complexity and the scope for application conflicts typically experienced when using traditional Windows Installer packaging. PC application virtualization tools are most often adopted as supplements to client management tools as a means of addressing application packaging challenges. Organizations continue to use application virtualization to enable desktop centralization through server-based computing and HVDs. Applications can be delivered to terminal servers more easily when virtualized. Organizations also use application virtualization to deliver applications to nonpersistent HVDs.

For physical PCs, much of the interest in PC application virtualization is driven by the promise that this technology will alleviate regression testing overhead in application deployments and Windows migrations (although it generally cannot be relied on to remediate application compatibility issues with Windows 7). Other benefits include enabling the efficient and rapid deployment of applications that couldn't previously be deployed, due to potential conflicts with other applications or the time required to test and package the application for deployment. What continues to impede widespread adoption is that application virtualization cannot be used for 100% of applications, and may never work with many legacy applications, especially those developed in-house where isolation techniques cannot be used.

Several vendors offer application virtualization products, including Microsoft (App-V), VMware (ThinApp), Symantec, Spoon and Numecent. Microsoft App-V is becoming a more dominant product, with a strong product and market presence. This threatens the viability of other vendors in this space.

User Advice: Implement PC application virtualization to reduce packaging complexity, particularly if you have a lot of applications that are not packaged. Analyze how this technology will interface with established and planned client management tools to avoid driving up the cost of a new application delivery technology, and to ensure that virtualized applications are manageable. Test as many applications as you can during the evaluation, but recognize that some applications probably can't be virtualized. Consider application virtualization tools for:

- Applications that have not been packaged, when the overhead (cost and time) of current packaging tools is too high, or the number of users receiving the application is too low to justify packaging
- Applications that have not been successfully packaged and deployed using client management tools, because of application conflicts
- Nonpersistent HVD deployments
- Delivering Windows applications to terminal servers

Enterprises must consider the potential support implications of this technology. Not all application vendors support their applications running in a virtualized manner. Interoperability requirements also must be understood; with some application virtualization products, applications that call another application during runtime must be virtualized together or be manually linked.

Business Impact: PC application virtualization can improve manageability for corporate IT, and can reduce the amount of infrastructure required to support an HVD infrastructure. By isolating applications, IT organizations can gain improvements in the delivery of applications, and reduce (perhaps significantly) testing and outages due to application conflicts. This improves IT agility by allowing applications to be delivered to users more quickly after they are requested.

Benefit Rating: Moderate

Market Penetration: 20% to 50% of target audience

Maturity: Early mainstream

Sample Vendors: Citrix; Microsoft; Numecent; Spoon; Symantec; VMware

Network Performance Monitoring Tools


Analysis By: Jonah Kowall; Vivek Bhalla; Colin Fletcher

Definition: Network performance monitoring tools provide trend analysis via performance and availability monitoring for the data communication network (including network devices and network traffic). These tools collect performance data over time and include features such as baselining, threshold evaluation, network traffic analysis, service-level reporting, trend analysis, historical reporting and, in some cases, interfaces to billing and chargeback systems.

Position and Adoption Speed Justification: These tools are widely deployed and are useful for identifying network capacity use trends, predicting capacity problems, tracking quality of service (QoS), monitoring end-user experience and preventing minor service degradations from becoming major problems for network users. New technologies and approaches continue to emerge in and around these tools, including the application of analytics technologies to deal with the large datasets collected by these products.

User Advice: The goal of collecting and analyzing performance data is to enable the network manager to be more proactive. There are two common methods of monitoring network performance:

- Polling network devices to collect standard Simple Network Management Protocol (SNMP) Management Information Base (MIB) data for performance reporting and trend analysis
- Using specialized network instrumentation (such as probes and appliances [including virtual appliances] that perform packet capture and analysis, NetFlow, and other flow protocols) to analyze the makeup of the network traffic for performance monitoring and troubleshooting

NetFlow instrumentation has grown in popularity as an inexpensive data source, with details about the distribution of protocols and the makeup of application traffic on the network. However, NetFlow by design is generated by network devices analyzing packet contents, and it provides a summarized view of network usage that normally lacks granularity and critical network and application performance metrics, such as latency and response time. Broad NetFlow coverage should be balanced with fine-grained packet capture capabilities for critical network segments. Expect new form factors for traffic analysis, such as virtual appliances and microprobes that piggyback on existing hardware and interfaces in the network fabric, thus providing the depth of a probe at a much lower cost while approaching the ubiquity of NetFlow.

Clients should look for network performance monitoring products that not only track performance, but also automatically establish a baseline measurement of normal behavior for the time of day and day of the week, dynamically set warning and critical thresholds as standard deviations from the baseline, and notify the network manager only when an exception condition occurs; a simple static threshold based on an industry average or a guideline will generate false alarms. A minimal sketch of this baselining approach appears after the vendor list below. Clients looking for the utmost efficiency should link network performance management processes to network configuration management processes, so that bandwidth allocation and traffic prioritization settings are automatically updated based on changing business demands and SLAs.

Business Impact: These tools help improve network availability and performance, confirm network service quality, and justify network investments. Ongoing capacity utilization analysis enables the reallocation of network resources to higher-priority users or applications without the need for additional capital investment, using various bandwidth allocation, traffic engineering and quality-of-service techniques. Without an understanding of previous network performance, it's impossible to demonstrate and monitor current and improving service-level agreements after changes, additions or investments have been made. Without a baseline service-level measurement for comparison, a network manager can't detect growth trends or be forewarned of expansion requirements.

Benefit Rating: Moderate

Market Penetration: More than 50% of target audience

Maturity: Mature mainstream

Sample Vendors: 7 Signal; ActionPacked Networks; AppNeta; Arbor Networks; Blue Coat Systems; Boundary; CA Technologies; Cisco; Compuware Corp.; Dell (Quest Software); Dorado Software; Empirix; Emulex; EMC; Fluke Networks; Genie Networks; Heroix; HP; IBM Tivoli; InfoVista; Ipswitch; JDS Uniphase; Lancope; ManageEngine; nPulse; Net Evidence; NetDialog; NetIQ; NetScout Systems; NetSocket; Network Instruments; Niksun; Opnet Technologies; Orsyp; Packet Design; Paessler; PathSolutions; Prism Microsystems; Procera Networks; Riverbed; Riverbed Networks; ServicePilot; SevOne; Solana Networks; SolarWinds; Solera Networks; Statseeker; Tektronix; uptime software; Visual Network Systems; WildPackets; Zyrion

Recommended Reading:
"Vendor Landscape for Application-Aware Network Performance Monitoring and Network Packet Brokers"
"When Is NetFlow 'Good Enough'?"
"NPM Delivers Improved Network Visibility to IT Operations"
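
The sketch below illustrates the time-of-day baselining and deviation-based thresholds described in the User Advice above. The (weekday, hour) bucket granularity, the 2x/3x deviation multipliers and the sample data are assumptions for illustration, not any vendor's algorithm.

```python
# Illustrative time-of-day baselining with standard-deviation thresholds.
# Bucketing by (weekday, hour) and the 2x/3x multipliers are assumptions.
import statistics
from collections import defaultdict
from datetime import datetime

history = defaultdict(list)  # (weekday, hour) -> observed utilization %

def record(ts: datetime, utilization: float) -> None:
    history[(ts.weekday(), ts.hour)].append(utilization)

def check(ts: datetime, utilization: float) -> str:
    """Compare a new sample against the learned baseline for this slot."""
    samples = history[(ts.weekday(), ts.hour)]
    if len(samples) < 5:
        return "learning"  # not enough history to baseline yet
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    if utilization > mean + 3 * stdev:
        return "critical"
    if utilization > mean + 2 * stdev:
        return "warning"
    return "normal"

# Train on hypothetical Monday 9 a.m. readings, then test a new sample.
for pct in [40, 42, 38, 41, 39, 43]:
    record(datetime(2013, 7, 1, 9), pct)
print(check(datetime(2013, 7, 8, 9), 55.0))  # well above baseline -> critical
```

A static threshold of, say, 60% would have missed this exception entirely, which is why the dynamic, per-slot baseline is preferable to an industry-average guideline.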

Mobile Device Management


Analysis By: Leif-Olof Wallin; Phillip Redman

Definition: Enterprise mobile device management (MDM) software is primarily a policy and configuration management tool for mobile handheld devices based on smartphone or tablet OSs. MDM software helps enterprises manage the complex mobile computing and communications environment across multiple OS platforms. This is especially important in bring your own device (BYOD) initiatives. MDM can support corporate-owned and personal devices. The primary delivery model is on-premises, but MDM can also be offered as software as a service (SaaS) in the cloud.

Position and Adoption Speed Justification: MDM has gone through different phases of the Hype Cycle, because much relies on what the tool is managing. Also, the scope of MDM has broadened as incumbent management, security and application tool vendors enter the market with broad suites to compete against the startups. Mobile platforms are still evolving, which affects the placement of MDM on the Hype Cycle. Many organizations use MDM tools that are specific to a device platform (BlackBerry or ruggedized equipment) or that manage a certain part of the life cycle (e.g., device lock or wipe), resulting in the adoption of fragmented toolsets.

The increasing adoption of MDM continues to be triggered by the adoption of more consumer-oriented devices. Although IT organizations vary in their approaches to implementing and owning the tools that manage mobile devices (the messaging group, some other network group, the desktop group, etc.), few manage the full life cycle across multiple device platforms. Organizations are realizing that users are broadening their use of personal devices for business applications. In addition, many organizations are using different ways to deploy MDM to support different management styles. These factors will drive the adoption of tools to manage the full life cycle of mobile devices.

Gartner believes that mobile devices will increasingly be supported in the client computing support group in most organizations (where notebooks and PCs are managed). These devices will become peers with notebooks and desktops from a support standpoint.

Some organizations are already replacing PCs with tablets for niche user groups. An increasing number of organizations are looking for MDM functionality from PC configuration life cycle management (PCCLM) tools, which are also beginning to emerge. Also, mobile application development platform (MADP) products provide some basic MDM functionality.

MDM is evolving and broadening its functionality, going beyond what has traditionally been managed on a device, adding enterprise file synchronization and sharing (EFSS), mobile application management (MAM) and containerization. Although there has been much uptake of MDM, Gartner believes that it is now about to plateau and could emerge as something else entirely as it begins to support mobile application development; adds higher levels of security, such as data loss prevention (DLP); and offers enterprise document management capabilities. Cloud-based offerings are maturing and are increasingly being adopted.

User Advice: Assess the types of mobile platforms, devices and applications you will be supporting during the next few years. Although MDM features have commoditized with little differentiation, the platforms are expanding deeper into enterprise mobile software and document management support. Enterprises should look at a vendor's MDM technology, as well as how well it can support enterprise mobile needs. Match what the enterprise's needs will be to what MDM currently offers and what it will offer during the next couple of years. Strategize where MDM best sits in the enterprise and who will manage it, but buy tactically (24- to 36-month horizon).

Business Impact: As more users rely on mobile computing in their jobs, the number of handheld devices and tablets used for business purposes is growing. Therefore, MDM capabilities are becoming increasingly important. Mobile devices are being used more frequently to support business-critical applications, thus requiring more stringent manageability to ensure secure user access and system availability. In this regard, MDM tools can deliver material benefits by securing corporate data and reducing support costs while increasing support levels.

In the short term, MDM tools will add per-user and per-device costs to the IT budget. Organizations will be under pressure to allocate funds and effort to put increasing numbers of devices under management, devices that seem far less expensive than notebooks and may be owned by the user. The need for security, privacy and compliance must be understood as a factor beyond user choice, and must be recognized as a cost of doing business in a BYOD scenario.

Benefit Rating: High

Market Penetration: 20% to 50% of target audience

Maturity: Early mainstream

Sample Vendors: AirWatch; BoxTone; Citrix; Fiberlink Communications; Good Technology; IBM; McAfee; MobileIron; Symantec

Recommended Reading:
"Magic Quadrant for Mobile Device Management Software"
"Toolkit: Mobile Device Management RFI and RFP Template"
"Critical Capabilities for Mobile Device Management"

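At its core, the policy and configuration management that the MDM profile above describes reduces to evaluating reported device posture against policy and selecting a remediation action. The sketch below is a hedged illustration: the posture fields, policy values and actions (selective wipe, quarantine) are hypothetical, not any MDM product's schema.

```python
# Illustrative MDM-style compliance check: evaluate reported device
# posture against policy and pick a remediation action. Field names,
# policy values and actions are hypothetical.
policy = {
    "min_os_version": (6, 0),
    "require_passcode": True,
    "require_encryption": True,
    "allow_jailbroken": False,
}

def evaluate(device: dict) -> list:
    violations = []
    if tuple(device["os_version"]) < policy["min_os_version"]:
        violations.append("os_too_old")
    if policy["require_passcode"] and not device["passcode_set"]:
        violations.append("no_passcode")
    if policy["require_encryption"] and not device["encrypted"]:
        violations.append("not_encrypted")
    if device["jailbroken"] and not policy["allow_jailbroken"]:
        violations.append("jailbroken")
    return violations

device = {"os_version": (5, 1), "passcode_set": True,
          "encrypted": True, "jailbroken": False}
violations = evaluate(device)
if "jailbroken" in violations:
    action = "selective_wipe"         # remove corporate data only
elif violations:
    action = "quarantine_and_notify"  # block email/apps until remediated
else:
    action = "allow"
print(violations, "->", action)
```

The separation between corporate-data actions (selective wipe) and whole-device actions matters particularly in BYOD scenarios, where the device is user-owned.
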
Entering the Plateau


Client Management Tools
Analysis By: Terrence Cosgrove; Ronni J. Colville

Definition: Client management tools manage the configurations of client systems. Specific functionality includes OS deployment, inventory, software distribution, patch management, software usage monitoring and remote control. Desktop support organizations use client management tools to automate system administration and support functions that would otherwise be done manually.

Position and Adoption Speed Justification: Client management tools are used primarily to manage Windows PCs, but many organizations look to use them to manage their Windows servers, smartphones, tablets and non-Windows client platforms (e.g., Mac and Linux). Client management tools are widely adopted. Product capabilities for inventory, software distribution and OS deployment are similar across products. Patch management (in particular, the ability to patch non-Microsoft applications) differentiates products.

Organizations increasingly support non-Windows endpoints, such as mobile devices and Macs. Desktop engineers often look to their client management tools first to support these platforms, but the majority of products in the client management space do not have strong support for both Mac OS X and mobile platforms. This leads organizations to use three separate products to manage Windows PCs, mobile devices and Macs. Most client management tools support mobile devices well, but few provide strong Mac support. Overall, however, the market has been slow to meet the needs of customers that want strong functionality across all three major platform types. This follows a pattern in the client management market, where vendors tend to offer new functionality only when that market reaches a critical mass and requirements are well-understood.

More recently, software as a service (SaaS) has gained interest as an alternative delivery model for client management tools. In the short term, most organizations will use this model for specific scenarios (e.g., mergers and acquisitions, and full-time telework), rather than as the standard architecture for all users. Many vendors are now offering SaaS as an alternative or sole delivery model, but regulatory requirements and technical challenges will inhibit rapid growth. As applications evolve away from classic Win32 applications to Web and WinRT apps, and as client computing continues to become more constantly connected, SaaS will become a more suitable model for client management.

User Advice: Users will benefit most from client management tools when standardization and policies are in place before automation is introduced. Although these tools can significantly offset staffing resource costs, they require dedicated resources to maintain the product, define resource groups, package applications, test deployments and maintain policies.

Many factors could make certain vendors more appropriate for your environment than others. For example, evaluate:

- Support for mobile devices and Mac OS X
- Ease of deployment and usability
- Scalability
- Integration between service desk and client management functionality
- Geographic focus
- Capabilities that meet a specific regulatory requirement

Business Impact: Among IT operations management tools, client management tools have one of the most obvious ROIs: managing the client environment in an automated, one-to-many fashion, rather than on a manual, one-to-one basis.

Benefit Rating: Moderate

Market Penetration: More than 50% of target audience

Maturity: Mature mainstream

Sample Vendors: Absolute Software; BMC Software; CA Technologies; Dell Kace; FrontRange Solutions; IBM; Kaseya; LANDesk; Matrix42; Microsoft; Novell; Symantec

Recommended Reading:
"Magic Quadrant for Client Management Tools"

Infrastructure Monitoring
Analysis By: Ian Head; Jonah Kowall; Milind Govekar

Definition: Infrastructure monitoring tools monitor the performance and availability of the various data center components and, sometimes, wider portions of the infrastructure. These tools typically monitor the availability and performance of servers, networks, database instances, storage, virtual fabric, single application instances and, sometimes, wider infrastructure performance. It is common to monitor real-time performance and to perform historical data analysis or trending of the particular component or infrastructure portion that they monitor.

Position and Adoption Speed Justification: Infrastructure monitoring tools usually collect resource utilization metrics, such as CPU, memory, disk input/output (I/O), network bandwidth, disk space and file utilization. Many tools are able to capture and analyze a wider scope than individual components, and enable users to set thresholds or automatically baseline, derive thresholds against utilization metrics, and dispatch consequent alerts. Commercial open-source technologies are commonly used. These tools may also be implemented in a software as a service (SaaS) delivery

model where enterprises do not have to buy and implement software in their data centers, but may buy a subscription service. Infrastructure monitoring tools can be acquired:

- From the "Big 4": BMC Software, CA Technologies, HP and IBM
- From platform vendors, such as Microsoft and VMware
- From scores of smaller, focused commercial vendors
- As open-source tools that provide functionality across multiple infrastructure components (e.g., servers, networks and storage) or tools that focus on just a particular infrastructure component (e.g., just the server)

Furthermore, most capacity-planning vendors provide this functionality as part of their tools. Application instance monitoring products differ from application performance monitoring products in that they look at single servers running instances of an application, versus a distributed application in an end-to-end manner. Examples of these supported applications may include IBM WebSphere MQ, Oracle WebLogic Server, Microsoft Active Directory, Microsoft Exchange and Microsoft SharePoint. The tools also monitor the virtual infrastructure, typically by interfacing with vendor-supplied hypervisor management products, such as VMware vCenter or Microsoft System Center Virtual Machine Manager.

Infrastructure component monitoring tools have been available for decades, supporting various IT infrastructure components. They are typically the first set of tools in which enterprises invest to monitor availability and performance in a reactive manner. These tools provide valuable data that organizations can use to become more proactive, by plotting utilization trends and planning resource capacity. These tools are mature and are differentiated by the scope of their footprint when collecting relevant metrics, ease of deployment, user interface and price. Component-level tools are now augmented by tools with a wider scope that look to provide information on the health of broader segments of the overall infrastructure, sometimes right out to the end-user device. The ongoing pressure to reduce IT costs is increasing interest in open-source tools and SaaS delivery models. Almost all Global 2000 enterprises have implemented these tools for their data center components as a minimum scope.

User Advice: Enterprises that have not invested in these monitoring tools must invest in this area as a first step toward becoming more proactive in monitoring their infrastructure components. Enterprises should also examine the tools provided by their infrastructure platform vendors, because these tools usually provide good-enough monitoring data for that infrastructure component. Most organizations could improve their proactive event and incident management by making more extensive use of the tools they already have, and then by deploying more extensive infrastructure monitoring solutions. Organizations may extend their proactive infrastructure monitoring capability by deploying application performance monitoring and operational analytics solutions.

Business Impact: Infrastructure monitoring tools are used to monitor and manage the quality of service of components and sections of the infrastructure, and they help lower the total cost of ownership (TCO) of managing a large and complex infrastructure environment.

Benefit Rating: Low

Market Penetration: More than 50% of target audience

Maturity: Mature mainstream

Sample Vendors: Absolute Performance; AccelOps; AppFirst; ASG; Bijk; Blue Elephant; BMC Software; CA Technologies; Centerity; Centreon; Circonus; Compuware; Datadog; Dell; eG Innovations; Fujitsu; GFI; GFI Software; GroundWork; GSX; Heroix; Hitachi; HP; IBM; Interlink Data; Ipswitch; Kaseya; logicMonitor; LogMatrix; ManageEngine; Microsoft; MRTG; Nagios; Neebula; NetIQ; Nexthink; NEC; Opsview; OP5; Oracle; Orsyp; Paessler; Quest Software; Realtek Semiconductor; ScaleXtreme; ScienceLogic; Scout; Server Density; ServersCheck; SevOne; SolarWinds; SpiceWorks; Tango/04; Virtual Instruments; VMware; Wormly; Zabbix; Zenoss; Zyrion

Recommended Reading:
"Cool Vendors in IT Operations Management, 2012"
"Toolkit: Server Performance Monitoring and Capacity Planning Tool RFI"
"Vendor Landscape for Application-Aware Network Performance Monitoring and Network Packet Brokers"
"Open-Source Monitoring: The Free Way"
"Open-Source Monitoring: Commercial Offerings"
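
As a minimal illustration of the resource utilization metrics these tools collect, here is a sketch built on the open-source psutil library (which must be installed separately). The 90% thresholds and the metric selection are arbitrary examples; commercial tools add baselining, trending and alert routing on top of this kind of collection.

```python
# Minimal resource-utilization poller using the psutil library.
# Thresholds and metrics are illustrative only.
import psutil

def sample() -> dict:
    return {
        "cpu_pct": psutil.cpu_percent(interval=1),   # averaged over 1 second
        "mem_pct": psutil.virtual_memory().percent,
        "disk_pct": psutil.disk_usage("/").percent,
    }

metrics = sample()
for name, value in metrics.items():
    status = "ALERT" if value > 90.0 else "ok"
    print(f"{name}: {value:.1f}% [{status}]")
```
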

Network Fault Monitoring Tools


Analysis By: Vivek Bhalla; Jonah Kowall; Colin Fletcher

Definition: Network fault monitoring (NFM) tools indicate the status of network components, such as routers and switches. These tools isolate, aggregate, deduplicate, filter, prioritize and resolve faults/alerts on the network. In some cases, the tools discover and visualize the topology of physical and logical relationships and dependencies among network elements. This helps depict the up/down status of those elements in a contextual map, provide basic root cause analysis (RCA), and enhance error deduplication and suppression capabilities.

Position and Adoption Speed Justification: These tools have been widely deployed, primarily to address the reactive nature of network monitoring in IT operations. They provide network teams with a single location to monitor, alert and coordinate diagnosis of all network-related fault and availability information. These events are useful for helping ensure that critical network devices remain available to support the business applications and services that rely on them.

NFM tools can filter events based on predetermined rules; suppress symptomatic or superfluous events; and identify alerts and prioritize them accordingly. They are then able to initiate predefined, automated workflows, including the creation of trouble tickets or alert notifications. Event enrichment is also a common feature, whereby the original event is updated with additional context to resolve the fault, or to ensure that information can be passed on to the pertinent parties at the earliest opportunity and with the most-relevant details.

User Advice: Users should leverage network fault monitoring tools to assess the status of network components, but should work toward improving problem resolution capabilities and aligning network management tools with IT service and business goals. NFM tools are frequently used for blame avoidance, rather than problem resolution, with the goal of proving that the problem is not the network's fault. Resolving problems, not just avoiding blame, should be the goal.

More sophisticated network fault monitoring tools add the capability to correlate network events and leverage network topology knowledge to automatically determine the likely root cause of issues, which can improve network management staff productivity and reduce the time required to identify the problem (a minimal sketch of this topology-based suppression appears at the end of this profile). However, without network topology discovery capabilities in the network fault monitoring tool, the network manager would be required to manually define correlation rules regarding device interconnections, eliminating the potential advantages of any automated root cause analysis capabilities.

A new requirement being placed on these tools is to go beyond physical connectivity to understand the logical parent-child and containment relationships in the virtual infrastructure. The use of IT operations analytics tools is helping enhance the ability and value of such a logical understanding of the network environment. These new capability requirements have yet to be incorporated in all the solutions offered by vendors in this space, nor are they fully exploited by end users at this stage; hence the readjustment in this year's Hype Cycle position to reflect this area of innovation and progress.

Business Impact: These tools help IT organizations view their network events through a single network "pane of glass." This helps improve the availability of the network infrastructure and shorten the response time for noticing and repairing network issues that affect business productivity. NFM tools support day-to-day network administration, and provide useful features for network engineers. However, they generally treat the network as a largely undifferentiated utility, and don't assist in aligning the network with business applications, business services or business impact.

Benefit Rating: Moderate

Market Penetration: More than 50% of target audience

Maturity: Mature mainstream

Sample Vendors: Absolute Performance; AccelOps; ASG Software Solutions; CA Technologies; Cisco; Dartware, LLC; Dell (Quest Software); Dorado Software; eG Innovations; EMC; Entuity; GFI Software; GroundWork; HP; IBM Tivoli; Ipswitch; logicMonitor; ManageEngine; Nagios; NetIQ (Novell); OP5; ScienceLogic; SolarWinds; SpiceWorks; Uplogix; Zabbix; Zenoss; Zyrion

Recommended Reading: "Six Steps to Event and Network Fault Management Tool Integration and Device Support"
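
The topology-based RCA described in the User Advice above reduces to walking the dependency graph: when an upstream device is down, alarms from everything reachable only through it are suppressed as symptoms. A minimal sketch follows, with a hypothetical topology and alarm set.

```python
# Minimal topology-based root cause analysis: alarms on devices whose
# upstream parent chain contains another down device are suppressed as
# symptoms. The topology and alarm set are hypothetical.
topology = {             # child -> upstream parent
    "switch1": "router1",
    "server1": "switch1",
    "server2": "switch1",
    "router1": None,     # edge of the managed network
}

down = {"router1", "switch1", "server1", "server2"}  # devices with alarms

def root_causes(down: set, topology: dict) -> set:
    """Keep only alarms with no down device anywhere upstream."""
    roots = set()
    for device in down:
        parent = topology.get(device)
        while parent is not None and parent not in down:
            parent = topology.get(parent)
        if parent is None:  # no down ancestor found
            roots.add(device)
    return roots

print(root_causes(down, topology))  # -> {'router1'}; the rest are symptoms
```

Without discovered topology, the topology dict above would have to be maintained by hand, which is exactly the manual rule-definition burden noted in the profile.
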

Job-Scheduling Tools
Analysis By: Milind Govekar; Biswajeet Mahapatra

Definition: Job-scheduling tools perform tasks or jobs on a date and time schedule in an automated fashion. These tasks can be bill calculations or the transfer of data between servers or applications. A scheduled job usually has a date, time and frequency, as well as other dependencies, inputs and outputs, associated with it. Advanced IT workload automation broker tools also have job-scheduling capabilities.

Position and Adoption Speed Justification: Job-scheduling tools are widely used, mature technologies. They support Java and .NET application server platforms, in addition to integration technology. Job-scheduling tools help enterprises with their automation requirements across heterogeneous computing environments. They automate critical batch business processes, such as billing, and IT operational processes, including backups, and they provide event-driven automation and batch application integration (for example, integrating CRM processes with ERP processes). Some of the vendors in this market have IT workload automation capabilities in addition to traditional job scheduling, and some have evolved toward handling dynamic, policy-driven workloads; thus, they have moved toward IT workload automation broker tools.

These tools have been used on the mainframe for decades to automate mission-critical tasks and processes. Some of these tools either have built-in managed file transfer capabilities or have a separate module that integrates easily with the job-scheduling tool. There are also tools on the market that target specific environments, such as Java or .NET, to automate jobs or tasks in that environment. Likewise, there are products that automate tasks only in the distributed systems environment, with application adapters, disaster recovery automation capabilities and file transfer capabilities.

User Advice: Many enterprises are in the process of evaluating a job-scheduling tool, upgrading from an older version or tool, or consolidating more than two tools. Enterprises choose tools based on cost, ease of migration, familiarity with the tool, availability of in-house skills and their own long-term perception of the strategic nature of job-scheduling tools. Enterprises should plan to use a single job-scheduling tool, or as few as possible, to run jobs in their heterogeneous environments. Enterprises should also look at job-scheduling tools from a long-term perspective, rather than for immediate needs only. Environments that are largely static may choose to invest in these tools. However, enterprises looking for policy-driven, dynamic workload management capabilities should consider IT workload automation broker tools.

Business Impact: These tools can automate a batch process to improve the availability and reliability of the business processes that depend on it.

Benefit Rating: Low

Market Penetration: More than 50% of target audience

Maturity: Legacy

Sample Vendors: Argent Software; ASG Software Solutions; Flux; Help/Systems; MVP Systems Software; Software & Management Associates; SOS-Berlin; Terracotta; Vinzant Software

Recommended Reading: "Magic Quadrant for Workload Automation"

"Toolkit: Preparing an RFI for Workload Automation or Job-Scheduling Tools"

"How to Modernize Your Job Scheduling Environment"

"IT Workload Automation Broker: Job Scheduler 2.0"

"Toolkit: Best Practices for Job Scheduling"
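
As a concrete illustration of the definition above, here is a minimal sketch in Python of a dependency-ordered batch run: each job declares the jobs it depends on and runs only once they have completed. The billing-to-ERP job chain and all names are invented for this example, not taken from any product.

```python
# Minimal sketch of a dependent batch job chain: each job declares the jobs
# it depends on and runs only after they have completed successfully.
# Job names and bodies are invented for illustration.
from datetime import datetime

JOBS = {
    "extract_billing": {"deps": [], "fn": lambda: print("extracting billing records")},
    "calculate_bills": {"deps": ["extract_billing"], "fn": lambda: print("calculating bills")},
    "transfer_to_erp": {"deps": ["calculate_bills"], "fn": lambda: print("transferring output file to ERP")},
}


def run_batch(jobs):
    done = set()
    pending = dict(jobs)
    while pending:
        # A job is runnable once every one of its dependencies has completed.
        runnable = [name for name, job in pending.items() if set(job["deps"]) <= done]
        if not runnable:
            raise RuntimeError("unsatisfiable dependencies (cycle or missing job)")
        for name in runnable:
            print(f"{datetime.now():%H:%M:%S} running {name}")
            pending.pop(name)["fn"]()
            done.add(name)


run_batch(JOBS)  # in practice, triggered by a schedule (e.g., nightly at 02:00)
```

Production job-scheduling tools layer calendars, date/time and event triggers, cross-platform agents, and restart and recovery capabilities on top of this basic dependency-ordering idea.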

Appendices

Figure 3. Hype Cycle for IT Operations Management, 2012

[Figure: the July 2012 Hype Cycle chart, plotting expectations against time through the Technology Trigger, Peak of Inflated Expectations, Trough of Disillusionment, Slope of Enlightenment and Plateau of Productivity phases. Legend ("Plateau will be reached in"): less than 2 years; 2 to 5 years; 5 to 10 years; more than 10 years; obsolete before plateau. As of July 2012.]

Source: Gartner (July 2012)

Hype Cycle Phases, Benefit Ratings and Maturity Levels


Table 1. Hype Cycle Phases

Innovation Trigger: A breakthrough, public demonstration, product launch or other event generates significant press and industry interest.

Peak of Inflated Expectations: During this phase of overenthusiasm and unrealistic projections, a flurry of well-publicized activity by technology leaders results in some successes, but more failures, as the technology is pushed to its limits. The only enterprises making money are conference organizers and magazine publishers.

Trough of Disillusionment: Because the technology does not live up to its overinflated expectations, it rapidly becomes unfashionable. Media interest wanes, except for a few cautionary tales.

Slope of Enlightenment: Focused experimentation and solid hard work by an increasingly diverse range of organizations lead to a true understanding of the technology's applicability, risks and benefits. Commercial off-the-shelf methodologies and tools ease the development process.

Plateau of Productivity: The real-world benefits of the technology are demonstrated and accepted. Tools and methodologies are increasingly stable as they enter their second and third generations. Growing numbers of organizations feel comfortable with the reduced level of risk; the rapid growth phase of adoption begins. Approximately 20% of the technology's target audience has adopted or is adopting the technology as it enters this phase.

Years to Mainstream Adoption: The time required for the technology to reach the Plateau of Productivity.

Source: Gartner (July 2013)

Table 2. Benefit Ratings

Transformational: Enables new ways of doing business across industries that will result in major shifts in industry dynamics.

High: Enables new ways of performing horizontal or vertical processes that will result in significantly increased revenue or cost savings for an enterprise.

Moderate: Provides incremental improvements to established processes that will result in increased revenue or cost savings for an enterprise.

Low: Slightly improves processes (for example, improved user experience) that will be difficult to translate into increased revenue or cost savings.

Source: Gartner (July 2013)

Table 3. Maturity Levels

Embryonic
  Status: In labs
  Products/Vendors: None

Emerging
  Status: Commercialization by vendors; pilots and deployments by industry leaders
  Products/Vendors: First generation; high price; much customization

Adolescent
  Status: Maturing technology capabilities and process understanding; uptake beyond early adopters
  Products/Vendors: Second generation; less customization

Early mainstream
  Status: Proven technology; vendors, technology and adoption rapidly evolving
  Products/Vendors: Third generation; more out of box; methodologies

Mature mainstream
  Status: Robust technology; not much evolution in vendors or technology
  Products/Vendors: Several dominant vendors

Legacy
  Status: Not appropriate for new developments; cost of migration constrains replacement
  Products/Vendors: Maintenance revenue focus

Obsolete
  Status: Rarely used
  Products/Vendors: Used/resale market only

Source: Gartner (July 2013)

Recommended Reading
Some documents may not be available as part of your current Gartner subscription.

"Toolkit: ITScore for Infrastructure and Operations Service Improvement Project Planning"

"Successful ITIL and Service Management Projects Avoid These 10 Common Failings"

"Magic Quadrant for Mobile Device Management Software"

"Magic Quadrant for Client Management Tools"

"MarketScope for the IT Asset Management Repository"


GARTNER HEADQUARTERS

Corporate Headquarters
56 Top Gallant Road
Stamford, CT 06902-7700
USA
+1 203 964 0096

Regional Headquarters
AUSTRALIA
BRAZIL
JAPAN
UNITED KINGDOM

For a complete list of worldwide locations, visit http://www.gartner.com/technology/about.jsp

© 2013 Gartner, Inc. and/or its affiliates. All rights reserved. Gartner is a registered trademark of Gartner, Inc. or its affiliates. This publication may not be reproduced or distributed in any form without Gartner's prior written permission. If you are authorized to access this publication, your use of it is subject to the Usage Guidelines for Gartner Services posted on gartner.com. The information contained in this publication has been obtained from sources believed to be reliable. Gartner disclaims all warranties as to the accuracy, completeness or adequacy of such information and shall have no liability for errors, omissions or inadequacies in such information. This publication consists of the opinions of Gartner's research organization and should not be construed as statements of fact. The opinions expressed herein are subject to change without notice. Although Gartner research may include a discussion of related legal issues, Gartner does not provide legal advice or services and its research should not be construed or used as such. Gartner is a public company, and its shareholders may include firms and funds that have financial interests in entities covered in Gartner research. Gartner's Board of Directors may include senior managers of these firms or funds. Gartner research is produced independently by its research organization without input or influence from these firms, funds or their managers. For further information on the independence and integrity of Gartner research, see "Guiding Principles on Independence and Objectivity."
