A Mess of May Data
My research in May was all about #observability. Which seems terribly apropos as we slide into Memorial Day here in the US and observe a day of sober reflection on the cost of our freedom today and in days long past.
In the world of technology, observability is not just visibility 2.0 or monitoring 2.0. It's a higher-level construct, a domain within an enterprise architecture for digital business, one that focuses on bringing together the disparate data points (signals) that offer insight into the state of the system. That is, the digital business.
It's a terribly important domain, and many other domains such as #automation depend on the telemetry gathered through an observability practice.
So you're going to need a coffee. Get a big cup, not one of those little cups but a BIG STEAMING CUP because there really is a mess of data to go over this month.
Without further ado, let's dig into the data. Figuratively, of course.
Observability
Is there any domain today that isn't struggling with some form of sprawl? Seems to me if it pervades every domain, then it isn't a technology problem, but rather a people and process problem. But hey, no one asked me, did they? The finding from Chronosphere's Observability and Demystifying AIOps that 72% of organizations agree the number of tools they use adds complexity may be why ManageEngine's State of ITOM 2023 found that 62% of all organizations prefer using a single, unified observability tool. That still leaves 34% that prefer multiple tools, and we'd like to talk to them and find out why.
Also interesting from ManageEngine: the 26% who struggle with managing uninstrumented systems. Uninstrumented systems rarely generate the real-time data (signals) needed to fill in observability "gaps," so we're surprised that so few organizations struggle with this challenge. Unless they've figured out how to leverage app delivery services to proxy that data. 'Cause yeah, that's a use case for app delivery (there's a quick sketch of that idea after the use-case rundown below). Other use cases found were:
- 49% insights into app/infra environments
- 48% real-time insights into app/infra environments to meet SLAs
- 41% insights to improve security posture and impact analysis
- 40% insights into app/infra to automate operations
I won't comment too harshly on the mixing of scenarios here. The second use case is effectively the first with a desired goal tacked on, which seems to be stacking the survey to ensure that "insights into app/infra" is a big winner, but that is essentially the core use case for observability anyway so ... it's not like it's skewing things that badly.
What I did find amusing was that the #security-related use case was only number three, even though 42% in the same survey tagged security monitoring as the top observability tool. Weird, hunh? But I guess security is a kind of insight so ... it is what it is.
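As promised, here's what "app delivery as a telemetry proxy" can look like in practice. This is a minimal sketch in Python, not any particular product, and the upstream address and ports are hypothetical: a tiny reverse proxy sits in front of an uninstrumented app and emits a request-level signal (path, status, latency) for every call it forwards.

```python
# Minimal sketch: a reverse proxy that generates telemetry on behalf of an
# uninstrumented upstream app. The upstream URL and ports are placeholders;
# error handling and non-GET methods are omitted for brevity.
import json
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM = "http://localhost:8080"  # hypothetical uninstrumented app


class TelemetryProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        start = time.monotonic()
        with urllib.request.urlopen(UPSTREAM + self.path) as resp:
            body = resp.read()
            status = resp.status
        latency_ms = (time.monotonic() - start) * 1000

        # The "signal" an observability pipeline would ingest;
        # here it simply goes to stdout.
        print(json.dumps({
            "path": self.path,
            "status": status,
            "latency_ms": round(latency_ms, 2),
        }))

        self.send_response(status)
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 9090), TelemetryProxy).serve_forever()
```

The point isn't the code; it's that the proxy sees every request anyway, so it can fill observability gaps the app itself can't.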
You have to wonder what all these orgs are doing with the data they collect. Guess what, Ventana dug into that in its Evolving World of Analytics and Data. Aside from the data on how often organizations analyze the data they collect (70% daily, 32% hourly or in real-time) and that more (39%) prefer to deploy analytics and data on-premises rather than in the cloud (34%), the most interesting - and probably overlooked - statistic is about a different kind of data.
OBJECT STORES are now deployed in more than one-half of organizations (53%) with another 18% planning to adopt within 24 months.
You know why that's so important? Because object stores are things like S3 and Azure Blob Storage. You know, API-accessible data. No app, just an API and the data. This architectural shift will drive data into app architectures themselves. Hit this link and take a gander at the diagram of a digital service. Notice the data? Right up there with apps and other logic? Yeah. Exactly. Pay attention to this shift, because it's actually huge - and it's going to hit security hard. Harder even than APIs have hit the practice.
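To make that concrete, here's a minimal sketch of "no app, just an API and the data" using the AWS SDK (boto3). The bucket and key names are hypothetical, and credentials are assumed to be configured already; the application talks straight to the object store, with no database tier or data-access service in between.

```python
# Minimal sketch: data as part of the app architecture via an object store
# API. Assumes boto3 is installed and AWS credentials are configured; the
# bucket and key names are hypothetical.
import json

import boto3

s3 = boto3.client("s3")
BUCKET = "example-digital-service-data"  # hypothetical bucket

# Write a record: no application in front of the data, just the API.
record = {"order_id": "12345", "status": "shipped"}
s3.put_object(
    Bucket=BUCKET,
    Key="orders/12345.json",
    Body=json.dumps(record).encode("utf-8"),
    ContentType="application/json",
)

# Read it back the same way any other consumer would: an authenticated
# API call made directly against the data itself.
obj = s3.get_object(Bucket=BUCKET, Key="orders/12345.json")
print(json.loads(obj["Body"].read()))
```

That's also a big part of why the security impact is likely to be so significant: access to the data is governed by the object store's API and policies rather than by an application sitting in front of it.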
App Architectures
Speaking of app architectures, I have two reports with juicy data about them. The first is one you might not think of for data on app architecture. It's Splunk's State of Observability 2023. Yeah, yeah, it's got all sorts of good observability data, too, but what stood out to me was this statistic:
44% of internally developed applications are still built on monolithic architectures (on average).
This stood out for two reasons:
- Well, yeah. No surprise there. I'm sure if you go read all about Amazon's recent monolithic adventure you can see why orgs still depend on this architectural approach to applications.
- It is external validation of our own research, which puts traditional app architectures at about 40% of the enterprise portfolio.
Basically, this is more validation that the future of IT is #hybrid - from app portfolios to environments. /off soapbox
Now, that whole hybrid thing is important, especially when you consider the importance placed on application portability by respondents to Datastax's Distributed Cloud Series: Cloud-native Applications report:
- 19% critical
- 67% very important
- 12% somewhat important
- 1% not important
So the ability to seamlessly move applications across environments is pretty important. This is why we've seen so much attention in the market around #multicloudnetworking, from new products to services to acquisitions.
Cloud
Can I even call this a newsletter if I don't mention #cloud? Didn't think so. My find this month was related to a search for downtime costs and, as usual, the Uptime Institute does not disappoint when it comes to downtime. Guess what the top cause of major third-party, network, and IT system/software-related outages is? Go ahead. Guess.
If you said "configuration/change management," give yourself a delicious cookie. In every type of outage, configuration is fingered as the top culprit. The cost of downtime from those outages? Well, more than 2/3 of all outages cost more than $100,000. The breakdown:
- 29% under $100K
- 45% $100K – $1M
- 25% over $1M
The number of "major" third-party (including cloud provider) outages may be the reason organizations are rethinking how they view cloud. I'll leave you with this bit of data:
With respect to the resiliency of cloud and running mission-critical workloads:
- 34% cloud is resilient enough to run only some
- 25% cloud is resilient enough to run most
- 18% cloud is NOT resilient enough to run any
- 11% cloud is resilient enough to run all
Security? What security?
I often joke about security being a necessary component of any newsletter, but this time I'm being terribly existential by purposefully leaving out security.
I know, I'm terrible. Don't worry, I'm sure next month will be filled to the brim with security. Specifically, #APIsecurity.
Until then take care, be safe, and enjoy the start of summer!
Sr Strategic Architect at F5 Networks. The views expressed on my feed are mine and do not necessarily reflect the views of my employer.