Increasing automation and digitization are inevitable. More companies are moving their operations onto IT systems, and more of those operations are being automated.
What isn't inevitable, however, is the rise in IT failures and periods of downtime that digitization and automation entail. Businesses are losing billions of dollars per year to IT downtime.
Fortunately, the increasing use of AI-based predictive analytics can root out problems before they even arise.
First, let's get a firm handle on the scale of the problem, and just how much of the economy is being digitized and automated. Almost 80 percent of companies in the United States are in the process of digital transformation, meaning that 80 percent of American businesses are turning increasingly to IT systems to handle and execute various aspects of their work. And they're pumping lots of money into this change: according to a recent study from Reports and Data, the global digital transformation market was valued at $261.9 billion in 2018 and is projected to reach $1.051 trillion by 2026.
In other words, massive shifts are taking place around the world as businesses come to depend more on IT systems and digital platforms. At the same time, much of the functioning of these systems and platforms is being automated. A report from Deloitte published this year found that 58 percent of organizations globally have introduced some form of automation into their work processes, while the number of companies implementing automation at scale has doubled over the last year. This is another monumental change, indicating that as companies move to IT systems, they’re also moving towards automating much of what these systems do.
This is all very exciting, but unfortunately, this shift has caused a steep rise in opportunities for IT failures and downtime. As more processes are put on some kind of computer system, and as more of those processes are executed by algorithms, more chances for faults and breakdowns inevitably arise, particularly as staff are ill-equipped to monitor everything an increasingly automated system does. Indeed, estimates of downtime costs in lost revenue rose from $26.5 billion globally in 2011 to $700 billion in 2016 for North American firms alone.
Things are getting out of hand, and one of the main reasons many firms haven't been able to solve this challenge is that they've approached it with the wrong mentality. Generally, they've been developing and using tools to detect IT problems as and when they appear. This might sound fine at first glance, but waiting for problems to arise can be dangerous, since they can sometimes take a long time to resolve.
For instance, the UK Parliament's Treasury Committee released a report in October criticizing the spate of bank IT failures in Britain over the last few years, and how these had left millions of customers locked out of their accounts while the institutions concerned struggled to restore their systems. One of the worst examples occurred in 2018, when an IT outage affecting Lloyds Bank left 1.9 million customers locked out of their accounts for weeks, with the underlying problems taking several months to fully resolve.
To avoid such disasters, businesses should really take a proactive approach to their IT systems. Specifically, they need to focus on preventing problems from materializing in the first place, so that they aren’t left with periods of downtime that end up hurting their bottom lines. Artificial intelligence is the key to achieving this.
AI-based detection platforms are capable of monitoring IT systems in real time, checking for early signs of potential failures. To take one example, my company Appnomic has used AI to handle 250,000 severe IT incidents for our clients, the equivalent of more than 850,000 man-hours of work.
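To make the idea concrete, here is a minimal sketch of the kind of real-time check such a platform might run: flag a metric when it drifts far from its recent baseline. The metric, window size, and threshold here are illustrative assumptions using a generic rolling-statistics heuristic, not Appnomic's actual method.

```python
# A simple rolling z-score check on a streaming metric (e.g. response latency).
# This is a generic heuristic for illustration, not any vendor's algorithm.
from collections import deque
from statistics import mean, stdev


class RollingAnomalyDetector:
    def __init__(self, window: int = 60, threshold: float = 3.0):
        self.samples = deque(maxlen=window)  # recent metric values
        self.threshold = threshold           # z-score that counts as anomalous

    def observe(self, value: float) -> bool:
        """Record a new sample and return True if it looks anomalous."""
        is_anomaly = False
        if len(self.samples) >= 10:  # wait for a minimal baseline first
            mu, sigma = mean(self.samples), stdev(self.samples)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                is_anomaly = True
        self.samples.append(value)
        return is_anomaly


# Example: latency hovers around 120 ms, then spikes.
detector = RollingAnomalyDetector()
for latency_ms in [118, 121, 119, 122, 120, 117, 123, 119, 121, 120, 118, 410]:
    if detector.observe(latency_ms):
        print(f"Early warning: latency {latency_ms} ms deviates from baseline")
```

A real platform would track thousands of metrics and correlate them, but the core loop is the same: compare live behavior against a learned baseline and raise a warning before the deviation becomes an outage.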
By harnessing machine learning, such platforms can use past data to learn how problems typically develop, enabling a company to step in before anything unfortunate occurs. In 2017, Gartner coined the term "artificial intelligence systems for IT operations" (AIOps) to describe this kind of AI-driven predictive analysis, and the market research firm believes that the use of AIOps will grow considerably over the next few years. In 2018, only 5 percent of large enterprises were using AIOps, but the firm estimates that this figure will rise to 30 percent by 2023.
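In practice, "learning from past data" often means training a model on historical metric snapshots labeled with whether an incident followed. The sketch below shows that pattern with a generic scikit-learn classifier; the feature names, figures, and threshold are invented for illustration and do not describe any particular AIOps product.

```python
# Train a model on historical metric snapshots labeled with whether an
# incident followed, then score live snapshots. All data is hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Snapshots of [cpu_util, memory_util, error_rate, queue_depth] taken shortly
# before known-good periods (label 0) and before known incidents (label 1).
X_history = np.array([
    [0.45, 0.52, 0.001, 12],
    [0.50, 0.55, 0.002, 15],
    [0.48, 0.50, 0.001, 10],
    [0.92, 0.88, 0.040, 230],
    [0.95, 0.91, 0.060, 310],
    [0.89, 0.85, 0.035, 190],
])
y_history = np.array([0, 0, 0, 1, 1, 1])  # 1 = an incident followed this snapshot

model = GradientBoostingClassifier().fit(X_history, y_history)

# Score a live snapshot and warn operators before the failure materializes.
live_snapshot = np.array([[0.87, 0.83, 0.030, 175]])
risk = model.predict_proba(live_snapshot)[0, 1]
if risk > 0.7:
    print(f"Predicted incident risk {risk:.0%}: intervene before downtime occurs")
```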
This growth will be driven by the benefits that come from applying machine learning and data science to IT systems. Aside from detecting likely problems before they occur, AI can significantly reduce false alarms, because it develops a more reliable grasp of what actually leads to failures than earlier technologies or human operators could. On top of this, it can detect anomalies that won't necessarily lead to failures or downtime, but that may be making an IT system less efficient.
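That last point, catching "quiet" inefficiencies that never trigger an outage, is typically handled with unsupervised anomaly detection, since nobody labeled those hours as problems. The snippet below uses scikit-learn's IsolationForest as one common, generic choice; the metrics and contamination rate are assumptions made for the example.

```python
# Unsupervised detection of hours where a service was slow but not failing.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hourly snapshots of [cpu_util, avg_response_ms] for a service.
rng = np.random.default_rng(0)
normal_hours = np.column_stack([rng.normal(0.5, 0.05, 200), rng.normal(120, 10, 200)])
degraded_hours = np.array([[0.55, 290], [0.52, 310]])  # sluggish, but no outage
snapshots = np.vstack([normal_hours, degraded_hours])

detector = IsolationForest(contamination=0.01, random_state=0).fit(snapshots)
flags = detector.predict(snapshots)  # -1 marks outliers

print(f"{(flags == -1).sum()} hours flagged for review; "
      f"degraded hours caught: {(flags[-2:] == -1).all()}")
```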
This is why AI analytics will make IT systems more resilient and robust overall. And as more companies migrate to AIOps and related platforms, they will create a snowball effect, forcing their competitors to either join the race to avoid unnecessary downtime or be left behind. It makes perfect sense that, as automation in IT systems increases, there should be a parallel increase in automated predictive analytics. Because as software eats the world and we humans become less central to our own jobs, it's only AI that can keep up with AI.
This article was originally published by Cuneyt Buyukbezci on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech, and what we need to look out for. You can read the original article here.