Most Machine Learning for Predictive Maintenance projects never get off the ground or remain stuck in proof-of-concept (PoC) purgatory. According to a McKinsey study, fewer than 30% of IoT initiatives progress beyond the PoC stage.
The following is an interview with Eitan Vesely, the SKF AI Offering Manager, on the topic of how to scale Machine Learning for Predictive Maintenance.
How do industrial plants prioritize which assets a Predictive Maintenance solution should cover?
We see three different approaches.
The most traditional approach is based on assessing the criticality of the assets to be monitored by advanced systems. In a power plant, for instance, a turbine is a core asset, while a pump moving fluids is not and is therefore less likely to be selected.
A second criterion is Mean Time Between Failures (MTBF). If an asset fails often, this is usually also taken into consideration when prioritizing coverage.
I have also seen a somewhat counter-intuitive approach by industrial plants that are already monitoring core assets. They apply AI predictive maintenance to auxiliary equipment that is currently under-monitored.
The assumption is that the monitoring already in place is sufficient for those core assets. Without making a recommendation, I have noticed that industrial plants sometimes overestimate the efficacy of their current SCADA-based monitoring systems when selecting assets to be covered by Machine Learning Predictive Maintenance.
Is there machinery that you would not recommend being monitored by a Machine Learning Predictive Maintenance solution?
Let’s look at the type of data generated rather than at the machinery itself.
As a starting point, if not enough data is generated because too few sensors are installed or the sampling rate is too low, the machinery is probably not a good candidate for a Machine Learning-based Predictive Maintenance solution. You do not need a Big Data solution if you do not generate Big Data. In that case, standard vibration monitoring is the more appropriate solution.
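As a rough illustration only (this is an assumed heuristic, not a screening method Eitan describes), the data-sufficiency idea can be sketched as a quick check of sensor count and sampling interval; the function name and thresholds below are hypothetical:

```python
import pandas as pd

def data_sufficiency(df: pd.DataFrame, min_sensors: int = 5,
                     max_interval_s: float = 600.0) -> bool:
    """Heuristic check: enough sensor channels and a high-enough
    sampling rate for ML-based condition monitoring.

    `df` is assumed to have a DatetimeIndex and one column per
    sensor tag. Thresholds are illustrative, not recommendations.
    """
    n_sensors = df.shape[1]
    # Median spacing between consecutive samples, in seconds.
    median_interval = df.index.to_series().diff().median().total_seconds()
    return n_sensors >= min_sensors and median_interval <= max_interval_s

# Example: 3 sensors sampled hourly -> too sparse under these thresholds.
idx = pd.date_range("2024-01-01", periods=24, freq="h")
sparse = pd.DataFrame({f"tag_{i}": range(24) for i in range(3)}, index=idx)
print(data_sufficiency(sparse))  # False
```

An asset failing a check like this would, per the reasoning above, be steered toward standard vibration monitoring instead.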
Although it is less common today, in some cases sensor data is generated but is inaccessible outside the machine, preventing analysis. In such cases there is a growing urgency to find workarounds, including wireless data transmission.
The issue of data governance should not be underestimated. We still see dysfunctional organizational hierarchies that prevent business users from accessing data. In some instances, third-party vendors either block access or even charge for it. As the importance of data integrity, hygiene, and ownership gains acceptance among senior executives, these hierarchies should become less common.
Now let me go back to your original question. Machine Learning for Predictive Maintenance is only one component of an overall maintenance strategy and, in some cases, preventive maintenance programs are more suitable. At the same time, it is often not the asset type but the data quality that makes machinery unsuitable for Machine Learning-based Predictive Maintenance.
What is the optimal data required for Machine Learning for Predictive Maintenance?
The best-case scenario is having historical data in the Historian database on which to train our algorithms. In any case, for Machine Learning monitoring to work as expected, it must have access to real-time, continuously generated data streams. The definition of “real-time” varies between industries: in cloud deployments, “real time” can still mean a latency of a few minutes, and in some instances, such as wind turbines, we receive data only once every 10 minutes.
Another important source of data is a log of historical failures. These are used to refine the algorithms and to increase the accuracy of their predictions.
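As a hedged sketch of how a failure log might be put to use (the function, lead time, and data below are illustrative assumptions, not SKF's method), one can score what fraction of a detector's alerts precede a logged failure within a given lead time:

```python
import pandas as pd

def alert_hit_rate(alerts, failures,
                   lead: pd.Timedelta = pd.Timedelta(days=14)) -> float:
    """Fraction of alerts that occur within `lead` time before a
    logged failure -- a crude precision proxy for tuning a detector.

    `alerts` and `failures` are sequences of timestamps.
    """
    if len(alerts) == 0:
        return 0.0
    hits = sum(
        any(pd.Timedelta(0) <= f - a <= lead for f in failures)
        for a in alerts
    )
    return hits / len(alerts)

# Two alerts; only the first lands within 14 days of a failure.
alerts = pd.to_datetime(["2024-03-01", "2024-06-10"])
failures = pd.to_datetime(["2024-03-10"])
print(alert_hit_rate(alerts, failures))  # 0.5
```

Feedback like this is one way a failure log can drive iterative improvement of prediction accuracy.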
Are there specific industries or production processes in which ML Predictive Maintenance is more relevant?
So-called process industries, in which production is continuous, are the most suitable for Machine Learning-based Predictive Maintenance. The sensor data generated by the continuous production process forms patterns from which we can predict future behavior when a suspicious pattern is detected.
With discrete or batch production processes, machinery is used to fill specific production quotas, and each batch has a different configuration based on the specifications of the output. In such cases, different approaches to ML for Predictive Maintenance are required.
Bottom line: discrete and process manufacturing differ greatly in how machines operate and thus require different Machine Learning solutions.
With continuous production, we can apply Machine Learning to predict future asset behavior based on past behavior. At a high level, anomalies relative to the expected behavior can indicate an evolving asset failure.
Of course, I am simplifying very complex data science: the algorithms are also trained to filter out false positives, and a single instance of anomalous behavior is not necessarily indicative of an evolving asset failure.
Do companies need an industrial analytics platform before deploying a Predictive Maintenance solution that is based on analysing sensor-generated Big Data?
It depends on the particular Predictive Maintenance application. An analytics platform is not a prerequisite for our Machine Learning solution because we extract existing SCADA data from the Historian database. Other analytical packages are platform add-ons; in those cases, the underlying platform is required.
What are the typical reasons why industrial plants delay deployment of a Machine Learning Predictive Maintenance solution?
There are some obvious roadblocks, such as a lack of available data or connectivity. Another scenario that we see is organizational disconnects. In some instances, IT is the de facto “owner” of the data that the machinery generates and is not aligned with the operational stakeholders.
I believe that with more executive sponsorship, these issues will become less prevalent.
There is much concern about the topic of security. How does SKF Enlight AI address this topic?
When we stream data to our cloud, we do so over a secure, encrypted channel (SSL/TLS or similar). We use leading cloud vendors and make sure to install all the latest security updates. If required, we can anonymize the data, place it on a customer's private cloud, or even install the solution on-premises.
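For illustration, anonymization can be as simple as replacing identifying sensor tags with keyed hashes before data leaves the plant. This sketch uses Python's standard `hmac` module and is an assumption about one possible approach, not SKF's implementation:

```python
import hashlib
import hmac

def pseudonymize_tag(tag: str, key: bytes) -> str:
    """Replace an identifying sensor tag with a keyed HMAC digest,
    so the mapping cannot be reversed without the plant-held key."""
    return hmac.new(key, tag.encode(), hashlib.sha256).hexdigest()[:16]

# Hypothetical record; the key never leaves the plant.
record = {"tag": "Turbine-1/BearingTemp", "value": 71.4}
key = b"plant-secret-key"
outbound = {"tag": pseudonymize_tag(record["tag"], key),
            "value": record["value"]}
```

The cloud side can still correlate readings from the same sensor over time, while the plant retains the only means of mapping digests back to real equipment names.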