A data readiness cheat sheet

Machine learning depends on more than raw data

The availability of large quantities of high-quality process data is a prerequisite for machine learning-based predictive maintenance. However, enriching that data with plant experience and knowledge can further enhance solution performance.

Big Data is the starting point for any conversation on implementing machine learning-based predictive maintenance solutions. Plants that generate enough data on which machine learning algorithms can be trained to recognize asset behavioural patterns are candidates for successful deployments of such solutions.

However, data quality is just as important as data quantity when it comes to AI-driven Industrial Analytics model training and configuration. Though a predictive maintenance vendor should be capable of processing, cleaning and formatting data for machine learning model analysis, improving data quality can often depend on organizational factors beyond the SCADA or raw data itself.

In this article, we discuss three ways plants can improve data quality based on an existing resource: their intimate knowledge of their machines and how they operate. By sharing this learned knowledge of asset behaviour with their machine learning predictive maintenance vendor, plants can expect returns in the form of more accurate predictions and root cause-related insights.

Be transparent about the asset’s operational modes

Most assets have more than one operational mode. Some assets may be set to work more slowly at night, while assets that process more than one type of material may behave differently according to which type of material they are being fed.

Detailing these different modes of operation to the machine learning solution provider is a starting point for accurately training machine learning algorithms to differentiate between asset failure signals and planned downtimes, resulting in fewer false positives.
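As an illustration only (the field names, mode labels and night-shift hours below are hypothetical, not part of SKF Enlight AI), sharing operational modes can be as simple as delivering a mode label alongside each historian reading, so models learn a separate baseline per mode instead of flagging a planned slowdown as a failure signal:

```python
from datetime import datetime, time

# Hypothetical shift boundaries for an asset that runs slower at night.
NIGHT_START, NIGHT_END = time(22, 0), time(6, 0)

def operational_mode(timestamp: datetime, material: str) -> str:
    """Label a historian reading with the asset's operational mode."""
    t = timestamp.time()
    if t >= NIGHT_START or t < NIGHT_END:
        return "night_reduced_speed"
    # Daytime behaviour also varies with the material being fed.
    return f"day_{material}"

# A reading at 23:15 while processing limestone:
print(operational_mode(datetime(2023, 5, 4, 23, 15), "limestone"))
# -> night_reduced_speed
```

A mode label like this, supplied with the raw data, is one concrete way to tell the solution provider which patterns are planned behaviour rather than anomalies.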

Submit precise log events

SKF Enlight AI trains machine learning algorithms on data from Historian databases and augments this data with asset failure event logs. While some plants have already automated or are in the process of automating failure event logs, others rely on manual technician input. There are several potential data quality issues that could arise from technicians filling in an event log for a single asset, including the following:

  • In the rush to remediate a failed asset and resume production, technicians may forget to record the event in the log.
  • There can be a discrepancy between how frequently different technicians update the event log and how the failure is described.
  • The event log may include references to other maintenance issues that are not directly related to the asset itself.
  • If a failure event is entered retroactively, the technician may only have a partial memory or even a false memory of what happened and when.

In all of the above cases, the event logs will prove less effective in training machine learning algorithms to detect evolving failure patterns. However, plants don’t have to invest in IoT event log technology to improve the data quality of these logs. Instead, they can train technicians to update the event log in real time and to focus on the features of the failure itself, such as:

  • What was the duration of the failure (Start date/time – End date/time)?
  • In which component of the machine did the failure originate?
  • What was the cause of failure (Examples: bearing failure, bearing misalignment, oil leak)?
  • What were the action items taken to remediate the issue?
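To illustrate what a standardized entry capturing these four features could look like as a structured record (the field names here are illustrative, not an SKF Enlight AI schema):

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class FailureEvent:
    """One standardized failure log entry, capturing the failure's
    features rather than free-form technician notes."""
    asset_id: str
    start: datetime          # failure start date/time
    end: datetime            # failure end date/time
    component: str           # component where the failure originated
    cause: str               # e.g. "bearing misalignment", "oil leak"
    actions: List[str] = field(default_factory=list)  # remediation steps

    @property
    def duration_hours(self) -> float:
        return (self.end - self.start).total_seconds() / 3600

event = FailureEvent(
    asset_id="pump-07",
    start=datetime(2023, 5, 4, 8, 30),
    end=datetime(2023, 5, 4, 14, 30),
    component="drive-end bearing",
    cause="bearing misalignment",
    actions=["realigned shaft", "replaced drive-end bearing"],
)
print(event.duration_hours)  # -> 6.0
```

Whether the log lives in a CMMS, a spreadsheet or a paper form, agreeing on a fixed set of fields like these is what makes the entries usable as machine learning training labels.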

This prioritization of failure log entries will have to be aligned across the maintenance organization, and plants should expect the process to take some time as technicians get used to this new way of working. Nevertheless, once standardized, this unique asset failure intelligence contributes to more precise machine learning model training, failure prediction and root cause identification.

Asset mapping matters

Having deeper knowledge of an asset’s mechanical hierarchy enables machine learning models to be correctly trained and configured for each subcomponent of a machine.

Here, the difference is not the accuracy of the failure prediction but rather the end user experience. Without an accurate technical hierarchy of the machine, SKF Enlight AI’s AutoML algorithms will still be able to identify anomalous data patterns, issue upcoming failure alerts and single out the sensors that detected the abnormal asset behaviour. With precise mapping of the asset, the solution can narrow these sensors down to a specific asset component, helping technicians reach a root cause diagnosis more quickly.
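To illustrate why the mapping helps (the machine, component and sensor names below are hypothetical, not the SKF Enlight AI data model), even a minimal sensor-to-component hierarchy lets an alert be translated into the subcomponent to inspect:

```python
# Hypothetical asset hierarchy: machine -> component -> monitored sensors.
HIERARCHY = {
    "conveyor-03": {
        "gearbox": ["vib-101", "temp-102"],
        "drive motor": ["current-201", "temp-202"],
    },
}

def locate_component(machine: str, sensor_id: str) -> str:
    """Map a sensor showing anomalous behaviour back to the
    component it monitors."""
    for component, sensors in HIERARCHY[machine].items():
        if sensor_id in sensors:
            return component
    return "unmapped"  # without a hierarchy, the alert stops at the sensor

print(locate_component("conveyor-03", "temp-202"))  # -> drive motor
```

With the mapping in place, an alert on sensor `temp-202` points technicians straight at the drive motor; without it, they are left to trace the sensor themselves.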

Summary and conclusion

SKF Enlight AI’s hybrid model of a cutting-edge AI-powered machine learning engine paired with insights and input from SKF subject matter experts adds a new level of customization and intelligence to our machine learning predictive maintenance solution. Adding customer-side experience and knowledge to our AutoML engine enables us to further customize the solution and provide even more accurate predictions and indications of suspected root cause.

SKF Enlight AI

Industrial plants generate terabytes of process data. SKF Enlight AI is a SaaS Predictive Maintenance solution that uses Automated Machine Learning to identify emerging asset failure patterns within this data. It provides early warnings and sensor-level intelligence to help avert unplanned downtime and meet production goals. For more information on how SKF Enlight AI can improve performance and reliability, click here.
