Improved Approach to Turnaround and Shutdown Planning

What is turnaround/shutdown planning?

Turnarounds and planned shutdowns are critical periods for refineries and other process plants as they provide an opportunity to perform major maintenance, inspections, and upgrades necessary to ensure safe, reliable, and efficient operations. However, turnarounds and shutdowns can be complex, time-consuming, and expensive, and any delays or cost overruns can have significant consequences for the facility and its stakeholders.

One of the challenges associated with turnarounds and shutdowns is the need to coordinate and manage multiple teams, each with different tasks and responsibilities, to complete the work within the scheduled timeframe. This can be particularly challenging for large facilities with complex operations and multiple production units. It can also be difficult to balance the competing demands of maintenance, inspection, and upgrade work with the need to get the plant back up and running as quickly as possible.

The cost of turnarounds and shutdowns can also be significant, as they demand substantial resources and often involve contracting with third-party vendors and suppliers. In addition, unexpected issues or complications during the turnaround process can lead to cost overruns and delays, which can further increase the financial burden on the facility.

To address these challenges, refineries and process plants can use various strategies, such as implementing efficient planning and scheduling processes, leveraging data and analytics to identify potential issues before they arise, and investing in technology and automation to improve efficiency and reduce costs. Effective communication and collaboration between all teams involved in the turnaround process are also critical to ensuring a successful outcome.

Why conduct turnaround/shutdown planning in Newton™?

Conducting turnaround planning in Newton™ with quantitative methodology and recommendations provides several benefits, including the following:

  • Faster than traditional methods of turnaround planning
  • Reduced costs by eliminating unnecessary turnaround scope
  • Quantitative turnaround optimization results in a more accurate scope of work
  • Improved performance between turnarounds because the right assets are targeted during the turnaround. Read more in Moving from Reactive to Proactive Strategies

How does Newton™ improve turnaround/shutdown planning?

When conducting turnarounds or shutdowns, it makes sense to complete maintenance and repairs on a group of assets, thus eliminating outages outside of these synchronized maintenance windows. This sounds simple, but how do you know how often these turnarounds or shutdowns should occur, and what equipment should be included? These are very difficult questions to answer, and industrial facilities have entire groups dedicated to turnaround planning and optimization.

Newton™, a quantitative reliability software platform, leverages data, subject matter expert (SME) input, and simulation to improve turnaround and shutdown planning. The initial step of the analysis involves establishing the asset register and developing a facility model to estimate the potential production losses that may arise due to equipment breakdown. To accomplish this, Newton employs asset risk analysis, covering both fixed and rotating equipment, and guides the user through the process of modeling each asset’s functionality, components, failure modes, and mechanisms. Using information from Computerized Maintenance Management Systems (CMMS), Inspection Data Management Systems (IDMS), process historians, and production loss tracking, Newton quantifies the failure modes, the probability of failure, and associated consequences into statistical distributions. This analysis is then combined with SME recommendations and simulation to predict failure dates. Once we understand the likely failure windows, turnaround optimization in Newton can be completed.
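Newton’s internals are proprietary, but the simulation step described above can be sketched in a few lines: sample each failure mode from its fitted distribution and keep the earliest draw per trial. The Weibull parameters and mode names below are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical failure-mode parameters (illustrative only; in practice these
# would be fit from CMMS/IDMS failure history, not hard-coded).
failure_modes = {
    "external_corrosion": {"shape": 2.5, "scale_years": 12.0},
    "seal_failure":       {"shape": 1.8, "scale_years": 7.0},
}

def simulate_failure_years(modes, n_trials=10_000):
    """Sample a time-to-failure for each mode and keep the earliest per trial,
    since the asset fails when its first failure mode occurs."""
    draws = np.column_stack([
        m["scale_years"] * rng.weibull(m["shape"], size=n_trials)
        for m in modes.values()
    ])
    return draws.min(axis=1)

years = simulate_failure_years(failure_modes)
low, high = np.percentile(years, [10, 90])
print(f"80% of simulated failures fall between year {low:.1f} and year {high:.1f}")
```

The resulting percentile band is one way to express a “likely failure window” that turnaround optimization can then schedule around.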

As part of this analysis, Newton combines predicted failure dates, risk tolerance thresholds, equipment redundancy, and future turnaround dates to create an optimized turnaround plan. This plan provides a more accurate scope of work, including the assets and tasks that should be prioritized in the next turnaround. With this plan, facilities can optimize the effort they need to spend on planning future turnarounds, ultimately reducing costs and improving production.
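As a hedged sketch of how those inputs might combine (the threshold logic, field names, and redundancy multiplier below are assumptions, not Newton’s actual rules), scope selection can be framed as comparing each asset’s predicted probability of failure before the next window against a risk tolerance, relaxed where redundancy exists:

```python
from dataclasses import dataclass

# Illustrative sketch of the scope-selection logic described above; the
# attributes and thresholds are assumptions, not Newton's actual API.
@dataclass
class Asset:
    name: str
    pof_by_next_turnaround: float  # probability of failure before next window
    has_redundancy: bool

RISK_TOLERANCE = 0.10  # assumed site POF threshold for non-redundant assets

def select_turnaround_scope(assets):
    """Include an asset if its predicted POF before the next turnaround
    exceeds tolerance; redundant assets tolerate a higher threshold because
    a spare can carry the load during an online repair."""
    scope = []
    for a in assets:
        threshold = RISK_TOLERANCE * 3 if a.has_redundancy else RISK_TOLERANCE
        if a.pof_by_next_turnaround > threshold:
            scope.append(a.name)
    return scope

assets = [
    Asset("reactor_feed_pump", 0.25, has_redundancy=True),
    Asset("main_column", 0.18, has_redundancy=False),
    Asset("spare_exchanger", 0.04, has_redundancy=True),
]
print(select_turnaround_scope(assets))  # only main_column exceeds its threshold
```

In a full analysis, the predicted failure dates from simulation would drive the POF figures, and candidate turnaround dates would be varied to find the window that minimizes combined risk and production loss.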

To understand more about conducting turnaround planning in Newton™ or creating a data-driven reliability program, schedule a discovery call.

Case Study: Refinery Leverages Advanced Modeling to Optimize CMLs and Recognize $800,000 of Value

Learn how a refinery quantifies the value of its Condition Monitoring Locations (CMLs) to determine where its inspections and resources are the most effective in reducing uncertainty.


A refinery could not confidently identify which CMLs provided optimal inspection coverage for one of its hydrocarbon reformer units and was overspending on inspections.


A Quantitative Reliability Optimization (QRO) pilot was implemented on the unit to quantify the value of its CML placements and optimize the site’s inspection budget for value-adding CMLs.


The refinery is projected to gain a total value of about $800,000 over the next five years from the cost reduction and increased availability resulting from removing inspections that do not provide value and adding CMLs that provide optimal coverage.


One aspect of an effective risk management approach focuses on eliminating the inspections that do not add value while adding necessary inspections to ensure sufficient coverage. However, even in today’s data-rich environment, many risk models rely too heavily on outdated algorithms and the conservative opinions of subject matter experts. These relative models can make it difficult for leaders to confidently define and justify future inspection plans to key decision-makers. A data-driven approach to reliability that leverages advanced modeling helps facilities quantify the uncertainty of a system’s probability of failure (POF) to drive valuable inspections that provide sufficient coverage while optimizing inspection spending.

The Challenge

A leading petroleum refiner and producer of renewable fuels with a robust mechanical integrity (MI) program in place continued to experience an increasing number of leaks at one of its facilities. To reduce the number of leaks, the site implemented a Risk-Based Inspection (RBI) approach to identify risk proactively. While this approach provided value, it required significant manual efforts from site personnel, and despite providing increased risk-based prioritized inspection intervals, the facility continued to experience unexpected piping leaks.

In addition to reducing the number of leaks, the site’s leadership wanted to better focus its resources on areas where inspections were more likely to identify active degradation while minimizing inspections in areas that returned little to no value. With its existing RBI program, the site struggled to assess the impact of specific CMLs on the overall risk of the facility and was spending its resources on inspections that did not need to be completed. Without quantifying the risk of individual inspection locations, the site could not confidently identify which CMLs added value and which could be postponed or eliminated. Additionally, it struggled to identify areas where different types of CMLs should be added to further mitigate the risk of future leaks.

For example, one of the site’s hydrocarbon reformer units with a history of little to no degradation had more than 13,000 CMLs due for inspection over the next five years. The unit had recently completed an RBI revalidation study, and while it exhibited relatively low corrosion, the facility had not removed or delayed any existing CMLs. The site needed an approach that would quantify uncertainty and use it as a basis for the probability of failure to identify and prioritize the inspections that needed to be performed.

Pinnacle’s Solution

The facility piloted Quantitative Reliability Optimization (QRO), a methodology that evaluates the impact of individual assets on the overall performance of a system by connecting reliability and integrity data to a system model. The primary objectives of this pilot were to determine and quantify the following:

  1. Cost-saving opportunities by identifying inspections that do not impact risk reduction.
  2. Opportunities to mitigate future risk by adding inspections that reduce the uncertainty and POF of a circuit.

During this pilot, a system model was created for the hydrocarbon reformer unit. The unit comprises a variety of circuits, including vessels, drums, exchangers, air-fin coolers, normal-flow process piping, dead-leg piping, injection points, and mix points. These circuits held more than 16,000 CMLs, all of which required point Ultrasonic Thickness (UT) inspections.

The facility’s Inspection Data Management System (IDMS) data provided a baseline for the model, which predicted the unit’s future inspection costs and availability based on inspections planned for the next ten years. Future scheduled inspections were modeled to calculate the POF and associated uncertainty of each CML and circuit to measure the impact of specific tasks over the next ten years. With this quantitative model, the facility is able to determine where inspections are the most effective in reducing uncertainty.
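One simple way to picture how a planned inspection “reduces uncertainty” — a sketch under assumed normal distributions, not the model’s actual formulation — is an inverse-variance update of a corrosion-rate belief with each new UT-derived measurement:

```python
import math

def update_rate(prior_mean, prior_sd, measured_rate, measurement_sd):
    """Bayesian update for a normally distributed corrosion rate (mpy):
    the posterior is an inverse-variance weighted blend of prior and data."""
    w_prior = 1.0 / prior_sd**2
    w_meas = 1.0 / measurement_sd**2
    post_mean = (w_prior * prior_mean + w_meas * measured_rate) / (w_prior + w_meas)
    post_sd = math.sqrt(1.0 / (w_prior + w_meas))
    return post_mean, post_sd

mean, sd = 5.0, 3.0  # assumed prior: 5 mpy, highly uncertain
for measurement in (4.2, 4.6):  # two UT-derived rate estimates, sd = 1.0 mpy
    mean, sd = update_rate(mean, sd, measurement, 1.0)

print(f"posterior rate: {mean:.2f} +/- {sd:.2f} mpy")
```

The narrower posterior feeds a narrower remaining-life (and POF) estimate, which is how individual CML inspections can be credited with measurable uncertainty reduction.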

To further evaluate the impact of inspection program changes on the unit’s availability, inspection spending, and projected risk, the team used the quantitative model to compare three potential scenarios with the baseline inspection plan.

Scenario 1a

Scenario 1a models the projected availability and costs of the facility’s current plan, which includes inspecting all existing CMLs on their next scheduled inspection due dates. This is the expected availability based on the facility’s current CMLs and planned inspections.

Scenario 1b

Scenario 1b uses the planned inspections modeled in Scenario 1a but includes degradation modes and circuits that have been under-inspected. This is the true projected availability of the facility’s current inspection plan and, as expected, has a lower availability than in Scenario 1a.

Scenario 2

Scenario 2 models the inspection plan from Scenario 1b after optimization removes the CMLs that do not have a statistical bearing on the unit’s future projected availability. The projected availability is the same as in Scenario 1b but reflects lower inspection costs because of the removal of non-value-adding CMLs.

Scenario 3

Scenario 3 models the impact of the proposed optimized inspection plan defined in Scenario 2 but adds inspection for three localized corrosion circuits with insufficiently planned inspections and 50 missing circuit CMLs. Compared to Scenario 2, this scenario shows an increase in cost and availability through the uncertainty reduction achieved by adding inspections.
Table 1: Modeled Scenarios

To evaluate the difference between the current inspection plan and the recommended plan generated by the quantitative model, the costs from Scenario 1a and the availability from Scenario 1b were compared to those from Scenario 3. A list of optimized tasks was then exported, including the techniques and dates for the planned CMLs, the CMLs that should be removed or deferred, and suggestions for additional inspections during the next revalidation. In some instances, these additional inspections may require high-cost methods such as automated UT scanning, as shown in the increase in total cost between Scenarios 2 and 3.


Based on the pilot, the total value the refinery is expected to gain over the next five years from cost reduction and increased availability is nearly $800,000. The value consists of two primary elements:

  1. Cost Reduction of $400,000: The model helped the facility identify and remove more than 11,000 CMLs that added cost without adding value. The cost reduction of removing these CMLs is projected to save over $400,000 for the facility over the next five years.
  2. Improvement in Availability of 0.1%: The projected improved availability expected for Scenario 3 over the next five years is estimated at another $400,000 in increased production.
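The two value streams combine roughly as follows; the daily production margin below is an assumed figure chosen for illustration and does not come from the study:

```python
# Back-of-envelope sketch of how the two value streams combine.
inspection_savings_5yr = 400_000      # from removing ~11,000 non-value CMLs
availability_gain = 0.001             # 0.1% availability improvement
assumed_daily_margin = 220_000        # $/day of unit production (assumed)

extra_days = availability_gain * 365 * 5          # extra producing days, 5 yr
production_value = extra_days * assumed_daily_margin
total_value = inspection_savings_5yr + production_value
print(f"{extra_days:.2f} extra days -> ${production_value:,.0f} production; "
      f"total ${total_value:,.0f}")
```

Under this assumed margin, a 0.1% availability gain over five years is worth on the order of $400,000, which combined with the inspection savings is consistent with the roughly $800,000 total the pilot projects.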

The pilot revealed two critical insights. First, the pilot supported the hypothesis that the facility was overspending on the inspections within the unit. Additionally, the quantitative nature of this analysis provides the basis for identifying which CMLs can be confidently removed since they did not significantly impact the amount of uncertainty present in the unit’s performance.

Second, there are opportunities for the refinery to further impact risk and availability by identifying which assets continue to contribute the most to each. Adding inspections in this use case resulted in a projected availability improvement of 0.1% over five years. Additionally, by adding inspections to small areas that showed higher degrees of uncertainty due to a lack of data, the facility is projected to recognize an additional 6% reduction in risk.


With QRO, the refinery can quantify uncertainty to drive valuable inspections that provide sufficient coverage while optimizing inspection spending. This quantitative model will allow the refinery to define, justify, and scale future inspection plans across multiple sites.

The next step for the refiner is to conduct a deeper analysis into the specific damage mechanisms and expected corrosion rates to identify where local corrosion is a significant concern and may be occurring, but inspection methods and/or coverage is not sufficient to confidently find it.


For further information about how QRO can be applied to CML Optimization, click the images below to view the full PDFs of our white paper and article.

The Value of Quantifying Uncertainty

If you cannot quantify the uncertainty of when your assets will fail, you will never recognize the full value of your risk analysis. In this video, Ryan Sitton, Pinnacle’s Founder and CEO, walks through the impact of leveraging data to quantify uncertainty, decrease downtime, and better define whether an upcoming task should be a repair, replacement, or upgrade.

The Concept of Uncertainty/Combating Uncertainty

The concept of quantifying uncertainty is a novel approach for many industrial facilities. However, despite advancements in technology and risk analysis methodologies, there’s still uncertainty associated with how long an asset will operate before failing. One way to minimize this uncertainty is through monitoring. While taking thickness measurements, evaluating flow rates, and analyzing the contents of the fluids can help facilities identify the specific failure modes of an asset and reduce uncertainty, ultimately, the value of monitoring is limited.

The uncertainty of a specific asset impacts more than the life of the asset in question; it impacts the uncertainty of the entire system of assets connected to it. Many facilities have historically managed this type of data in silos, making it difficult to use in future analyses. As an industry, there’s an opportunity to better connect the impact of an asset’s data on the overall system so that facilities can plan tasks that will reduce spending, decrease downtime, and minimize risk more proactively.

So how do we apply the concept of uncertainty to an asset? Using the example mentioned in the above video, if we’re studying a specific piece of pipe, we need to understand what’s happening with it and the range of uncertainty associated with its failure. As we begin to narrow the degree of uncertainty by learning more about this pipe and its surrounding assets, we can better plan for the proactive activities that need to be incorporated into the plan.

How Can You Minimize Uncertainty?

One way to minimize uncertainty is through Lifetime Variability Curves (LVCs). An LVC is a data-driven model that leverages a facility’s data to estimate asset performance, predict failure, and quantify uncertainty. An LVC works similarly to a hurricane tracker: as the model collects more data in real time, its predictions of when assets will fail become more refined. This model helps facilities pinpoint the impact of individual data points on a system of assets to identify the activities that need to be completed to reduce risk.
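A minimal sketch of the hurricane-tracker analogy, assuming a linear thinning trend and invented thickness readings: refitting the trend as readings accumulate narrows the predicted window in which the wall reaches its minimum allowable thickness.

```python
import numpy as np

T_MIN = 0.200  # assumed minimum allowable wall thickness (inches)

def failure_window(years, thickness):
    """Return (earliest, latest) predicted year the wall reaches T_MIN,
    using roughly +/- 2-sigma bounds on the fitted corrosion rate."""
    (slope, intercept), cov = np.polyfit(years, thickness, 1, cov=True)
    slope_sd = np.sqrt(cov[0, 0])
    fast, slow = slope - 2 * slope_sd, slope + 2 * slope_sd  # both negative here
    return (T_MIN - intercept) / fast, (T_MIN - intercept) / slow

# Invented UT readings for one illustrative CML
years = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
thickness = np.array([0.500, 0.478, 0.462, 0.441, 0.418, 0.401])

early = failure_window(years[:5], thickness[:5])  # after five readings
late = failure_window(years, thickness)           # after all six readings
# Like a hurricane tracker's cone, the window tightens with each reading:
print(f"5 readings: {early[0]:.1f}-{early[1]:.1f} yr; "
      f"6 readings: {late[0]:.1f}-{late[1]:.1f} yr")
```

In an LVC, this narrowing is what turns raw monitoring data into a progressively sharper failure prediction that planning activities can be anchored to.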

Quantifying the Value of Your Inspections

Learn how two refineries quantified the uncertainty associated with different inspection techniques to confidently identify which inspections could be postponed, removed, or added to improve production.
Read more >

Reduce Your Uncertainty with Quantitative Reliability Optimization (QRO)

Quantitative Reliability Optimization (QRO) is a methodology that measures the uncertainty of your reliability data. Learn how this approach can help you identify the tasks that actually impact your facility’s performance.
Read more >

Supermajor Uses QRO to Predict 94.22% Availability and Associated HSE Risk

Learn how a facility modeled severe vibration and thinning scenarios to monitor and understand the impact of real-time data on the facility’s availability, risk, and cost.
Read more >

Quantifying the Value of Your Inspections

Inspectioneering Journal, March/April 2023 Issue

In this article, we explore two use cases in which a quantitative model helped facilities confidently identify which inspections could be postponed, removed, or added to improve production:

Quantifying Business Value of Targeting Specific Condition Monitoring Locations (CMLs) in a Reformer

A refinery leveraged quantitative modeling to quantify the value of its CML placements and optimize its budget on value-adding CMLs. This optimization yielded a high return on investment, equating to about $800,000 in total value gain while maintaining or improving asset risk and availability.

Quantifying Business Value of Targeted Inspections in a Flare Header

A refinery experiencing leaks in one of its flare headers leveraged a quantitative model to identify the top contributors to downtime by accounting for the uncertainty associated with different inspection techniques. This optimization resulted in the refiner recognizing a cost savings of $75,000 per flare header system, extrapolated across six units for a total cost savings of $375,000 and a projected 0.4% increase in availability over the next five years.

Meet the Authors

Curious how quantitative modeling can help you identify the value of your facility’s CML placements?
Reach out to our authors, Lynne Kaley or Siddharth Sanghavi.

The Value of Quantifying Uncertainty

You cannot maximize the value of your risk analysis without the ability to quantify the uncertainty of when an asset will fail. While monitoring various elements such as thickness measurements and flow rate can reduce uncertainty, ultimately, the value of monitoring is limited.

In this video, Ryan Sitton, Pinnacle’s Founder and CEO, walks through the impact of leveraging your data to better define whether an upcoming task should be a repair, replacement, or upgrade to reduce spending levels and downtime.

Cognite Product Tour 2023: Data-Driven Asset Performance Management


In the world of reliability and maintenance, facilities are constantly challenged with maximizing uptime and reducing downtime in a cost-effective way. Unfortunately, there are many challenges to this approach. Facilities often have multiple data repositories, lack data contextualization, and have critical insights locked into vendor-specific systems. As a result, work processes between fixed and non-fixed assets are disconnected, data is siloed and static, and data analysis is focused on the asset level.

Successful, data-driven asset management focuses on solving this industrial data problem. A data-driven approach to reliability extends asset life and maximizes labor productivity in a cost-effective way. In this webinar, Lewis Makin, Partner at Pinnacle, walks through a demonstration of an analysis platform that connects the data of both fixed and non-fixed assets into a system model, enabling business and reliability teams to better understand the current and future states of their facility.

Watch this webinar to learn how you can:

  • Implement and scale successful asset performance management strategies across your organization
  • Forecast availability and identify your top 10 contributors to downtime
  • Gain better visibility into which tasks have the most significant impact on your facility to prioritize your decisions more strategically
  • Review how the data of specific assets such as corrosion rates, economic risk, and health, safety, and environmental (HSE) risks impact the system
  • Predict future downtime and plan preventive maintenance with a model that updates in real time as more data is collected, driving more predictive analytics

Want to learn how you can drive future and predictive analytics?

A Data-Driven Approach to Sustaining and Improving Your MI Program

Pinnacle & Inspectioneering Webinar: A Data-Driven Approach to Sustaining and Improving Your MI Program

Whether you are a maintenance manager or a business leader, a data-driven approach to reliability can help you overcome the critical challenges of sustaining your mechanical integrity (MI) program. In this webinar, Michael Wallace and Sid Sanghavi, Partners at Pinnacle, provide the practical insights and tools you need to enhance the sustainability and performance of your MI and reliability programs. By the end of this webinar, you will have a better understanding of the importance of taking a data-driven approach to continuously enhance the value of your programs and will know how to start applying these concepts in your organization. The webinar will also include a discussion on how you can identify and prioritize improvement opportunities based on data-driven insights and stakeholder feedback, and how you can better monitor and communicate the effectiveness of program improvements using key performance indicators (KPIs), metrics, and feedback loops.

Want to learn more about how to improve and sustain your MI program?

Quantitative Reliability Optimization (QRO) Example Application: CML Optimization

How do you know which Condition Monitoring Locations (CMLs) add value to your facility? When many existing CMLs were initially placed more than 30 years ago, the industry knew very little about the nature of degradation, and there was no historical data to indicate the areas with the highest or most inconsistent corrosion. As technology has evolved and additional historical data has become more available, it’s apparent that many of these CMLs were not placed in the most strategic locations.  

Facilities have an opportunity to optimize their inspection programs. However, despite the plethora of data available today, many facilities cannot calculate the value of individual CMLs and need help identifying the specific CMLs that should be added or removed to achieve the optimal balance of inspection spending while ensuring adequate coverage. 

Among the many approaches developed to identify and place CMLs, Quantitative Reliability Optimization (QRO) is the only approach that quantifies uncertainty and uses it as a basis for the probability of failure. When applied to CML Optimization, QRO quantifies the impact of specific tasks on an asset’s POF to determine where inspections are the most effective in reducing uncertainty. As a result, facilities can confidently identify and remove inspections with little to no value, which is not achievable with current RBI approaches. 

This analysis, which can be done through Newton™, enables facilities to calculate risk more effectively, eliminate inspections that do not add value, and ultimately, optimize risk management strategies.  

Download the white paper below to learn more about how QRO can elevate your approach to CML Optimization.

Data Validation: Are We Data-Rich, but Information-Poor? Fall 2022 “Meeting of the Minds”

*As seen in Inspectioneering Journal’s January/February 2023 issue.

Back in November, Inspectioneering and Pinnacle had the privilege of co-hosting our 10th “Meeting of the Minds” (MOTM) roundtable discussion; this time in one of my favorite cities in the world, New Orleans. This bi-annual meeting has consistently brought together a select group of leading mechanical integrity (MI) experts to discuss pertinent topics related to fixed equipment reliability and share their personal experiences and opinions. As with previous meetings, participants came from various sectors of the industry, including oil refining, petrochemicals, offshore production, and chemical processing.

Over the years, we’ve always tried to share key takeaways from these meetings with our readers because we believe the insights shared could greatly benefit the industry at large. Previous recap articles have summarized discussions on corrosion under insulation (CUI) programs, emerging inspection technologies, integrity operating windows (IOWs), corrosion control documents (CCDs), risk-based inspection (RBI), mechanical integrity project hit lists, and most recently, data collection and analysis.

This discussion focused more on Data Validation and was prompted by the following statement and question: at no point in history have we had access to more data about our assets than we do right now; but is it really helping or are we simply “data rich, information poor?” For over an hour, the participants openly discussed the importance of having clear and consistent definitions of what constitutes “good data,” as well as effective processes for identifying and addressing data that does not meet those standards. Additionally, they discussed the benefits of using automated systems for data validation and the challenges of working with large volumes of data. The participants also emphasized the importance of collaboration and sharing knowledge among industry professionals to drive continuous improvement and advancement in a rapidly changing industry.

Clear and Consistent Definitions of Good Data

Data is a general term used to describe the myriad of information being gathered, organized, analyzed, and used to make critical decisions in your facilities. But not all data is “good data.” During the discussion, one of the participants emphasized the importance of having clear and consistent definitions of what “good data” looks like. He noted that without a solid understanding of what constitutes valid data, organizations can struggle to effectively use the data they have available. This can lead to confusion and inconsistency in how the data is leveraged, and can ultimately result in incorrect or unreliable decisions.

Having clear and consistent definitions of valid data is critical to ensuring that the data is accurate and reliable. By defining specific criteria that data must meet in order to be considered valid, organizations can better ensure that the data they are using is of high quality. This can include things like specifying acceptable tolerances for measurement errors, defining the acceptable sources of data, and establishing protocols for verifying the accuracy of the data. Clear and consistent definitions can also help ensure that bad data is properly screened out. By defining specific criteria for what constitutes valid data, and establishing clear guidelines for how to handle data that does not meet those criteria, organizations can improve the accuracy and reliability of the data they use, and ultimately make better, data-driven reliability decisions.

Automated Systems for Data Validation

The participants also discussed the benefits of using automated systems for data validation, noting that automated systems can help to ensure consistency and reduce the potential for human error, which can improve the accuracy and reliability of the data. Automated systems can be programmed to apply the same criteria and rules to all data, regardless of who collected it or how it was entered. Moreover, using automated systems for data validation can help identify potential errors or inconsistencies in the data, as they can be programmed to check the data for things like completeness, consistency, and accuracy, and can alert users to potential issues that need to be addressed. This can help to identify and correct errors before they impact decision-making, and can ultimately improve the quality of the data.
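A minimal sketch of the kind of rule-based checks such systems apply — the field names and tolerances below are assumptions for illustration, not any specific product’s schema:

```python
def validate_thickness_reading(reading, previous=None):
    """Return a list of issues found in one CML thickness reading (inches)."""
    issues = []
    # Completeness: required fields must be present and non-empty
    for field in ("cml_id", "thickness", "date"):
        if reading.get(field) in (None, ""):
            issues.append(f"missing {field}")
    t = reading.get("thickness")
    # Accuracy: physically plausible range for this (assumed) service
    if t is not None and not (0.05 <= t <= 3.0):
        issues.append(f"thickness {t} outside plausible range")
    # Consistency: wall thickness should not grow between inspections
    # beyond an (assumed) measurement tolerance of 0.010 in
    if previous is not None and t is not None and t > previous["thickness"] + 0.010:
        issues.append("thickness increased beyond measurement tolerance")
    return issues

print(validate_thickness_reading(
    {"cml_id": "CML-104", "thickness": 0.512, "date": "2023-06-01"},
    previous={"thickness": 0.471},
))  # flags the implausible growth since the prior reading
```

Because the same rules run against every record regardless of who entered it, checks like these catch completeness, range, and consistency problems before the data reaches decision-makers.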

While today’s software and automated systems can do a lot of the heavy lifting, there is still a need for qualified personnel who can digest incoming data and make decisions based on it, especially now that facilities must marry existing conventional data with emerging technologies like photoimagery, drones, and other new, unconventional data sources. One participant admitted that “one thing his inspection and reliability team has found they are woefully underprepared for is to review and make decisions based on these new and emerging datasets coming in.”

Challenges of Working with Large Volumes of Data

The participants also discussed the challenges of working with large volumes of data, noting that managing and organizing large volumes of data can be complex and time-consuming, and that organizations need to have effective systems in place to ensure that the data is used effectively. Many facilities are collecting billions of data points, but their management systems are often not mature enough to fully leverage the data they have or identify the ROI they’re getting. “An important consideration is forethought,” said one participant. “For a lot of the data being collected, there isn’t enough forethought for how it is going to be used. How are we using it? How are we going to be using it in the future? And is it even needed?”

One of the key challenges of working with large volumes of data is the need for robust data management systems. These systems need to be able to handle the complex relationships between different types of data, and need to provide users with the ability to quickly and easily find the information they need. This can require sophisticated systems that can support the needs of different users across the organization, and can provide the right level of access and control to ensure that the data is used effectively.

Another challenge of working with large volumes of data is the need for clear protocols for data entry and verification. To ensure that quality data can be leveraged by all interested parties at your site, it is important to have clear procedures in place for all aspects of the data collection process, including the training and supervision of personnel, the use of standardized measurement instruments, and the procedures for verifying the accuracy of the data. This can help to ensure that the data is properly collected and logged into your management system, and can ultimately improve the quality of the data long term.

Data overload is also leading to lots of quality data being lost or left behind, but it’s important to remember that “old data can be used to ensure good decisions are still being made,” said one participant. “It’s not a one-time thing. We’re constantly trying to find clever ways to slice and dice our data using statistical analysis to see if we can wring any more value out of it,” another added.

Speaking on data “sitting on the shelf” with untapped potential, one participant said he thinks “the data we have in our RBI systems is underutilized. Our organizations tend to look at it as just the inputs to get the output of the inspection plan, not to better understand the condition of the individual pieces of equipment, and what’s driving those risks, and what can be done about it from a reliability standpoint, a project standpoint, and a budgeting standpoint. You could even use that same information to benchmark your facilities against each other. There’s just so much we can do with that information, but right now it seems like it’s just being primarily used to drive compliance tasks.”

Collaboration and Knowledge Sharing

During the discussion, the participants emphasized the importance of collaboration and sharing knowledge among industry professionals to drive continuous improvement and advancement in a rapidly changing industry. They discussed the benefits of sharing data and best practices within the organization and of networking with other industry professionals to learn from their experiences and share insights.

One of the key benefits of collaboration and knowledge sharing is that it helps organizations learn from each other and improve their processes and systems. By sharing data and best practices, organizations can identify opportunities for improvement in their own operations, driving continuous improvement and innovation and ultimately helping them advance and grow.

Networking with industry peers also provides valuable opportunities to learn from others and share knowledge and experiences. By attending conferences and workshops and participating in industry forums and online communities, organizations can learn about the latest trends and developments and gain (and give) valuable insight into the challenges and opportunities facing the industry. This helps organizations stay up to date with new strategies and technologies, and also positions them to influence the development of codes and standards that are fair and reasonable for all stakeholders, especially with respect to governance of data collection and transmission practices.


This MOTM discussion emphasized the importance of data validation in the context of mechanical integrity in the oil and gas industry. The participants discussed the need for clear and consistent definitions of good data, the benefits of using automated systems for data validation, and the challenges of working with large volumes of data. They also emphasized the importance of collaboration and sharing knowledge among industry professionals to drive continuous improvement and advance the industry.

Everyone acknowledged that data validation is a critical component of ensuring the accuracy and reliability of the data that organizations use to drive decision-making. By implementing effective data validation processes and sharing knowledge and best practices, organizations and the industry at large can continue to improve and advance.

Inspectioneering and Pinnacle would like to thank all of the participants for sharing their insights and experiences. We sincerely appreciate your participation in these discussions and your dedication to educating and advancing the Inspectioneering community.

Case Study: Improvements in a Piping Program Yield a 3X ROI

Learn how we implemented a piping reliability program that turned a compliance project into a 3X Return on Investment (ROI) for a refiner. The implementation of this program was an important, calculated step in the operator’s evolution toward an effective RBI program.


  • Challenge: This refiner lacked a standardized inspection program and needed to proactively identify and mitigate its loss of containment (LOC) risks to meet compliance.
  • Solution: Pinnacle implemented asset strategies for the operator’s piping across four sites, with appropriate consideration of the operations and maintenance practices of the piping.
  • Result: This refiner recognized a 3X ROI and safely reduced the number of CMLs by 27.4%.


The impact of LOC events can range from a loss of profit to serious Health, Safety, and Environment (HSE) consequences. A strong, integrated mechanical integrity (MI) program can help facilities satisfy compliance regulations, improve reliability performance, and prevent LOC events from occurring. Having standardized, scalable asset strategies that strategically target inspections will help facilities proactively identify potential risks, understand those risks and drivers, and prevent LOC events before they occur.

The Challenge

This refiner experienced two major LOC events across its sites, which resulted in substantial expenses and compliance violations. One of the events was caused by a leak in a section of insulated carbon steel piping that had thinned over time due to corrosion. As a result, the refiner was required to implement inspection strategies across various piping classes to prevent future leaks.

Before the events, the refiner struggled to proactively identify and mitigate LOC risks and lacked an integrated, holistic MI program. Additionally, there was no formal system in place to flag assets that violated the acceptable range for process operating conditions. Further, some sites relied more heavily on the knowledge of experienced materials engineers and inspectors than others, and as a result, the quality of document organization varied by site.

To address these challenges and meet compliance, the operator needed to develop and implement a series of inspection strategies across its fixed equipment and piping that would enable employees to proactively identify, manage, and mitigate LOC risks at a system level and satisfy Recognized and Generally Accepted Good Engineering Practices (RAGAGEP). Specifically, these strategies needed to include a defined set of integrity operating windows (IOWs), change management criteria, and processes that would guide when to act on assets before they violated the acceptable process condition ranges.

Initially, the refiner attempted to implement these strategies on its own and quickly realized it needed additional resources and MI expertise. Additionally, some site leadership worried that they would not have the necessary resources to keep up with the increased level of inspection required when the initial and recurring inspection intervals started to overlap. Ultimately, Pinnacle was brought in to implement a standardized program that would strategically target areas to inspect and could be replicated across all sites.

Pinnacle’s Solution

As part of the solution, the Pinnacle team worked with the refiner to create a set of corporate piping standards. These standards, which were rolled out across four sites, focused on improving the operation and maintenance of the operator’s fixed equipment and preserving the piping’s pressure boundaries. The primary goal of the implementation was to provide the refiner with standardized drawings that could be leveraged by multiple sites and disciplines, including inspection, process, design, and turnaround planning.

The scope of the project included the following:


The objective of this step was to create a solid, system-level foundation for the inspection strategies on an expedited timeline. During this stage, the team developed a deep understanding of the operator’s piping, operational elements, and equipment by gathering and systemizing critical asset data from process flow diagrams (PFDs), piping and instrumentation diagrams (P&IDs), and process data.

System-Level Damage Mechanism Review (DMR) & IOWs

After all critical data was gathered from existing documentation, the Pinnacle team identified the damage mechanisms and associated inspection strategies that could apply to those systems. Each system had specific damage mechanisms based on stream chemistry, materials of construction, and operating conditions. In addition to identifying whether the damage mechanism was local or generalized, the strategies specified whether the failure mode was cracking or thinning and the type of LOC event that was likely to occur if the asset failed.

After the system-level damage mechanisms were assigned, the team implemented a series of IOWs to help the facility identify potential damage mechanisms and the associated parameters that cause the damage.

With an IOW system in place, operators receive an alert when a parameter crosses a specified operating threshold and can then adjust the facility’s conditions to bring the parameter back into the normal operating range, if feasible, or take other appropriate action. The IOWs were prioritized by level of urgency and classified by the time frame and magnitude of potential failure. Some parameters, like sulfur content or temperature, may be inherent to the process; regardless, remedies will be identified and recommended as needed to attain the desired reliability.
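
The exceedance-alerting behavior described above can be sketched in a few lines. This is a minimal illustration only; the IOW fields, limits, and priority labels below are assumptions for the example, not any site's actual configuration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class IOW:
    """One integrity operating window for a monitored process parameter."""
    parameter: str
    low_limit: float
    high_limit: float
    priority: str  # urgency classification, e.g. "critical" or "standard"

def check_iow(iow: IOW, reading: float) -> Optional[str]:
    """Return an alert message if the reading falls outside the window."""
    if reading < iow.low_limit or reading > iow.high_limit:
        return (f"{iow.priority.upper()} IOW exceedance: {iow.parameter} = "
                f"{reading} (window {iow.low_limit}-{iow.high_limit})")
    return None  # inside the normal operating range; no action needed

# Example: a 250-400 degree F window on a circuit outlet temperature
temp_iow = IOW("outlet temperature", 250.0, 400.0, "critical")
alert = check_iow(temp_iow, 425.0)  # reading above the window triggers an alert
```

In practice, checks like this run continuously against process historian data, with the priority field driving how quickly operators must respond.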


The objective of this step was to conduct a more detailed analysis of the operator’s piping. After identifying the damage mechanisms and implementing IOWs at the system level, the team took a deeper dive into specific circuits, which enabled a deeper and more accurate understanding of specific process conditions and their potential impact on materials of construction and damage mechanisms.

For this project, the established systems were further broken down into circuits based on relevant considerations such as temperature, fluid velocity, and piping material.

ISO Correlation

The next task involved taking the circuits from the previous circuitization step and translating them to the inspection drawings, allowing inspectors to identify the start and stop points in the field and correlate CMLs to the circuits.  Additionally, during circuitization and ISO correlation, the team validated, defined, and marked deadlegs of potential concern within the systems.

Circuit-Level DMR

Damage mechanisms for each circuit were identified based on the materials, design and operating properties, and practices. The modes of damage, e.g., localized versus generalized corrosion, cracking, metallurgical, mechanical, creep, and brittle fracture, were considered along with their potential damage severity.

CML Optimization

Next, the Pinnacle team completed CML Optimization, which included the identification of potential CMLs and selected CMLs:

  • Potential CMLs (PCML): Leveraging the damage mechanisms identified during the circuit-level DMR and IOW review, the team identified every potential CML location required by the associated inspection strategy.
  • Selected CMLs: Once all PCMLs were placed, the team totaled the counts and applied inspection requirement guidance, which leverages pipe class and damage mechanism susceptibility.
  • CML Numbering and Datamining: Once selected CMLs were finalized, the team numbered them per circuit according to flow order, which is important for understanding and using the system dynamics. Key information was populated for each selected CML, including damage mechanism morphology; characteristic locations of damage (e.g., weld, HAZ, base material, elbows, 12 o’clock position, ID or OD, extrados versus intrados); diameter; component type; pipe schedule; nominal thickness; and corrosion allowance. This information was used to determine the proper inspection methods (e.g., radiographic or ultrasonic), which were captured in the CML nomenclature consistent with the refinery’s inspection data management system (IDMS) and piping isometrics. When a selected CML coincided with an existing CML, its information was confirmed for accuracy or flagged for update.
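
The flow-order numbering step above can be sketched as a simple sort-and-label routine. The record fields and ID format below are illustrative assumptions, not the refinery's actual IDMS schema.

```python
def number_cmls(circuit_id: str, cmls: list) -> list:
    """Sort selected CMLs by position along the flow path and assign
    circuit-scoped identifiers (e.g., 'C-101-001')."""
    ordered = sorted(cmls, key=lambda c: c["flow_position"])
    for i, cml in enumerate(ordered, start=1):
        # Zero-padded sequence number keeps IDs sortable in the IDMS
        cml["cml_id"] = f"{circuit_id}-{i:03d}"
    return ordered

# Two selected CMLs on a hypothetical circuit, entered out of flow order
cmls = [
    {"flow_position": 2, "component": "elbow", "location": "extrados",
     "nominal_thickness_in": 0.280, "inspection_method": "UT"},
    {"flow_position": 1, "component": "weld", "location": "HAZ",
     "nominal_thickness_in": 0.280, "inspection_method": "RT"},
]
numbered = number_cmls("C-101", cmls)
# The weld comes first in flow order, so it receives the first ID
```

Numbering by flow order, rather than by order of data entry, means adjacent IDs correspond to physically adjacent locations, which makes thickness trends along a circuit easier to read.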

Inspection, Testing, and Preventive Maintenance (ITPM)

The team conducted a high-level review of the elements required for inspection of each circuit. During this review, the team detailed crucial information, nondestructive evaluation (NDE) types, selected CML counts, and deadlines/inspection intervals, which gave inspectors a chance to review the drawings and provide feedback. The team also aligned its inspection efforts so that previously planned activities could be expanded to acquire readings, and groups of CMLs could be evaluated together to avoid large numbers of CMLs coming due for inspection at the same time.

Asset Strategy

A formal asset strategy report was delivered during the project. These asset strategies detailed the resources needed to complete the work and any potential issues that could occur. Most importantly, they illustrated how inspections would be affected by changes in inspection methods and pipe class. The report also included summaries of the new CMLs and NDE inspection methods.

IDMS Update

The Pinnacle team then uploaded the information into the IDMS. This step helps the team better maintain its asset history and creates standardization across multiple sites. The database includes all validated information and assessments, along with previous inspection results and summaries, and alerts facility leadership to upcoming inspection dates.


Evergreening is one of the most important aspects of an MI program. During this stage, the Pinnacle team worked with the site’s employees to sustain the MI program over time. The evergreening phase consists of helping the sites manage any changes and evaluating the above steps for changes that need to be implemented. For example, one site may need to replace a carbon steel circuit with stainless steel. For that change to be implemented smoothly, the team would review the existing damage mechanisms and susceptibilities and likely need to update damage mechanism assignments, exclusions, and susceptibilities. The team would then rework the placement of potential CMLs, check the selection requirements against the new information, rework the selection, update the ISOs with the new CML locations, names, and other critical fields, and ensure these updates are captured in the IDMS.



The program implementation yielded a 3X ROI for the operator. The inspection strategies cost approximately $100MM to implement, and the resulting deliverables identified over 200 integrity threat recommendations (ITRs). Because these ITRs were identified before causing failures, the operator avoided the significant costs that would have been incurred had the piping failed. The operator calculated that if 50% of these threats had resulted in failure events, the probable cost of incidence (COI) would have totaled hundreds of millions of dollars.

Additionally, the program implementation established a systematic inspection strategy that can be replicated across additional sites and equipment types. The standardization of inspection strategies at these four sites reduced the operator’s CML count by 27.4% and enabled the operator to proactively identify, manage, and mitigate LOC risks, helping the operator meet compliance.


The implementation of piping inspection strategies helped the refiner take a step toward a more integrated, holistic MI program. With these new strategies in place, the operator can better focus its approach to risk management across various classes of piping and ensure that these sites meet compliance.

Failure Modes and Effects Analysis (FMEA) In Newton™

How to conduct Failure Modes and Effects Analysis (FMEA) in Newton™

Production facilities are large and complex. To optimize performance and manage risk, the operations and maintenance teams need to understand each piece of equipment’s function, how it can fail, and the consequences of that failure. Only then can they build asset strategies that minimize the impacts of equipment failure. To accomplish this difficult task, the industry best practice is to complete a Failure Modes and Effects Analysis (FMEA). Traditionally, this is done through on-site interviews with subject matter experts (SMEs) and maintenance personnel who are familiar with site operations. During these interviews, the SME is asked a series of questions to subjectively determine failure modes, mechanisms, likelihood of failure, and severity of the event. The information gathered is then used to design an asset strategy that addresses the findings of the FMEA. Although this approach has some benefits, it is primarily subjective and heavily reliant on collective or passed-down knowledge. It is also time-consuming and can lead to inconsistent, ineffective strategies.

Alternatively, Newton™ offers a quantitative approach to FMEA. Newton™ connects every facet of reliability and is the only software application in the world that facilitates the Quantitative Reliability Optimization (QRO) methodology. The analysis starts by defining the asset register and creating a facility model that is used to calculate the production losses that can result from equipment failure. The Newton™ framework utilizes asset templates (fixed and rotating equipment) and walks the user through customizing each asset’s function, components, and failure modes. Leveraging data from computerized maintenance management systems (CMMS), process historians, and production loss accounting, the failure modes, probability of failure, and consequences are quantified into statistical distributions, eliminating subjectivity. Lastly, this comprehensive quantitative model is calculated and used to assess criticality. From there, optimal asset strategies can be implemented from scratch or customized from available templates. Building asset strategies quantitatively yields results that are more specific and consistent than relying on qualitative methodology.
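
To illustrate what "quantifying probability of failure from maintenance history" can mean in the simplest case, consider an exponential time-to-failure model fit to CMMS failure counts. This is a minimal sketch under that stated assumption; Newton™'s actual distributions and fitting methods are proprietary and more sophisticated, and the example only shows the idea of replacing a subjective likelihood score with a data-derived one.

```python
import math

def failure_rate(num_failures: int, operating_hours: float) -> float:
    """Maximum-likelihood failure rate (failures per hour) from history."""
    return num_failures / operating_hours

def prob_failure(rate: float, horizon_hours: float) -> float:
    """P(at least one failure within the horizon), exponential model."""
    return 1.0 - math.exp(-rate * horizon_hours)

# Example: CMMS records show 3 failures over 5 years of run time for a pump
rate = failure_rate(3, 5 * 8760)
p_1yr = prob_failure(rate, 8760)  # probability of failure in the next year
```

Even this crude model produces a number that can be compared across assets and recomputed as new work orders arrive, which a subjective 1-to-5 likelihood score cannot.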

Moreover, the model can now be utilized to dynamically learn and update the probability of failure using condition-based monitoring data. This creates an evergreen model that automatically responds to new data and threats. It also allows users to quantify the effectiveness of existing maintenance strategies and provides a systematic approach to continually improving the reliability program.
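
One standard way to "dynamically learn" a failure rate as new evidence arrives is a conjugate Bayesian update: a Gamma prior on the rate of an exponential/Poisson failure model. The sketch below is a stand-in for Newton™'s proprietary updating, shown only to make the evergreen-model idea concrete; the prior parameters are illustrative assumptions.

```python
def update_rate(alpha: float, beta: float,
                new_failures: int, new_hours: float):
    """Gamma(alpha, beta) prior on the failure rate, updated with new
    observations. The posterior mean alpha/beta is the refreshed estimate."""
    alpha += new_failures   # shape grows with each observed failure
    beta += new_hours       # rate grows with accumulated exposure time
    return alpha, beta

# Prior equivalent to 2 failures over 20,000 hours of historical run time
alpha, beta = 2.0, 20_000.0

# A year of condition monitoring confirms one degradation event in 8,760 hours
alpha, beta = update_rate(alpha, beta, 1, 8_760.0)
posterior_mean_rate = alpha / beta  # updated failures-per-hour estimate
```

Because each update only adds counts and hours, the estimate refreshes cheaply every time new condition data lands, which is what keeps the model evergreen rather than a one-time snapshot.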

What is an FMEA?

A Failure Modes and Effects Analysis (FMEA) is a foundational process used to gather data in reliability programs such as Reliability Centered Maintenance (RCM) or QRO. It is a step-by-step method used to pinpoint the functional requirements of an asset, system, or unit.

What Makes Newton™ Different?

Conducting an FMEA in Newton™ with quantitative methodology and recommendations provides several benefits, including:

  • A more accurate criticality ranking
  • Cost savings due to the reduction of overly conservative recommendations
  • Less time required from site SMEs
  • Faster implementation through templating and data-driven analytics
  • Faster time to value
  • Improved production performance
  • A dynamic, learning model to keep recommendations up to date
  • A consistent approach used for all assets, rotating and fixed

To understand more about conducting FMEAs in Newton™ or creating a data-driven reliability program, schedule a discovery call.