A Day in the Life of a Process Engineer: Looking Back and Looking Forward with Self-Service Analytics
Opinion


  • By Mike Malone, Process Engineer, Toray Plastics, and Eduardo Hernandez | April 22, 2021

In nearly 26 years of process engineering work at Toray Plastics (America), with almost 20 of those years in direct support of manufacturing on rotating 12-hour shifts, my team and I have faced and overcome many challenges. While many of those challenges were related to process engineering and therefore technical in nature, we also had to address broader challenges within manufacturing. In doing so, the knowledge and experience we gained have greatly benefited our team and the company as a whole.

In my current role, I’m one step removed from the actual manufacturing floor. Today, my projects and day-to-day work support manufacturing (uprate, productivity, and yield) and involve far more oversight than if I were down in the trenches, so to speak. When I was asked for my thoughts on what “a day in the life of a Process Engineer” is like, I had a chance to reflect on the main challenges I encounter on a day-to-day basis. I boiled them down to three important areas: Communication, Process Control Management, and Data Analytics.

The Process Manufacturing Communication Challenge

If you work in industrial manufacturing, you know about the challenges of communicating across teams and departments. It can be daunting to keep every department that directly supports manufacturing abreast of all relevant process information related to Safety, Quality, and Production. To accomplish this, our teams hold daily communication meetings in the mornings and afternoons. All groups attend, including management and supervisors from Safety, Production, Maintenance, Engineering, Technical/Product/Process Engineering, Quality, and Scheduling. During these meetings, we adhere to a preset agenda, starting with Safety Review, followed by Quality, and then Production, and we complete the meeting in 20-25 minutes. Major issues are tabled for smaller groups: accountability for follow-up and reporting is assigned to each group, and flexibility is provided in the allocation of resources and production schedule planning. The path forward and contingency plans are also discussed. This information sharing depends directly on the accurate and timely reporting of process manufacturing information.
As you know, human distraction and error are always possible when maintaining complete records of process information and reports. It is probably an understatement to say that I have often felt frustrated searching for information through an endless thread of messages and emails, only to find contradictory information that exacerbates the underlying issue. Self-service analytics has helped improve information sharing and communication, although its capacity in this area is not commonly touted.

Lately, we’ve started incorporating process information from our communication meetings into our self-service analytics platform, so all users have access to the same information whether they attended or not. We are only at the beginning of fully using self-service analytics to streamline communication and information sharing, but it is a goal we are actively working towards.

The Process Control Management (PCM) Challenge

The tenure of both engineers and operators is generally not “full career” – meaning these personnel don’t spend their whole careers at one company. Therefore, their knowledge and skills must be captured, documented, and made available for all newcomers to learn. To address this knowledge-sharing challenge, we use Process Control Management (PCM).
PCM has several objectives:

  • Job training and skill maintenance.
  • Promoting exchanges between all departments.
  • Creating efficient and solid teamwork.
  • Establishing measurable and repeatable equipment conditions.
  • Maintaining and expanding the process know-how library.
  • Promoting rapid and organized access to process information, performance metrics, and equipment status.

PCM’s goals consist of establishing “Deeper Thinking” based on “Precise Analysis”, effectively using available knowledge, and progressing more quickly by taking advantage of innovation through observations and breakthroughs.

I know the importance of standardizing the way engineers address different process issues. So, as with any manufacturing process, we needed to establish a method for root cause analysis and troubleshooting. Our method was strongly based on critical review of past events and “what-ifs.” With a defined method across departments, we were able to foster confidence in our troubleshooting strategy and establish consistency. This method also gave less experienced supervisors and engineers the authority and capability to execute a predetermined plan of action.

That said, we have now been using self-service analytics for three years, and it has greatly improved the efficiency of root cause analysis and troubleshooting. Its functionality enables process experts to answer everyday questions about process data, and it shines in two main areas. First, self-service analytics gives engineers a tool to perform a correlation analysis within seconds, without relying on support from a data scientist (or someone from a specialized group) who might not know all the intricacies of the manufacturing process. Second, it gives engineers a quick way to visually compare process data from different time periods and to get basic statistics that help them understand why certain problems, or variations in product quality, could have developed during manufacturing. Whether through a correlation analysis that finds the precursor of a process upset or a statistical/layering analysis of process data between different time periods, self-service analytics helps solve pressing process issues that need quick resolution. It also maximizes productivity and improves the quality and safety of the operation. Consequently, root cause analysis and troubleshooting that would have taken many hours or even days can now be completed in much less time, increasing overall efficiency and profitability.
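
For readers curious what such an analysis looks like under the hood, here is a minimal Python/pandas sketch of the two workflows described above. It is illustrative only, not TrendMiner’s implementation; the file name, tag names, and date ranges are all hypothetical.

```python
import pandas as pd

# Hypothetical historian export: a CSV of timestamped sensor tags.
# The file name and column names (e.g., "quality_index") are assumptions.
df = pd.read_csv("historian_export.csv", parse_dates=["timestamp"], index_col="timestamp")

# 1) Correlation analysis: which tags move most strongly with the quality measure?
correlations = (
    df.corr()["quality_index"]
    .drop("quality_index")
    .sort_values(key=abs, ascending=False)
)
print("Tags most correlated with quality_index:")
print(correlations.head(5))

# 2) Time-period comparison: basic statistics for a good run vs. an upset run
good_run = df.loc["2021-01-10":"2021-01-12"]
upset_run = df.loc["2021-02-03":"2021-02-05"]
comparison = pd.DataFrame({
    "good_mean": good_run.mean(),
    "upset_mean": upset_run.mean(),
    "good_std": good_run.std(),
    "upset_std": upset_run.std(),
})
print(comparison)
```

The point of a self-service tool is that the engineer gets this kind of answer interactively, without writing the code themselves.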

The Data Analytics Challenge

The data analytics challenge is about getting more out of the process data being captured: understanding whether a data trend is normal and whether the behavior has happened before. The recording and analysis of process data has always been a staple of effective process engineering. In particular, recording all available data to benchmark the stable running of a product is a high priority, and this historical data set is extremely helpful in troubleshooting process upsets in subsequent production runs. The digitization of process data in our plant over the past eight years has taken process engineering to another level. However, the sheer amount of captured data quickly became overwhelming and, as a result, was under-utilized. We solved this challenge with self-service analytics, the most significant outcome being that we could answer two of the most important process engineering questions: 1) Is this normal? and 2) Has this happened before?

Our process experts can now answer these questions every day. We have gone from paper chart recorders and manually collected “point-in-time” data to full digitization of process data in the data historian, Data Collector Expansion, and on to self-service industrial analytics. In recent years, many companies have made the leap into this data analytics realm not only because they need to closely monitor current process conditions but also because they have realized that the historical data in their historians is digital gold waiting to be mined.
Simply uncovering that historical data and making it accessible to engineers for in-depth analysis instantly gives them a new perspective on the best, proven method (because data doesn’t lie) to ensure consistent quality and maximize productivity. I would add that such historical data is not meant to highlight deficiencies in past operation; rather, it offers a rich place to dive in and search for the answers to the two questions introduced earlier: 1) Is this normal? and 2) Has this happened before?
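
To make the second question concrete, below is a conceptual Python sketch of one simple way a “has this happened before?” search can work: sliding a recent window of data across the historian archive and ranking past windows by similarity. This illustrates the idea only; it is an assumption-laden stand-in, not how TrendMiner’s pattern search is implemented.

```python
import numpy as np
import pandas as pd

def find_similar_episodes(history: pd.Series, query: pd.Series, top_n: int = 3):
    """Slide the recent 'query' window across the historical series and return
    the start times of the most similar past episodes, ranked by Euclidean
    distance between z-normalized windows (a basic form of pattern search)."""
    w = len(query)
    q = ((query - query.mean()) / (query.std() or 1.0)).to_numpy()
    values = history.to_numpy()
    matches = []
    for start in range(len(values) - w + 1):
        window = values[start:start + w]
        z = (window - window.mean()) / (window.std() or 1.0)
        matches.append((float(np.linalg.norm(z - q)), history.index[start]))
    matches.sort(key=lambda m: m[0])
    return matches[:top_n]

# Example with hypothetical data: does the most recent hour of a temperature
# tag resemble any earlier episode?
rng = pd.date_range("2021-01-01", periods=1000, freq="min")
temp = pd.Series(np.sin(np.linspace(0, 50, 1000)) + np.random.normal(0, 0.1, 1000),
                 index=rng)
recent = temp.iloc[-60:]
for distance, start_time in find_similar_episodes(temp.iloc[:-60], recent):
    print(f"Similar episode starting {start_time} (distance {distance:.2f})")
```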

Another challenge I’ve seen over the years is the barriers that sometimes make it harder to access process data and other sources of information scattered across different databases or spreadsheets. This is another area where self-service analytics plays a key role. It democratizes data across the company and between different data sources, getting this information into the hands of all stakeholders. The result is a more complete and accessible source of information that helps process experts address the data analytics challenge.

Data Analytics in Use: Scale Hopper Rate vs Output Rate Use Case

A recent example shows how we used self-service industrial analytics (TrendMiner) to solve a recurring problem involving our raw material scaling process. There have been situations when the rate of raw material preparation dropped below the production output rate, resulting in undesirable process changes, line speed reductions, and, in the worst cases, production downtime. Raw material preparation is a two-step process: batch scaling from multiple silos followed by vacuum drying. Batch preparation is automated in most cases, with the batch cycles repeating over and over. When Production realizes that batches are taking longer than usual relative to demand, the drying process is the first place to look to speed things up. However, our data analytics showed that even small recurring delays in the scaling process have a compounding downstream impact on getting the batches dried in time to meet production demand.

From the scaling and output rates available in the data historian from our plant manufacturing logs, we were able to quickly create a calculated “If/Else formula tag” in TrendMiner to compare the scaling rate to the output rate (Figure 4). This formed the basis for a process monitor in TrendMiner that sends an alert to Production if the output rate starts to exceed the scaling rate. When an alert goes out, Production can investigate and take the appropriate action before the situation gets out of control (Figure 5). This is an excellent example of the power and efficiency of self-service analytics.
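
As a rough illustration of the logic behind that formula tag and monitor, here is a minimal Python sketch. The CSV file and the tag names "scaling_rate" and "output_rate" are hypothetical, and the print statement stands in for TrendMiner’s email/text alerting.

```python
import pandas as pd

# Hypothetical historian export of the two rates, indexed by timestamp
rates = pd.read_csv("rates.csv", parse_dates=["timestamp"], index_col="timestamp")

# Equivalent of the If/Else formula tag: 1 when output outpaces scaling, else 0
rates["output_exceeds_scaling"] = (rates["output_rate"] > rates["scaling_rate"]).astype(int)

def notify_production(timestamp, output_rate, scaling_rate):
    """Stand-in for the monitor's email/text alert to the Production team."""
    print(f"ALERT {timestamp}: output rate {output_rate:.1f} exceeds "
          f"scaling rate {scaling_rate:.1f}")

# Fire an alert for every interval where the condition holds
for ts, row in rates[rates["output_exceeds_scaling"] == 1].iterrows():
    notify_production(ts, row["output_rate"], row["scaling_rate"])
```

In practice, the value of the self-service approach is that the engineer configures this comparison and the alert in the analytics platform in minutes, with no custom code and no request to IT.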

Concluding Thoughts

Looking back on my work progression over the past 26 years, I have seen an organic evolution of “the day in the life of a process engineer” and what it takes to work in industrial manufacturing. Nowadays, companies understand the importance of digitalizing their factories and processes in order to stay competitive and survive in the current market. And this brings the challenge of how best to use all of the captured data. When discussing the adoption of new technology and a new way of working towards digitalization maturity, you often hear, “We’ve always done it this way and it’s working, so why change?” And that may very well be true; there is no guarantee of future success. However, companies taking this approach will be left behind. Competitive industrial manufacturing companies are waking up to the substantial benefits, efficiency, and sustainability of self-service industrial analytics and are adopting it to improve and optimize their workflows and processes.

The benefits of bringing TrendMiner self-service analytics into Toray have been obvious almost from Day 1. The environment inside TrendMiner is very intuitive, which makes it easy to bring new users up to speed quickly. Its Total Data Access is superior to any other tool we have used before; the ability to navigate and search six years of process data in seconds is unrivaled. Additionally, the self-service setup of alerts saves time and money: there is no need to submit requests for updates to the plant manufacturing systems. Users can set up their search criteria, activate a monitor trigger, and have email and text alerts sent to the responsible parties in a matter of minutes. It is now hard to imagine a day in the life of a Process Engineer without self-service analytics.
