Autonomous Yield and Throughput Optimization 

March 8, 2022
Partnerships

Some manufacturing processes and machines are highly error-prone and sensitive to environmental factors, requiring close supervision and frequent parameter adjustments to keep production running and to minimize scrap losses. Such processes depend on an operator with extensive knowledge of and experience with the manufacturing process, who must adjust the parameters manually when needed. In these situations, it is not uncommon for two operators to select different sets of parameters based on their individual experience. This adds variability, cost, and complexity to the process and inevitably leads to scrap losses in production. 

Autonomous yield and throughput optimization addresses these challenges by automatically suggesting optimal process set point values to operators (open-loop optimization) or by directly re-adjusting critical process parameters without human intervention (closed-loop optimization). Autonomous optimization solutions use non-linear, AI-based data models, trained in advance on process data, to analyze the production process, optimize production capacity, and thereby simplify the supervision and adjustment work required from machine operators.  
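
To make the distinction between the two modes concrete, here is a minimal Python sketch of how an optimizer's recommendation could be routed either to an operator (open-loop) or straight back to the control system (closed-loop). The function name, channel names, and payload format are illustrative assumptions only and are not part of the actual OPTIMITIVE or United Manufacturing Hub interfaces.

```python
import json

def deploy_recommendation(setpoints: dict, mode: str, publish) -> None:
    """Route optimizer output to the operator (open-loop) or the control system (closed-loop)."""
    payload = json.dumps(setpoints)
    if mode == "open-loop":
        # Suggest only: the values are surfaced on an operator-facing channel.
        publish("recommendations/line1", payload)
    elif mode == "closed-loop":
        # Act directly: the values are written back to the control system.
        publish("setpoint-writeback/line1", payload)
    else:
        raise ValueError(f"unknown mode: {mode}")

# Example usage with a stand-in publisher that just prints the message.
deploy_recommendation({"speed": 92.4, "temperature": 184.1}, "open-loop",
                      lambda channel, payload: print(channel, payload))
```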

The solution for autonomous yield and throughput optimization combines real-time analysis and optimization technology from OPTIMITIVE, hardware from Intel®, and the underlying infrastructure for data capture, storage, and contextualization from the United Manufacturing Hub. In successful reference implementations, we have been able to increase throughput by up to 20%, raise yield by up to 10% through reduced scrap losses, and cut energy consumption by up to 15%. For instance, a successful implementation on a printing and cutting machine reduced the operator time required for manual adjustments by 15% and decreased the scrap rate by 10%.  

Manufacturers across sectors can benefit from autonomous yield and throughput optimization. Successful implementations of the solution can be found in particular in the Cement and Chemical industries. In the Cement industry, optimized assets include kilns, vertical mills, and horizontal mills, with the potential to apply the solution to coal mills as well. In the Chemical industry, optimized assets include, among others, distillation columns, boilers, and heaters. In both industries, the key improvements are in throughput, quality, yield, energy consumption, and emissions. The solution also offers high improvement potential in various other sectors, in particular Oil & Gas, Pulp & Paper, Power Generation, Pharmaceuticals, and Steel and Aluminium production.  

Image: An example of closed-loop yield optimization at DCC Aachen. A linear actuator controls the tape guidance (left). The upper-right picture shows different types of printing errors that the optimization aims to prevent. A camera solution (machine-vision-based quality inspection) was implemented to capture information during the process (bottom-right picture).

The optimization process starts by gathering relevant process data from shopfloor systems such as PLCs, SCADA, MES, and DCS, as well as quality data from the QMS. This data is used to build neural-network-based data models that optimize a defined target function, e.g. yield, throughput, or energy consumption, while respecting process constraints. As an output, the strategy developed with machine learning algorithms produces the optimal machine parameters for the defined target function. A real-time optimizer provides these input parameters to the asset, enabling the machine to be operated at its calculated optimum without requiring extensive expertise from operators. The output can be deployed in open-loop operation, i.e. with human intervention, or in closed-loop operation. Before the optimization strategy goes live on a given asset, it is tested several times against offline data. In addition, self-learning features built into the machine learning models ensure that the models continuously learn and improve. 
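
As a simplified illustration of this workflow, the sketch below trains a neural-network surrogate of the process on synthetic historical data and then searches for the set points that maximize the predicted yield within operating constraints. It uses generic open-source tools (scikit-learn and SciPy) rather than the actual OPTIMITIVE models, and the variable names, bounds, and yield function are invented for illustration only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Stand-in for historical process data pulled from PLC/SCADA historians:
# two controllable set points (speed, temperature) and the observed yield.
X = rng.uniform([50, 150], [120, 220], size=(500, 2))    # speed, temperature
y = 0.9 - 0.0001 * (X[:, 0] - 90) ** 2 - 0.0002 * (X[:, 1] - 185) ** 2
y += rng.normal(0, 0.005, size=len(y))                   # process noise

# Non-linear data model (surrogate) of the process.
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X, y)

# Target function: maximize predicted yield = minimize its negative.
def negative_yield(setpoints):
    return -model.predict(setpoints.reshape(1, -1))[0]

# Process constraints expressed as bounds on the set points.
bounds = [(50, 120), (150, 220)]
result = minimize(negative_yield, x0=np.array([85.0, 180.0]), bounds=bounds)

print("Suggested set points:", result.x)   # open-loop: shown to the operator
print("Predicted yield:", -result.fun)
```

In a real deployment, the model would be retrained as new process data arrives, which is what the self-learning features mentioned above refer to.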

A first implementation of the solution typically takes around 12 weeks; re-installations can cut this time by up to 50% (clone and adapt).


Image: Process steps in the implementation phase.

Ultimately, the solution enables complex and error-prone assets to be operated at a constant optimum by relying on machine-learning-based optimization rather than on extensive human expertise. This keeps assets that would otherwise tend to bottleneck entire production lines running smoothly and avoids intensive CAPEX investments in replacement machinery.  

We look forward to answering any questions you may have about our solution. Simply drop us a message through the form below.