Rainer von Königslöw, Ph.D.
15 York Valley Crescent, Toronto, Ontario, M2P 1A8
Tel.: (416) 489-2222 e-mail: drrainer@rogers.com
In this paper I want to illustrate applying an ergonomic perspective to process control. The motivation for this approach is experience in working with expert systems. It appears that one of the serious limitations in process control is the problem of managing all the knowledge we have about the process. We have to integrate the knowledge from engineers, research people, and production people. We have to live with limited data about the state of the process from sensors. And above all, we have to make the knowledge accessible and easy to work with for production people such as operators and shift supervisors, so that they can use it in operating the process.
To use the terminology from expert systems, this paper focusses on knowledge acquisition and knowledge maintenance. A case will be made that for good knowledge management it is important to be able to understand the data and controls as well as the control logic. The ergonomics of information is concerned with how all this information is represented to make it understandable and usable.
The case we shall make is that the development of new and more integrative control strategies will initially involve participation of the production team. Later, once these strategies are better understood, full automation may be possible, but that is still some way in the future.
We shall motivate the discussion from three perspectives. One is from issues of process control. A second is from an organizational perspective that deals with the maintenance and enhancement of systems. The third is from the building and maintaining of expert systems. We shall then set the stage with a simple production line. After a brief discussion of time we shall get into our main topic of information ergonomics and knowledge management by discussing how to implement it for our sample production line.
We shall illustrate applying an individual part tracking model to a continuous process. We shall illustrate how this kind of model can be understood and used on the production floor, and how it can be used for limited optimization and heuristic feed forward process control strategies.
Traditional process control uses negative feedback information from a sensor to adjust a process. The basic concept is that a desired state is indicated with a setpoint, deviations from the desired state are monitored, and the process is adjusted automatically to move toward the desired state. This kind of automatic control is normally applied to a single stage in the processing.
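To make the concept concrete, here is a minimal sketch of a single proportional feedback adjustment; the setpoint, gain and variable names are illustrative assumptions rather than parameters from any particular control system.

```python
# Minimal sketch of negative feedback for a single processing stage; the
# setpoint, gain and starting control output are illustrative assumptions.

def feedback_step(setpoint, measured_value, control_output, gain=0.5):
    """Adjust the control output in proportion to the deviation from the setpoint."""
    error = setpoint - measured_value        # deviation from the desired state
    return control_output + gain * error     # move the process toward the setpoint

# Example: holding a baking temperature near a 180-degree setpoint.
control = 100.0
for measured in [170.0, 176.0, 179.0]:
    control = feedback_step(setpoint=180.0, measured_value=measured, control_output=control)
print(round(control, 1))
```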
A second type of control process uses feed forward control information to adjust a process. The basic concept is that information from earlier processing stages can be used to influence processing at later stages. Ideally this information can come all the way from customer orders and from receiving inspections of feedstock. This kind of control process can be used to correct or compensate for deviations introduced in earlier processing.
Feed forward information is routinely used to adjust the control strategy itself. The most common example is called setup and applies when a new order with different specifications is being filled. The switchover from the old order to the new order requires changes that propagate down the line.
Negative feedback information can also be used from later stages in the processing to correct and improve the control strategy used for the processing of subsequent material, even though it may be too late to correct observed deviations. Ideally this reaches all the way to final quality control.
Control strategies are getting more complex and multidimensional. One would like to satisfy quality control guidelines while minimizing production costs and increasing production throughput. Pollution controls and maintaining a satisfactory work environment add constraints. At the same time products are changing and customers are getting more demanding. In this environment control strategies are not stable, but have to evolve. At the same time, under increasing cost controls, control systems and strategies have to be developed that are satisfactory even though not necessarily optimal (cf. Herbert Simon's notion of satisficing).
With the increasing costs of processing equipment and the decreasing costs of computer control, the concept of individual part tracking is starting to emerge. The concept is that we can track the deviations of individual parts and use them in later processing. For instance, rather than requiring a screw that is exactly on dimension, we can match a slightly bigger screw with a slightly wider nut, and vice versa for a smaller screw. Similarly, to compensate for a deviation in earlier processing (or in the feedstock) we can adjust later processing.
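As a small illustration of the matching idea, and assuming hypothetical measured diameters and widths, a sketch of the pairing might look like this:

```python
# Illustrative sketch of individual part tracking: pair each screw with the nut
# whose measured deviation best compensates for its own. The measurements and
# tolerances below are hypothetical.

def match_parts(screw_diameters, nut_widths):
    """Pair oversize screws with wider nuts and undersize screws with narrower nuts."""
    screws = sorted(screw_diameters)
    nuts = sorted(nut_widths)
    return list(zip(screws, nuts))   # sorted order matches small with small, large with large

print(match_parts([5.02, 4.98, 5.00], [5.11, 5.08, 5.10]))
```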
Traditionally control systems are developed far from the point of use. Many negative feedback controls are incorporated into processing equipment. Other distributed control systems are designed and developed by vendors and then installed and tuned by contractors or by an engineering department. All of this assumes that basic control strategies can be designed and implemented at a distance, without the experience and expertise of the people in production.
As the demands for controls have increased and the costs of individualized control systems have decreased, there have been many more customized add-on controls, including many based on expert systems technology. Most of these projects have originated out of, and have been managed by, engineering and research departments. Most have been designed and implemented as static solutions, i.e., as add-on systems that would require little tuning, maintenance or enhancement.
This kind of approach has not been satisfactory in many situations for feedforward systems or for larger scale integration of control systems. In this paper we suggest a much more modest approach to feedforward systems and larger scale integration, one that does not promise a hands-off stable solution, but that works closely and interactively with the shop floor people and draws on the expertise in production.
Traditionally expert systems automate access to the expertise of a live human expert. Expertise is acquired from the expert with interviews and encoded into a specialized database usually called a knowledgebase. The knowledge is encoded as rules, frames, or other forms of knowledge representation.
Many expert systems are based on the expertise of a single expert. Others may be based on the expertise of a group of experts. If their expertise is overlapping, i.e., deals with a single domain of knowledge, the expertise may be blended together in a single homogeneous knowledgebase. In some cases, even with overlapping expertise, the expertise from distinct sources may be kept distinct. To illustrate the latter, we once built an expert system based on seven experts with quite distinct approaches. When executing, the expert system would use each of the seven knowledgebases in turn and compare the results. The report was based on a voting strategy, with a majority and a minority report if the outcomes were sufficiently distinct.
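A minimal sketch of such a voting strategy, with stand-in functions playing the role of the separate knowledgebases, might look as follows:

```python
# Sketch of the voting strategy described above: run each knowledgebase in turn,
# then report the majority conclusion and note a minority report if the outcomes
# diverge. The "knowledgebases" here are stand-in functions, not a real shell.
from collections import Counter

def run_panel(knowledgebases, case_data):
    conclusions = [kb(case_data) for kb in knowledgebases]
    tally = Counter(conclusions)
    majority, votes = tally.most_common(1)[0]
    minority = [c for c in tally if c != majority]
    return {"majority": majority, "votes": votes, "minority": minority}

# Hypothetical experts: each maps case data to a conclusion.
experts = [lambda d: "reject" if d["moisture"] > 12 else "accept"] * 5 + \
          [lambda d: "accept"] * 2
print(run_panel(experts, {"moisture": 14}))   # majority "reject", minority ["accept"]
```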
Expert systems may also merge together knowledge from different experts with distinct domains of expertise. Again, the knowledge may be blended. It may even be acquired and maintained by a committee. In other cases the knowledge may be kept distinct in separate knowledgebases so that only the overlap has to be handled by committee. This modular approach simplifies maintenance and further enhancements.
There is a third type, where some of the crucial expertise does not exist ahead of time, so that it cannot be acquired from human experts. In this case other methods of knowledge acquisition have to be designed. Approaches include acquiring some of the knowledge experimentally, usually in conjunction with a research or optimization project, and the use of systems such as neural nets which exhibit automatic learning by the machine.
The initial motivation for building expert systems was the automation of access to human expertise. The first expert systems focussed on encoding the knowledge from human experts so that computers could simulate their reasoning. This work resulted both in commercial products (expert system shells) and in a suite of methodologies (knowledge acquisition, rapid prototyping). We are now into the third generation of such products.
A second motivation for building expert systems has arisen from experience in building them. The knowledge representation schemes developed to enable computers to understand human knowledge can also be used to enable humans to understand and manage the knowledge of other humans. The fact that the knowledgebase is mechanistic and interpretable by computers adds validity to the scheme. Knowledge that is represented in a knowledgebase can be understood, managed and extended in ways that were not possible for more traditional language based representations of knowledge. This role of a mechanistic and transferable knowledge representation was previously limited to mathematical models such as those in physics and chemistry, which do not allow satisfactorily for qualitative and heuristic knowledge such as that found on the shop floor. Also, the expert system representation is based on a simplification of natural language reasoning, and as such is much more accessible to most people. It does not require training in mathematics, logic or computer science. In other words, the knowledge representation schemes developed for expert systems extend our ability to represent knowledge in a mechanistic, formal and transferable form.
The initial impetus for making the knowledge representation schemes understandable by people came from the requirements of quality assurance for expert systems. The computer has to simulate how the human expert reasons with data. To assure that the computer reasoning is correct, the human expert has to understand how the computer is using the knowledge.
There have now been a number of cases where this knowledge representation has allowed the transfer of knowledge to the shop floor, to help in maintaining, tuning and enhancing the system. In some cases expertise has been missing, and new knowledge has been found to add to the knowledgebase.
Let us start by analyzing the situation. We shall limit ourselves to continuous processes for ease of illustration. We shall differentiate between physical processes and information processes. Information processes are split into data components and control logic components.
In a traditional process control display, all the data is sampled concurrently, so that the operator views data recorded at the same time. Using this perspective, one sees what all the physical processing units are doing at the same moment, but one views a great variety of material, from material being mixed to material being baked to material being analyzed by quality control. This approach is well suited to handling unanticipated events, such as equipment malfunctions. I would claim that it is less well suited to the optimal management of the material being processed. For the latter it may be better to be able to focus on particular clumps of material, both to analyze what has happened to them and to decide how they should be treated for the remainder of the processing.
If one wanted to analyze all the data applying to the same material then one would have to time-shift the data. If the material under consideration was currently being analyzed by quality control, then the relevant data from the dryer for that material would have been data recorded 5 minutes earlier. The baking data for the same material would have been from 10 minutes ago, the extrusion from 15 minutes ago, and the mixing data from before that. If all the data is being recorded then one can time-shift it so that data corresponding to the same material is being displayed together.
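Assuming data is logged once per minute, and using hypothetical stage offsets that follow the example above, a sketch of this alignment could look like this:

```python
# Sketch of time-shifting recorded data so that readings for the same material
# line up. The stage offsets follow the example above (relative to QC time);
# the logs and values are hypothetical.

STAGE_OFFSETS_MIN = {"mixing": -20, "extrusion": -15, "baking": -10, "drying": -5, "qc": 0}

def aligned_record(logs, qc_time_min):
    """Collect, for one chunk of material, the reading each stage made when that chunk passed it."""
    return {stage: logs[stage].get(qc_time_min + offset)
            for stage, offset in STAGE_OFFSETS_MIN.items()}

# logs[stage][minute] -> value recorded at that stage in that minute
logs = {"mixing": {40: "batch A"}, "extrusion": {45: 72.0},
        "baking": {50: 181.5}, "drying": {55: 64.0}, "qc": {60: "pass"}}
print(aligned_record(logs, qc_time_min=60))
```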
The time-shifting discussed above applies to descriptive data. A similar concept can be used for control data. If we know, say from the mixing data, that some chunk of material has a slightly different composition, we may also know that this material should receive slightly different treatment such as higher temperature baking or lower temperature drying. We can now use time-shifting of control data to effect this customized treatment. We know that 5 minutes after the material has been extruded, the setpoint in the baking process should be increased by a given amount, or that 10 minutes after extrusion the drying temperature should be lowered by a fixed amount.
In the above example we time-shifted control information of the sort: for this material, increase the standard temperature setpoint by 5 degrees. The control logic information is timed to reach the appropriate control at the appropriate time for the material to which it applies. The time-shifted control logic can be more complex, in that we can take into consideration data obtained during the processing. For instance we might have observed unusually high moisture in one of the feedstocks. We might first try to control it with the extrusion process, but that might not always be successful. We might therefore time-shift conditional logic such as: if the humidity recorded for this material in 5 minutes exceeds the standard by x percent, then in 10 minutes increase the baking temperature by x * 0.5 degrees.
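A sketch of such a time-shifted conditional rule, with the scheduling format, parameter names and numbers as illustrative assumptions, might be:

```python
# Sketch of the time-shifted conditional rule quoted above; the scheduling
# format, parameter names and control identifiers are illustrative assumptions.

def rule_for_chunk(now_min, humidity_pct, standard_pct):
    """Given the humidity recorded for this chunk 5 minutes from now, schedule a
    baking temperature increase of x * 0.5 degrees 10 minutes from now if the
    humidity exceeds the standard by x percent."""
    excess_pct = humidity_pct - standard_pct
    if excess_pct > 0:
        return [{"at_minute": now_min + 10,
                 "control": "baking_temperature",
                 "change_degrees": 0.5 * excess_pct}]
    return []

# Humidity reading taken 5 minutes after this point exceeds the standard by 4 percent.
print(rule_for_chunk(now_min=45, humidity_pct=58.0, standard_pct=54.0))
```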
One of the advantages of this type of approach is that it does not require fancy process control systems to be implemented. In fact it could be implemented with paper and pencil as well as traditional read-outs and controls. Doing it with paper and pencil of course requires more labour.
Another advantage of this approach is that it does not require extensive data and extensive controls. Of course more data and controls tends to be better, but there is always a cost.
A third advantage of this approach is that one does not have to understand the process in detail. Again, the better it is understood the better it is for control, but generally one cannot afford Ph.D. food chemists to run production lines.
For this approach, the continuous process is divided into segments or discrete chunks of material. The unit of division is pragmatic. Each of these chunks will be tracked individually. If paper and pencil is used, one might want to use 5-minute segments, so that 96 reports are generated during an 8-hour shift. If a distributed control system is used, the segmentation will depend on the sampling period, which might be 15-second or 1-minute intervals. The segmentation could also depend on events at the input end, such as discrete changes in the mix. Ideally the segments should be short enough that major variations tend to span multiple time segments.
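A sketch of this segmentation by time, using the 5-minute chunks from the paper-and-pencil example, could be as simple as:

```python
# Sketch of segmenting a continuous stream into tracked chunks by time. The
# 5-minute chunk length matches the paper-and-pencil example; a distributed
# control system would use its own sampling period instead.

CHUNK_MINUTES = 5

def chunk_id(shift_start_min, sample_time_min):
    """Assign every sample to the chunk of material it belongs to."""
    return (sample_time_min - shift_start_min) // CHUNK_MINUTES

print(chunk_id(shift_start_min=0, sample_time_min=17))   # chunk 3 (0-based)
print(480 // CHUNK_MINUTES)                              # an 8-hour shift gives 96 chunks
```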
The next thing that has to be done is to identify the time offsets for the different stages in the processing. This can be done simply with a stopwatch, by marking some material at the mixing stage and then walking it through. For the sake of simplicity we shall assume that transport times are constant. (If transport speeds are controlled, then time offsets have to be calculated from the current control settings.)
Next, the data and control parameters have to be identified, along with their permissible values. This is especially important if the tracking is to be done by paper and pencil. Some of the values may be qualitative, such as identifying the origins and characteristics of the feedstock. Only characteristics that may vary from chunk to chunk during processing need to be included, and only if the variations can be readily perceived. "Unknown" may have to be allowed as a value for some parameters.
Finally, assuming our paper and pencil approach, forms may have to be prepared with the parameter names and the relevant time offsets.
For a few shifts we may simply want to collect data based on this individual part tracking concept. To do so we start a new form at the beginning of the process, e.g., at the mixing end, and pass it down the line along with the material it is tracking. (Just how this is done will of course depend on the processing line, on how it is normally operated, on what data read-outs and data recorders are available, etc.) For each data or control point it passes, the current values are recorded at the appropriate time offset in the appropriate row and column of the form. The paper accompanies the material all the way into quality control, or wherever it reaches the end of the processing line.
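Treating the form as a data structure, and assuming hypothetical stages, offsets and parameter names, a sketch might look like this:

```python
# Sketch of the tracking "form" as a data structure: one record per chunk, with
# a slot for each parameter at each stage's time offset. The stages, offsets
# and parameter names below are hypothetical.

STAGES = [("mixing", 0), ("extrusion", 5), ("baking", 10), ("drying", 15), ("qc", 20)]
PARAMETERS = {"mixing": ["feedstock", "moisture_pct"], "extrusion": ["pressure"],
              "baking": ["temperature"], "drying": ["temperature"], "qc": ["result"]}

def new_form(chunk_id):
    """Start a blank form at the mixing end; values default to 'unknown'."""
    return {"chunk": chunk_id,
            **{f"{stage}@{offset}min/{param}": "unknown"
               for stage, offset in STAGES for param in PARAMETERS[stage]}}

form = new_form(chunk_id=12)
form["baking@10min/temperature"] = 181.5   # recorded as the chunk passes the oven
print(form)
```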
Generally the analysis starts with the quality control data, since that is where customer or market requirements meet the production line. The type of analysis depends on the skill of the people involved and on the demands made on the analysis. Again, to take a simple example, the papers can be grouped into 2 or 3 piles. One might differentiate between those chunks that did not satisfy QC, those that satisfied it only marginally, and those that were fine. One can then look for patterns, both in the data and in the control settings, that differentiate between the piles. Sometimes clear patterns will emerge, and sometimes one has to search a little harder with relative frequency tables, averages, Markov chains, etc. Of particular interest are sequential patterns. The QC-based approach essentially sorts the data in reverse time sequence. One can also order it in time sequence to detect and analyze patterns. The patterns one is looking for are those that predict QC success or failure as early in the processing as possible.
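A sketch of the pile-sorting analysis, with the completed forms and field names as hypothetical stand-ins, might look as follows:

```python
# Sketch of the pile-sorting analysis: group completed forms by QC outcome and
# compare the relative frequency of a candidate predictor in each pile. The
# forms and field names are hypothetical.
from collections import defaultdict

def sort_into_piles(forms):
    piles = defaultdict(list)
    for form in forms:
        piles[form["qc_result"]].append(form)     # "fail", "marginal", or "pass"
    return piles

def frequency_of(piles, field, value):
    """Relative frequency of a given condition within each pile."""
    return {outcome: sum(1 for f in pile if f.get(field) == value) / len(pile)
            for outcome, pile in piles.items()}

forms = [{"qc_result": "fail", "moisture": "high"}, {"qc_result": "pass", "moisture": "normal"},
         {"qc_result": "pass", "moisture": "normal"}, {"qc_result": "fail", "moisture": "high"}]
print(frequency_of(sort_into_piles(forms), "moisture", "high"))   # high moisture concentrates in the "fail" pile
```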
If no interesting and predictive patterns are found, one can still try the discrete local optimizing approach described below.
If predictive patterns are found, one can use them as heuristic feed forward controls, in conjunction with the data that is being time-shifted across processing stages. Ideally the pattern looks like a decision tree that indicates both successful and unsuccessful control strategies, such as: if the moisture is high at time-offset 5, and the control is set at 37 at offset 10, then the chunk fails QC; but if the moisture is high at time-offset 5, and the control is set at 39 at offset 10, then the chunk passes QC. Using this pattern we can formulate a simple feedforward control strategy: if the moisture is high at time-offset 5, then set the control to 39 at offset 10.
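A sketch of this kind of heuristic rule, with the rule-table format as an illustrative assumption, could be:

```python
# Sketch of the heuristic feedforward rule derived above: if moisture is high at
# time-offset 5, set the control to 39 at offset 10. The rule-table format is an
# illustrative assumption.

RULES = [
    {"if_field": "moisture@5", "equals": "high",
     "then_control": "control@10", "set_to": 39},
]

def apply_rules(chunk_data, rules=RULES):
    """Return the control settings the rules prescribe for this chunk."""
    actions = {}
    for rule in rules:
        if chunk_data.get(rule["if_field"]) == rule["equals"]:
            actions[rule["then_control"]] = rule["set_to"]
    return actions

print(apply_rules({"moisture@5": "high"}))   # {'control@10': 39}
```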
This may seem like a very simplistic approach to feedforward controls, and it is. In most cases the rules will be somewhat more complex, with more conditions included. Also, the rule will be under constant review, if the production is being monitored with the approach suggested above.
The advantage of this kind of approach is that it is quite easy for the people in production to understand. It can therefore be tuned and modified as conditions warrant.
We can combine the pattern analysis approach with the feed forward control approach to search for local optimizations. This is especially important where multiple objectives have to be met, possibly with different levels of priorities and costs of failure. For instance, one may be required to stay within QC limits and minimize power consumption. It may also be clear that a 1% reject rate is equivalent in cost to a 5% reduced power consumption.
Using the patterns above one can investigate systematic changes in the controls and study their impact both on power consumption and the rejection rate. One can then convert the results into new control strategies.
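A sketch of how candidate control settings could be scored against the two objectives, using the stated equivalence of a 1% reject rate to 5% of power consumption, and with hypothetical candidate results, might be:

```python
# Sketch of scoring candidate control settings against multiple objectives,
# using the stated equivalence (1% reject rate = 5% of power consumption).
# The candidate settings and their measured results are hypothetical.

def combined_cost(reject_rate_pct, power_pct_of_baseline):
    """Express everything in power-percent equivalents: 1% rejects = 5% power."""
    return power_pct_of_baseline + 5.0 * reject_rate_pct

candidates = {
    "setpoint 37": {"reject_rate_pct": 2.0, "power_pct_of_baseline": 95.0},
    "setpoint 39": {"reject_rate_pct": 0.5, "power_pct_of_baseline": 99.0},
}
scores = {name: combined_cost(**result) for name, result in candidates.items()}
print(min(scores, key=scores.get), scores)   # the lower combined cost wins
```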
It should be clear that the approach described above can be automated with a simple expert system, as long as the data and the controls are accessible in electronic form. The time-shift feature is easy to implement with a cyclic data structure in a file or with a large array such as a spreadsheet. The expert system can then work with the time-shifted data to set controls automatically.
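A sketch of such a cyclic data structure, with buffer sizes and names as illustrative assumptions, might look like this:

```python
# Sketch of the cyclic data structure mentioned above: a fixed-length buffer per
# stage holds recent readings, so a value logged now can be read back at the
# right offset when its material reaches a later stage. Sizes and names are
# illustrative assumptions.
from collections import deque

class TimeShiftBuffer:
    def __init__(self, max_offset_samples):
        self.buffer = deque(maxlen=max_offset_samples)   # old samples fall off automatically

    def record(self, value):
        self.buffer.append(value)

    def value_at_offset(self, samples_ago):
        """Reading taken `samples_ago` samples earlier, or None if not yet available."""
        if samples_ago < len(self.buffer):
            return self.buffer[-1 - samples_ago]
        return None

moisture = TimeShiftBuffer(max_offset_samples=40)
for reading in [52, 53, 58, 57]:
    moisture.record(reading)
print(moisture.value_at_offset(2))   # the reading logged two samples ago -> 53
```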
The important aspect in implementing such a system is to keep it easily understandable to the production team, so that the production team can monitor and tune it quickly and easily to adapt to changing circumstances.