Big Bytes Out of Design
While it has seen limited use to date, experimental design offers compelling benefits: greater efficiency through fewer runs, more comprehensive information, more robust results and faster cycle times.
Three main barriers have hindered the implementation of traditional experimental design.
So, what's the solution? Surprisingly, the use of PC-based experimental design software provides a means of addressing all of the concerns outlined above. The lack of a statistics department and excessively large, complex experiments are both easily solved with the use of experimental design software packages that possess the proper functions.
What are those functions? The ideal "do-it-yourself" software package should:

- run on commonly available PCs;
- cover basic design types without excessive options;
- permit 2-12 variables for optimization and up to 25 variables for screening;
- permit noise arrays;
- handle a reasonable number of responses (at least 12);
- permit simple data transformations;
- support model building and easy evaluation of the validity of the models built;
- provide results in both numerical and graphical formats;
- support "learn as you go" teaching methods; and
- provide context-sensitive help screens to guide design setup and analysis.
Experimental design forces a different order of development activities: more fact finding and brainstorming up front, followed by execution of the experiments. These up-front activities often eliminate some of the work associated with simply diving into the experiments.
The up-front planning for a design drives a clearer definition of the problem, an accurate recognition of the work to be done and better decision rules for success. The negotiations for decision rules also tend to clarify a project, and can often eliminate steps or alter approaches.
The calculation of the experimental runs is another instructive step in the design process. In bread dough formulation, for example, corrections necessary to accommodate additional fiber become apparent as the various formulas are worked out. This kind of information can illuminate how various fiber levels interact with the gluten and water changes needed to make a consistent dough.
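Working out the runs for a factorial design is mechanical enough to sketch in a few lines. The factor names and low/high settings below are illustrative placeholders for a dough formulation study, not values from the article:

```python
from itertools import product

# Hypothetical bread-dough factors with (low, high) settings;
# names and levels are invented for illustration.
factors = {
    "fiber_pct":  (2.0, 8.0),
    "gluten_pct": (1.0, 3.0),
    "water_pct":  (58.0, 64.0),
}

def full_factorial(factors):
    """Enumerate every combination of factor levels (a 2^k design here)."""
    names = list(factors)
    return [dict(zip(names, levels))
            for levels in product(*factors.values())]

runs = full_factorial(factors)
print(len(runs))  # 2^3 = 8 runs
for run in runs:
    print(run)
```

Listing the runs this way makes the required formula corrections (e.g., more water at higher fiber levels) visible before any dough is mixed.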
Once the preparatory work is complete, the design can be executed. This becomes a period of carefully controlled, but relatively rote experimentation. Because the experiment is an "event," rather than just another single run, researchers tend to collect all the information they are likely to need during the design process. Although it may require a few extra evaluations, the payback comes in the learning during the analysis process. The runs are made, the evaluations completed, and the data is collected and entered into the computer program.
Once data is entered, the experimental design software allows for in-depth analysis. Upon review, researchers may realize that some responses vary a great deal between the experimental runs, and some do not move at all. This allows the development team to focus on the responses that are significantly affected by the variables tested. A review of the correlation matrix on the data provides an opportunity for new learning.
For example, if sensory and objective measures are highly correlated, researchers may wish to explore the relationship further. Conversely, if a sensory measure is not correlated with an objective measure in an expected way, then they have learned something as well. And since well-planned experimental designs tend to produce a wide range of products (both good and bad) over the whole design space, they serve as an ideal vehicle for testing these kinds of questions.
Next, researchers build the models that best explain the experimental results. This is a simple process guided by tools like half-normal plots, box-and-whisker plots and histograms that help illustrate the fit of the model to the experimental data. Context-sensitive help screens guide users each step of the way.
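The ordering a half-normal plot displays is just the main effects ranked by absolute size. A minimal sketch on a coded 2^3 factorial, with an invented response (the design matrix and numbers are illustrative, not fitted data):

```python
# Coded (-1/+1) design matrix for a 2^3 factorial and an invented response.
design = [(-1, -1, -1), (1, -1, -1), (-1, 1, -1), (1, 1, -1),
          (-1, -1, 1), (1, -1, 1), (-1, 1, 1), (1, 1, 1)]
response = [52, 61, 50, 63, 54, 60, 49, 64]  # illustrative loaf volumes

names = ["A", "B", "C"]
effects = {}
for j, name in enumerate(names):
    # Main effect = mean response at the high level minus mean at the low level.
    hi = [y for row, y in zip(design, response) if row[j] == 1]
    lo = [y for row, y in zip(design, response) if row[j] == -1]
    effects[name] = sum(hi) / len(hi) - sum(lo) / len(lo)

# Sorting by |effect| separates the few large effects from the noise,
# which is what a half-normal plot shows graphically.
for name, eff in sorted(effects.items(), key=lambda kv: -abs(kv[1])):
    print(name, round(eff, 2))
```

Here factor A stands well clear of B and C, so it is the term a scientist would carry into the model.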
Since the team's scientists do the work and understand the system as well as or better than anyone, they are the most qualified to select the variables to be included in the model. If a model requires a term that doesn't seem reasonable, it can sometimes be replaced by another, more reasonable combination of variables that explains the data equally well. And if it turns out that the unexpected variable is the only way to explain the data, researchers may have discovered a clue to important new learning about their food systems.
Next, researchers prioritize responses and begin to choose the best combination of variables to meet their objectives. Overlay plots can be very helpful with this task. But most importantly, it's the R&D scientist who makes the trade-offs to choose the best answer, bringing the knowledge that a small departure from the mathematical optimum may be a better or more realistic solution.
Often, the next step is to review findings and recommendations with team members. Their questions and expertise add to the whole experimental design process. And if they bring in important new input, researchers can review the analysis. The software, however, allows them to incorporate the learning without going back into the lab. This is the first point at which real time savings begin to appear.
The final step is to confirm recommendations with a test run. Most often, if the study has been planned and controlled well, the confirmatory run will come out as expected. On those rare occasions where that does not occur, researchers have the in-depth understanding of all aspects of the study to identify the appropriate next steps.
At the end of this process, researchers have an answer to the original question, just as if they had done traditional single-variable experimentation. But they also have the ability to answer new questions without additional experimentation and new insights and learning as well.
Sidebar: DOE Software Makes Dough for Earthgrains

Scientists at Earthgrains have the best of both worlds. While R&D and Operations personnel have the luxury of the Luftig & Associates statisticians to work on their complex experimental design and analysis issues, they also utilize design of experiments (DOE) software from Int'l. Qual-Tech Ltd., Plymouth, Minn., to speed up development of new and improved products and processes.
Luftig & Associates has worked hand-in-hand with Earthgrains to help formulate and implement its Total Quality Management systems, and its statistical consultants meet regularly with plant TQM managers and R&D technologists to solve their most difficult problems.
The user-friendly DOE software complements the consulting by allowing research scientists to design screening and optimization experiments to formulate better product and process solutions in a timely manner. A typical experiment can be designed, conducted and analyzed by the user in two or three days. Findings from one experiment help formulate additional experiments that leapfrog the experimenter to higher levels of understanding of the food system or process that is being developed.
Experiments can be conducted on 2-31 variables in the screening phase and upwards of 13 factors in the optimization phase. DOE can be used at any level in the development cycle, from benchtop to plant scale, and it has produced a number of successes at Earthgrains.
"With Luftig's statisticians and our Computer Aided Design of Experiments software, we put the power and understanding of DOE into the hands of the scientist and engineer," declares Douglas Edmonson, vice president of R&D for Earthgrains Refrigerated Foods Division. "It makes things happen faster for more robust product and process solutions. It's the best of all worlds!"