Burns calls into question basic aspects of Six Sigma, the most fundamental being the very premise of statistically counting defects:

"Look at the results," you might say. Thousands of companies have saved thousands of dollars with Six Sigma programs. It's equally true that placebos have cured thousands of sick people. Could Six Sigma be a placebo?
Six Sigma is different from programs that have gone before it, such as quality circles, TQM, quality improvement and continuous improvement. Past programs have typically been driven by a quality manager with no line authority and little if any budget. Even "Vice President of Quality" has often been a title lacking in real power. Six Sigma is different in that it's been driven from the top, with senior executives such as Jack Welch playing a central role. Consequently, expenditure on quality has been at unprecedented levels, with companies like General Electric spending more than half a billion dollars per annum on Six Sigma. Any program that's driven with such dedication and force is likely to produce results.
...And one step further up, what it means to manage by statistical counting of defects:

Rather than accepting the corporate world revolving around the Six Sigma sun, we've got to risk accusations of heresy and start asking questions. First, the Six Sigma methodology suggests we should count defects, and if we have fewer than 3.4 defects per million opportunities, we have a six sigma process. The number 3.4 comes from assuming processes are normally distributed, then applying a shift of +/-1.5 sigma to account for the drift that processes inevitably experience over time. In other words, suppose a process has a target value of 10.0 and control limits work out to be, say, 13.0 and 7.0, with a sigma of 1.0. Process drift implies that the mean will drift to 11.5 (or 8.5), with the control limits changing to 14.5 and 8.5. Now this is terrible news for customers expecting to receive a product that stays on target. They're being told that the Six Sigma process will produce exceptional quality, and at the same time they can expect huge variations in the process mean.
You may wish to vary the numbers above to suit your particular product, but the result is the same. Drifting process averages imply poor quality. If I ask for a product with a certain target value, I want that target value to remain. I don't want to be told by a supplier that it's "inevitable" that the mean will drift considerably. How did this extraordinary situation arise? Where did the 1.5 sigma drifting means originate?
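For readers who want to check the arithmetic, here is a short Python sketch (my own illustration, not from the article, assuming the normal distribution Burns assumes) that reproduces both the 3.4 defects-per-million figure and the drifted control limits of his example:

```python
# Where the famous "3.4 defects per million opportunities" comes from,
# assuming a normally distributed process and the +/-1.5 sigma shift.
from scipy.stats import norm

# A "six sigma" process: specification limits sit 6 sigma from the target.
# With the mean exactly on target, defects are vanishingly rare.
on_target_dpmo = 2 * norm.sf(6.0) * 1e6
print(f"Mean on target:         {on_target_dpmo:.4f} DPMO")   # ~0.002 DPMO

# Shift the mean by 1.5 sigma toward one limit: the nearer tail is now only
# 4.5 sigma away, and that tail area is where 3.4 DPMO comes from.
shifted_dpmo = (norm.sf(4.5) + norm.sf(7.5)) * 1e6
print(f"Mean shifted 1.5 sigma: {shifted_dpmo:.1f} DPMO")      # ~3.4 DPMO

# Burns's numbers: target 10.0, sigma 1.0, three-sigma control limits at
# 7.0 and 13.0. A 1.5 sigma drift moves the mean to 11.5 (or 8.5) and the
# control limits to 8.5 and 14.5 -- a large excursion for the customer,
# even though the advertised defect count stays tiny.
target, sigma, drift = 10.0, 1.0, 1.5
drifted_mean = target + drift * sigma
print(f"Drifted mean: {drifted_mean}, control limits: "
      f"{drifted_mean - 3 * sigma} to {drifted_mean + 3 * sigma}")
```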
The +/-1.5 shift was introduced by Mikel Harry. Where did he get it? Harry refers to a paper written in 1975 by Evans, "Statistical Tolerancing: The State of the Art. Part 3. Shifts and Drifts." The paper is about tolerancing, that is, how the overall error in an assembly is affected by the errors in its components. Evans refers to a paper by Bender in 1962, "Benderizing Tolerances—A Simple Practical Probability Method for Handling Tolerances for Limit Stack Ups." Bender looked at the classical situation of a stack of disks and how the overall error in the size of the stack relates to the errors in the individual disks. Based on probability, approximations and experience, he suggested inflating the combined (root-sum-square) error of the stack by a factor of 1.5 to allow for shifts and drifts in the component processes.
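For context, the stack-up calculation Bender was addressing is usually summarized along these lines (a minimal sketch of my own; the disk tolerances are invented, and the 1.5 factor is the "Benderizing" inflation that Evans, and later Harry, picked up):

```python
# Statistical tolerancing for a stack of disks: how the overall stack
# tolerance relates to the tolerances of the individual disks.
import math

disk_tolerances = [0.10, 0.10, 0.15, 0.05, 0.20]  # hypothetical +/- tolerances

# Worst case: every disk is simultaneously at its extreme.
worst_case = sum(disk_tolerances)

# Root-sum-square: independent errors partially cancel each other out.
rss = math.sqrt(sum(t ** 2 for t in disk_tolerances))

# "Benderized" tolerance: the RSS value inflated by 1.5 to allow for
# shifts and drifts in the component manufacturing processes.
benderized = 1.5 * rss

print(f"Worst case:  +/-{worst_case:.3f}")
print(f"RSS:         +/-{rss:.3f}")
print(f"Benderized:  +/-{benderized:.3f}")
```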
"How is this related to monitoring the myriad processes that people are concerned about?" Very little. Harry then takes things a step further. Imagine a process where five samples are taken every half hour and plotted on a control chart. Harry considered the “instantaneous” initial five samples as being “short term” (Harry’s n=5) and the samples throughout the day as being “long term” (Harry’s g=50 points). Because of random variation in the first five points, the mean of the initial sample is different from the overall mean. Harry derived a relationship between the short-term and long-term capability, using the equation to produce a capability shift or “Z shift” of 1.5. Over time, the original meaning of instantaneous “short term” and the 50-sample point “long term” has been changed to result in long-term drifting means.
Harry has clung tenaciously to the “1.5,” but over the years its derivation has been modified. In a recent note, Harry writes, “We employed the value of 1.5 since no other empirical information was available at the time of reporting.” In other words, 1.5 has now become an empirical rather than theoretical value. A further softening from Harry: “… the 1.5 constant would not be needed as an approximation.”
Six Sigma is a specification-driven methodology. It is based on counting defects, and defects relate to the specification. It's easy for consultants to claim they'll halve defects: they simply change the specification. Specifications tell us nothing about what the process is doing. Specifications are the voice of the customer, not the process. If we're to improve processes, we must listen to the process. The voice of the process is the control limit. Control limits have been and always will be based on three sigma.

Secondarily, Burns also criticizes the number of tools used and the use of "Black Belts" as a sign of elitism ("Do companies need elitism of this kind? Deming taught us in point 9 of his 14 points to 'drive out fear' and in point 10 to 'break down barriers between departments.' Reducing elitism leads to better communication and allows people from different areas to work better together to solve problems in the workplace.").
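Returning to the distinction between specifications and control limits, here is a small sketch (my own illustration; the data and the specification are invented, and it uses the standard X-bar chart constant A2 = 0.577 for subgroups of five) that computes three-sigma control limits from process data and compares them with a specification:

```python
# "Voice of the customer" (specification limits) versus "voice of the
# process" (three-sigma control limits computed from the data itself).
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(loc=10.0, scale=1.0, size=(50, 5))   # 50 subgroups of 5

xbar = data.mean(axis=1)              # subgroup means
rbar = np.ptp(data, axis=1).mean()    # average subgroup range
grand_mean = xbar.mean()

A2 = 0.577                            # X-bar chart constant for n = 5
ucl = grand_mean + A2 * rbar          # three-sigma limits for subgroup means
lcl = grand_mean - A2 * rbar

spec_low, spec_high = 8.0, 12.0       # hypothetical customer specification

print(f"Control limits (voice of the process):  {lcl:.2f} to {ucl:.2f}")
print(f"Spec limits    (voice of the customer): {spec_low} to {spec_high}")
# Counting defects against the specification says nothing about whether the
# process is stable; points outside the control limits do.
```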
Finally,
Perhaps the above gives some clue as to why Toyota continues to make the number-one quality-rated car in the U.S. (J.D. Power 2005). Unlike Ford and General Motors, which are strong followers of Six Sigma, Toyota does NOT use Six Sigma. G.M. had a loss of $8.6 billion for 2005. Ford lost $4 billion in the first nine months of 2005 and plans to eliminate 30,000 jobs and close 14 plants across the United States over the next six years.
At the risk of being burnt at the stake, people should question what's put before them rather than blindly accepting common viewpoints at face value.