
Running head: INDIVIDUAL PROJECT II 1

Individual Project II

Laura M Williams, RN, CLNC, MSN

IET603: Statistical Quality Assurance in Science and Technology

Morehead State University

Dr. Ahmad Zargari

22 February 2014


A.Zargari - Instructor Page 2 2/23/2014

Laura Williams - Student

Individual Project II (50 points): Please solve and submit your completed project by 2-25-2014 at 10:00 p.m.

1. Explain the foundation of Shewhart's notion of the scientific approach and the basic activities involved in developing means for satisfying the customers (in approximately 100 words).

In 1924, Shewhart suggested that continual process adjustment in reaction to non-conformance actually increased variation and degraded quality, and he stressed the importance of reducing variation in the manufacturing process. Shewhart developed the control chart, which framed the problem in terms of assignable-cause and chance-cause variation, as a tool for distinguishing between the two. He stressed that bringing a production process into a state of statistical control, where there is only chance-cause variation, and keeping it in control is necessary to predict future output and to manage a process economically. Shewhart created the basis for the control chart and the concept of a state of statistical control through carefully designed experiments. As a statistician, he drew from pure mathematical statistical theories, but he discovered that observed variation in manufacturing data did not always behave the same way as data in nature, and he found that data from physical processes seldom produce a perfect "normal distribution curve." Shewhart concluded that while every process displays variation, some processes display controlled variation that is natural to the process, while others display uncontrolled variation that is not present in the process causal system at all times. http://en.wikipedia.org/wiki/Walter_A._Shewhart

2. Explain the Juran Trilogy, in approximately 50 words:

Juran exposed the cost of poor quality and asserted that without change there will be constant waste; during change costs will increase, but after the improvement margins will be higher and the increased costs will be recouped. To effect change, Juran suggested a new approach to cross-functional management composed of three managerial processes: quality planning, quality control, and quality improvement, the "Juran Trilogy."

Planning – establish objectives and requirements for quality

Control – operational techniques and activities used to fulfill requirements for quality

Improvement – systematic and continuous actions that lead to measurable improvement

http://en.wikipedia.org/wiki/Joseph_M._Juran

3. Briefly describe the purpose of the basic tools for quality improvement (Basic Process Improvement Toolbox).

The Basic Tools of Quality are called basic because they are suitable for people with little formal training in statistics and because they can be used to solve the vast majority of quality-related issues. These tools are a fixed set of graphical techniques identified by Kaoru Ishikawa as being the most helpful when troubleshooting issues related to quality. This toolbox includes the:


Cause-and-effect diagram (alternately, “fishbone” or Ishikawa diagram)

Check sheet

Control chart

Histogram

Pareto chart

Scatter diagram

Stratification (alternately, flow chart or run chart)

These basic tools can be used separately or in conjunction with one or more of the other tools to identify weaknesses in the process. Ishikawa encouraged every member of an organization to learn these tools and become adept in their use so that the quality mentality was a shared vision within the company.

http://en.wikipedia.org/wiki/Seven_Basic_Tools_of_Quality

4. Explain the properties of random variables

A random variable is a function that associates a unique numerical value with every outcome of an experiment. The value of the random variable will vary from trial to trial as the experiment is repeated and cannot be predicted with complete certainty. Its values are subject to variation due to chance and do not have a fixed value; the random variable can assume a set of possible different values, each with an associated probability, determined by the outcome of a random process. The numerical values of a random variable can be manipulated to construct summary values for the variable, allowing observation of the moments of the random variable, such as its mean and variance. Once the random variable has been defined, the process can be examined using a probability distribution (for a discrete random variable) or a probability density function (for a continuous random variable). http://www.stats.gla.ac.uk/glossary/?q=node/410
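As an illustration, here is a minimal sketch, using a hypothetical two-coin-toss experiment (not from the original text), of how a discrete random variable's probability distribution yields its summary moments:

```python
from fractions import Fraction

# Hypothetical example: X = number of heads in two fair coin tosses.
# The probability distribution lists each possible value of X with its probability.
dist = {0: Fraction(1, 4), 1: Fraction(1, 2), 2: Fraction(1, 4)}

mean = sum(x * p for x, p in dist.items())                    # first moment, E[X]
variance = sum((x - mean) ** 2 * p for x, p in dist.items())  # second central moment, Var(X)
print(mean, variance)  # 1 1/2
```

The same two lines apply to any discrete distribution once its value-probability pairs are listed.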

5. The data shown below are the times in minutes that successive customers had to wait for service at an oil change facility. Using hand calculation and the formulas, 1) find the sample mean and standard deviation, 2) construct a histogram, and 3) use MINITAB to validate your findings and the histogram.

9.93 10.13 9.98 9.92 9.98 9.92 9.78 10.07 9.84 10.01

9.97 9.97 9.92 10.09 10.09 9.96 10.08 10.01 9.84 10.08

9.91 10.15 9.94 9.98 10.00 9.90 9.93 9.88 9.92 10.02

10.09 9.99 10.05 10.01 10.03 10.07 9.91 10.06 9.86 10.03

10.01 10.05 10.21 9.95 10.02 10.10 9.88 10.13 9.83 9.97


Mean = 9.9890 Standard Deviation = 0.0918
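The hand calculation can also be cross-checked in software; a short sketch in Python (an alternative tool, not part of the original MINITAB-based assignment):

```python
import statistics

# Waiting times (minutes) from the table above, read row by row
times = [
    9.93, 10.13, 9.98, 9.92, 9.98, 9.92, 9.78, 10.07, 9.84, 10.01,
    9.97, 9.97, 9.92, 10.09, 10.09, 9.96, 10.08, 10.01, 9.84, 10.08,
    9.91, 10.15, 9.94, 9.98, 10.00, 9.90, 9.93, 9.88, 9.92, 10.02,
    10.09, 9.99, 10.05, 10.01, 10.03, 10.07, 9.91, 10.06, 9.86, 10.03,
    10.01, 10.05, 10.21, 9.95, 10.02, 10.10, 9.88, 10.13, 9.83, 9.97,
]

mean = statistics.mean(times)    # sample mean, x-bar
stdev = statistics.stdev(times)  # sample standard deviation (n - 1 divisor)
print(round(mean, 4), round(stdev, 4))  # 9.989 0.0918
```

Note that `statistics.stdev` uses the n − 1 (sample) divisor, matching the hand formula used above.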


6. If the probability that any individual will react positively to a drug is 0.8, what is the probability that exactly 4 individuals will react positively in a sample of 10 individuals?

P(X = 4) = C(10, 4) × 0.8^4 × (1 − 0.8)^(10−4)

= [10! / (4! × (10 − 4)!)] × 0.8^4 × 0.2^6

= 210 × 0.4096 × 0.000064

= 0.0055
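The same binomial calculation, sketched in Python as an alternative check:

```python
from math import comb

# P(X = 4) for X ~ Binomial(n = 10, p = 0.8)
n, p, k = 10, 0.8, 4
prob = comb(n, k) * p**k * (1 - p)**(n - k)
print(round(prob, 4))  # 0.0055
```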

7. Suppose the average number of customers arriving at an ATM during the lunch hour is 12 customers per hour. The probability of exactly two arrivals during the lunch hour is:

P(x; μ) = (e^−μ)(μ^x) / x!, with x = 2, μ = 12

P(X = 2) = (e^−12)(12^2) / 2! = 0.0004424
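A quick Python cross-check of the Poisson calculation (an alternative to MINITAB):

```python
from math import exp, factorial

# P(X = 2) for X ~ Poisson(mu = 12)
mu, x = 12, 2
prob = exp(-mu) * mu**x / factorial(x)
print(round(prob, 7))  # 0.0004424
```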

8. In a sample of 100 items produced by a machine that produces 2% defective items, what is the probability that 5 items are defective? (Calculate with the binomial distribution formula and verify your response using MINITAB.)

b(x; n, p) = C(n, x) × p^x × (1 − p)^(n−x)

n = 100, x = 5, p = 0.02

b(x = 5) = C(100, 5) × 0.02^5 × (1 − 0.02)^(100−5) = 0.0353468 ≈ 0.0353


Solve question #8 using the Poisson distribution formula and verify your response using MINITAB. The Poisson approximation to the binomial uses μ = np = 100 × 0.02 = 2:

P(x; μ) = (e^−μ)(μ^x) / x!

P(X = 5) = (e^−2)(2^5) / 5! ≈ 0.0361, close to the exact binomial result of 0.0353.
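A sketch comparing the exact binomial probability with its Poisson approximation (which uses μ = np = 2):

```python
from math import comb, exp, factorial

n, p, x = 100, 0.02, 5
mu = n * p  # Poisson approximation parameter, mu = n*p = 2

exact = comb(n, x) * p**x * (1 - p)**(n - x)    # binomial P(X = 5)
approx = exp(-mu) * mu**x / factorial(x)        # Poisson P(X = 5)
print(round(exact, 4), round(approx, 4))  # 0.0353 0.0361
```

The approximation is close because n is large and p is small, which is exactly the regime where the Poisson limit of the binomial applies.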

9. It is assumed that the inductance of particular inductors produced by ABC Company is normally distributed. The mean inductance is μ = 20,000 mH and the standard deviation is σ = 90 mH. The acceptable inductance range is from 19,750 mH to 20,200 mH. Using both the formula and MINITAB, determine the expected number of rejected inductors in a production run of 10,000 inductors.

P(a < x < b) = P[(a − μ)/σ < (x − μ)/σ < (b − μ)/σ] = P[(a − μ)/σ < Z < (b − μ)/σ]

Upper limit: Z = (20,200 − 20,000)/90 = 200/90 = 2.22; Φ(2.22) = 0.9868, so P(x > 20,200) = 1 − 0.9868 = 0.0132

Lower limit: Z = (19,750 − 20,000)/90 = −250/90 = −2.78; P(Z < −2.78) = 0.0027 (normal distribution table, http://www.math.upenn.edu/~chhays/zscoretable.pdf)

P(reject) = 0.0132 + 0.0027 = 0.0159

10,000 × 0.0159 = 159 inductors rejected; 10,000 − 159 = 9,841 inductors in range
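The calculation can be cross-checked with the exact standard normal CDF (via the error function) rather than a rounded z-table:

```python
from math import erf, sqrt

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

mu, sigma = 20000, 90    # mean and standard deviation in mH
lo, hi = 19750, 20200    # acceptable inductance range in mH
n = 10000                # production run size

# Probability an inductor falls outside the acceptable range (either tail)
p_reject = phi((lo - mu) / sigma) + (1 - phi((hi - mu) / sigma))
rejected = round(n * p_reject)
print(rejected)  # 159
```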


10. Explain Null Hypothesis, Alternative Hypothesis, Types of Error, Significance Level, and Risk Level in hypothesis testing.

Null Hypothesis – (H0) represents a theory that has been put forward, either because it is believed to be true or because it is to be used as a basis for argument, but that has not been proved.

Alternative Hypothesis – (H1; also called the maintained hypothesis or research hypothesis) is a statement of what a statistical hypothesis test is set up to establish.

Type I Error – incorrect rejection of a true null hypothesis (a false positive).

Type II Error – failure to reject a false null hypothesis (a false negative).

Significance Level – the probability of wrongly rejecting the null hypothesis when it is, in fact, true (the probability of committing a Type I error).

Risk Level in hypothesis testing – the level of risk the researcher wishes to assume in the analysis, represented by the probabilities of Type I and Type II errors. Given a specific test design (sample size, hypothesis), the researcher specifies the level of Type I risk, referred to as the significance level. The complement of the significance level is the confidence level.


11. Formulate the appropriate null hypothesis and alternative hypothesis for testing that the starting salary for graduates with a B.S. degree in electrical engineering is greater than $38,000 per year. The significance, or risk, level is α = 0.05. What does α = 0.05 mean?

H0 : μ = 38000 H1 : μ > 38000

α = 0.05 implies that only 5% of the time would the sampling process produce a finding this extreme if the null hypothesis were true.

A p-value is a measure of the strength of evidence against the null hypothesis. It is a conditional probability, in that its calculation is based on the assumption that the null hypothesis is true. The smaller the p-value, the stronger the evidence against the null hypothesis. A p-value of 0.05 or less leads to rejection of the null hypothesis at the α = 0.05 level.

12. In a New York Times/CBS poll, 56 percent of 2,000 randomly selected voters in New York City said that they would vote for the incumbent in a certain two-candidate race. Calculate a 95 percent confidence interval for the population proportion. Discuss its implication. Carefully discuss what is meant by the population, how you would carry out the random sampling, and what other factors could lead to differences between the responses to the surveys and the actual votes on the day of the election.

In this instance, the population is all voters in New York City eligible to vote in the race; the 2,000 voters are the random sample drawn from that population. The random sampling could be carried out by telephoning randomly selected registered voters in the area or by conducting surveys at shopping centers. Differences in responses should be considered if there were differences in socioeconomic status across the geographical areas of the shopping centers. Additionally, factors such as the weather, or whether the sampling occurred on a typical day off from work, could account for differences between the survey responses and the actual votes: a sample taken on a pretty Saturday might suggest a better outcome than one taken on a rainy Tuesday, and turnout itself depends on conditions on election day. In this instance, the margin of error is ±2.18 percentage points, meaning we can be 95% confident that the true proportion falls between 53.82% and 58.18%.
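The interval arithmetic follows the normal-approximation formula p̂ ± z·√(p̂(1 − p̂)/n); the helper function below is illustrative, not from the original text:

```python
from math import sqrt

def proportion_ci(p_hat, n, z=1.96):
    """95% normal-approximation confidence interval for a proportion."""
    margin = z * sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - margin, p_hat + margin

low, high = proportion_ci(0.56, 2000)  # NYC poll: 56% of 2,000 voters
print(round(low, 4), round(high, 4))   # 0.5382 0.5818
```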


13. A random sample of 50 teaching assistants at the University of Iowa in the fall of 1996 indicated that 30 of them were planning to join the union for teaching assistants. Calculate a 95 percent confidence interval for the proportion of University of Iowa teaching assistants who are in favor of joining a union.

With p̂ = 30/50 = 0.60, the margin of error is ±13.58 percentage points. This means we can be 95% confident that the true proportion falls between 46.42% and 73.58%.

14. Suppose that the number of wire-bonding defects per unit that occur in a semiconductor device is Poisson distributed with parameter λ = 4 (for a Poisson distribution, the mean and variance both equal λ). Then the probability that a randomly selected semiconductor device will contain 2 or fewer wire-bonding defects is:

P(x; μ) = (e^−μ)(μ^x) / x!

P(X ≤ 2) = P(0) + P(1) + P(2) = e^−4 (1 + 4 + 4^2/2!) = 13e^−4 ≈ 0.238
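A cumulative-probability sketch in Python confirming the "two or fewer" figure:

```python
from math import exp, factorial

lam = 4  # Poisson mean (and variance)
# P(X <= 2) = P(0) + P(1) + P(2)
prob = sum(exp(-lam) * lam**x / factorial(x) for x in range(3))
print(round(prob, 3))  # 0.238
```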


15. List and explain Deming’s 7 deadly diseases.

1) Lack of constancy of purpose to plan products and services that will have a market, keep the company in business, and provide jobs. As long as the focus is on short-term thinking, management will fail to plan adequately. Without good long-term planning, worker efforts will be irrelevant. More significantly, this disease is a warning that TQM cannot be a fad: if management changes its philosophy with whatever was the latest book it read, there will be no long-term forward progress.

2) Emphasis on short-term profits: short-term thinking (just the opposite of constancy of purpose to stay in business), fed by fear of unfriendly takeover and by pressure from bankers and owners for dividends. There is nothing easier than boosting profits in the short term: all a manager has to do is cut any expense related to the long term, such as training, maintenance, and purchase of new capital. For non-profits like schools and hospitals, substitute "emphasis on short-term costs" for "emphasis on short-term profits." These institutions, especially when in budget crises, focus on cutting short-term costs without regard to long-term consequences.

3) Personal review systems, or evaluation of performance, merit rating, annual review, or annual appraisal, by whatever name, for people in management, the effects of which are devastating. Management by objective on a go/no-go basis, without a method for accomplishing the objective, is the same thing by another name; management by fear would still be better. The essential problem with merit systems is that they reward results rather than process improvement, and results will almost always have a lot of system luck mixed in.

4) Mobility of management; job hopping. This is perhaps the simplest and yet one of the most deadly of the diseases. When top management changes organizations every three or four years, continuous improvement efforts are broken and disjointed as the new "leaders" come on board. Moreover, with changes in leadership there is frequently a change in management philosophy. How can there be constancy of purpose in such an environment?


5) Use of visible figures only for management, with little or no consideration of figures that are unknown or unknowable. Many consultants in the quality field have been quoted as saying, "If you can't measure it, you can't manage it." Certainly Deming would have been one of the first to argue that good data are essential and should be factored into all decisions whenever possible; he was in fact very critical of people who fail to use data when they are available.

6) Excessive medical costs. American auto companies pay more for medical care than they do for steel. For the economy as a whole, health care as a percentage of overall expenditures has steadily risen for decades, gradually pushing numerous business and government budgets into a state of crisis. Deming would have approved of the political system attempting to reform health care.

7) Excessive costs of liability. Deming blamed America's lawyers in part for the problems of American business. The US has more lawyers per capita than any other country in the world, and they make their livings to a considerable extent by finding people to sue. As with health care costs, Deming believed the solution to this disease would probably have to come from the government.

Retrieved 21 February 2014 from http://www.endsoftheearth.com/Deming14Pts.htm

16. Explain who, when, and why Six Sigma was developed. What is the focus of Six Sigma?

Six Sigma, a set of techniques and tools for process improvement, was developed by Motorola in 1986 and later made famous by Jack Welch of General Electric in 1995. The focus of Six Sigma is to improve the quality of process outputs by identifying and removing the causes of defects (errors) and minimizing variability in manufacturing and business processes. It uses a set of quality management methods, including statistical methods, and creates a special infrastructure of people within the organization ("Champions", "Black Belts", "Green Belts", "Yellow Belts", etc.) who are experts in the methods.

http://en.wikipedia.org/wiki/Six_Sigma

17. Explain the DMAIC process and its relevance to quality.

DMAIC is an abbreviation of the five improvement steps it comprises: Define, Measure, Analyze, Improve, and Control. DMAIC refers to a data-driven improvement cycle used for improving, optimizing, and stabilizing business processes and designs by clearly defining the problems, goals, resources, scope, and timeline. All of the DMAIC process steps are required, and they are always implemented in the given order. The DMAIC improvement cycle is the core tool used to drive Six Sigma projects. However, DMAIC is not exclusive to Six Sigma and can be used as the framework for other improvement applications. http://en.wikipedia.org/wiki/DMAIC


18. What is a histogram, and why is it used?

A histogram is a graphical representation of the distribution of data. It is an estimate of the probability distribution of either a discrete or continuous variable. It summarizes the data by dividing them into groups, each represented by a rectangular bar whose height is proportional to the number of observations in the group. Unlike in a bar graph, the bars in a histogram touch. Histograms are useful for detecting unusual observations (outliers). http://www.stats.gla.ac.uk/steps/glossary/presenting_data.html#hist

19. Define Probability Distribution. What is the difference between discrete and continuous distributions? Provide examples.

A probability distribution describes how probability is assigned to the possible values of a random variable. The probability distribution of a discrete random variable is a list of probabilities associated with each of its possible values. A discrete random variable is one which may take on only a countable number of distinct values such as 0, 1, 2, 3, 4, ... Discrete random variables are usually (but not necessarily) counts; if a random variable can take only a finite number of distinct values, then it must be discrete. Examples of discrete variables include the number of tickets sold, the number of returns, or the number of houses on a street. Conversely, a continuous random variable is one which can take an infinite number of possible values within an interval. Continuous random variables are usually measurements; examples include measurements of length, width, height, and time.

20. Define Bernoulli trials and Poisson distributions and explain their uses in quality. Give examples.

A Bernoulli trial is an experiment with exactly two possible outcomes. The classic example of a Bernoulli trial is the coin toss, where the only two outcomes possible are "heads" or "tails." Other examples include yes/no questions (opinion polls) or success/failure experiments. In quality work, Bernoulli trials are used to model the probability of success, such as whether an individual item conforms or is defective; a fixed number of independent trials with the same success probability yields the binomial distribution. The Poisson distribution includes time in its evaluation, as either a certain time interval or a spatial area. For a Poisson model to apply, events must occur at a constant average rate and the length of the observation period must be fixed in advance. In quality work, this allows the probability of a given number of occurrences in the fixed observation period to be determined. Examples include the number of customers arriving in a certain hour or the number of cars passing an exact spot in an hour. http://stattrek.com/probability-distributions/poisson.aspx