1. Statistically speaking, we are generally agnostic about which is the bigger problem: Type I (false positive) errors or Type II (false negative) errors. In certain circumstances, however, it may be important to put more emphasis on avoiding one or the other. Can you think of an example where you would want to try harder to avoid one type than the other? Can you think of a policy (political, economic, social, or otherwise) that pushes people toward avoiding one type or the other? What are the repercussions of such policies?
2. The U.S. FDA is responsible for approving new drugs. Many consumer groups feel that the approval process is too easy and that, as a result, too many drugs are approved that are later found to be unsafe. On the other hand, a number of industry lobbyists have pushed for a more lenient approval process so that pharmaceutical companies can get new drugs approved more easily and quickly. This scenario is based on an article in the Wall Street Journal. Consider a null hypothesis that a new, unapproved drug is unsafe and an alternative hypothesis that a new, unapproved drug is safe.
a) Explain the risks of committing a Type I or Type II error.
b) Which type of error is the consumer group trying to avoid?
c) Which type of error are the industry lobbyists trying to avoid?
d) How would it be possible to lower the chances of both Type I and Type II errors?
(a) A Type I error is rejecting the null hypothesis when it is true: approving a drug that is actually unsafe. A Type II error is failing to reject the null hypothesis when it is false: keeping a safe drug off the market.
(b) The consumer groups are trying to avoid a Type I error.
(c) The industry lobbyists are trying to avoid a Type II error.
(d) To lower both the Type I and Type II error rates, the FDA can require more information and evidence in the form of larger, more rigorous trials. This typically translates into a longer time to approve a new drug. (See the sketch below for how a larger sample lowers the Type II error rate while the Type I error rate is held fixed.)
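As a minimal sketch of this point, the Python snippet below uses a generic one-sided z-test with a hypothetical effect size of 0.3 standard deviations and α = 0.05; none of these numbers come from the FDA example, and SciPy is assumed to be available. With the Type I error rate α held fixed, gathering more evidence (a larger sample size n) drives the Type II error rate β down.

    from scipy.stats import norm

    alpha = 0.05    # Type I error rate, held fixed
    effect = 0.3    # hypothetical true effect size, in standard-deviation units
    z_crit = norm.ppf(1 - alpha)    # one-sided critical value

    for n in (25, 100, 400):
        beta = norm.cdf(z_crit - effect * n ** 0.5)    # Type II error rate
        print(f"n = {n:3d}: alpha = {alpha:.2f}, beta = {beta:.3f}")

With these made-up numbers, β falls from roughly 0.56 at n = 25 to nearly zero at n = 400, while α never moves.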
Can you think of any other examples?
3. Many of you will have heard of Six Sigma management. What you may not realize is that the etymology of the term Six Sigma is rooted in statistics. As you should have seen by now in your textbook, statisticians use the Greek letter sigma (σ) to denote a standard deviation. So when these Six Sigma people start talking about “six sigma processes,” what they mean is that they want processes with (at least) six standard deviations between the mean and whatever would count as a failure. For example, you may be examining the output of a factory that makes aircraft-grade aluminum. The average tensile strength of each piece is 65 ksi, and you view a piece as a failure if its tensile strength is anything less than 64 ksi. If the standard deviation is less than about 0.167 ksi (one sixth of the 1 ksi margin), the process is six sigma. By the Six Sigma convention, which allows the process mean to drift by up to 1.5 standard deviations, the odds of a failure in a six sigma process are about 3.4 in a million, which corresponds to the 99.9997% confidence level. When we are doing statistics, we usually use the 95% confidence level, which is roughly 2 sigmas.
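As a rough check on those numbers, here is a minimal Python sketch (assuming SciPy is available) of the tensile-strength example, using only the values given above.

    from scipy.stats import norm

    mean = 65.0          # average tensile strength, ksi
    spec_limit = 64.0    # anything below this is a failure
    sigma = (mean - spec_limit) / 6    # ~0.167 ksi puts the limit at six sigma

    # Failure probability if the process stays centered at 65 ksi
    p_centered = norm.cdf(spec_limit, loc=mean, scale=sigma)

    # Six Sigma convention: allow the mean to drift 1.5 sigma toward the limit
    p_shifted = norm.cdf(spec_limit, loc=mean - 1.5 * sigma, scale=sigma)

    print(f"centered six sigma process: {p_centered:.1e}  (~1 in a billion)")
    print(f"with a 1.5-sigma shift:     {p_shifted:.1e}  (~3.4 per million)")

By contrast, if the failure threshold sat only two standard deviations below the mean, about 2.3% of pieces would fail.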
In the case of the tensile strength of aircraft-grade aluminum, six sigma is probably a good level to be at: catastrophic failure on an airplane could open you up to lawsuits worth billions of dollars. But there are other processes where you probably don’t need to be so certain of getting acceptable output. Give some examples from your own business life of random processes that are likely to be normally distributed, and say how many sigmas you think each process should be at.
4. Instead of answering the questions exactly, give a range that you are 90% confident contains the correct answer (a 90% confidence interval):
1. What was Martin Luther King, Jr.’s age at death?
2. What is the length of the Nile River, in miles?
3. How many countries belong to OPEC?
4. How many books are there in the Old Testament?
5. What is the diameter of the moon, in miles?
6. What is the weight of an empty Boeing 747, in pounds?
7. In what year was Mozart born?
8. What is the gestation period of an Asian elephant, in days?
9. What is the air distance from London to Tokyo, in miles?
10. What is the deepest known point in the ocean, in feet?
For example, if the question were “How many teams are in the NHL?”, a student with no idea might give a range of 5 to 40, a student with a good idea might give a range of 28 to 32, and so forth. This exercise:
· Gets you thinking about confidence intervals and the basic idea of what they are all about
· Highlights the importance and role of research and statistics, and gives some insight into the human condition: not only are we all dumb, we are way dumber than we think!
· Illuminates the duality of confidence intervals: if you are 90% sure that the correct number is between X and Y, then 90% of all such confidence intervals should contain the true value, but 10% will not (see the simulation sketch below)
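As a minimal sketch of that last point, the Python snippet below (assuming NumPy is available) simulates many 90% confidence intervals for a known mean and counts how often they cover it. The population mean, standard deviation, sample size, and number of trials are made up purely for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    true_mean, sigma, n, trials = 50.0, 10.0, 30, 10_000
    z = 1.645    # two-sided 90% critical value for a normal with known sigma

    covered = 0
    for _ in range(trials):
        xbar = rng.normal(true_mean, sigma, n).mean()
        half_width = z * sigma / np.sqrt(n)
        if xbar - half_width <= true_mean <= xbar + half_width:
            covered += 1

    print(f"coverage over {trials} intervals: {covered / trials:.3f}  (expect about 0.90)")

Roughly 90% of the simulated intervals contain the true mean; the remaining 10% or so miss it, exactly as the duality suggests.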