AN INTRODUCTION TO
Error Analysis
THE STUDY OF UNCERTAINTIES IN PHYSICAL MEASUREMENTS
SECOND EDITION
John R. Taylor
PROFESSOR OF PHYSICS UNIVERSITY OF COLORADO
University Science Books Sausalito, California
University Science Books, 55D Gate Five Road, Sausalito, CA 94965. Fax: (415) 332-5393
Production manager: Susanna Tadlock
Manuscript editor: Ann McGuire
Designer: Robert Ishi
Illustrators: John and Judy Waller
Compositor: Maple-Vail Book Manufacturing Group
Printer and binder: Maple-Vail Book Manufacturing Group
This book is printed on acid-free paper.
Copyright © 1982, 1997 by University Science Books
Reproduction or translation of any part of this work beyond that permitted by Section 107 or 108 of the 1976 United States Copyright Act without the permission of the copyright owner is unlawful. Requests for permission or further information should be addressed to the Permissions Department, University Science Books.
Library of Congress Cataloging-in-Publication Data
Taylor, John R. (John Robert), 1939-
An introduction to error analysis / John R. Taylor.-2nd ed.
p. cm.
Includes bibliographical references and index.
ISBN 0-935702-42-3 (cloth).-ISBN 0-935702-75-X (pbk.)
1. Physical measurements. 2. Error analysis (Mathematics)
3. Mathematical physics. I. Title.
QC39.T4 1997
530.1'6--dc20
96-953
CIP
Printed in the United States of America 10 9 8 7 6 5 4 3 2 1
Contents
Preface to the Second Edition xi
Preface to the First Edition xv

Part I

Chapter 1. Preliminary Description of Error Analysis 3
1.1 Errors as Uncertainties 3
1.2 Inevitability of Uncertainty 3
1.3 Importance of Knowing the Uncertainties 5
1.4 More Examples 6
1.5 Estimating Uncertainties When Reading Scales 8
1.6 Estimating Uncertainties in Repeatable Measurements 10

Chapter 2. How to Report and Use Uncertainties 13
2.1 Best Estimate ± Uncertainty 13
2.2 Significant Figures 14
2.3 Discrepancy 16
2.4 Comparison of Measured and Accepted Values 18
2.5 Comparison of Two Measured Numbers 20
2.6 Checking Relationships with a Graph 24
2.7 Fractional Uncertainties 28
2.8 Significant Figures and Fractional Uncertainties 30
2.9 Multiplying Two Measured Numbers 31
Problems for Chapter 2 35

Chapter 3. Propagation of Uncertainties 45
3.1 Uncertainties in Direct Measurements 46
3.2 The Square-Root Rule for a Counting Experiment 48
3.3 Sums and Differences; Products and Quotients 49
3.4 Two Important Special Cases 54
3.5 Independent Uncertainties in a Sum 57
3.6 More About Independent Uncertainties 60
3.7 Arbitrary Functions of One Variable 63
3.8 Propagation Step by Step 66
3.9 Examples 68
3.10 A More Complicated Example 71
3.11 General Formula for Error Propagation 73
Problems for Chapter 3 79

Chapter 4. Statistical Analysis of Random Uncertainties 93
4.1 Random and Systematic Errors 94
4.2 The Mean and Standard Deviation 97
4.3 The Standard Deviation as the Uncertainty in a Single Measurement 101
4.4 The Standard Deviation of the Mean 102
4.5 Examples 104
4.6 Systematic Errors 106
Problems for Chapter 4 110

Chapter 5. The Normal Distribution 121
5.1 Histograms and Distributions 122
5.2 Limiting Distributions 126
5.3 The Normal Distribution 129
5.4 The Standard Deviation as 68% Confidence Limit 135
5.5 Justification of the Mean as Best Estimate 137
5.6 Justification of Addition in Quadrature 141
5.7 Standard Deviation of the Mean 147
5.8 Acceptability of a Measured Answer 149
Problems for Chapter 5 154

Part II

Chapter 6. Rejection of Data 165
6.1 The Problem of Rejecting Data 165
6.2 Chauvenet's Criterion 166
6.3 Discussion 169
Problems for Chapter 6 170

Chapter 7. Weighted Averages 173
7.1 The Problem of Combining Separate Measurements 173
7.2 The Weighted Average 174
7.3 An Example 176
Problems for Chapter 7 178

Chapter 8. Least-Squares Fitting 181
8.1 Data That Should Fit a Straight Line 181
8.2 Calculation of the Constants A and B 182
8.3 Uncertainty in the Measurements of y 186
8.4 Uncertainty in the Constants A and B 188
8.5 An Example 190
8.6 Least-Squares Fits to Other Curves 193
Problems for Chapter 8 199

Chapter 9. Covariance and Correlation 209
9.1 Review of Error Propagation 209
9.2 Covariance in Error Propagation 211
9.3 Coefficient of Linear Correlation 215
9.4 Quantitative Significance of r 218
9.5 Examples 220
Problems for Chapter 9 222

Chapter 10. The Binomial Distribution 227
10.1 Distributions 227
10.2 Probabilities in Dice Throwing 228
10.3 Definition of the Binomial Distribution 228
10.4 Properties of the Binomial Distribution 231
10.5 The Gauss Distribution for Random Errors 235
10.6 Applications; Testing of Hypotheses 236
Problems for Chapter 10 241

Chapter 11. The Poisson Distribution 245
11.1 Definition of the Poisson Distribution 245
11.2 Properties of the Poisson Distribution 249
11.3 Applications 252
11.4 Subtracting a Background 254
Problems for Chapter 11 256

Chapter 12. The Chi-Squared Test for a Distribution 261
12.1 Introduction to Chi Squared 261
12.2 General Definition of Chi Squared 265
12.3 Degrees of Freedom and Reduced Chi Squared 268
12.4 Probabilities for Chi Squared 271
12.5 Examples 274
Problems for Chapter 12 278

Appendixes 285
Appendix A. Normal Error Integral, I 286
Appendix B. Normal Error Integral, II 288
Appendix C. Probabilities for Correlation Coefficients 290
Appendix D. Probabilities for Chi Squared 292
Appendix E. Two Proofs Concerning Sample Standard Deviations 294
Bibliography 299
Answers to Quick Checks and Odd-Numbered Problems 301
Index 323
Preface to the Second Edition
I first wrote An Introduction to Error Analysis because my experience teaching
introductory laboratory classes for several years had convinced me of a serious need
for a book that truly introduced the subject to the college science student. Several
fine books on the topic were available, but none was really suitable for a student
new to the subject. The favorable reception to the first edition confirmed the exis-
tence of that need and suggests the book met it.
The continuing success of the first edition suggests it still meets that need.
Nevertheless, after more than a decade, every author of a college textbook must
surely feel obliged to improve and update the original version. Ideas for modifica-
tions came from several sources: suggestions from readers, the need to adapt the
book to the wide availability of calculators and personal computers, and my own
experiences in teaching from the book and finding portions that could be improved.
Because of the overwhelmingly favorable reaction to the first edition, I have
maintained its basic level and general approach. Hence, many revisions are simply
changes in wording to improve clarity. A few changes are major, the most important
of which are as follows:
(1) The number of problems at the end of each chapter is nearly doubled to
give users a wider choice and teachers the ability to vary their assigned problems
from year to year. Needless to say, any given reader does not need to solve any-
where near the 264 problems offered; on the contrary, half a dozen problems from
each chapter is probably sufficient.
(2) Several readers recommended placing a few simple exercises regularly
throughout the text to let readers check that they really understand the ideas just
presented. Such exercises now appear as "Quick Checks," and I strongly urge stu-
dents new to the subject to try them all. If any Quick Check takes much longer than
a minute or two, you probably need to reread the preceding few paragraphs. The
answers to all Quick Checks are given in the answer section at the back of the book.
Those who find this kind of exercise distracting can easily skip them.
(3) Also new to this edition are complete summaries of all the important equa-
tions at the end of each chapter to supplement the first edition's brief summaries
inside the front and back covers. These new summaries list all key equations from
the chapter and from the problem sets as well.
(4) Many new figures appear in this edition, particularly in the earlier chapters.
The figures help make the text seem less intimidating and reflect my conscious
effort to encourage students to think more visually about uncertainties. I have observed, for example, that many students grasp issues such as the consistency of measurements if they think visually in terms of error bars.
(5) I have reorganized the problem sets at the end of each chapter in three ways. First, the Answers section at the back of the book now gives answers to all of the odd-numbered problems. (The first edition contained answers only to selected problems.) The new arrangement is simpler and more traditional. Second, as a rough guide to the level of difficulty of each problem, I have labeled the problems with a system of stars: One star (*) indicates a simple exercise that should take no more than a couple of minutes if you understand the material. Two stars (**) indicate a somewhat harder problem, and three stars (***) indicate a really searching problem that involves several different concepts and requires more time. I freely admit that the classification is extremely approximate, but students studying on their own should find these indications helpful, as may teachers choosing problems to assign to their students.
Third, I have arranged the problems by section number. As soon as you have read Section N, you should be ready to try any problem listed for that section. Although this system is convenient for the student and the teacher, it seems to be currently out of favor. I assume this disfavor stems from the argument that the system might exclude the deep problems that involve many ideas from different sections. I consider this argument specious; a problem listed for Section N can, of course, involve ideas from many earlier sections and can, therefore, be just as general and deep as any problem listed under a more general heading.
(6) I have added problems that call for the use of computer spreadsheet programs such as Lotus 123 or Excel. None of these problems is specific to a particular system; rather, they urge the student to learn how to do various tasks using whatever system is available. Similarly, several problems encourage students to learn to use the built-in functions on their calculators to calculate standard deviations and the like.
(7) I have added an appendix (Appendix E) to show two proofs that concern sample standard deviations: first, that, based on N measurements of a quantity, the best estimate of the true width of its distribution is the sample standard deviation with (N - 1) in the denominator, and second, that the uncertainty in this estimate is as given by Equation (5.46). These proofs are surprisingly difficult and not easily found in the literature.
It is a pleasure to thank the many people who have made suggestions for this second edition. Among my friends and colleagues at the University of Colorado, the people who gave most generously of their time and knowledge were David Alexander, Dana Anderson, David Bartlett, Barry Bruce, John Cumalat, Mike Dubson, Bill Ford, Mark Johnson, Jerry Leigh, Uriel Nauenberg, Bill O'Sullivan, Bob Ristinen, Rod Smythe, and Chris Zafiratos. At other institutions, I particularly want to thank R. G. Chambers of Leeds, England, Sharif Heger of the University of New Mexico, Steven Hoffmaster of Gonzaga University, Hilliard Macomber of the University of Northern Iowa, Mark Semon of Bates College, Peter Timbie of Brown University, and David Van Dyke of the University of Pennsylvania. I am deeply indebted to all of these people for their generous help. I am also most grateful to Bruce Armbruster
of University Science Books for his generous encouragement and support. Above all, I want to thank my wife Debby; I don't know how she puts up with the stresses and strains of book writing, but I am so grateful she does.
J. R. Taylor
September 1996 Boulder, Colorado
Preface to the First Edition
All measurements, however careful and scientific, are subject to some uncertainties.
Error analysis is the study and evaluation of these uncertainties, its two main func-
tions being to allow the scientist to estimate how large his uncertainties are, and to
help him to reduce them when necessary. The analysis of uncertainties, or "errors,"
is a vital part of any scientific experiment, and error analysis is therefore an im-
portant part of any college course in experimental science. It can also be one of the
most interesting parts of the course. The challenges of estimating uncertainties and
of reducing them to a level that allows a proper conclusion to be drawn can turn a
dull and routine set of measurements into a truly interesting exercise.
This book is an introduction to error analysis for use with an introductory col-
lege course in experimental physics of the sort usually taken by freshmen or sopho-
mores in the sciences or engineering. I certainly do not claim that error analysis is
the most (let alone the only) important part of such a course, but I have found that
it is often the most abused and neglected part. In many such courses, error analysis
is "taught" by handing out a couple of pages of notes containing a few formulas,
and the student is then expected to get on with the job solo. The result is that error
analysis becomes a meaningless ritual, in which the student adds a few lines of
calculation to the end of each laboratory report, not because he or she understands
why, but simply because the instructor has said to do so.
I wrote this book with the conviction that any student, even one who has never
heard of the subject, should be able to learn what error analysis is, why it is interest-
ing and important, and how to use the basic tools of the subject in laboratory reports.
Part I of the book (Chapters 1 to 5) tries to do all this, with many examples of the
kind of experiment encountered in teaching laboratories. The student who masters
this material should then know and understand almost all the error analysis he or
she would be expected to learn in a freshman laboratory course: error propagation,
the use of elementary statistics, and their justification in terms of the normal distri-
bution.
Part II contains a selection of more advanced topics: least-squares fitting, the
correlation coefficient, the χ² test, and others. These would almost certainly not be
included officially in a freshman laboratory course, although a few students might
become interested in some of them. However, several of these topics would be
needed in a second laboratory course, and it is primarily for that reason that I have
included them.
I am well aware that there is all too little time to devote to a subject like error analysis in most laboratory courses. At the University of Colorado we give a one-hour lecture in each of the first six weeks of our freshman laboratory course. These lectures, together with a few homework assignments using the problems at the ends of the chapters, have let us cover Chapters 1 through 4 in detail and Chapter 5 briefly. This gives the students a working knowledge of error propagation and the elements of statistics, plus a nodding acquaintance with the underlying theory of the normal distribution.
From several students' comments at Colorado, it was evident that the lectures were an unnecessary luxury for at least some of the students, who could probably have learned the necessary material from assigned reading and problem sets. I certainly believe the book could be studied without any help from lectures.
Part II could be taught in a few lectures at the start of a second-year laboratory course (again supplemented with some assigned problems). But, even more than Part I, it was intended to be read by the student at any time that his or her own needs and interests might dictate. Its seven chapters are almost completely independent of one another, in order to encourage this kind of use.
I have included a selection of problems at the end of each chapter; the reader does need to work several of these to master the techniques. Most calculations of errors are quite straightforward. A student who finds himself or herself doing many complicated calculations (either in the problems of this book or in laboratory reports) is almost certainly doing something in an unnecessarily difficult way. In order to give teachers and readers a good choice, I have included many more problems than the average reader need try. A reader who did one-third of the problems would be doing well.
Inside the front and back covers are summaries of all the principal formulas. I hope the reader will find these a useful reference, both while studying the book and afterward. The summaries are organized by chapters, and will also, I hope, serve as brief reviews to which the reader can turn after studying each chapter.
Within the text, a few statements-equations and rules of procedure-have been highlighted by a shaded background. This highlighting is reserved for statements that are important and are in their final form (that is, will not be modified by later work). You will definitely need to remember these statements, so they have been highlighted to bring them to your attention.
The level of mathematics expected of the reader rises slowly through the book. The first two chapters require only algebra; Chapter 3 requires differentiation (and partial differentiation in Section 3.11, which is optional); Chapter 5 needs a knowledge of integration and the exponential function. In Part II, I assume that the reader is entirely comfortable with all these ideas.
The book contains numerous examples of physics experiments, but an understanding of the underlying theory is not essential. Furthermore, the examples are mostly taken from elementary mechanics and optics to make it more likely that the student will already have studied the theory. The reader who needs it can find an account of the theory by looking at the index of any introductory physics text.
Error analysis is a subject about which people feel passionately, and no single treatment can hope to please everyone. My own prejudice is that, when a choice has to be made between ease of understanding and strict rigor, a physics text should
choose the former. For example, on the controversial question of combining errors in quadrature versus direct addition, I have chosen to treat direct addition first, since the student can easily understand the arguments that lead to it.
In the last few years, a dramatic change has occurred in student laboratories with the advent of the pocket calculator. This has a few unfortunate consequences, most notably the atrocious habit of quoting ridiculously insignificant figures just because the calculator produced them, but it is from almost every point of view a tremendous advantage, especially in error analysis. The pocket calculator allows one to compute, in a few seconds, means and standard deviations that previously would have taken hours. It renders unnecessary many tables, since one can now compute functions like the Gauss function more quickly than one could find them in a book of tables. I have tried to exploit this wonderful tool wherever possible.
It is my pleasure to thank several people for their helpful comments and suggestions. A preliminary edition of the book was used at several colleges, and I am grateful to many students and colleagues for their criticisms. Especially helpful were the comments of John Morrison and David Nesbitt at the University of Colorado, Professors Pratt and Schroeder at Michigan State, Professor Shugart at U.C. Berkeley, and Professor Semon at Bates College. Diane Casparian, Linda Frueh, and Connie Gurule typed successive drafts beautifully and at great speed. Without my mother-in-law, Frances Kretschmann, the proofreading would never have been done in time. I am grateful to all of these people for their help; but above all I thank my wife, whose painstaking and ruthless editing improved the whole book beyond measure.
J. R. Taylor
November 1, 1981 Boulder, Colorado
AN INTRODUCTION TO
Error Analysis
Part I
I. Preliminary Description of Error Analysis 2. How to Report and Use Uncertainties 3. Propagation of Uncertainties 4. Statistical Analysis of Random Uncertainties 5. The Normal Distribution
Part I introduces the basic ideas of error analysis as they are needed in a typical first-year, college physics laboratory. The first two chapters describe what error analysis is, why it is important, and how it can be used in a typical laboratory report. Chapter 3 describes error propagation, whereby uncertainties in the original measurements "propagate" through calculations to cause uncertainties in the calculated final answers. Chapters 4 and 5 introduce the statistical methods with which the so-called random uncertainties can be evaluated.
Chapter I
Preliminary Description of Error Analysis
Error analysis is the study and evaluation of uncertainty in measurement. Experience has shown that no measurement, however carefully made, can be completely free of uncertainties. Because the whole structure and application of science depends on measurements, the ability to evaluate these uncertainties and keep them to a minimum is crucially important.
This first chapter describes some simple measurements that illustrate the inevitable occurrence of experimental uncertainties and show the importance of knowing how large these uncertainties are. The chapter then describes how (in some simple cases, at least) the magnitude of the experimental uncertainties can be estimated realistically, often by means of little more than plain common sense.
1.1 Errors as Uncertainties
In science, the word error does not carry the usual connotations of the terms mistake or blunder. Error in a scientific measurement means the inevitable uncertainty that attends all measurements. As such, errors are not mistakes; you cannot eliminate them by being very careful. The best you can hope to do is to ensure that errors are as small as reasonably possible and to have a reliable estimate of how large they are. Most textbooks introduce additional definitions of error, and these are discussed later. For now, error is used exclusively in the sense of uncertainty, and the two words are used interchangeably.
1.2 Inevitability of Uncertainty
To illustrate the inevitable occurrence of uncertainties, we have only to examine any
everyday measurement carefully. Consider, for example, a carpenter who must mea-
sure the height of a doorway before installing a door. As a first rough measurement,
he might simply look at the doorway and estimate its height as 210 cm. This crude
"measurement" is certainly subject to uncertainty. If pressed, the carpenter might
express this uncertainty by admitting that the height could be anywhere between
205 cm and 215 cm.
If he wanted a more accurate measurement, he would use a tape measure and might find the height is 211.3 cm. This measurement is certainly more precise than his original estimate, but it is obviously still subject to some uncertainty, because it is impossible for him to know the height to be exactly 211.3000 cm rather than 211.3001 cm, for example.
This remaining uncertainty has many sources, several of which are discussed in this book. Some causes could be removed if the carpenter took enough trouble. For example, one source of uncertainty might be that poor lighting hampers reading of the tape; this problem could be corrected by improving the lighting.
On the other hand, some sources of uncertainty are intrinsic to the process of measurement and can never be removed entirely. For example, let us suppose the carpenter's tape is graduated in half-centimeters. The top of the door probably will not coincide precisely with one of the half-centimeter marks, and if it does not, the carpenter must estimate just where the top lies between two marks. Even if the top happens to coincide with one of the marks, the mark itself is perhaps a millimeter wide; so he must estimate just where the top lies within the mark. In either case, the carpenter ultimately must estimate where the top of the door lies relative to the markings on the tape, and this necessity causes some uncertainty in the measurement.
By buying a better tape with closer and finer markings, the carpenter can reduce his uncertainty but cannot eliminate it entirely. If he becomes obsessively determined to find the height of the door with the greatest precision technically possible, he could buy an expensive laser interferometer. But even the precision of an interferometer is limited to distances of the order of the wavelength of light (about 0.5 × 10^-6 meters). Although the carpenter would now be able to measure the height with fantastic precision, he still would not know the height of the doorway exactly.
Furthermore, as our carpenter strives for greater precision, he will encounter an important problem of principle. He will certainly find that the height is different in different places. Even in one place, he will find that the height varies if the temperature and humidity vary, or even if he accidentally rubs off a thin layer of dirt. In other words, he will find that there is no such thing as the height of the doorway. This kind of problem is called a problem of definition (the height of the door is not a well-defined quantity) and plays an important role in many scientific measurements.
Our carpenter's experiences illustrate a point generally found to be true, that is, that no physical quantity (a length, time, or temperature, for example) can be measured with complete certainty. With care, we may be able to reduce the uncertainties until they are extremely small, but to eliminate them entirely is impossible.
In everyday measurements, we do not usually bother to discuss uncertainties. Sometimes the uncertainties simply are not interesting. If we say that the distance between home and school is 3 miles, whether this means "somewhere between 2.5 and 3.5 miles" or "somewhere between 2.99 and 3.01 miles" is usually unimportant. Often the uncertainties are important but can be allowed for instinctively and without explicit consideration. When our carpenter fits his door, he must know its height with an uncertainty that is less than 1 mm or so. As long as the uncertainty is this small, the door will (for all practical purposes) be a perfect fit, and his concern with error analysis is at an end.
1.3 Importance of Knowing the Uncertainties
Our example of the carpenter measuring a doorway illustrates how uncertainties are always present in measurements. Let us now consider an example that illustrates more clearly the crucial importance of knowing how big these uncertainties are.
Suppose we are faced with a problem like the one said to have been solved by Archimedes. We are asked to find out whether a crown is made of 18-karat gold, as claimed, or a cheaper alloy. Following Archimedes, we decide to test the crown's density ρ, knowing that the densities of 18-karat gold and the suspected alloy are

ρ_gold = 15.5 gram/cm3

and

ρ_alloy = 13.8 gram/cm3.

If we can measure the density of the crown, we should be able (as Archimedes suggested) to decide whether the crown is really gold by comparing ρ with the known densities ρ_gold and ρ_alloy.

Suppose we summon two experts in the measurement of density. The first expert, George, might make a quick measurement of ρ and report that his best estimate for ρ is 15 and that it almost certainly lies between 13.5 and 16.5 gram/cm3. Our second expert, Martha, might take a little longer and then report a best estimate of 13.9 and a probable range from 13.7 to 14.1 gram/cm3. The findings of our two experts are summarized in Figure 1.1.
[Figure 1.1: a plot of density ρ (gram/cm3) on a vertical scale from 13 to 17, showing George's and Martha's best estimates with their error bars alongside the densities of 18-karat gold and the suspected alloy.]
Figure 1.1. Two measurements of the density of a supposedly gold crown. The two black dots show George's and Martha's best estimates for the density; the two vertical error bars show their margins of error, the ranges within which they believe the density probably lies. George's uncertainty is so large that both gold and the suspected alloy fall within his margins of error; therefore, his measurement does not determine which metal was used. Martha's uncertainty is appreciably smaller, and her measurement shows clearly that the crown is not made of gold.
The first point to notice about these results is that although Martha's measurement is much more precise, George's measurement is probably also correct. Each expert states a range within which he or she is confident ρ lies, and these ranges overlap; so it is perfectly possible (and even probable) that both statements are correct.
Note next that the uncertainty in George's measurement is so large that his results are of no use. The densities of 18-karat gold and of the alloy both lie within his range, from 13.5 to 16.5 gram/cm3; so no conclusion can be drawn from George's measurements. On the other hand, Martha's measurements indicate clearly that the crown is not genuine; the density of the suspected alloy, 13.8, lies comfortably inside Martha's estimated range of 13.7 to 14.1, but that of 18-karat gold, 15.5, is far outside it. Evidently, if the measurements are to allow a conclusion, the experimental uncertainties must not be too large. The uncertainties do not need to be extremely small, however. In this respect, our example is typical of many scientific measurements, for which uncertainties have to be reasonably small (perhaps a few percent of the measured value) but for which extreme precision is often unnecessary.
Because our decision hinges on Martha's claim that ρ lies between 13.7 and 14.1 gram/cm3, she must give us sufficient reason to believe her claim. In other words, she must justify her stated range of values. This point is often overlooked by beginning students, who simply assert their uncertainties but omit any justification. Without a brief explanation of how the uncertainty was estimated, the assertion is almost useless.
The most important point about our two experts' measurements is this: Like most scientific measurements, they would both have been useless if they had not included reliable statements of their uncertainties. In fact, if we knew only the two best estimates (15 for George and 13.9 for Martha), not only would we have been unable to draw a valid conclusion, but we could actually have been misled, because George's result (15) seems to suggest the crown is genuine.
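For readers who like to check such reasoning at a computer, the comparison in Figure 1.1 can be written out in a few lines. The following Python sketch is purely illustrative (the text itself assumes no more than a calculator), and the function name is my own invention:

```python
# Densities being compared, in gram/cm^3 (values from the text).
densities = {"18-karat gold": 15.5, "suspected alloy": 13.8}

def compatible(low, high):
    """Return the candidate metals whose density lies inside the range [low, high]."""
    return [name for name, rho in densities.items() if low <= rho <= high]

print("George (13.5 to 16.5):", compatible(13.5, 16.5))  # both candidates fit -> no conclusion
print("Martha (13.7 to 14.1):", compatible(13.7, 14.1))  # only the alloy fits -> not gold
```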
1.4 More Examples
The examples in the past two sections were chosen, not for their great importance, but to introduce some principal features of error analysis. Thus, you can be excused for thinking them a little contrived. It is easy, however, to think of examples of great importance in almost any branch of applied or basic science.
In the applied sciences, for example, the engineers designing a power plant must know the characteristics of the materials and fuels they plan to use. The manufacturer of a pocket calculator must know the properties of its various electronic components. In each case, somebody must measure the required parameters, and having measured them, must establish their reliability, which requires error analysis. Engineers concerned with the safety of airplanes, trains, or cars must understand the uncertainties in drivers' reaction times, in braking distances, and in a host of other variables; failure to carry out error analysis can lead to accidents such as that shown on the cover of this book. Even in a less scientific field, such as the manufacture of clothing, error analysis in the form of quality control plays a vital part.
In the basic sciences, error analysis has an even more fundamental role. When any new theory is proposed, it must be tested against older theories by means of one or more experiments for which the new and old theories predict different outcomes. In principle, a researcher simply performs the experiment and lets the outcome decide between the rival theories. In practice, however, the situation is complicated by the inevitable experimental uncertainties. These uncertainties must all be analyzed carefully and their effects reduced until the experiment singles out one acceptable theory. That is, the experimental results, with their uncertainties, must be consistent with the predictions of one theory and inconsistent with those of all known, reasonable alternatives. Obviously, the success of such a procedure depends critically on the scientist's understanding of error analysis and ability to convince others of this understanding.
A famous example of such a test of a scientific theory is the measurement of the bending of light as it passes near the sun. When Einstein published his general theory of relativity in 1916, he pointed out that the theory predicted that light from
a star would be bent through an angle α = 1.8" as it passes near the sun. The simplest classical theory would predict no bending (α = 0), and a more careful classical analysis would predict (as Einstein himself noted in 1911) bending through an angle α = 0.9". In principle, all that was necessary was to observe a star when it was aligned with the edge of the sun and to measure the angle of bending α. If the result were α = 1.8", general relativity would be vindicated (at least for this phenomenon); if α were found to be 0 or 0.9", general relativity would be wrong and one of the older theories right.
In practice, measuring the bending of light by the sun was extremely hard and was possible only during a solar eclipse. Nonetheless, in 1919 it was successfully measured by Dyson, Eddington, and Davidson, who reported their best estimate as α = 2", with 95% confidence that it lay between 1.7" and 2.3".[1] Obviously, this result was consistent with general relativity and inconsistent with either of the older predictions. Therefore, it gave strong support to Einstein's theory of general relativity.
At the time, this result was controversial. Many people suggested that the uncertainties had been badly underestimated and hence that the experiment was inconclusive. Subsequent experiments have tended to confirm Einstein's prediction and to vindicate the conclusion of Dyson, Eddington, and Davidson. The important point here is that the whole question hinged on the experimenters' ability to estimate reliably all their uncertainties and to convince everyone else they had done so.
Students in introductory physics laboratories are not usually able to conduct definitive tests of new theories. Often, however, they do perform experiments that test existing physical theories. For example, Newton's theory of gravity predicts that bodies fall with constant acceleration g (under the appropriate conditions), and students can conduct experiments to test whether this prediction is correct. At first, this kind of experiment may seem artificial and pointless because the theories have obvi-
[1] This simplified account is based on the original paper of F. W. Dyson, A. S. Eddington, and C. Davidson (Philosophical Transactions of the Royal Society, 220A, 1920, 291). I have converted the probable error originally quoted into the 95% confidence limits. The precise significance of such confidence limits will be established in Chapter 5.
ously been tested many times with much more precision than possible in a teaching laboratory. Nonetheless, if you understand the crucial role of error analysis and accept the challenge to make the most precise test possible with the available equipment, such experiments can be interesting and instructive exercises.
1.5 Estimating Uncertainties When Reading Scales
Thus far, we have considered several examples that illustrate why every measurement suffers from uncertainties and why their magnitude is important to know. We have not yet discussed how we can actually evaluate the magnitude of an uncertainty. Such evaluation can be fairly complicated and is the main topic of this book. Fortunately, reasonable estimates of the uncertainty of some simple measurements are easy to make, often using no more than common sense. Here and in Section 1.6, I discuss examples of such measurements. An understanding of these examples will allow you to begin using error analysis in your experiments and will form the basis for later discussions.
The first example is a measurement using a marked scale, such as the ruler in Figure 1.2 or the voltmeter in Figure 1.3. To measure the length of the pencil in
[Figure 1.2: a ruler graduated in millimeters, with labeled marks at 0, 10, 20, 30, 40, and 50, used to measure the length of a pencil.]
Figure 1.2. Measuring a length with a ruler.
Figure 1.2, we must first place the end of the pencil opposite the zero of the ruler and then decide where the tip comes to on the ruler's scale. To measure the voltage in Figure 1.3, we have to decide where the needle points on the voltmeter's scale. If we assume the ruler and voltmeter are reliable, then in each case the main prob-
[Figure 1.3: a voltmeter scale marked in volts, with labeled marks near 4, 5, and 6 and the needle pointing between two marks.]
Figure 1.3. A reading on a voltmeter.
lem is to decide where a certain point lies in relation to the scale markings. (Of course, if there is any possibility the ruler and voltmeter are not reliable, we will have to take this uncertainty into account as well.)
The markings of the ruler in Figure 1.2 are fairly close together (1 mm apart). We might reasonably decide that the length shown is undoubtedly closer to 36 mm than it is to 35 or 37 mm but that no more precise reading is possible. In this case, we would state our conclusion as
best estimate of length = 36 mm,
(1.1)
probable range: 35.5 to 36.5 mm
and would say that we have measured the length to the nearest millimeter. This type of conclusion-that the quantity lies closer to a given mark than to
either of its neighboring marks-is quite common. For this reason, many scientists
introduce the convention that the statement "l = 36 mm" without any qualification
is presumed to mean that l is closer to 36 than to 35 or 37; that is,
l = 36 mm
means
35.5 mm ≤ l ≤ 36.5 mm.
In the same way, an answer such as x = 1.27 without any stated uncertainty would be presumed to mean that x lies between 1.265 and 1.275. In this book, I do not use this convention but instead always indicate uncertainties explicitly. Nevertheless, you need to understand the convention and know that it applies to any number stated without an uncertainty, especially in this age of pocket calculators, which display many digits. If you unthinkingly copy a number such as 123.456 from your calculator without any qualification, then your reader is entitled to assume the number is definitely correct to six significant figures, which is very unlikely.
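To see what this convention implies for any particular number, the following short Python sketch (an illustration of mine, not part of the text) computes the implied range by taking half of the value's last displayed digit:

```python
from decimal import Decimal

def implied_range(stated: str):
    """Range implied by a value quoted with no explicit uncertainty (half of the last digit)."""
    x = Decimal(stated)
    half_last_digit = Decimal(1).scaleb(x.as_tuple().exponent) / 2
    return float(x - half_last_digit), float(x + half_last_digit)

print(implied_range("36"))       # (35.5, 36.5): "l = 36 mm" means 35.5 mm <= l <= 36.5 mm
print(implied_range("1.27"))     # (1.265, 1.275)
print(implied_range("123.456"))  # (123.4555, 123.4565): a claim of six significant figures
```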
The markings on the voltmeter shown in Figure 1.3 are more widely spaced than those on the ruler. Here, most observers would agree that you can do better than simply identify the mark to which the pointer is closest. Because the spacing is larger, you can realistically estimate where the pointer lies in the space between two marks. Thus, a reasonable conclusion for the voltage shown might be
best estimate of voltage = 5.3 volts,
(1.2)
probable range: 5.2 to 5.4 volts.
The process of estimating positions between the scale markings is called interpolation. It is an important technique that can be improved with practice.
Different observers might not agree with the precise estimates given in Equations (1.1) and (1.2). You might well decide that you could interpolate for the length in Figure 1.2 and measure it with a smaller uncertainty than that given in Equation (1.1). Nevertheless, few people would deny that Equations (1.1) and (1.2) are reasonable estimates of the quantities concerned and of their probable uncertainties. Thus, we see that approximate estimation of uncertainties is fairly easy when the only problem is to locate a point on a marked scale.
1.6 Estimating Uncertainties in Repeatable Measurements
Many measurements involve uncertainties that are much harder to estimate than those connected with locating points on a scale. For example, when we measure a time interval using a stopwatch, the main source of uncertainty is not the difficulty of reading the dial but our own unknown reaction time in starting and stopping the watch. Sometimes these kinds of uncertainty can be estimated reliably, if we can repeat the measurement several times. Suppose, for example, we time the period of a pendulum once and get an answer of 2.3 seconds. From one measurement, we can't say much about the experimental uncertainty. But if we repeat the measurement and get 2.4 seconds, then we can immediately say that the uncertainty is probably of the order of 0.1 s. If a sequence of four timings gives the results (in seconds),
2.3, 2.4, 2.5, 2.4,
(1.3)
then we can begin to make some fairly realistic estimates. First, a natural assumption is that the best estimate of the period is the average[2]
value, 2.4 s. Second, another reasonably safe assumption is that the correct period lies be-
tween the lowest value, 2.3, and the highest, 2.5. Thus, we might reasonably conclude that
best estimate = average = 2.4 s,
(1.4)
probable range: 2.3 to 2.5 s.
Whenever you can repeat the same measurement several times, the spread in your measured values gives a valuable indication of the uncertainty in your measurements. In Chapters 4 and 5, I discuss statistical methods for treating such repeated measurements. Under the right conditions, these statistical methods give a more accurate estimate of uncertainty than we have found in Equation (1.4) using just common sense. A proper statistical treatment also has the advantage of giving an objective value for the uncertainty, independent of the observer's individual judgment.[3] Nevertheless, the estimate in statement (1.4) represents a simple, realistic conclusion to draw from the four measurements in (1.3).
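If you prefer to let a computer do the arithmetic, the common-sense estimate (1.4) can be reproduced in a few lines. This Python sketch is my own illustration (the text assumes only a calculator); the statistical refinements mentioned above are left to Chapters 4 and 5:

```python
timings = [2.3, 2.4, 2.5, 2.4]  # the four measured periods (1.3), in seconds

best = sum(timings) / len(timings)       # best estimate = average = 2.4 s
low, high = min(timings), max(timings)   # spread of the measured values
half_spread = (high - low) / 2           # a rough, common-sense uncertainty

print(f"best estimate = {best:.1f} s")
print(f"probable range: {low} to {high} s (roughly {best:.1f} +/- {half_spread:.1f} s)")
```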
Repeated measurements such as those in (1.3) cannot always be relied on to reveal the uncertainties. First, we must be sure that the quantity measured is really the same quantity each time. Suppose, for example, we measure the breaking strength of two supposedly identical wires by breaking them (something we can't do more than once with each wire). If we get two different answers, this difference may indicate that our measurements were uncertain or that the two wires were not really identical. By itself, the difference between the two answers sheds no light on the reliability of our measurements.
[2] I will prove in Chapter 5 that the best estimate based on several measurements of a quantity is almost always the average of the measurements.
[3] Also, a proper statistical treatment usually gives a smaller uncertainty than the full range from the lowest to the highest observed value. Thus, upon looking at the four timings in (1.3), we have judged that the period is "probably" somewhere between 2.3 and 2.5 s. The statistical methods of Chapters 4 and 5 let us state with 70% confidence that the period lies in the smaller range of 2.36 to 2.44 s.
Even when we can be sure we are measuring the same quantity each time, repeated measurements do not always reveal uncertainties. For example, suppose the clock used for the timings in (1.3) was running consistently 5% fast. Then, all timings made with it will be 5% too long, and no amount of repeating (with the same clock) will reveal this deficiency. Errors of this sort, which affect all measurements in the same way, are called systematic errors and can be hard to detect, as discussed in Chapter 4. In this example, the remedy is to check the clock against a more reliable one. More generally, if the reliability of any measuring device is in doubt, it should clearly be checked against a device known to be more reliable.
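To see why repetition cannot reveal this kind of error, consider a small simulated example (my own, not the text's): if every reading is scaled by the same factor, the average is shifted but the spread looks perfectly normal.

```python
true_period = 2.40   # seconds, chosen for the sake of illustration
clock_factor = 1.05  # a clock running consistently 5% fast
readings = [round(true_period * clock_factor + noise, 2) for noise in (-0.05, 0.0, 0.05, 0.0)]

print(readings)                                           # e.g. [2.47, 2.52, 2.57, 2.52]
print(f"average = {sum(readings) / len(readings):.2f} s") # 5% too long (true value 2.40 s)
print(f"spread  = {max(readings) - min(readings):.2f} s") # looks like an ordinary 0.1 s spread
```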
The examples discussed in this and the previous section show that experimental uncertainties sometimes can be estimated easily. On the other hand, many measurements have uncertainties that are not so easily evaluated. Also, we ultimately want more precise values for the uncertainties than the simple estimates just discussed. These topics will occupy us from Chapter 3 onward. In Chapter 2, I assume temporarily that you know how to estimate the uncertainties in all quantities of interest, so that we can discuss how the uncertainties are best reported and how they are used in drawing an experimental conclusion.
Chapter 2
How to Report and Use Uncertainties
Having read Chapter 1, you should now have some idea of the importance of experimental uncertainties and how they arise. You should also understand how uncertainties can be estimated in a few simple situations. In this chapter, you will learn some basic notations and rules of error analysis and study examples of their use in typical experiments in a physics laboratory. The aim is to familiarize you with the basic vocabulary of error analysis and its use in the introductory laboratory. Chapter 3 begins a systematic study of how uncertainties are actually evaluated.
Sections 2.1 to 2.3 define several basic concepts in error analysis and discuss general rules for stating uncertainties. Sections 2.4 to 2.6 discuss how these ideas could be used in typical experiments in an introductory physics laboratory. Finally, Sections 2.7 to 2.9 introduce fractional uncertainty and discuss its significance.
2.1 Best Estimate ± Uncertainty
We have seen that the correct way to state the result of measurement is to give a best estimate of the quantity and the range within which you are confident the quantity lies. For example, the result of the timings discussed in Section 1.6 was reported as
best estimate of time = 2.4 s,
(2.1)
probable range: 2.3 to 2.5 s.
Here, the best estimate, 2.4 s, lies at the midpoint of the estimated range of probable values, 2.3 to 2.5 s, as it has in all the examples. This relationship is obviously natural and pertains in most measurements. It allows the results of the measurement to be expressed in compact form. For example, the measurement of the time recorded in (2.1) is usually stated as follows:
measured value of time = 2.4 ± 0.1 s.
(2.2)
This single equation is equivalent to the two statements in (2.1). In general, the result of any measurement of a quantity x is stated as
(measured value of x) = x_best ± δx. (2.3)
This statement means, first, that the experimenter's best estimate for the quantity concerned is the number x_best, and second, that he or she is reasonably confident the quantity lies somewhere between x_best - δx and x_best + δx. The number δx is called the uncertainty, or error, or margin of error in the measurement of x. For convenience, the uncertainty δx is always defined to be positive, so that x_best + δx is always the highest probable value of the measured quantity and x_best - δx the lowest.

I have intentionally left the meaning of the range x_best - δx to x_best + δx somewhat vague, but it can sometimes be made more precise. In a simple measurement such as that of the height of a doorway, we can easily state a range x_best - δx to x_best + δx within which we are absolutely certain the measured quantity lies. Unfortunately, in most scientific measurements, such a statement is hard to make. In particular, to be completely certain that the measured quantity lies between x_best - δx and x_best + δx, we usually have to choose a value for δx that is too large to be useful. To avoid this situation, we can sometimes choose a value for δx that lets us state with a certain percent confidence that the actual quantity lies within the range x_best ± δx. For instance, the public opinion polls conducted during elections are traditionally stated with margins of error that represent 95% confidence limits. The statement that 60% of the electorate favor Candidate A, with a margin of error of 3 percentage points (60 ± 3), means that the pollsters are 95% confident that the percent of voters favoring Candidate A is between 57 and 63; in other words, after many elections, we should expect the correct answer to have been inside the stated margins of error 95% of the times and outside these margins only 5% of the times.

Obviously, we cannot state a percent confidence in our margins of error until we understand the statistical laws that govern the process of measurement. I return to this point in Chapter 4. For now, let us be content with defining the uncertainty δx so that we are "reasonably certain" the measured quantity lies between x_best - δx and x_best + δx.
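As a small illustration (mine, not the text's), the conversion from a stated probable range to the standard form (2.3) is just a matter of taking the midpoint and the half-width:

```python
def to_standard_form(low, high):
    """Convert a probable range [low, high] into (x_best, dx) with x_best at the midpoint."""
    x_best = (low + high) / 2
    dx = (high - low) / 2  # dx is defined to be positive
    return x_best, dx

x_best, dx = to_standard_form(2.3, 2.5)
print(f"measured value of time = {x_best:.1f} +/- {dx:.1f} s")  # 2.4 +/- 0.1 s, as in (2.2)
```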
Quick Check[1] 2.1. (a) A student measures the length of a simple pendulum and reports his best estimate as 110 mm and the range in which the length probably lies as 108 to 112 mm. Rewrite this result in the standard form (2.3).
(b) If another student reports her measurement of a current as I = 3.05 ± 0.03 amps, what is the range within which I probably lies?
2.2 Significant Figures
Several basic rules for stating uncertainties are worth emphasizing. First, because the quantity δx is an estimate of an uncertainty, obviously it should not be stated
[1] These "Quick Checks" appear at intervals through the text to give you a chance to check your understanding of the concept just introduced. They are straightforward exercises, and many can be done in your head. I urge you to take a moment to make sure you can do them; if you cannot, you should reread the preceding few paragraphs.
with too much precision. If we measure the acceleration of gravity g, it would be absurd to state a result like
(measured g) = 9.82 ± 0.02385 m/s2.
(2.4)
The uncertainty in the measurement cannot conceivably be known to four significant figures. In high-precision work, uncertainties are sometimes stated with two significant figures, but for our purposes we can state the following rule:
Rule for Stating Uncertainties: Experimental uncertainties should almost always be rounded to one significant figure. (2.5)
Thus, if some calculation yields the uncertainty δg = 0.02385 m/s2, this answer should be rounded to δg = 0.02 m/s2, and the conclusion (2.4) should be rewritten
as
(measured g) = 9.82 ± 0.02 m/s2.
(2.6)
An important practical consequence of this rule is that many error calculations can be carried out mentally without using a calculator or even pencil and paper.
The rule (2.5) has only one significant exception. If the leading digit in the uncertainty δx is a 1, then keeping two significant figures in δx may be better. For example, suppose that some calculation gave the uncertainty δx = 0.14. Rounding this number to δx = 0.1 would be a substantial proportionate reduction, so we could argue that retaining two figures might be less misleading, and quote δx = 0.14. The
same argument could perhaps be applied if the leading digit is a 2 but certainly not if it is any larger.
Once the uncertainty in a measurement has been estimated, the significant fig-
ures in the measured value must be considered. A statement such as
measured speed = 6051.78 ± 30 m/s
(2.7)
is obviously ridiculous. The uncertainty of 30 means that the digit 5 might really be as small as 2 or as large as 8. Clearly the trailing digits 1, 7, and 8 have no significance at all and should be rounded. That is, the correct statement of (2.7) is
measured speed = 6050 ± 30 m/s.
(2.8)
The general rule is this:
Rule for Stating Answers: The last significant figure in any stated answer should usually be of the same order of magnitude (in the same decimal position) as the uncertainty. (2.9)
For example, the answer 92.81 with an uncertainty of 0.3 should be rounded as
92.8 ± 0.3.
If its uncertainty is 3, then the same answer should be rounded as
93 ± 3,
and if the uncertainty is 30, then the answer should be
90 ± 30.
An important qualification to rules (2.5) and (2.9) is as follows: To reduce inaccuracies caused by rounding, any numbers to be used in subsequent calculations should normally retain at least one significant figure more than is finally justified. At the end of the calculations, the final answer should be rounded to remove these extra, insignificant figures. An electronic calculator will happily carry numbers with far more digits than are likely to be significant in any calculation you make in a laboratory. Obviously, these numbers do not need to be rounded in the middle of a calculation but certainly must be rounded appropriately for the final answers.[2]
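For those working in a spreadsheet or a programming language, rules (2.5) and (2.9) are easy to automate. The Python helper below is a sketch of my own; it ignores the "leading digit 1" exception and other borderline cases, and simply rounds the uncertainty to one significant figure and the value to the same decimal position:

```python
import math

def round_measurement(value, uncertainty):
    """Round uncertainty to one significant figure and value to the same decimal position."""
    exponent = math.floor(math.log10(abs(uncertainty)))  # decimal position of the leading digit
    dx = round(uncertainty, -exponent)
    x = round(value, -exponent)
    return x, dx

print(round_measurement(9.82, 0.02385))  # (9.82, 0.02), as in (2.6)
print(round_measurement(6051.78, 30.0))  # (6050.0, 30.0), as in (2.8)
print(round_measurement(92.81, 0.3))     # (92.8, 0.3)
```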
Note that the uncertainty in any measured quantity has the same dimensions as the measured quantity itself. Therefore, writing the units (m/s2, cm3, etc.) after both the answer and the uncertainty is clearer and more economical, as in Equations (2.6) and (2.8). By the same token, if a measured number is so large or small that it calls for scientific notation (the use of the form 3 × 10^3 instead of 3,000, for example), then it is simpler and clearer to put the answer and uncertainty in the same form. For example, the result

measured charge = (1.61 ± 0.05) × 10^-19 coulombs

is much easier to read and understand in this form than it would be in the form

measured charge = 1.61 × 10^-19 ± 5 × 10^-21 coulombs.
Quick Check 2.2. Rewrite each of the following measurements in its most
appropriate form:
(a) v = 8.123456 ± 0.0312 m/s
(b) x = 3.1234 × 10^4 ± 2 m
(c) m = 5.6789 × 10^-7 ± 3 × 10^-9 kg.
2.3 Discrepancy
Before I address the question of how to use uncertainties in experimental reports, a few important terms should be introduced and defined. First, if two measurements
[2] Rule (2.9) has one more small exception. If the leading digit in the uncertainty is small (a 1 or, perhaps, a 2), retaining one extra digit in the final answer may be appropriate. For example, an answer such as 3.6 ± 1 is quite acceptable because one could argue that rounding it to 4 ± 1 would waste information.
of the same quantity disagree, we say there is a discrepancy. Numerically, we define the discrepancy between two measurements as their difference:
discrepancy = difference between two measured values of the same quantity. (2.10)
More specifically, each of the two measurements consists of a best estimate and an uncertainty, and we define the discrepancy as the difference between the two best estimates. For example, if two students measure the same resistance as follows
Student A: 15 ± 1 ohms
and
Student B: 25 ± 2 ohms,
their discrepancy is
discrepancy = 25 - 15 = 10 ohms.
Recognize that a discrepancy may or may not be significant. The two measurements just discussed are illustrated in Figure 2.l(a), which shows clearly that the discrepancy of 10 ohms is significant because no single value of the resistance is compatible with both measurements. Obviously, at least one measurement is incorrect, and some careful checking is needed to find out what went wrong.
[Figure 2.1: two panels, (a) and (b), each plotting resistance (ohms) on a vertical scale from 0 to 30. Panel (a) shows the measurements of Students A and B, panel (b) those of Students C and D; in each panel the discrepancy between the two best estimates is 10 ohms.]
Figure 2.1. (a) Two measurements of the same resistance. Each measurement includes a best estimate, shown by a black dot, and a range of probable values, shown by a vertical error bar. The discrepancy (difference between the two best estimates) is 10 ohms and is significant because it is much larger than the combined uncertainty in the two measurements. Almost certainly, at least one of the experimenters made a mistake. (b) Two different measurements of the same resistance. The discrepancy is again 10 ohms, but in this case it is insignificant because the stated margins of error overlap. There is no reason to doubt either measurement (although they could be criticized for being rather imprecise).
Suppose, on the other hand, two other students had reported these results:
Student C: 16 ± 8 ohms
and
Student D: 26 ± 9 ohms.
Here again, the discrepancy is 10 ohms, but in this case the discrepancy is insignificant because, as shown in Figure 2.l(b), the two students' margins of error overlap comfortably and both measurements could well be correct. The discrepancy between two measurements of the same quantity should be assessed not just by its size but, more importantly, by how big it is compared with the uncertainties in the measurements.
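The comparison suggested by Figure 2.1 can be written out explicitly. The sketch below is my own illustration, using the simple criterion that a discrepancy is worrying when it exceeds the sum of the two stated uncertainties; it merely reproduces the two cases just discussed:

```python
def assess_discrepancy(x1, dx1, x2, dx2):
    """Return the discrepancy and whether it exceeds the combined stated uncertainties."""
    discrepancy = abs(x2 - x1)
    return discrepancy, discrepancy > (dx1 + dx2)

print(assess_discrepancy(15, 1, 25, 2))  # (10, True): Students A and B, significant
print(assess_discrepancy(16, 8, 26, 9))  # (10, False): Students C and D, insignificant
```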
In the teaching laboratory, you may be asked to measure a quantity that has been measured carefully many times before, and for which an accurate accepted value is known and published, for example, the electron's charge or the universal gas constant. This accepted value is not exact, of course; it is the result of measurements and, like all measurements, has some uncertainty. Nonetheless, in many cases the accepted value is much more accurate than you could possibly achieve yourself. For example, the currently accepted value of the universal gas constant R is
(accepted R) = 8.31451 ± 0.00007 J/(mol·K).
(2.11)
As expected, this value is uncertain, but the uncertainty is extremely small by the standards of most teaching laboratories. Thus, when you compare your measured value of such a constant with the accepted value, you can usually treat the accepted value as exact.3
Although many experiments call for measurement of a quantity whose accepted value is known, few require measurement of a quantity whose true value is known.4 In fact, the true value of a measured quantity can almost never be known exactly and is, in fact, hard to define. Nevertheless, discussing the difference between a measured value and the corresponding true value is sometimes useful. Some authors call this difference the true error.
2.4 Comparison of Measured and Accepted Values
Performing an experiment without drawing some sort of conclusion has little merit. A few experiments may have mainly qualitative results-the appearance of an interference pattern on a ripple tank or the color of light transmitted by some optical system-but the vast majority of experiments lead to quantitative conclusions, that is, to a statement of numerical results. It is important to recognize that the statement of a single measured number is completely uninteresting. Statements that the density
3 This is not always so. For example, if you look up the refractive index of glass, you find values ranging from 1.5 to 1.9, depending on the composition of the glass. In an experiment to measure the refractive index of a piece of glass whose composition is unknown, the accepted value is therefore no more than a rough guide to the expected answer.
4 Here is an example: If you measure the ratio of a circle's circumference to its diameter, the true answer is exactly π. (Obviously such an experiment is rather contrived.)
[Figure 2.2: plot of speed of sound (m/s), roughly 320 to 340, showing the three measurements with error bars and a dashed line at the accepted value; see caption below.]
Figure 2.2. Three measurements of the speed of sound at standard temperature and pressure. Because the accepted value (331 m/s) is within Student A's margins of error, her result is satisfactory. The accepted value is just outside Student B's margin of error, but his measurement is nevertheless acceptable. The accepted value is far outside Student C's stated margins, and his measurement is definitely unsatisfactory.
of some metal was measured as 9.3 ± 0.2 gram/cm3 or that the momentum of a cart was measured as 0.051 ± 0.004 kg·m/s are, by themselves, of no interest. An
interesting conclusion must compare two or more numbers: a measurement with the accepted value, a measurement with a theoretically predicted value, or several measurements, to show that they are related to one another in accordance with some physical law. It is in such comparison of numbers that error analysis is so important. This and the next two sections discuss three typical experiments to illustrate how the estimated uncertainties are used to draw a conclusion.
Perhaps the simplest type of experiment is a measurement of a quantity whose accepted value is known. As discussed, this exercise is a somewhat artificial experiment peculiar to the teaching laboratory. The procedure is to measure the quantity, estimate the experimental uncertainty, and compare these values with the accepted value. Thus, in an experiment to measure the speed of sound in air (at standard temperature and pressure), Student A might arrive at the conclusion
A's measured speed = 329 ± 5 m/s,
(2.12)
compared with the
accepted speed = 331 m/s.
(2.13)
Student A might choose to display this result graphically as in Figure 2.2. She should certainly include in her report both Equations (2.12) and (2.13) next to each other, so her readers can clearly appreciate her result. She should probably add an explicit statement that because the accepted value lies inside her margins of error, her measurement seems satisfactory.
The meaning of the uncertainty δx is that the correct value of x probably lies
between xbest - δx and xbest + δx; it is certainly possible that the correct value lies
slightly outside this range. Therefore, a measurement can be regarded as satisfactory even if the accepted value lies slightly outside the estimated range of the measured
value. For example, if Student B found the value
B's measured speed = 325 ± 5 m/s,
he could certainly claim that his measurement is consistent with the accepted value of 331 m/s.
On the other hand, if the accepted value is well outside the margins of error (the discrepancy is appreciably more than twice the uncertainty, say), there is reason to think something has gone wrong. For example, suppose the unlucky Student C finds
C's measured speed = 345 ± 2 m/s
(2.14)
compared with the
accepted speed = 331 m/s.
(2.15)
Student C's discrepancy is 14 m/s, which is seven times bigger than his stated uncertainty (see Figure 2.2). He will need to check his measurements and calculations to find out what has gone wrong.
Unfortunately, the tracing of C's mistake may be a tedious business because of the numerous possibilities. He may have made a mistake in the measurements or calculations that led to the answer 345 m/s. He may have estimated his uncertainty incorrectly. (The answer 345 ± 15 m/s would have been acceptable.) He also might be comparing his measurement with the wrong accepted value. For example, the accepted value 331 m/s is the speed of sound at standard temperature and pressure. Because standard temperature is 0°C, there is a good chance the measured speed in (2.14) was not taken at standard temperature. In fact, if the measurement was made at 20°C (that is, normal room temperature), the correct accepted value for the speed of sound is 343 m/s, and the measurement would be entirely acceptable.
Finally, and perhaps most likely, a discrepancy such as that between (2.14) and (2.15) may indicate some undetected source of systematic error (such as a clock that runs consistently slow, as discussed in Chapter 1). Detection of such systematic errors (ones that consistently push the result in one direction) requires careful checking of the calibration of all instruments and detailed review of all procedures.
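For readers who like to check such comparisons numerically, here is a minimal Python sketch (function name mine); it expresses each student's discrepancy from the accepted speed of sound in units of the stated uncertainty, which makes the factor of seven for Student C obvious at a glance:

```python
def compare_with_accepted(measured, uncertainty, accepted):
    """Return the discrepancy and its size in units of the stated uncertainty;
    a ratio much larger than about 2 suggests something has gone wrong."""
    discrepancy = measured - accepted
    return discrepancy, abs(discrepancy) / uncertainty

accepted_speed = 331  # m/s at standard temperature and pressure
for name, speed, dspeed in [("A", 329, 5), ("B", 325, 5), ("C", 345, 2)]:
    d, ratio = compare_with_accepted(speed, dspeed, accepted_speed)
    print(f"Student {name}: discrepancy = {d:+} m/s = {ratio:.1f} x uncertainty")
# Student A: 0.4x (satisfactory); B: 1.2x (acceptable); C: 7.0x (needs checking)
```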
2.5 Comparison of Two Measured Numbers
Many experiments involve measuring two numbers that theory predicts should be equal. For example, the law of conservation of momentum states that the total momentum of an isolated system is constant. To test it, we might perform a series of experiments with two carts that collide as they move along a frictionless track. We could measure the total momentum of the two carts before (p) and after (q) they
collide and check whether p = q within experimental uncertainties. For a single pair
of measurements, our results could be
initial momentum p = 1.49 ± 0.03 kg·m/s
and
final momentum q = 1.56 ± 0.06 kg·m/s.
[Figure 2.3: plot of momentum (kg·m/s), roughly 1.4 to 1.6, showing p and q with their error bars; see caption below.]
Figure 2.3. Measured values of the total momentum of two carts before (p) and after (q) a collision. Because the margins of error for p and q overlap, these measurements are certainly consistent with conservation of momentum (which implies that p and q should be equal).
Here, the range in which p probably lies (1.46 to 1.52) overlaps the range in which q probably lies (1.50 to 1.62). (See Figure 2.3.) Therefore, these measurements are consistent with conservation of momentum. If, on the other hand, the two probable ranges were not even close to overlapping, the measurements would be inconsistent with conservation of momentum, and we would have to check for mistakes in our measurements or calculations, for possible systematic errors, and for the possibility that some external forces (such as gravity or friction) are causing the momentum of the system to change.
If we repeat similar pairs of measurements several times, what is the best way to display our results? First, using a table to record a sequence of similar measurements is usually better than listing the results as several distinct statements. Second, the uncertainties often differ little from one measurement to the next. For example, we might convince ourselves that the uncertainties in all measurements of the initial momentum p are about δp = 0.03 kg·m/s and that the uncertainties in the final q are all about δq = 0.06 kg·m/s. If so, a good way to display our measurements would be as shown in Table 2.1.
Table 2.1. Measured momenta (kg·m/s).
Trial number    Initial momentum p (all ±0.03)    Final momentum q (all ±0.06)
1               1.49                              1.56
2               3.10                              3.12
3               2.16                              2.05
etc.
For each pair of measurements, the probable range of values for p overlaps (or nearly overlaps) the range of values for q. If this overlap continues for all measurements, our results can be pronounced consistent with conservation of momentum. Note that our experiment does not prove conservation of momentum; no experiment can. The best you can hope for is to conduct many more trials with progressively
[Figure 2.4: plot of the difference p - q (kg·m/s) for trials 1 to 3, with error bars, against the expected value zero; see caption below.]
Figure 2.4. Three trials in a test of the conservation of momentum. The student has measured the total momentum of two carts before and after they collide (p and q, respectively). If momentum is conserved, the differences p - q should all be zero. The plot shows the value of p - q with its error bar for each trial. The expected value 0 is inside the margins of error in trials 1 and 2 and only slightly outside in trial 3. Therefore, these results are consistent with the conservation of momentum.
Whether our results are consistent with conservation of momentum can now be seen at a glance by checking whether the numbers in the final column are consistent with zero (that is, are less than, or comparable with, the uncertainty 0.09). Alternatively, and perhaps even better, we could plot the results as in Figure 2.4 and check visually. Yet another way to achieve the same effect would be to calculate the ratios
q/p, which should all be consistent with the expected value q/p = 1. (Here, we would need to calculate the uncertainty in q/p, a problem discussed in Chapter 3.)
Our discussion of the uncertainty in p - q applies to the difference of any two
measured numbers. If we had measured any two numbers x and y and used our measured values to compute the difference x - y, by the argument just given, the resulting uncertainty in the difference would be the sum of the separate uncertainties in x and y. We have, therefore, established the following provisional rule:
Uncertainty in a Difference (Provisional Rule)
If two quantities x and y are measured with uncertainties δx and δy, and if the measured values x and y are used to calculate the difference q = x - y, then the uncertainty in q is the sum of the uncertainties in x and y:
δq = δx + δy.    (2.18)
I call this rule "provisional" because we will find in Chapter 3 that the uncertainty
in the quantity q = x - y is often somewhat smaller than that given by Equation
(2.18). Thus, we will be replacing the provisional rule (2.18) by an "improved"
rule-in which the uncertainty in q = x - y is given by the so-called quadratic
sum of δx and δy, as defined in Equation (3.13). Because this improved rule gives a somewhat smaller uncertainty for q, you will want to use it when appropriate. For now, however, let us be content with the provisional rule (2.18) for three reasons: (1) The rule (2.18) is easy to understand-much more so than the improved rule of Chapter 3. (2) In most cases, the difference between the two rules is small. (3) The
rule (2.18) always gives an upper bound on the uncertainty in q = x - y; thus, we
know at least that the uncertainty in x - y is never worse than the answer given in (2.18).
The result (2.18) is the first in a series of rules for the propagation of errors. To calculate a quantity q in terms of measured quantities x and y, we need to know how the uncertainties in x and y "propagate" to cause uncertainty in q. A complete discussion of error propagation appears in Chapter 3.
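As a small illustration, the provisional rule (2.18) amounts to a one-line calculation. The Python sketch below (function name mine) applies it to the pair of momentum measurements quoted at the start of this section:

```python
def difference_with_uncertainty(x, dx, y, dy):
    """Provisional rule (2.18): for q = x - y, take the uncertainty in q to be
    the sum of the separate uncertainties (an upper bound, per the text)."""
    return x - y, dx + dy

p, dp = 1.49, 0.03   # initial momentum, kg.m/s
q, dq = 1.56, 0.06   # final momentum, kg.m/s
diff, ddiff = difference_with_uncertainty(p, dp, q, dq)
print(f"p - q = {diff:.2f} +/- {ddiff:.2f} kg.m/s")   # -0.07 +/- 0.09, consistent with zero
```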
Quick Check 2.3. In an experiment to measure the latent heat of ice, a student adds a chunk of ice to water in a styrofoam cup and observes the change in temperature as the ice melts. To determine the mass of ice added, she weighs the cup of water before and after she adds the ice and then takes the difference. If her two measurements were
(mass of cup & water) = m1 = 203 ± 2 grams
and
(mass of cup, water, & ice) = m2 = 246 ± 3 grams,
find her answer for the mass of ice, m2 - m1, with its uncertainty, as given by the provisional rule (2.18).
2.6 Checking Relationships with a Graph
Many physical laws imply that one quantity should be proportional to another. For example, Hooke's law states that the extension of a spring is proportional to the force stretching it, and Newton's law says that the acceleration of a body is proportional to the total applied force. Many experiments in a teaching laboratory are designed to check this kind of proportionality.
If one quantity y is proportional to some other quantity x, a graph of y against x is a straight line through the origin. Thus, to test whether y is proportional to x, you can plot the measured values of y against those of x and note whether the resulting points do lie on a straight line through the origin. Because a straight line is so easily recognizable, this method is a simple, effective way to check for proportionality.
To illustrate this use of graphs, let us imagine an experiment to test Hooke's
law. This law, usually written as F = kx, asserts that the extension x of a spring is proportional to the force F stretching it, so x = F/k, where k is the "force constant"
of the spring. A simple way to test this law is to hang the spring vertically and suspend various masses m from it. Here, the force F is the weight mg of the load; so the extension should be
x = mg/k = (g/k)m.    (2.19)
The extension x should be proportional to the load m, and a graph of x against m should be a straight line through the origin.
If we measure x for a variety of different loads m and plot our measured values of x and m, the resulting points almost certainly will not lie exactly on a straight line. Suppose, for example, we measure the extension x for eight different loads m and get the results shown in Table 2.3. These values are plotted in Figure 2.5(a),
Table 2.3. Load and extension.
Load m (grams) (δm negligible):    200    300    400    500    600    700    800    900
Extension x (cm) (all ±0.3):       1.1    1.5    1.9    2.8    3.4    3.5    4.6    5.4
which also shows a possible straight line that passes through the origin and is reasonably close to all eight points. As we should have expected, the eight points do not lie exactly on any line. The question is whether this result stems from experimental uncertainties (as we would hope), from mistakes we have made, or even from the possibility the extension x is not proportional to m. To answer this question, we must consider our uncertainties.
As usual, the measured quantities, extensions x and masses m, are subject to uncertainty. For simplicity, let us suppose that the masses used are known very accurately, so that the uncertainty in m is negligible. Suppose, on the other hand, that all measurements of x have an uncertainty of approximately 0.3 cm (as indicated in Table 2.3). For a load of 200 grams, for example, the extension would
probably be in the range 1.1 ± 0.3 cm. Our first experimental point on the graph thus lies on the vertical line m = 200 grams, somewhere between x = 0.8 and x = 1.4 cm. This range is indicated in Figure 2.5(b), which shows an error bar
through each point to indicate the range in which it probably lies. Obviously, we should expect to find a straight line that goes through the origin and passes through or close to all the error bars. Figure 2.5(b) has such a line, so we conclude that the data on which Figure 2.5(b) is based are consistent with x being proportional to m.
We saw in Equation (2.19) that the slope of the graph of x against m is g/k. By measuring the slope of the line in Figure 2.5(b), we can therefore find the constant k of the spring. By drawing the steepest and least steep lines that fit the data reasonably well, we could also find the uncertainty in this value for k. (See Problem 2.18.)
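If you prefer to produce such a plot by computer rather than by hand, a sketch along the following lines will serve. It assumes the numpy and matplotlib libraries are available, uses the data of Table 2.3, and draws a least-squares line through the origin simply as one convenient way of choosing a line (drawing the steepest and least steep acceptable lines by eye, as suggested above, remains the way to estimate the uncertainty in the slope):

```python
import numpy as np
import matplotlib.pyplot as plt

# Data of Table 2.3: load m (grams, delta-m negligible), extension x (cm, all +/- 0.3)
m = np.array([200, 300, 400, 500, 600, 700, 800, 900])
x = np.array([1.1, 1.5, 1.9, 2.8, 3.4, 3.5, 4.6, 5.4])
dx = 0.3

# Least-squares slope for a line through the origin, x = slope * m (slope should equal g/k)
slope = np.sum(m * x) / np.sum(m * m)

plt.errorbar(m, x, yerr=dx, fmt='o', capsize=3, label='measured extensions')
plt.plot([0, 1000], [0, slope * 1000], label=f'line through origin, slope = {slope:.4f} cm/g')
plt.xlabel('load m (grams)')
plt.ylabel('extension x (cm)')
plt.legend()
plt.show()
```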
If the best straight line misses a high proportion of the error bars or if it misses any by a large distance (compared with the length of the error bars), our results
[Figure 2.5: three plots of extension x (cm) against load m (grams), panels (a), (b), and (c); see caption below.]
Figure 2.5. Three plots of extension x of a spring against the load m. (a) The data of Table 2.3 without error bars. (b) The same data with error bars to show the uncertainties in x. (The uncertainties in m are assumed to be negligible.) These data are consistent with the expected proportionality of x and m. (c) A different set of data, which are inconsistent with x being proportional to m.
would be inconsistent with x being proportional to m. This situation is illustrated in Figure 2.5(c). With the results shown there, we would have to recheck our measurements and calculations (including the calculation of the uncertainties) and consider whether x is not proportional to m for some reason. [In Figure 2.5(c), for instance, the first five points can be fitted to a straight line through the origin. This situation suggests that x may be proportional to m up to approximately 600 grams, but that Hooke's law breaks down at that point and the spring starts to stretch more rapidly.]
Thus far, we have supposed that the uncertainty in the mass (which is plotted along the horizontal axis) is negligible and that the only uncertainties are in x, as shown by the vertical error bars. If both x and m are subject to appreciable uncertainties, the simplest way to display them is to draw vertical and horizontal error bars, whose lengths show the uncertainties in x and m respectively, as in Figure 2.6.
Figure 2.6. Measurements that have uncertainties in both variables can be shown by crosses made up of one error bar for each variable.
Each cross in this plot corresponds to one measurement of x and m, in which x probably lies in the interval defined by the vertical bar of the cross and m probably in that defined by the horizontal bar.
A slightly more complicated possibility is that some quantity may be expected to be proportional to a power of another. (For example, the distance traveled by a
freely falling object in a time t is d = ½gt² and is proportional to the square of t.) Let us suppose that y is expected to be proportional to x². Then
y = Ax²,    (2.20)
where A is some constant, and a graph of y against x should be a parabola with the general shape of Figure 2.7(a). If we were to measure a series of values for y and x and plot y against x, we might get a graph something like that in Figure 2.7(b). Unfortunately, visually judging whether a set of points such as these fit a parabola (or any other curve, except a straight line) is very hard. A better way to check that y ∝ x² is to plot y against x squared. From Equation (2.20), we see that such a plot should be a straight line, which we can check easily as in Figure 2.7(c).
[Figure 2.7: plots of y against x, panels (a) and (b), and of y against x², panel (c); see caption below.]
Figure 2.7. (a) If y is proportional to x², a graph of y against x should be a parabola with this general shape. (b) A plot of y against x for a set of measured values is hard to check visually for fit with a parabola. (c) On the other hand, a plot of y against x² should be a straight line through the origin, which is easy to check. (In the case shown, we see easily that the points do fit a straight line through the origin.)
In the same way, if y = Axⁿ (where n is any power), a graph of y against xⁿ should be a straight line, and by plotting the observed values of y against xⁿ, we can check easily for such a fit. There are various other situations in which a nonlinear relation (that is, one that gives a curved-nonlinear-graph) can be converted into a linear one by a clever choice of variables to plot. Section 8.6 discusses an important example of such "linearization," which is worth mentioning briefly here. Often one variable y depends exponentially on another variable x:
y = Ae^(Bx).
(For example, the activity of a radioactive sample depends exponentially on time.) For such relations, the natural logarithm of y is easily shown to be linear in x; that is, a graph of ln(y) against x should be a straight line for an exponential relationship.
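Both linearizations are easy to try numerically as well. The sketch below uses synthetic demonstration data generated inside the code (not measurements from the text) and assumes numpy and matplotlib are available; the point is only that the plot of y against x² straightens out when y really does follow y = Ax², and that the same idea with ln(y) against x handles the exponential case:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Synthetic demonstration data: y = 2.5 x^2 plus a little random scatter
x = np.linspace(1, 10, 10)
y = 2.5 * x**2 + rng.normal(0, 3, size=x.size)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.plot(x, y, 'o')            # curved; hard to judge a parabola by eye
ax1.set_xlabel('x'); ax1.set_ylabel('y')
ax2.plot(x**2, y, 'o')         # should look like a straight line through the origin
ax2.set_xlabel('x squared'); ax2.set_ylabel('y')
plt.tight_layout()
plt.show()

# For a suspected exponential y = A e^(Bx), plot np.log(y) against x instead;
# a straight line (slope B, intercept ln A) supports the exponential form.
```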
Many other, nongraphical ways are available to check the proportionality of two quantities. For example, if y ∝ x, the ratio y/x should be constant. Thus, having tabulated the measured values of y and x, you could simply add a column to the table that shows the ratios y/x and check that these ratios are constant within their experimental uncertainties. Many calculators have a built-in function (called the correlation coefficient) to show how well a set of measurements fits a straight line. (This function is discussed in Section 9.3.) Even when another method is used to check that y ∝ x, making the graphical check as well is an excellent practice. Graphs such as those in Figures 2.5(b) and (c) show clearly how well (or badly) the measurements verify the predictions; drawing such graphs helps you understand the experiment and the physical laws involved.
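The ratio check mentioned above takes only a few lines. Here is a minimal sketch using the data of Table 2.3, assuming (as in the text) that the uncertainty in m is negligible, so that the uncertainty in each ratio is approximately δx/m:

```python
# If x is proportional to m, the ratios x/m should agree within their uncertainties.
m = [200, 300, 400, 500, 600, 700, 800, 900]    # grams
x = [1.1, 1.5, 1.9, 2.8, 3.4, 3.5, 4.6, 5.4]    # cm, all +/- 0.3
for mi, xi in zip(m, x):
    ratio, dratio = xi / mi, 0.3 / mi
    print(f"m = {mi:3d} g:  x/m = {ratio:.4f} +/- {dratio:.4f} cm/g")
```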
2.7 Fractional Uncertainties
The uncertainty δx in a measurement,
(measured x) = xbest ± δx,
indicates the reliability or precision of the measurement. The uncertainty δx by itself does not tell the whole story, however. An uncertainty of one inch in a distance of one mile would indicate an unusually precise measurement, whereas an uncertainty of one inch in a distance of three inches would indicate a rather crude estimate. Obviously, the quality of a measurement is indicated not just by the uncertainty δx but also by the ratio of δx to xbest, which leads us to consider the fractional uncertainty,
fractional uncertainty = δx/|xbest|.    (2.21)
(The fractional uncertainty is also called the relative uncertainty or the precision.) In this definition, the symbol |xbest| denotes the absolute value5 of xbest. The uncer-
5 The absolute value |x| of a number x is equal to x when x is positive but is obtained by omitting the minus sign if x is negative. We use the absolute value in (2.21) to guarantee that the fractional uncertainty, like the uncertainty δx itself, is always positive, whether xbest is positive or negative. In practice, you can often arrange matters so that measured numbers are positive, and the absolute-value signs in (2.21) can then be omitted.
tainty δx is sometimes called the absolute uncertainty to avoid confusion with the fractional uncertainty.
In most serious measurements, the uncertainty δx is much smaller than the measured value xbest. Because the fractional uncertainty δx/|xbest| is therefore usually a small number, multiplying it by 100 and quoting it as the percentage uncertainty is often convenient. For example, the measurement
length l = 50 ± 1 cm
(2.22)
has a fractional uncertainty
δl/lbest = 1 cm/50 cm = 0.02
and a percentage uncertainty of 2%. Thus, the result (2.22) could be given as
length l = 50 cm± 2%.
Note that although the absolute uncertainty δl has the same units as l, the fractional uncertainty δl/|lbest| is a dimensionless quantity, without units. Keeping this difference in mind can help you avoid the common mistake of confusing absolute uncertainty with fractional uncertainty.
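Converting among absolute, fractional, and percentage uncertainties is simple enough to do in your head, but for completeness here is a short sketch (function name mine) that reproduces the example above:

```python
def fractional_uncertainty(x_best, dx):
    """Fractional uncertainty dx/|x_best| as in (2.21); multiply by 100 for percent."""
    return dx / abs(x_best)

# The example in the text: length l = 50 +/- 1 cm
frac = fractional_uncertainty(50, 1)
print(f"fractional uncertainty = {frac:.2f}, i.e. {100 * frac:.0f}%")   # 0.02, i.e. 2%

# Going the other way, 50 cm +/- 2% corresponds to an absolute uncertainty of
print(0.02 * 50, "cm")   # 1.0 cm
```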
The fractional uncertainty is an approximate indication of the quality of a measurement, whatever the size of the quantity measured. Fractional uncertainties of 10% or so are usually characteristic of fairly rough measurements. (A rough measurement of 10 inches might have an uncertainty of 1 inch; a rough measurement of 10 miles might have an uncertainty of 1 mile.) Fractional uncertainties of 1 or 2% are characteristic of reasonably careful measurements and are about the best to hope for in many experiments in the introductory physics laboratory. Fractional uncertainties much less than 1% are often hard to achieve and are rather rare in the introductory laboratory.
These divisions are, of course, extremely rough. A few simple measurements can have fractional uncertainties of 0.1% or less with little trouble. A good tape
measure can easily measure a distance of 10 feet with an uncertainty of 1/8 inch, or
approximately 0.1 %; a good timer can easily measure a period of an hour with an uncertainty of less than a second, or 0.03%. On the other hand, for many quantities that are very hard to measure, a 10% uncertainty would be regarded as an experimental triumph. Large percentage uncertainties, therefore, do not necessarily mean that a measurement is scientifically useless. In fact, many important measurements in the history of physics had experimental uncertainties of 10% or more. Certainly plenty can be learned in the introductory physics laboratory from equipment that has a minimum uncertainty of a few percent.
Quick Check 2.4. Convert the errors in the following measurements of the
velocities of two carts on a track into fractional errors and percent errors: (a)
v = 55 ± 2 cm/s; (b) u = -20 ± 2 cm/s. (c) A cart's kinetic energy is measured as K = 4.58 J ± 2%; rewrite this finding in terms of its absolute uncer-
tainty. (Because the uncertainties should be given to one significant figure, you
ought to be able to do the calculations in your head.)
2.8 Significant Figures and Fractional Uncertainties
The concept of fractional uncertainty is closely related to the familiar notion of significant figures. In fact, the number of significant figures in a quantity is an approximate indicator of the fractional uncertainty in that quantity. To clarify this connection, let us review briefly the notion of significant figures and recognize that this concept is both approximate and somewhat ambiguous.
To a mathematician, the statement that x = 21 to two significant figures means unambiguously that x is closer to 21 than to either 20 or 22; thus, the number 21,
with two significant figures, means 21 ± 0.5. To an experimental scientist, most
numbers are numbers that have been read off a meter (or calculated from numbers read off a meter). In particular, if a digital meter displays two significant figures and
reads 21, it may mean 21 ± 0.5, but it may also mean 21 ± 1 or even something like 21 ± 5. (Many meters come with a manual that explains the actual uncertaint-
ies.) Under these circumstances, the statement that a measured number has two significant figures is only a rough indicator of its uncertainty. Rather than debate exactly how the concept should be defined, I will adopt a middle-of-the-road defini-
tion that 21 with two significant figures means 21 ± 1, and more generally that a
number with N significant figures has an uncertainty of about 1 in the N th digit. Let us now consider two numbers,
x = 21 and y = 0.21,
both of which have been certified accurate to two significant figures. According to the convention just agreed to, these values mean
x = 21 ± 1 and y = 0.21 ± 0.01.
Although the two numbers both have two significant figures, they obviously have very different uncertainties. On the other hand, they both have the same fractional uncertainty, which in this case is 5%:
δx/x = δy/y = 1/21 = 0.01/0.21 ≈ 0.05, or 5%.
Evidently, the statement that the numbers 21 and 0.21 (or 210, or 2.1, or 0.0021, etc.) have two significant figures is equivalent to saying that they are 5% uncertain. In the same way, 21.0, with three significant figures, is 0.5% uncertain, and so on.
Unfortunately, this useful connection is only approximate. For example, the
statement that s = 10, with two significant figures, means
s = 10 ± 1 or 10 ± 10%.
At the opposite extreme, t = 99 (again with two significant figures) means
t = 99 ± 1 or 99 ± 1%.
Evidently, the fractional uncertainty associated with two significant figures ranges from 1% to 10%, depending on the first digit of the number concerned.
The approximate correspondence between significant figures and fractional uncertainties can be summarized as in Table 2.4.
Table 2.4. Approximate correspondence between significant figures and fractional uncertainties.
Number of significant figures    Corresponding fractional uncertainty is between    or roughly
1                                10% and 100%                                       50%
2                                1% and 10%                                         5%
3                                0.1% and 1%                                        0.5%
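The convention adopted above is easy to put into code. The sketch below (function name mine) assumes the number is supplied as a string that displays exactly its significant digits; it then reports the implied absolute uncertainty (1 in the last quoted digit) and the corresponding fractional uncertainty for the examples discussed in this section:

```python
from decimal import Decimal

def implied_uncertainty(value_str):
    """For a number quoted to N significant figures (written as a string showing
    exactly those digits), return the implied absolute uncertainty (about 1 in
    the Nth digit) and the corresponding fractional uncertainty."""
    d = Decimal(value_str)
    absolute = Decimal(1).scaleb(d.as_tuple().exponent)   # 1 in the last quoted digit
    return float(absolute), float(absolute / abs(d))

for s in ["21", "0.21", "21.0", "10", "99"]:
    dx, frac = implied_uncertainty(s)
    print(f"{s:>5}: about +/- {dx:g}  ({100 * frac:.1f}%)")
```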
2.9 Multiplying Two Measured Numbers
Perhaps the greatest importance of fractional errors emerges when we start multiplying measured numbers by each other. For example, to find the momentum of a body, we might measure its mass m and its velocity u and then multiply them to
give the momentum p = mu. Both m and u are subject to uncertainties, which we
will have to estimate. The problem, then, is to find the uncertainty in p that results from the known uncertainties in m and u.
First, for convenience, let us rewrite the standard form
(measured value of x) = xbest ± δx
in terms of the fractional uncertainty, as
(measured value of x) = xbest (1 ± δx/|xbest|).    (2.23)
For example, if the fractional uncertainty is 3%, we see from (2.23) that
(measured value of x) = xbest (1 ± 3/100);
that is, 3% uncertainty means that x probably lies between xbest times 0.97 and xbest times 1.03,
(0.97) × xbest ≤ x ≤ (1.03) × xbest.
We will find this a useful way to think about a measured number that we will have to multiply.
Let us now return to our problem of calculating p = mu, when m and u have
been measured, as
(measured m) = mbest (1 ± δm/|mbest|)    (2.24)
and
(measured u) = ubest (1 ± δu/|ubest|).    (2.25)
Because mbest and ubest are our best estimates for m and u, our best estimate for
p = mu is
(best estimate for p) = pbest = mbest ubest.
The largest probable values of m and u are given by (2.24) and (2.25) with the plus signs. Thus, the largest probable value for p = mu is
(largest value for p) = mbest ubest (1 + δm/|mbest|)(1 + δu/|ubest|).    (2.26)
The smallest probable value for p is given by a similar expression with two minus signs. Now, the product of the parentheses in (2.26) can be multiplied out as
(1 + δm/|mbest|)(1 + δu/|ubest|) = 1 + δm/|mbest| + δu/|ubest| + (δm/|mbest|)(δu/|ubest|).    (2.27)
Because the two fractional uncertainties δm/|mbest| and δu/|ubest| are small numbers (a few percent, perhaps), their product is extremely small. Therefore, the last term in (2.27) can be neglected. Returning to (2.26), we find
(largest value of p) = mbest ubest (1 + δm/|mbest| + δu/|ubest|).
The smallest probable value is given by a similar expression with two minus signs.
Our measurements of m and u, therefore, lead to a value of p = mu given by
(value of p) = mbest ubest (1 ± [δm/|mbest| + δu/|ubest|]).
Comparing this equation with the general form
(value of p) = pbest (1 ± δp/|pbest|),
we see that the best estimate for p is pbest = mbest ubest (as we already knew) and that the fractional uncertainty in p is the sum of the fractional uncertainties in m and u,
δp/|pbest| = δm/|mbest| + δu/|ubest|.
If, for example, we had the following measurements for m and u,
m = 0.53 ± 0.01 kg
and
u = 9.1 ± 0.3 m/s,
the best estimate for p = mu is
pbest = (0.53) × (9.1) = 4.82 kg·m/s.
To compute the uncertainty in p, we would first compute the fractional errors
δm/mbest = 0.01/0.53 = 0.02 = 2%
and
δu/ubest = 0.3/9.1 = 0.03 = 3%.
The fractional uncertainty in p is then the sum:
δp/pbest = 2% + 3% = 5%.
If we want to know the absolute uncertainty in p, we must multiply by pbest:
δp = (δp/pbest) × pbest = 0.05 × 4.82 = 0.241.
We then round δp and pbest to give us our final answer
(value of p) = 4.8 ± 0.2 kg· m/s.
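The arithmetic above is easily packaged as a short function. The sketch below (function name mine) implements the provisional rule (2.28) and reproduces the result p = 4.8 ± 0.2 kg·m/s; the slightly different intermediate value of the absolute uncertainty (0.25 rather than 0.24) arises only because the text rounds the fractional uncertainties to whole percentages before adding them:

```python
def product_with_uncertainty(x, dx, y, dy):
    """Provisional rule (2.28): for q = x * y the fractional uncertainty is
    approximately the sum of the fractional uncertainties (valid when both
    fractional uncertainties are small)."""
    q = x * y
    frac = dx / abs(x) + dy / abs(y)
    return q, abs(q) * frac

# The worked example: m = 0.53 +/- 0.01 kg and u = 9.1 +/- 0.3 m/s
p, dp = product_with_uncertainty(0.53, 0.01, 9.1, 0.3)
print(f"p = {p:.2f} +/- {dp:.2f} kg.m/s")   # 4.82 +/- 0.25, which rounds to 4.8 +/- 0.2
```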
The preceding considerations apply to any product of two measured quantities. We have therefore discovered our second general rule for the propagation of errors. If we measure any two quantities x and y and form their product, the uncertainties in the original two quantities "propagate" to cause an uncertainty in their product. This uncertainty is given by the following rule:
Uncertainty in a Product (Provisional Rule)
If two quantities x and y have been measured with small fractional uncertainties δx/|xbest| and δy/|ybest|, and if the measured
values of x and y are used to calculate the product q = xy,
then the fractional uncertainty in q is the sum of the fractional uncertainties in x and y,
δq/|qbest| = δx/|xbest| + δy/|ybest|.    (2.28)
I call this rule "provisional," because, just as with the rule for uncertainty in a difference, I will replace it with a more precise rule later on. Two other features of this rule also need to be emphasized. First, the derivation of (2.28) required that the fractional uncertainties in x and y both be small enough that we could neglect their product. This requirement is almost always true in practice, and I will always assume it. Nevertheless, remember that if the fractional uncertainties are not much smaller than 1, the rule (2.28) may not apply. Second, even when x and y have different dimensions, (2.28) balances dimensionally because all fractional uncertainties are dimensionless.
In physics, we frequently multiply numbers together, and the rule (2.28) for finding the uncertainty in a product will obviously be an important tool in error analysis. For the moment, our main purpose is to emphasize that the uncertainty in any product q = xy is expressed most simply in terms of fractional uncertainties, as in (2.28).
Quick Check 2.5. To find the area of a rectangular plate, a student measures
its sides as l = 9.1 ± 0.1 cm and b = 3.3 ± 0.1 cm. Express these uncertainties as percent uncertainties and then find the student's answer for the area A = lb
with its uncertainty. (Find the latter as a percent uncertainty first and then con-
vert to an absolute uncertainty. Do all error calculations in your head.)
Principal Definitions and Equations of Chapter 2
STANDARD FORM FOR STATING UNCERTAINTIES The standard form for reporting a measurement of a physical quantity x is
(measured value of x) = xbest ± δx,
where
xbest = (best estimate for x)
and
δx = (uncertainty or error in the measurement).    [See (2.3)]
This statement expresses our confidence that the correct value of x probably lies in
(or close to) the range from xbest - δx to xbest + δx.
DISCREPANCY
The discrepancy between two measured values of the same physical quantity is
discrepancy = difference between two measured
values of the same quantity.
[See (2.10)]
FRACTIONAL UNCERTAINTY
If x is measured in the standard form xbest ± δx, the fractional uncertainty in x is
fractional uncertainty = δx/|xbest|.    [See (2.21)]
The percent uncertainty is just the fractional uncertainty expressed as a percentage (that is, multiplied by 100%).
We have found two provisional rules, (2.18) and (2.28), for error propagation that show how the uncertainties in two quantities x and y propagate to cause uncertainties in calculations of the difference x - y or the product xy. A complete discussion of error propagation appears in Chapter 3, where I show that the rules (2.18) and (2.28) can frequently be replaced with more refined rules (given in Section 3.6). For this reason, I have not reproduced (2.18) and (2.28) here.
Problems for Chapter 2
Notes: The problems at the end of each chapter are arranged by section number. A problem listed for a specific section may, of course, involve ideas from previous sections but does not require knowledge of later sections. Therefore, you may try problems listed for a specific section as soon as you have read that section.
The approximate difficulty of each problem is indicated by one, two, or three stars. A one-star problem should be straightforward and usually involves a single concept. Two-star problems are more difficult or require more work (drawing a graph, for instance). Three-star problems are the most difficult and may require considerably more labor.
Answers to the odd-numbered problems can be found in the Answers Section at the back of the book.
For Section 2.1: Best Estimate ± Uncertainty
* 2.1. In Chapter 1, a carpenter reported his measurement of the height of a door-
way by stating that his best estimate was 210 cm and that he was confident the height was between 205 and 215 cm. Rewrite this result in the standard form
xbest ± δx. Do the same for the measurements reported in Equations (1.1), (1.2),
and (1.4).
* 2.2. A student studying the motion of a cart on an air track measures its posi-
tion, velocity, and acceleration at one instant, with the results shown in Table 2.5.
Rewrite these results in the standard form xbest ± δx.
Table 2.5. Measurements of position, velocity, and acceleration; for Problem 2.2.
Variable           Best estimate    Probable range
Position, x        53.3             53.1 to 53.5 (cm)
Velocity, v        -13.5            -14.0 to -13.0 (cm/s)
Acceleration, a    93               90 to 96 (cm/s²)
For Section 2.2: Significant Figures
* 2.3. Rewrite the following results in their clearest forms, with suitable numbers
of significant figures:
(a) measured height = 5.03 ± 0.04329 m (b) measured time = 1.5432 ± 1 s
(c) measured charge = -3.21 X 10-19 ± 2.67 X 10-20 C
(d) measured wavelength = 0.000,000,563 ± 0.000,000,07 m (e) measured momentum = 3.267 X 103 ± 42 g·cm/s.
* 2.4. Rewrite the following equations in their clearest and most appropriate
forms:
(a) x = 3.323 ± 1.4 mm (b) t = 1,234,567 ± 54,321 s (c) λ = 5.33 X 10-7 ± 3.21 X 10-9 m
(d) r = 0.000,000,538 ± 0.000,000,03 mm
For Section 2.3: Discrepancy
* 2.5. Two students measure the length of the same rod and report the results
135 ± 3 mm and 137 ± 3 mm. Draw an illustration like that in Figure 2.1 to repre-
sent these two measurements. What is the discrepancy between the two measurements, and is it significant?
* 2.6. Each of two research groups discovers a new elementary particle. The two
reported masses are
m1 = (7.8 ± 0.1) X 10-27 kg
and
m2 = (7.0 ± 0.2) X 10-27 kg.
Draw an illustration like that in Figure 2.1 to represent these two measurements. The question arises whether these two measurements could actually be of the same particle. Based on the reported masses, would you say they are likely to be the same particle? In particular, what is the discrepancy in the two measurements (assuming they really are measurements of the same mass)?
For Section 2.4: Comparison of Measured and Accepted Values
* 2.7. (a) A student measures the density of a liquid five times and gets the results
(all in gram/cm3) 1.8, 2.0, 2.0, 1.9, and 1.8. What would you suggest as the best estimate and uncertainty based on these measurements? (b) The student is told that the accepted value is 1.85 gram/cm3. What is the discrepancy between the student's best estimate and the accepted value? Do you think it is significant?
* 2.8. Two groups of students measure the charge of the electron and report their
results as follows:
Group A: e = (1.75 ± 0.04) X 10-19 C
and
Group B: e = (1.62 ± 0.04) X 10-19 C.
What should each group report for the discrepancy between its value and the accepted value,
e = 1.60 X 10-19 C
(with negligible uncertainty)? Draw an illustration similar to that in Figure 2.2 to show these results and the accepted value. Which of the results would you say is satisfactory?
For Section 2.5: Comparison of Two Measured Numbers
* 2.9. In an experiment on the simple pendulum, a student uses a steel ball sus-
pended from a light string, as shown in Figure 2.8. The effective length l of the
Figure 2.8. A simple pendulum; for Problem 2.9.
pendulum is the distance from the top of the string to the center of the ball, as shown. To find l, he first measures the distance x from the top of the string to the
bottom of the ball and the radius r of the ball; he then subtracts to give l = x - r.
If his two measurements are
x = 95.8 ± 0.1 cm and r = 2.30 ± 0.02 cm,
what should be his answer for the length l and its uncertainty, as given by the provisional rule (2.18)?
* 2.10. The time a carousel takes to make one revolution is measured by noting
the starting and stopping times using the second hand of a wrist watch and sub-
tracting. If the starting and stopping times are uncertain by ± 1 second each, what
is the uncertainty in the time for one revolution, as given by the provisional rule (2.18)?
* 2.11. In an experiment to check conservation of angular momentum, a student
obtains the results shown in Table 2.6 for the initial and final angular momenta (L and L') of a rotating system. Add an extra column to the table to show the difference L - L' and its uncertainty. Are the student's results consistent with conservation of angular momentum?
Table 2.6. Initial and final angular momenta (in kg·m2/s); for Problems 2.11 and 2.14.
Initial L       Final L'
3.0 ± 0.3       2.7 ± 0.6
7.4 ± 0.5       8.0 ± 1
14.3 ± 1        16.5 ± 1
25 ± 2          24 ± 2
32 ± 2          31 ± 2
37 ± 2          41 ± 2
* 2.12. The acceleration a of a cart sliding down a frictionless incline with slope θ is expected to be g sin θ. To test this, a student measures the acceleration a of a cart on an incline for several different values of θ; she also calculates the corresponding expected accelerations g sin θ for each θ and obtains the results shown in Table 2.7. Add a column to the table to show the discrepancies a - g sin θ and their uncertainties. Do the results confirm that a is given by g sin θ? If not, can you suggest a reason they do not?
Table 2.7. Measured and expected accelerations; for Problem 2.12.
Trial number    Acceleration a (m/s²)    Expected acceleration g sin θ (m/s²)
1               2.04 ± 0.04              2.36 ± 0.1
2               3.58 ± 0.06              3.88 ± 0.08
3               4.32 ± 0.08              4.57 ± 0.05
4               4.85 ± 0.09              5.05 ± 0.04
5               5.53 ± 0.1               5.72 ± 0.03
** 2.13.
An experimenter measures the separate masses M and m of a car and
trailer. He gives his results in the standard form Mbest ± δM and mbest ± δm. What
would be his best estimate for the total mass M + m? By considering the largest
and smallest probable values of the total mass, show that his uncertainty in the total
mass is just the sum of δM and δm. State your arguments clearly; don't just write
down the answer. (This problem provides another example of error propagation: The
uncertainties in the measured numbers, M and m, propagate to cause an uncertainty
in the sum M + m.)
For Section 2.6: Checking Relationships with a Graph
** 2.14.
Using the data of Problem 2.11, make a plot of final angular momentum
L' against initial angular momentum L for the experiment described there. (Include
vertical and horizontal error bars, and be sure to include the origin. As with all
graphs, label your axes, including units, use squared paper, and choose the scales
so that the graph fills a good proportion of the page.) On what curve would you
expect the points to lie? Do they lie on this curve within experimental uncertainties?
** 2.15.
According to the ideal gas law, if the volume of a gas is kept constant,
the pressure P should be proportional to the absolute temperature T. To check this
proportionality, a student measures the pressure of a gas at five different tempera-
tures (always with the same volume) and gets the results shown in Table 2.8. Plot
these results in a graph of P against T, and decide whether they confirm the expected
proportionality of P and T.
Table 2.8. Temperature and pressure of a gas; for Problem 2.15.
Temperature (K) (negligible uncertainty)    Pressure (atm) (all ±0.04)
100                                         0.36
150                                         0.46
200                                         0.71
250                                         0.83
300                                         1.04
** 2.16.
You have learned (or will learn) in optics that certain lenses (namely,
thin spherical lenses) can be characterized by a parameter called the focal length f
and that if an object is placed at a distance p from the lens, the lens forms an image
at a distance q, satisfying the lens equation, 1/f = (1/p) + (1/q), where f always has
the same value for a given lens. To check if these ideas apply to a certain lens, a
student places a small light bulb at various distances p from the lens and measures
the location q of the corresponding images. She then calculates the corresponding
values off from the lens equation and obtains the results shown in Table 2.9. Make
a plot of f against p, with appropriate error bars, and decide if it is true that this
particular lens has a unique focal length f.
Table 2.9. Object distances p (in cm) and corresponding focal lengths f (in cm); for Problem 2.16.
Object distance p (negligible uncertainty)    Focal length f (all ±2)
45                                            28
55                                            34
65                                            33
75                                            37
85                                            40
** 2.17.
The power P delivered to a resistance R by a current I is supposed to
be given by the relation P = RI². To check this relation, a student sends several
different currents through an unknown resistance immersed in a cup of water and
measures the power delivered (by measuring the water's rise in temperature). Use
the results shown in Table 2.10 to make plots of P against I and P against I², including error bars. Use the second plot to decide if this experiment is consistent with the expected proportionality of P and I².
Table 2.1 O. Current I and power P; for Problem 2.17.
Current I (amps) (negligible uncertainty)    Power P (watts) (all ±50)
1.5                                          270
2.0                                          380
2.5                                          620
3.0                                          830
3.5                                          1280
4.0                                          1600
*** 2.18.
If a stone is thrown vertically upward with speed v, it should rise to a
height h given by v² = 2gh. In particular, v² should be proportional to h. To test this proportionality, a student measures v² and h for seven different throws and gets the results shown in Table 2.11. (a) Make a plot of v² against h, including vertical and horizontal error bars. (As usual, use squared paper, label your axes, and choose your scale sensibly.) Is your plot consistent with the prediction that v² ∝ h? (b) The
slope of your graph should be 2g. To find the slope, draw what seems to be the best
straight line through the points and then measure its slope. To find the uncertainty
in the slope, draw the steepest and least steep lines that seem to fit the data reason-
ably. The slopes of these lines give the largest and smallest probable values of the
slope. Are your results consistent with the accepted value 2g = 19.6 m/s2?
Table 2.11. Heights and speeds of a stone thrown vertically upward; for Problem 2.18.
h (m) (all ±0.05)    v² (m²/s²)
0.4                  7 ± 3
0.8                  17 ± 3
1.4                  25 ± 3
2.0                  38 ± 4
2.6                  45 ± 5
3.4                  62 ± 5
3.8                  72 ± 6
*** 2.19.
In an experiment with a simple pendulum, a student decides to check
whether the period T is independent of the amplitude A (defined as the largest angle
that the pendulum makes with the vertical during its oscillations). He obtains the
Table 2.12. Amplitude and period of a pendulum; for Problem 2.19.
Amplitude A (deg)    Period T (s)
5 ± 2                1.932 ± 0.005
17 ± 2               1.94 ± 0.01
25 ± 2               1.96 ± 0.01
40 ± 4               2.01 ± 0.01
53 ± 4               2.04 ± 0.01
67 ± 6               2.12 ± 0.02
results shown in Table 2.12. (a) Draw a graph of T against A. (Consider your choice
of scales carefully. If you have any doubt about this choice, draw two graphs, one
including the origin, A = T = 0, and one in which only values of T between 1.9
and 2.2 s are shown.) Should the student conclude that the period is independent of the amplitude? (b) Discuss how the conclusions of part (a) would be affected if all
the measured values of T had been uncertain by ± 0.3 s.
For Section 2.7: Fractional Uncertainties
* 2.20. Compute the percentage uncertainties for the five measurements reported
in Problem 2.3. (Remember to round to a reasonable number of significant figures.)
* 2.21. Compute the percentage uncertainties for the four measurements in Prob-
lem 2.4.
* 2.22. Convert the percent errors given for the following measurements into ab-
solute uncertainties and rewrite the results in the standard form xbest ± δx rounded
appropriately.
(a) x = 543.2 m ± 4%
(b) v = -65.9 m/s ± 8%
(c) λ = 671 X 10-9 m ± 4%
* 2.23. A meter stick can be read to the nearest millimeter; a traveling microscope
can be read to the nearest 0.1 mm. Suppose you want to measure a length of 2 cm
with a precision of 1%. Can you do so with the meter stick? Is it possible to do so
with the microscope?
* 2.24. (a) A digital voltmeter reads voltages to the nearest thousandth of a volt.
What will be its percent uncertainty in measuring a voltage of approximately 3
volts? (b) A digital balance reads masses to the nearest hundredth of a gram. What
will be its percent uncertainty in measuring a mass of approximately 6 grams?
** 2.25.
To find the acceleration of a cart, a student measures its initial and final
velocities, vi and vf, and computes the difference (vf - vi). Her data in two separate
Table 2.13. Initial and final
velocities (all in cm/s and all ± 1%); for Problem 2.25.
              Initial velocity vi    Final velocity vf
First run     14.0                   18.0
Second run    19.0                   19.6
trials are shown in Table 2.13. All have an uncertainty of ±1%. (a) Calculate the absolute uncertainties in all four measurements; find the change (vf - vi) and its uncertainty in each run. (b) Compute the percent uncertainty for each of the two values of (vf - vi). Your answers, especially for the second run, illustrate the disastrous results of finding a small number by taking the difference of two much larger numbers.
For Section 2.8: Significant Figures and Fractional Uncertainties
* 2.26. (a) A student's calculator shows an answer 123.123. If the student decides
that this number actually has only three significant figures, what are its absolute and fractional uncertainties? (To be definite, adopt the convention that a number with N significant figures is uncertain by ± 1 in the N th digit.) (b) Do the same for the number 1231.23. (c) Do the same for the number 321.321. (d) Do the fractional uncertainties lie in the range expected for three significant figures?
** 2.27. (a) My calculator gives the answer x = 6.1234, but I know that x has a
fractional uncertainty of 2%. Restate my answer in the standard form xbest ± δx
properly rounded. How many significant figures does the answer really have? (b) Do the same for y = 1.1234 with a fractional uncertainty of 2%. (c) Likewise, for
z = 9.1234.
For Section 2.9: Multiplying Two Measured Numbers
* 2.28. (a) A student measures two quantities a and b and obtains the results
a= 11.5 ± 0.2 cm and b = 25.4 ± 0.2 s. She now calculates the product q = ab.
Find her answer, giving both its percent and absolute uncertainties, as found using
the provisional rule (2.28). (b) Repeat part (a) using a= 5.0 m ± 7% and b = 3.0
N ± 1%.
* 2.29. (a) A student measures two quantities a and b and obtains the results
a= 10 ± 1 N and b = 272 ± 1 s. He now calculates the product q = ab. Find his
answer, giving both its percent and absolute uncertainties, as found using the provi-
sional rule (2.28). (b) Repeat part (a) using a = 3.0 ft ± 8% and b = 4.0 lb ± 2%.
** 2.30.
A well-known rule states that when two numbers are multiplied together,
the answer will be reliable if rounded to the number of significant figures in the less
precise of the original two numbers. (a) Using our rule (2.28) and the fact that
significant figures correspond roughly to fractional uncertainties, prove that this rule
is approximately valid. (To be definite, treat the case that the less precise number has two significant figures.) (b) Show by example that the answer can actually be somewhat less precise than the "well-known" rule suggests. (This reduced precision is especially true if several numbers are multiplied together.)
** 2.31. (a) A student measures two numbers x and y as
x = 10 ± 1 and y = 20 ± 1.
What is her best estimate for their product q = xy? Using the largest probable values
for x and y (11 and 21), calculate the largest probable value of q. Similarly, find the smallest probable value of q, and hence the range in which q probably lies. Compare your result with that given by the rule (2.28). (b) Do the same for the measurements
x = 10 ± 8 and y = 20 ± 15.
[Remember that the rule (2.28) was derived by assuming that the fractional uncertainties are much less than 1.]
Chapter 3 Propagation of Uncertainties
Most physical quantities usually cannot be measured in a single direct measurement but are instead found in two distinct steps. First, we measure one or more quantities that can be measured directly and from which the quantity of interest can be calculated. Second, we use the measured values of these quantities to calculate the quantity of interest itself. For example, to find the area of a rectangle, you actually
measure its length l and height h and then calculate its area A as A = lh. Similarly, the most obvious way to find the velocity v of an object is to measure the distance
traveled, d, and the time taken, t, and then to calculate v as v = d/t. Any reader
with experience in an introductory laboratory can easily think of more examples. In fact, a little thought will show that almost all interesting measurements involve these two distinct steps of direct measurement followed by calculation.
When a measurement involves these two steps, the estimation of uncertainties also involves two steps. We must first estimate the uncertainties in the quantities measured directly and then determine how these uncertainties "propagate" through the calculations to produce an uncertainty in the final answer.1 This propagation of errors is the main subject of this chapter.
In fact, examples of propagation of errors were presented in Chapter 2. In Section 2.5, I discussed what happens when two numbers x and y are measured and the
results are used to calculate the difference q = x - y. We found that the uncertainty
in q is just the sum δq = δx + δy of the uncertainties in x and y. Section 2.9 dis-
cussed the product q = xy, and Problem 2.13 discussed the sum q = x + y. I review
these cases in Section 3.3; the rest of this chapter is devoted to more general cases of propagation of uncertainties and includes several examples.
Before I address error propagation in Section 3.3, I will briefly discuss the estimation of uncertainties in quantities measured directly in Sections 3.1 and 3.2. The methods presented in Chapter 1 are reviewed, and further examples are given of error estimation in direct measurements.
Starting in Section 3.3, I will take up the propagation of errors. You will learn that almost all problems in error propagation can be solved using three simple rules.
1 In Chapter 4, I discuss another way in which the final uncertainty can sometimes be estimated. If all
measurements can be repeated several times, and if all uncertainties are known to be random in character,
then the uncertainty in the quantity of interest can be estimated by examining the spread in answers. Even
when this method is possible, it is usually best used as a check on the two-step procedure discussed in this
chapter.
A single, more complicated, rule will also be presented that covers all cases and from which the three simpler rules can be derived.
This chapter is long, but its length simply reflects its great importance. Error propagation is a technique you will use repeatedly in the laboratory, and you need to become familiar with the methods described here. The only exception is that the material of Section 3.11 is not used again until Section 5.6; thus, if the ideas of this chapter are all new to you, consider skipping Section 3.11 on your first reading.
3.1 Uncertainties in Direct Measurements
Almost all direct measurements involve reading a scale (on a ruler, clock, or voltmeter, for example) or a digital display (on a digital clock or voltmeter, for example). Some problems in scale reading were discussed in Section 1.5. Sometimes the main sources of uncertainty are the reading of the scale and the need to interpolate between the scale markings. In such situations, a reasonable estimate of the uncertainty is easily made. For example, if you have to measure a clearly defined length l with a ruler graduated in millimeters, you might reasonably decide that the length could be read to the nearest millimeter but no better. Here, the uncertainty δl would be
δl = 0.5 mm. If the scale markings are farther apart (as with tenths of an inch), you
might reasonably decide you could read to one-fifth of a division, for example. In any case, the uncertainties associated with the reading of a scale can obviously be estimated quite easily and realistically.
Unfortunately, other sources of uncertainty are frequently much more important than difficulties in scale reading. In measuring the distance between two points, your main problem may be to decide where those two points really are. For example, in an optics experiment, you may wish to measure the distance q from the center of a lens to a focused image, as in Figure 3.1. In practice, the lens is usually several millimeters thick, so locating its center is hard; if the lens comes in a bulky mounting, as it often does, locating the center is even harder. Furthermore, the image may appear to be well-focused throughout a range of many millimeters. Even though the apparatus is mounted on an optical bench that is clearly graduated in millimeters, the uncertainty in the distance from lens to image could easily be a centimeter or so. Since this uncertainty arises because the two points concerned are not clearly defined, this kind of problem is called a problem of definition.
Figure 3.1. An image of the light bulb on the right is focused by the lens onto the screen at the left.
This example illustrates a serious danger in error estimation. If you look only at the scales and forget about other sources of uncertainty, you can badly underestimate the total uncertainty. In fact, the beginning student's most common mistake is to overlook some sources of uncertainty and hence underestimate uncertainties, often by a factor of 10 or more. Of course, you must also avoid overestimating errors. Experimenters who decide to play safe and to quote generous uncertainties on all measurements may avoid embarrassing inconsistencies, but their measurements may not be of much use. Clearly, the ideal is to find all possible causes of uncertainty and estimate their effects accurately, which is often not quite as hard as it sounds.
Superficially, at least, reading a digital meter is much easier than reading a conventional analog meter. Unless a digital meter is defective, it should display only significant figures. Thus, it is usually safe to say that the number of significant figures in a digital reading is precisely the number of figures displayed. Unfortunately, as discussed in Section 2.8, the exact meaning of significant figures is not always clear. Thus, a digital voltmeter that tells us that V = 81 microvolts could mean that the uncertainty is anything from δV = 0.5 to δV = 1 or more. Without a manual to tell you the uncertainty in a digital meter, a reasonable assumption is that the uncertainty in the final digit is ± 1 (so that the voltage just mentioned is V = 81 ± 1).
The digital meter, even more than the analog scale, can give a misleading impression of accuracy. For example, a student might use a digital timer to time the fall of a weight in an Atwood machine or similar device. If the timer displays 8.01 seconds, the time of fall is apparently
t = 8.01 ± 0.01 s.
(3.1)
However, the careful student who repeats the experiment under nearly identical conditions might find a second measurement of 8.41 s; that is,
t = 8.41 ± 0.01 s.
One likely explanation of this large discrepancy is that uncertainties in the starting procedure vary the initial conditions and hence the time of fall; that is, the measured times really are different. In any case, the accuracy claimed in Equation (3.1) clearly is ridiculously too good. Based on the two measurements made, a more realistic answer would be
t = 8.2 ± 0.2 s.
In particular, the uncertainty is some 20 times larger than suggested in Equation (3.1) based on the original single reading.
This example brings us to another point mentioned in Chapter 1: Whenever a measurement can be repeated, it should usually be made several times. The resulting spread of values often provides a good indication of the uncertainties, and the average of the values is almost certainly more trustworthy than any one measurement. Chapters 4 and 5 discuss the statistical treatment of multiple measurements. Here, I emphasize only that if a measurement is repeatable, it should be repeated, both to obtain a more reliable answer (by averaging) and, more important, to get an estimate of the uncertainties. Unfortunately, as also mentioned in Chapter 1, repeating a measurement does not always reveal uncertainties. If the measurement is subject to a systematic error, which pushes all results in the same direction (such as a clock that
runs slow), the spread in results will not reflect this systematic error. Eliminating such systematic errors requires careful checks of calibration and procedures.
3.2 The Square-Root Rule for a Counting Experiment
Another, different kind of direct measurement has an uncertainty that can be estimated easily. Some experiments require you to count events that occur at random but have a definite average rate. For example, the babies born in a hospital arrive in a fairly random way, but in the long run births in any one hospital probably occur at a definite average rate. Imagine that a demographer who wants to know this rate counts 14 births in a certain two-week period at a local hospital. Based on this result, he would naturally say that his best estimate for the expected number of births in two weeks is 14. Unless he has made a mistake, 14 is exactly the number of births in the two-week period he chose to observe. Because of the random way births occur, however, 14 obviously may not equal the actual average number of births in all two-week periods. Perhaps this number is 13, 15, or even a fractional number such as 13.5 or 14.7.
Evidently, the uncertainty in this kind of experiment is not in the observed number counted (14 in our example). Instead, the uncertainty is in how well this observed number approximates the true average number. The problem is to estimate how large this uncertainty is. Although I discuss the theory of these counting experiments in Chapter 11, the answer is remarkably simple and is easily stated here: The uncertainty in any counted number of random events, as an estimate of the true average number, is the square root of the counted number. In our example, the demographer counted 14 births in a certain two-week period. Therefore, his uncer-
tainty is √14 ≈ 4, and his final conclusion would be
(average births in a two-week period) = 14 ± 4.
To make this statement more general, suppose we count the occurrences of any event (such as the births of babies in a hospital) that occurs randomly but at a definite average rate. Suppose we count for a chosen time interval T (such as two weeks), and we denote the number of observed events by the Greek letter ν. (Pronounced "nu," this symbol is the Greek form of the letter n and stands for number.) Based on this experiment, our best estimate for the average number of events in time T is, of course, the observed number ν, and the uncertainty in this estimate is the square root of the number, that is, √ν. Therefore, our answer for the average number of events in time T is

(average number of events in time T) = ν ± √ν.    (3.2)
I refer to this important result as the Square-Root Rule for Counting Experiments. Counting experiments of this type occur frequently in the physics laboratory.
The most prominent example is in the study of radioactivity. In a radioactive material, each nucleus decays at a random time, but the decays in a large sample occur at a definite average rate. To find this rate, you can simply count the number ν of
decays in some convenient time interval T; the expected number of decays in time T, with its uncertainty, is then given by the square-root rule, (3.2).
Quick Check 3.1. (a) To check the activity of a radioactive sample, an inspector places the sample in a liquid scintillation counter to count the number of decays in a two-minute interval and obtains 33 counts. What should he report as the number of decays produced by the sample in two minutes? (b) Suppose, instead, he had monitored the same sample for 50 minutes and obtained 907 counts. What would be his answer for the number of decays in 50 minutes? (c) Find the percent uncertainties in these two measurements, and comment on the usefulness of counting for a longer period as in part (b).
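For readers who like to check such numbers quickly, here is a minimal Python sketch of the square-root rule applied to the demographer's count of 14 births; the variable names are ours, chosen only for illustration.

```python
from math import sqrt

# Square-root rule: a count of nu random events estimates the true average
# number as nu, with uncertainty sqrt(nu).  Here nu = 14 births in two weeks,
# as in the demographer example above.
nu = 14
uncertainty = sqrt(nu)                     # about 3.7, quoted as 4 in the text
print(f"average births in two weeks = {nu} ± {uncertainty:.0f}")
print(f"fractional uncertainty = {100 * uncertainty / nu:.0f}%")
```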
3.3 Sums and Differences; Products and Quotients
For the remainder of this chapter, I will suppose that we have measured one or more quantities x, y, ..., with corresponding uncertainties δx, δy, ..., and that we now wish to use the measured values of x, y, ..., to calculate the quantity of real interest, q. The calculation of q is usually straightforward; the problem is how the uncertainties, δx, δy, ..., propagate through the calculation and lead to an uncertainty δq in the final value of q.
SUMS AND DIFFERENCES
Chapter 2 discussed what happens when you measure two quantities x and y and calculate their sum, x + y, or their difference, x − y. To estimate the uncertainty in the sum or difference, we had only to decide on their highest and lowest probable values. The highest and lowest probable values of x are xbest ± δx, and those of y are ybest ± δy. Hence, the highest probable value of x + y is

xbest + ybest + (δx + δy),

and the lowest probable value is

xbest + ybest − (δx + δy).

Thus, the best estimate for q = x + y is

qbest = xbest + ybest,

and its uncertainty is

δq ≈ δx + δy.    (3.3)

A similar argument (be sure you can reconstruct it) shows that the uncertainty in the difference x − y is given by the same formula (3.3). That is, the uncertainty in either the sum x + y or the difference x − y is the sum δx + δy of the uncertainties in x and y.
If we have several numbers x, ... , w to be added or subtracted, then repeated application of (3.3) gives the following provisional rule.
Uncertainty in Sums and Differences (Provisional Rule)
If several quantities x, ..., w are measured with uncertainties δx, ..., δw, and the measured values used to compute

q = x + ··· + z − (u + ··· + w),

then the uncertainty in the computed value of q is the sum,

δq ≈ δx + ··· + δz + δu + ··· + δw,    (3.4)
of all the original uncertainties.
In other words, when you add or subtract any number of quantities, the uncertainties
in those quantities always add. As before, I use the sign ≈ to emphasize that this
rule is only provisional.
Example: Adding and Subtracting Masses
As a simple example of rule (3.4), suppose an experimenter mixes together the liquids in two flasks, having first measured their separate masses when full and empty, as follows:
M1  mass of first flask and contents    540 ± 10 grams
m1  mass of first flask empty            72 ±  1 grams
M2  mass of second flask and contents   940 ± 20 grams
m2  mass of second flask empty           97 ±  1 grams
He now calculates the total mass of liquid as
M = M1 - m1 + M2 - m2 = (540 - 72 + 940 - 97) grams = 1,311 grams.
According to rule (3.4), the uncertainty in this answer is the sum of all four uncertainties,
δM ≈ (10 + 1 + 20 + 1) grams = 32 grams.
Thus, his final answer (properly rounded) is
total mass of liquid = 1,310 ± 30 grams.
Notice how the much smaller uncertainties in the masses of the empty flasks made a negligible contribution to the final uncertainty. This effect is important, and we will discuss it later on. With experience, you can learn to identify in advance those uncertainties that are negligible and can be ignored from the outset. Often, this can greatly simplify the calculation of uncertainties.
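The arithmetic of this example is easy to reproduce in a few lines of Python; the following sketch simply applies the provisional rule (3.4) to the four masses above (the dictionary layout is ours, not anything prescribed by the text).

```python
# Provisional rule (3.4): when quantities are added or subtracted,
# their uncertainties add.  Values from the flask example above (grams).
masses        = {"M1": 540, "m1": 72, "M2": 940, "m2": 97}
uncertainties = {"M1": 10,  "m1": 1,  "M2": 20,  "m2": 1}

total   = masses["M1"] - masses["m1"] + masses["M2"] - masses["m2"]   # 1311 g
d_total = sum(uncertainties.values())                                 # 32 g
print(f"total mass of liquid = {total} ± {d_total} grams")            # rounds to 1310 ± 30
```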
PRODUCTS AND QUOTIENTS
Section 2.9 discussed the uncertainty in the product q = xy of two measured
quantities. We saw that, provided the fractional uncertainties concerned are small,
the fractional uncertainty in q = xy is the sum of the fractional uncertainties in x
and y. Rather than review the derivation of this result, I discuss here the similar
case of the quotient q = x/y. As you will see, the uncertainty in a quotient is given by the same rule as for a product; that is, the fractional uncertainty in q = x/y is
equal to the sum of the fractional uncertainties in x and y. Because uncertainties in products and quotients are best expressed in terms of
fractional uncertainties, a shorthand notation for the latter will be helpful. Recall that if we measure some quantity x as
(measured value of x) = xbest ± δx

in the usual way, then the fractional uncertainty in x is defined to be

(fractional uncertainty in x) = δx/|xbest|.

(The absolute value in the denominator ensures that the fractional uncertainty is always positive, even when xbest is negative.) Because the symbol δx/|xbest| is clumsy to write and read, from now on I will abbreviate it by omitting the subscript "best" and writing

(fractional uncertainty in x) = δx/|x|.

The result of measuring any quantity x can be expressed in terms of its fractional error δx/|x| as

(value of x) = xbest(1 ± δx/|x|).

Therefore, the value of q = x/y can be written as

(value of q) = (xbest/ybest) × (1 ± δx/|x|)/(1 ± δy/|y|).

Our problem now is to find the extreme probable values of the second factor on the right. This factor is largest, for example, if the numerator has its largest value, 1 + δx/|x|, and the denominator has its smallest value, 1 − δy/|y|. Thus, the largest
probable value for q = x/y is

(largest value of q) = (xbest/ybest) × (1 + δx/|x|)/(1 − δy/|y|).    (3.5)
The last factor in expression (3.5) has the form (1 + a)/(1 - b), where the
numbers a and b are normally small (that is, much less than 1). It can be simplified by two approximations. First, because b is small, the binomial theorem 2 implies that
1/(1 − b) ≈ 1 + b.    (3.6)

Therefore,

(1 + a)/(1 − b) ≈ (1 + a)(1 + b) = 1 + a + b + ab ≈ 1 + a + b,
where, in the second line, we have neglected the product ab of two small quantities.
Returning to (3.5) and using these approximations, we find for the largest probable
value of q = x/y
(largest value of q) ≈ (xbest/ybest)(1 + δx/|x| + δy/|y|).

A similar calculation shows that the smallest probable value is given by a similar expression with two minus signs. Combining these two, we find that

(value of q) = (xbest/ybest)(1 ± [δx/|x| + δy/|y|]).

Comparing this equation with the standard form,

(value of q) = qbest(1 ± δq/|q|),

we see that the best value for q is qbest = xbest/ybest, as we would expect, and that the fractional uncertainty is

δq/|q| ≈ δx/|x| + δy/|y|.    (3.7)
We conclude that when we divide or multiply two measured quantities x and y, the fractional uncertainty in the answer is the sum of the fractional uncertainties in x and y, as in (3.7). If we now multiply or divide a series of numbers, repeated application of this result leads to the following provisional rule.
2 The binomial theorem expresses 1/(1 − b) as the infinite series 1 + b + b² + ···. If b is much less than 1, then 1/(1 − b) ≈ 1 + b as in (3.6). If you are unfamiliar with the binomial theorem, you can find more details in Problem 3.8.
Uncertainty in Products and Quotients (Provisional Rule)
If several quantities x, ..., w are measured with small uncertainties δx, ..., δw, and the measured values are used to compute

q = (x × ··· × z)/(u × ··· × w),

then the fractional uncertainty in the computed value of q is the sum,

δq/|q| ≈ δx/|x| + ··· + δz/|z| + δu/|u| + ··· + δw/|w|,    (3.8)

of the fractional uncertainties in x, ..., w.
Briefly, when quantities are multiplied or divided the fractional uncertainties add.
Example: A Problem in Surveying
In surveying, sometimes a value can be found for an inaccessible length l (such as the height of a tall tree) by measuring three other lengths l1, l2, l3 in terms of which

l = l1 l2 / l3.

Suppose we perform such an experiment and obtain the following results (in feet):

l1 = 200 ± 2,    l2 = 5.5 ± 0.1,    l3 = 10.0 ± 0.4.

Our best estimate for l is

lbest = (200 × 5.5)/10.0 = 110 ft.

According to (3.8), the fractional uncertainty in this answer is the sum of the fractional uncertainties in l1, l2, and l3, which are 1%, 2%, and 4%, respectively. Thus

δl/l ≈ δl1/l1 + δl2/l2 + δl3/l3 = (1 + 2 + 4)% = 7%,
and our final answer is
110 ± 8 ft.
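As a quick check of the arithmetic, here is a short Python sketch of the same calculation using the provisional rule (3.8); the exact fractional uncertainty comes out near 7% before rounding, just as in the text.

```python
# Provisional rule (3.8): for products and quotients, the fractional
# uncertainties add.  Values from the surveying example above (in feet).
l1, dl1 = 200.0, 2.0
l2, dl2 = 5.5, 0.1
l3, dl3 = 10.0, 0.4

l_best = l1 * l2 / l3                                   # 110 ft
frac   = dl1 / l1 + dl2 / l2 + dl3 / l3                 # about 1% + 2% + 4% = 7%
print(f"l = {l_best:.0f} ft ± {100 * frac:.0f}%  (about ± {l_best * frac:.1f} ft)")
```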
Quick Check 3.2. Suppose you measure the three quantities x, y, and z as follows:
X = 8.0 ± 0.2, y = 5.0 ± 0.1, z = 4.0 ± 0.1.
Express the given uncertainties as percentages, and then calculate q = xy/z with
its uncertainty δq [as given by the provisional rule (3.8)].
3.4 Two Important Special Cases
Two important special cases of the rule (3.8) deserve mention. One concerns the product of two numbers, one of which has no uncertainty; the other involves a power (such as x³) of a measured number.
MEASURED QUANTITY TIMES EXACT NUMBER
Suppose we measure a quantity x and then use the measured value to calculate
the product q = Bx, where the number B has no uncertainty. For example, we might
measure the diameter of a circle and then calculate its circumference, c = π × d; or we might measure the thickness T of 200 identical sheets of paper and then calculate the thickness of a single sheet as t = (1/200) × T. According to the rule (3.8), the fractional uncertainty in q = Bx is the sum of the fractional uncertainties in B and x. Because δB = 0, this implies that

δq/|q| = δx/|x|.

That is, the fractional uncertainty in q = Bx (with B known exactly) is the same as that in x. We can express this result differently if we multiply through by |q| = |Bx| to give δq = |B| δx, and we have the following useful rule:3

Measured Quantity Times Exact Number

If the quantity x is measured with uncertainty δx and is used to compute the product

q = Bx,

where B has no uncertainty, then the uncertainty in q is just |B| times that in x,

δq = |B| δx.    (3.9)
3This rule (3.9) was derived from the rule (3.8), which is provisional and will be replaced by the more complete rules (3.18) and (3.19). Fortunately, the same conclusion (3.9) follows from these improved rules. Thus (3.9) is already in its final form.
This rule is especially useful in measuring something inconveniently small but available many times over, such as the thickness of a sheet of paper or the time for a revolution of a rapidly spinning wheel. For example, if we measure the thickness T of 200 sheets of paper and get the answer
(thickness of 200 sheets) = T = 1.3 ± 0.1 inches,
it immediately follows that the thickness t of a single sheet is
(thickness of one sheet) = t = (1/200) × T = 0.0065 ± 0.0005 inches.
Notice how this technique (measuring the thickness of several identical sheets and dividing by their number) makes easily possible a measurement that would otherwise require quite sophisticated equipment and that this technique gives a remarkably small uncertainty. Of course, the sheets must be known to be equally thick.
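A two-line calculation makes the point; the sketch below is only an illustration of rule (3.9) with the numbers quoted above.

```python
# Rule (3.9): if q = Bx with B exact, then the uncertainty is δq = |B| δx.
# Thickness of one sheet from the measured thickness of 200 sheets.
T, dT = 1.3, 0.1            # inches, for 200 sheets
B = 1 / 200                 # exact number, no uncertainty
t, dt = B * T, abs(B) * dT
print(f"thickness of one sheet = {t:.4f} ± {dt:.4f} inches")   # 0.0065 ± 0.0005
```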
Quick Check 3.3. Suppose you measure the diameter of a circle as
d = 5.0 ± 0.1 cm and use this value to calculate the circumference c = πd. What is your answer,
with its uncertainty?
POWERS
The second special case of the rule (3.8) concerns the evaluation of a power of some measured quantity. For example, we might measure the speed v of some object and then, to find its kinetic energy ½mv², calculate the square v². Because v² is just v × v, it follows from (3.8) that the fractional uncertainty in v² is twice the fractional uncertainty in v. More generally, from (3.8) the general rule for any power is clearly as follows.
Uncertainty in a Power
If the quantity x is measured with uncertainty δx and the measured value is used to compute the power

q = xⁿ,

then the fractional uncertainty in q is n times that in x,

δq/|q| = n δx/|x|.    (3.10)
The derivation of this rule required that n be a positive integer. In fact, however, the rule generalizes to include any exponent n, as we will see later in Equation (3.26).
Quick Check 3.4. To find the volume of a certain cube, you measure its side
as 2.00 ± 0.02 cm. Convert this uncertainty to a percent and then find the
volume with its uncertainty.
Example: Measurement of g
Suppose a student measures g, the acceleration of gravity, by measuring the time t for a stone to fall from a height h above the ground. After making several timings, she concludes that
t = 1.6 ± 0.1 s,
and she measures the height h as
h = 46.2 ± 0.3 ft. Because h is given by the well-known formula h = ½gt², she now calculates g as

g = 2h/t² = (2 × 46.2 ft)/(1.6 s)² = 36.1 ft/s².
What is the uncertainty in her answer? The uncertainty in her answer can be found by using the rules just developed.
To this end, we need to know the fractional uncertainties in each of the factors in
the expression g = 2h/t² used to calculate g. The factor 2 has no uncertainty. The fractional uncertainties in h and t are

δh/h = 0.3/46.2 ≈ 0.7%

and

δt/t = 0.1/1.6 ≈ 6.3%.

According to the rule (3.10), the fractional uncertainty of t² is twice that of t. Therefore, applying the rule (3.8) for products and quotients to the formula g = 2h/t², we find the fractional uncertainty

δg/g ≈ δh/h + 2 δt/t = 0.7% + 2 × (6.3%) = 13.3%,    (3.11)
and hence the uncertainty
δg = (36.1 ft/s²) × (13.3/100) = 4.80 ft/s².
Thus, our student's final answer (properly rounded) is
g = 36 ± 5 ft/s².

This example illustrates how simple the estimation of uncertainties can often be. It also illustrates how error analysis tells you not only the size of uncertainties but also how to reduce them. In this example, (3.11) shows that the largest contribution comes from the measurement of the time. If we want a more precise value of g, then the measurement of t must be improved; any attempt to improve the measurement of h will be wasted effort.
Finally, the accepted value of g is 32 ft/s², which lies within our student's margins of error. Thus, she can conclude that her measurement, although not especially accurate, is perfectly consistent with the known value of g.
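For comparison, the whole calculation fits in a few lines of Python; this is only a restatement of the arithmetic above, with variable names of our own choosing.

```python
# g = 2h/t²: the factor 2 is exact, and the fractional uncertainty in t²
# is twice that in t (rule 3.10), so the fractional uncertainties add as in (3.11).
h, dh = 46.2, 0.3           # ft
t, dt = 1.6, 0.1            # s

g      = 2 * h / t**2                        # about 36.1 ft/s²
frac_g = dh / h + 2 * (dt / t)               # about 0.7% + 2 × 6.3% = 13%
print(f"g = {g:.1f} ± {g * frac_g:.1f} ft/s²")   # rounds to 36 ± 5 ft/s²
```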
3.5 Independent Uncertainties in a Sum
The rules presented thus far can be summarized quickly: When measured quantities are added or subtracted, the uncertainties add; when measured quantities are multiplied or divided, the fractional uncertainties add. In this and the next section, I discuss how, under certain conditions, the uncertainties calculated by using these rules may be unnecessarily large. Specifically, you will see that if the original uncertainties are independent and random, a more realistic (and smaller) estimate of the final uncertainty is given by similar rules in which the uncertainties (or fractional uncertainties) are added in quadrature (a procedure defined shortly).
Let us first consider computing the sum, q = x + y, of two numbers x and y
that have been measured in the standard form
(measured value of x) = xbest ± δx,

with a similar expression for y. The argument used in the last section was as follows: First, the best estimate for q = x + y is obviously qbest = xbest + ybest. Second, since the highest probable values for x and y are xbest + δx and ybest + δy, the highest probable value for q is

xbest + ybest + δx + δy.    (3.12)

Similarly, the lowest probable value of q is

xbest + ybest − δx − δy.

Therefore, we concluded, the value of q probably lies between these two numbers, and the uncertainty in q is

δq ≈ δx + δy.
To see why this formula is likely to overestimate δq, let us consider how the actual value of q could equal the highest extreme (3.12). Obviously, this occurs if we have underestimated x by the full amount δx and underestimated y by the full δy, which is a fairly unlikely event. If x and y are measured independently and our errors are random in nature, we have a 50% chance that an underestimate of x is accompanied by an overestimate of y, or vice versa. Clearly, then, the probability we will underestimate both x and y by the full amounts δx and δy is fairly small.
Therefore, the value δq = δx + δy overstates our probable error.
What constitutes a better estimate of δq? The answer depends on precisely what we mean by uncertainties (that is, what we mean by the statement that q is "probably" somewhere between qbest − δq and qbest + δq). It also depends on the statistical
laws governing our errors in measurement. Chapter 5 discusses the normal, or Gauss, distribution, which describes measurements subject to random uncertainties. It shows that if the measurements of x and y are made independently and are both
governed by the normal distribution, then the uncertainty in q = x + y is given by
δq = √((δx)² + (δy)²).    (3.13)
When we combine two numbers by squaring them, adding the squares, and taking the square root, as in (3.13), the numbers are said to be added in quadrature. Thus, the rule embodied in (3.13) can be stated as follows: If the measurements of x and y are independent and subject only to random uncertainties, then the uncer-
tainty δq in the calculated value of q = x + y is the sum in quadrature or quadratic sum of the uncertainties δx and δy.
Compare the new expression (3.13) for the uncertainty in q = x + y with our
old expression,
δq ≈ δx + δy.    (3.14)
First, the new expression (3.13) is always smaller than the old (3.14), as we can see
from a simple geometrical argument: For any two positive numbers a and b, the
numbers a, b, and √(a² + b²) are the three sides of a right-angled triangle (Figure 3.2). Because the length of any side of a triangle is always less than the sum of the other two sides, it follows that √(a² + b²) < a + b and hence that (3.13) is always less than (3.14).

Figure 3.2. Because any side of a triangle is less than the sum of the other two sides, the inequality √(a² + b²) < a + b is always true.
Because expression (3.13) for the uncertainty in q = x + y is always smaller
than (3.14), you should always use (3.13) when it is applicable. It is, however, not always applicable. Expression (3.13) reflects the possibility that an overestimate of x can be offset by an underestimate of y or vice versa, but there are measurements for which this cancellation is not possible.
Suppose, for example, that q = x + y is the sum of two lengths x and y mea-
sured with the same steel tape. Suppose further that the main source of uncertainty is our fear that the tape was designed for use at a temperature different from the present temperature. If we don't know this temperature (and don't have a reliable tape for comparison), we have to recognize that our tape may be longer or shorter than its calibrated length and hence may yield readings under or over the correct length. This uncertainty can be easily allowed for. 4 The point, however, is that if the tape is too long, then we underestimate both x and y; and if the tape is too short, we overestimate both x and y. Thus, there is no possibility for the cancellations that
justified using the sum in quadrature to compute the uncertainty in q = x + y.
I will prove later (in Chapter 9) that, whether or not our errors are independent
and random, the uncertainty in q = x + y is certainly no larger than the simple sum δx + δy:

δq ≤ δx + δy.    (3.15)

That is, our old expression (3.14) for δq is actually an upper bound that holds in all cases. If we have any reason to suspect the errors in x and y are not independent and random (as in the example of the steel tape measure), we are not justified in using the quadratic sum (3.13) for δq. On the other hand, the bound (3.15) guarantees that δq is certainly no worse than δx + δy, and our safest course is to use the old rule

δq = δx + δy.
Often, whether uncertainties are added in quadrature or directly makes little difference. For example, suppose that x and y are lengths both measured with uncer-
tainties δx = δy = 2 mm. If we are sure these uncertainties are independent and random, we would estimate the error in x + y to be the sum in quadrature,

√((δx)² + (δy)²) = √(4 + 4) mm = 2.8 mm ≈ 3 mm,

but if we suspect that the uncertainties may not be independent, we would have to use the ordinary sum,

δx + δy = (2 + 2) mm = 4 mm.
In many experiments, the estimation of uncertainties is so crude that the difference between these two answers (3 mm and 4 mm) is unimportant. On the other hand, sometimes the sum in quadrature is significantly smaller than the ordinary sum. Also, rather surprisingly, the sum in quadrature is sometimes easier to compute than the ordinary sum. Examples of these effects are given in the next section.
4 Suppose, for example, that the tape has a coefficient of expansion α = 10⁻⁵ per degree and that we decide that the difference between its calibration temperature and the present temperature is unlikely to be more than 10 degrees. The tape is then unlikely to be more than 10⁻⁴, or 0.01%, away from its correct length, and our uncertainty is therefore 0.01%.
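The contrast between the two estimates is easy to see numerically; the following sketch just compares the quadratic and ordinary sums for the 2 mm example above.

```python
from math import sqrt

# Two independent length uncertainties of 2 mm each, as in the example above.
dx, dy = 2.0, 2.0
in_quadrature = sqrt(dx**2 + dy**2)   # about 2.8 mm, appropriate for independent random errors
ordinary_sum  = dx + dy               # 4 mm, the safe upper bound (3.15)
print(f"quadrature: {in_quadrature:.1f} mm,  ordinary sum: {ordinary_sum:.1f} mm")
```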
Quick Check 3.5. Suppose you measure the volumes of water in two beakers as
V1 = 130 ± 6 ml and V2 = 65 ± 4 ml
and then carefully pour the contents of the first into the second. What is your
prediction for the total volume V = V1 + V2 with its uncertainty, δV, assuming the original uncertainties are independent and random? What would you give for δV if you suspected the original uncertainties were not independent?
3.6 More About Independent Uncertainties
In the previous section, I discussed how independent random uncertainties in two quantities x and y propagate to cause an uncertainty in the sum x + y. We saw that for this type of uncertainty the two errors should be added in quadrature. We can naturally consider the corresponding problem for differences, products, and quotients. As we will see in Section 5.6, in all cases our previous rules (3.4) and (3.8) are modified only in that the sums of errors (or fractional errors) are replaced by quadratic sums. Further, the old expressions (3.4) and (3.8) will be proven to be upper bounds that always hold whether or not the uncertainties are independent and random. Thus, the final versions of our two main rules are as follows:
Uncertainties in Sums and Differences

Suppose that x, ..., w are measured with uncertainties δx, ..., δw, and the measured values are used to compute

q = x + ··· + z − (u + ··· + w).

If the uncertainties in x, ..., w are known to be independent and random, then the uncertainty in q is the quadratic sum

δq = √((δx)² + ··· + (δz)² + (δu)² + ··· + (δw)²)    (3.16)

of the original uncertainties. In any case, δq is never larger than their ordinary sum,

δq ≤ δx + ··· + δz + δu + ··· + δw.    (3.17)

and
Uncertainties in Products and Quotients

Suppose that x, ..., w are measured with small uncertainties δx, ..., δw, and the measured values are used to compute

q = (x × ··· × z)/(u × ··· × w).

If the uncertainties in x, ..., w are independent and random, then the fractional uncertainty in q is the quadratic sum

δq/|q| = √((δx/x)² + ··· + (δz/z)² + (δu/u)² + ··· + (δw/w)²)    (3.18)

of the original fractional uncertainties. In any case, it is never larger than their ordinary sum,

δq/|q| ≤ δx/|x| + ··· + δz/|z| + δu/|u| + ··· + δw/|w|.    (3.19)
Notice that I have not yet justified the use of addition in quadrature for independent random uncertainties. I have argued only that when the various uncertainties are independent and random, there is a good chance of partial cancellations of errors and that the resulting uncertainty (or fractional uncertainty) should be smaller than the simple sum of the original uncertainties (or fractional uncertainties); the sum in quadrature does have this property. I give a proper justification of its use in Chapter 5. The bounds (3.17) and (3.19) are proved in Chapter 9.
Example: Straight Addition vs Addition in Quadrature
As discussed, sometimes there is no significant difference between uncertainties computed by addition in quadrature and those computed by straight addition. Often, however, there is a significant difference, and-surprisingly enough-the sum in quadrature is often much simpler to compute. To see how this situation can arise, consider the following example.
Suppose we want to find the efficiency of a D.C. electric motor by using it to lift a mass m through a height h. The work accomplished is mgh, and the electric energy delivered to the motor is VIt, where V is the applied voltage, I the current, and t the time for which the motor runs. The efficiency is then

efficiency, e = (work done by motor)/(energy delivered to motor) = mgh/(VIt).

Let us suppose that m, h, V, and I can all be measured with 1% accuracy,

(fractional uncertainty for m, h, V, and I) = 1%,
and that the time t has an uncertainty of 5%,
(fractional uncertainty for t) = 5%.

(Of course, g is known with negligible uncertainty.) If we now compute the efficiency e, then according to our old rule ("fractional errors add"), we have an uncertainty

δe/e ≈ δm/m + δh/h + δV/V + δI/I + δt/t = (1 + 1 + 1 + 1 + 5)% = 9%.

On the other hand, if we are confident that the various uncertainties are independent and random, then we can compute δe/e by the quadratic sum to give

δe/e = √((1%)² + (1%)² + (1%)² + (1%)² + (5%)²) = √29 % ≈ 5%.

Clearly, the quadratic sum leads to a significantly smaller estimate for δe. Furthermore, to one significant figure, the uncertainties in m, h, V, and I make no contribution at all to the uncertainty in e computed in this way; that is, to one significant figure, we have found (in this example)

δe/e ≈ δt/t.
This striking simplification is easily understood. When numbers are added in quadrature, they are squared first and then summed. The process of squaring greatly exaggerates the importance of the larger numbers. Thus, if one number is 5 times any of the others (as in our example), its square is 25 times that of the others, and we can usually neglect the others entirely.
This example illustrates how combining errors in quadrature is usually better and often easier than computing them by straight addition. The example also illustrates the type of problem in which the errors are independent and for which addition in quadrature is justified. (For the moment I take for granted that the errors are random and will discuss this more difficult point in Chapter 4.) The five quantities measured (m, h, V, I, and t) are physically distinct quantities with different units and are measured by entirely different processes. For the sources of error in any quantity to be correlated with those in any other is almost inconceivable. Therefore, the errors can reasonably be treated as independent and combined in quadrature.
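The dominance of the largest term is easy to verify numerically; here is a minimal sketch with the five fractional uncertainties of the motor example.

```python
from math import sqrt

# Fractional uncertainties (in %) of m, h, V, I, and t in e = mgh/(VIt).
fracs = [1, 1, 1, 1, 5]

straight   = sum(fracs)                           # 9%
quadrature = sqrt(sum(f**2 for f in fracs))       # sqrt(29), about 5.4%
print(f"straight sum: {straight}%,  quadrature: {quadrature:.1f}%")
# Squaring exaggerates the 5% timing error, which dominates the quadratic sum.
```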
Quick Check 3.6. Suppose you measure three numbers as follows:
X = 200 ± 2, y = 50 ± 2, Z = 20 ± 1,
where the three uncertainties are independent and random. What would you
give for the values of q = x + y − z and r = xy/z with their uncertainties?
3.7 Arbitrary Functions of One Variable
You have now seen how uncertainties, both independent and otherwise, propagate through sums, differences, products, and quotients. However, many calculations re-
quire more complicated operations, such as computation of a sine, cosine, or square root, and you will need to know how uncertainties propagate in these cases.
As an example, imagine finding the refractive index n of glass by measuring
the critical angle θ. We know from elementary optics that n = 1/sin θ. Therefore, if we can measure the angle θ, we can easily calculate the refractive index n, but we must then decide what uncertainty δn in n = 1/sin θ results from the uncertainty δθ in our measurement of θ.
More generally, suppose we have measured a quantity x in the standard form xbest ± δx and want to calculate some known function q(x), such as q(x) = 1/sin x or q(x) = √x. A simple way to think about this calculation is to draw a graph of q(x) as in Figure 3.3. The best estimate for q(x) is, of course, qbest = q(xbest), and the values xbest and qbest are shown connected by the heavy lines in Figure 3.3.
To decide on the uncertainty δq, we employ the usual argument. The largest probable value of x is xbest + δx; using the graph, we can immediately find the largest probable value of q, which is shown as qmax. Similarly, we can draw in the smallest probable value, qmin, as shown. If the uncertainty δx is small (as we always suppose it is), then the section of graph involved in this construction is approximately straight, and qmax and qmin are easily seen to be equally spaced on either side of qbest. The uncertainty δq can then be taken from the graph as either of the lengths shown, and we have found the value of q in the standard form qbest ± δq.
Occasionally, uncertainties are calculated from a graph as just described. (See
Problems 3.26 and 3.30 for examples.) Usually, however, the function q(x) is known
Figure 3.3. Graph of q(x) vs x. If x is measured as xbest ± δx, then the best estimate for q(x) is qbest = q(xbest). The largest and smallest probable values of q(x) correspond to the values xbest ± δx of x.
Figure 3.4. If the slope of q(x) is negative, the maximum probable value of q corresponds to the minimum value of x, and vice versa.
explicitly, q(x) = sin x or q(x) = √x, for example, and the uncertainty δq can be calculated analytically. From Figure 3.3, we see that

δq = q(xbest + δx) − q(xbest).    (3.20)

Now, a fundamental approximation of calculus asserts that, for any function q(x) and any sufficiently small increment u,

q(x + u) − q(x) ≈ (dq/dx) u.

Thus, provided the uncertainty δx is small (as we always assume it is), we can rewrite the difference in (3.20) to give

δq = (dq/dx) δx.    (3.21)

Thus, to find the uncertainty δq, we just calculate the derivative dq/dx and multiply by the uncertainty δx.
The rule (3.21) is not quite in its final form. It was derived for a function, like that of Figure 3.3, whose slope is positive. Figure 3.4 shows a function with negative slope. Here, the maximum probable value qmax obviously corresponds to the minimum value of x, so that
δq = −(dq/dx) δx.    (3.22)
Because dq/dx is negative, we can write −dq/dx as |dq/dx|, and we have the following general rule.
Uncertainty in Any Function of One Variable

If x is measured with uncertainty δx and is used to calculate the function q(x), then the uncertainty δq is

δq = |dq/dx| δx.    (3.23)
This rule usually allows us to find δq quickly and easily. Occasionally, if q(x) is very complicated, evaluating its derivative may be a nuisance, and going back to (3.20) is sometimes easier, as we discuss in Problem 3.32. Particularly if you have programmed your calculator or computer to find q(x), then finding q(xbest + δx) and
q(xbest) and their difference may be easier than differentiating q(x) explicitly.
Example: Uncertainty in a Cosine
As a simple application of the rule (3.23), suppose we have measured an angle θ as

θ = 20 ± 3°

and that we wish to find cos θ. Our best estimate of cos θ is, of course, cos 20° = 0.94, and according to (3.23), the uncertainty is

δ(cos θ) = |d(cos θ)/dθ| δθ = |sin θ| δθ (in rad).    (3.24)

We have indicated that δθ must be expressed in radians, because the derivative of cos θ is −sin θ only if θ is expressed in radians. Therefore, we rewrite δθ = 3° as δθ ≈ 0.05 rad; then (3.24) gives

δ(cos θ) = (sin 20°) × 0.05 = 0.34 × 0.05 = 0.02.

Thus, our final answer is

cos θ = 0.94 ± 0.02.
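The same numbers follow directly from rule (3.23) in a few lines of Python; note that the angle and its uncertainty must be converted to radians before differentiating.

```python
from math import cos, sin, radians

# Rule (3.23): δq = |dq/dθ| δθ, with δθ expressed in radians.
theta, dtheta = radians(20), radians(3)
q  = cos(theta)                      # 0.94
dq = abs(-sin(theta)) * dtheta       # |sin θ| δθ, about 0.02
print(f"cos θ = {q:.2f} ± {dq:.2f}")
```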
Quick Check 3.7. Suppose you measure x as 3.0 ± 0.1 and then calculate
q = e^x. What is your answer, with its uncertainty? (Remember that the derivative of e^x is e^x.)
As another example of the rule (3.23), we can rederive and generalize a result found in Section 3.4. Suppose we measure the quantity x and then calculate the
power q(x) = xⁿ, where n is any known, fixed number, positive or negative. According to (3.23), the resulting uncertainty in q is

δq = |dq/dx| δx = |n xⁿ⁻¹| δx.

If we divide both sides of this equation by |q| = |xⁿ|, we find that

δq/|q| = |n| δx/|x|;    (3.25)

that is, the fractional uncertainty in q = xⁿ is |n| times that in x. This result (3.25) is just the rule (3.10) found earlier, except that the result here is more general, because n can now be any number. For example, if n = 1/2, then q = √x, and

δq/|q| = (1/2) δx/|x|;

that is, the fractional uncertainty in √x is half that in x itself. Similarly, the fractional uncertainty in 1/x = x⁻¹ is the same as that in x itself.
The result (3.25) is just a special case of the rule (3.23). It is sufficiently important, however, to deserve separate statement as the following general rule.

Uncertainty in a Power

If x is measured with uncertainty δx and is used to calculate the power q = xⁿ (where n is a fixed, known number), then the fractional uncertainty in q is |n| times that in x,

δq/|q| = |n| δx/|x|.    (3.26)
Quick Check 3.8. If you measure x as 100 ± 6, what should you report for
√x, with its uncertainty?
3.8 Propagation Step by Step
We now have enough tools to handle almost any problem in the propagation of errors. Any calculation can be broken down into a sequence of steps, each involving just one of the following types of operation: (1) sums and differences; (2) products and quotients; and (3) computation of a function of one variable, such as xⁿ, sin x,
e^x, or ln x. For example, we could calculate
q = x(y - z sinu)
(3.27)
from the measured quantities x, y, z, and u in the following steps: Compute the function sinu, then the product of z and sinu, next the difference of y and z sinu, and finally the product of x and (y - z sinu).
We know how uncertainties propagate through each of these separate operations. Thus, provided the various quantities involved are independent, we can calculate the uncertainty in the final answer by proceeding in steps from the uncertainties in the original measurement. For example, if the quantities x, y, z, and u in (3.27) have been measured with corresponding uncertainties δx, ..., δu, we could calculate the uncertainty in q as follows. First, find the uncertainty in the function sin u; knowing this, find the uncertainty in the product z sin u, and then that in the difference y − z sin u; finally, find the uncertainty in the complete product (3.27).
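To make the stepwise procedure concrete, here is a sketch for the function (3.27), assuming the four uncertainties are independent and random (so each step combines contributions in quadrature). The numerical inputs are made up purely for illustration.

```python
from math import sin, cos, sqrt

def quad(*terms):
    """Combine independent uncertainty contributions in quadrature."""
    return sqrt(sum(t**2 for t in terms))

# Made-up measured values of x, y, z, u (u in radians):
x, dx = 4.0, 0.1
y, dy = 6.0, 0.2
z, dz = 2.0, 0.1
u, du = 0.50, 0.02

w, dw = sin(u), abs(cos(u)) * du                    # step 1: sin u, rule (3.23)
p, dp = z * w, abs(z * w) * quad(dz / z, dw / w)    # step 2: product z sin u
s, ds = y - p, quad(dy, dp)                         # step 3: difference y - z sin u
q, dq = x * s, abs(x * s) * quad(dx / x, ds / s)    # step 4: product x(y - z sin u)
print(f"q = {q:.2f} ± {dq:.2f}")
```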
Quick Check 3.9. Suppose you measure three numbers as follows:
X = 200 ± 2, y = 50 ± 2, Z = 40 ± 2,
where the three uncertainties are independent and random. Use step-by-step
propagation to find the quantity q = x/(y - z) with its uncertainty. [First find
the uncertainty in the difference y - z and then the quotient x/(y - z).]
Before I discuss some examples of this step-by-step calculation of errors, let me emphasize three general points. First, because uncertainties in sums or differences involve absolute uncertainties (such as δx) whereas those in products or quotients involve fractional uncertainties (such as δx/|x|), the calculations will require some facility in passing from absolute to fractional uncertainties and vice versa, as demonstrated below.
Second, an important simplifying feature of all these calculations is that (as repeatedly emphasized) uncertainties are seldom needed to more than one significant figure. Hence, much of the calculation can be done rapidly in your head, and many smaller uncertainties can be completely neglected. In a typical experiment involving several trials, you may need to do a careful calculation on paper of all error propagations for the first trial. After that, you will often find that all trials are sufficiently similar that no further calculation is needed or, at worst, that for subsequent trials the calculations of the first trial can be modified in your head.
Finally, you need to be aware that you will sometimes encounter functions q(x) whose uncertainty cannot be found reliably by the stepwise method advocated here. These functions always involve at least one variable that appears more than once. Suppose, for example, that in place of the function (3.27), we had to evaluate
q = y − x sin y.
This function is the difference of two terms, y and x sin y, but these two terms are definitely not independent because both depend on y. Thus, to estimate the uncertainty, we would have to treat the terms as dependent (that is, add their uncertainties directly, not in quadrature). Under some circumstances, this treatment may seriously overestimate the true uncertainty. Faced with a function like this, we must recognize that a stepwise calculation may give an uncertainty that is unnecessarily big, and the only satisfactory procedure is then to use the general formula to be developed in Section 3.11.
3.9 Examples
In this and the next section, I give three examples of the type of calculation encountered in introductory laboratories. None of these examples is especially complicated; in fact, few real problems are much more complicated than the ones described here.
Example: Measurement of g with a Simple Pendulum
As a first example, suppose that we measure g, the acceleration of gravity, using a simple pendulum. The period of such a pendulum is well known to be T = 2π√(l/g), where l is the length of the pendulum. Thus, if l and T are measured, we can find g as

g = 4π²l/T².    (3.28)

This result gives g as the product or quotient of three factors, 4π², l, and T². If the various uncertainties are independent and random, the fractional uncertainty in our answer is just the quadratic sum of the fractional uncertainties in these factors. The factor 4π² has no uncertainty, and the fractional uncertainty in T² is twice that in T:

δ(T²)/T² = 2 δT/T.

Thus, the fractional uncertainty in our answer for g will be

δg/g = √((δl/l)² + (2 δT/T)²).    (3.29)

Suppose we measure the period T for one value of the length l and get the results5

l = 92.95 ± 0.1 cm,    T = 1.936 ± 0.004 s.

5 Although at first sight an uncertainty δT = 0.004 s may seem unrealistically small, you can easily achieve it by timing several oscillations. If you can measure with an accuracy of 0.1 s, as is certainly possible with a stopwatch, then by timing 25 oscillations you will find T within 0.004 s.
Our best estimate for g is easily found from (3.28) as
gbest = 4π² × (92.95 cm)/(1.936 s)² = 979 cm/s².

To find our uncertainty in g using (3.29), we need the fractional uncertainties in l and T. These are easily calculated (in the head) as

δl/l = 0.1%  and  δT/T = 0.2%.

Substituting into (3.29), we find

δg/g = √((0.1)² + (2 × 0.2)²) % ≈ 0.4%;

from which

δg = 0.004 × 979 cm/s² = 4 cm/s².

Thus, based on these measurements, our final answer is

g = 979 ± 4 cm/s².
Having found the measured value of g and its uncertainty, we would naturally compare these values with the accepted value of g. If the latter has its usual value of 981 cm/s², the present value is entirely satisfactory.
If this experiment is repeated (as most such experiments should be) with different values of the parameters, the uncertainty calculations usually do not need to be repeated in complete detail. We can often easily convince ourselves that all uncertainties (in the answers for g) are close enough that no further calculations are needed; sometimes the uncertainty in a few representative values of g can be calculated and the remainder estimated by inspection. In any case, the best procedure is almost always to record the various values of l, T, and g and the corresponding uncertainties in a single table. (See Problem 3.40.)
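For repeated trials, a short script saves the tabulation effort; the following sketch reproduces the single trial above using (3.28) and (3.29).

```python
from math import pi, sqrt

# g = 4π² l / T², with δg/g = sqrt((δl/l)² + (2 δT/T)²) as in (3.29).
l, dl = 92.95, 0.1         # cm
T, dT = 1.936, 0.004       # s

g      = 4 * pi**2 * l / T**2                    # about 979 cm/s²
frac_g = sqrt((dl / l)**2 + (2 * dT / T)**2)     # about 0.4%
print(f"g = {g:.0f} ± {g * frac_g:.0f} cm/s²")   # 979 ± 4 cm/s²
```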
Example: Refractive Index Using Snell's Law
If a ray of light passes from air into glass, the angles of incidence i and refraction r are defined as in Figure 3.5 and are related by Snell's law, sin i = n sin r, where
n is the refractive index of the glass. Thus, if you measure the angles i and r, you
Figure 3.5. The angles of incidence i and refraction r when a ray of light passes from air into glass.
can calculate the refractive index n as
n = sin i / sin r.    (3.30)

The uncertainty in this answer is easily calculated. Because n is the quotient of sin i and sin r, the fractional uncertainty in n is the quadratic sum of those in sin i and sin r:

δn/n = √((δ sin i/sin i)² + (δ sin r/sin r)²).    (3.31)

To find the fractional uncertainty in the sine of any angle θ, we note that

δ sin θ = |d(sin θ)/dθ| δθ = |cos θ| δθ (in rad).

Thus, the fractional uncertainty is

δ sin θ/|sin θ| = |cot θ| δθ (in rad).    (3.32)
Suppose we now measure the angle r for a couple of values of i and get the results shown in the first two columns of Table 3.1 (with all measurements judged
to be uncertain by ± 1°, or 0.02 rad). The calculation of n = sin i/sin r is easily
carried out as shown in the next three columns of Table 3.1. The uncertainty in n can then be found as in the last three columns; the fractional uncertainties in sin i and sinr are calculated using (3.32), and finally the fractional uncertainty in n is found using (3.31).
Table 3.1. Finding the refractive index.

i (deg)    r (deg)    sin i    sin r     n     δ(sin i)/|sin i|   δ(sin r)/|sin r|   δn/n
(all ±1)   (all ±1)
  20         13       0.342    0.225    1.52         5%                 8%            9%
  40         23.5     0.643    0.399    1.61         2%                 4%            5%
Before making a series of measurements like the two shown in Table 3.1, you should think carefully how best to record the data and calculations. A tidy display like that in Table 3.1 makes the recording of data easier and reduces the danger of mistakes in calculation. It is also easier for the reader to follow and check.
If you repeat an experiment like this one several times, the error calculations can become tedious if you do them for each repetition. If you have a programmable calculator, you may decide to write a program to do the repetitive calculations automatically. You should recognize, however, that you almost never need to do the error calculations for all the repetitions; if you find the uncertainties in n corresponding to the smallest and largest values of i (and possibly a few intermediate values), then these uncertainties suffice for most purposes.
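A small function of the kind suggested above might look like the sketch below; it reproduces the two rows of Table 3.1 from equations (3.31) and (3.32), and the helper name refractive_index is ours.

```python
from math import sin, cos, radians, sqrt

def refractive_index(i_deg, r_deg, d_deg=1.0):
    """n = sin i / sin r with its fractional uncertainty, from (3.31) and (3.32)."""
    i, r, d = radians(i_deg), radians(r_deg), radians(d_deg)
    n = sin(i) / sin(r)
    frac = sqrt((abs(cos(i) / sin(i)) * d)**2 + (abs(cos(r) / sin(r)) * d)**2)
    return n, frac

for i_deg, r_deg in [(20, 13), (40, 23.5)]:
    n, frac = refractive_index(i_deg, r_deg)
    print(f"i = {i_deg}°, r = {r_deg}°:  n = {n:.2f} ± {100 * frac:.0f}%")
```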
3.10 A More Complicated Example
The two examples just given are typical of many experiments in the introductory physics laboratory. A few experiments require more complicated calculations, however. As an example of such an experiment, I discuss here the measurement of the acceleration of a cart rolling down a slope.6
Example: Acceleration of a Cart Down a Slope
Figure 3.6. A cart rolls down an incline of slope θ. Each photocell is connected to a timer to measure the time for the cart to pass it.
Let us consider a cart rolling down an incline of slope θ as in Figure 3.6. The expected acceleration is g sin θ and, if we measure θ, we can easily calculate the
expected acceleration and its uncertainty (Problem 3.42). We can measure the actual acceleration a by timing the cart past two photocells as shown, each connected to a timer. If the cart has length l and takes time t1 to pass the first photocell, its speed
there is v1 = l/t1. In the same way, v2 = l/t2. (Strictly speaking, these speeds are the cart's average speeds while passing the two photocells. However, provided l is small, the difference between the average and instantaneous speeds is unimportant.) If the distance between the photocells is s, then the well-known formula v2² = v1² + 2as implies that

a = (v2² − v1²)/(2s) = (l²/2s)(1/t2² − 1/t1²).    (3.33)
Using this formula and the measured values of l, s, t1, and t2, we can easily find the observed acceleration and its uncertainty.
6If you wish, you could omit this section without loss of continuity or return to study it in connection with Problem 3.42.
One set of data for this experiment, including uncertainties, was as follows (the numbers in parentheses are the corresponding percentage uncertainties, as you can easily check):
l  =   5.00 ± 0.05 cm    (1%)
s  = 100.0 ± 0.2 cm      (0.2%)
t1 =  0.054 ± 0.001 s    (2%)
t2 =  0.031 ± 0.001 s    (3%).    (3.34)
From these values, we can immediately calculate the first factor in (3.33) as l²/2s = 0.125 cm. Because the fractional uncertainties in l and s are 1% and 0.2%, that in l²/2s is

(fractional uncertainty in l²/2s) = √((2 × 1%)² + (0.2%)²) ≈ 2%.

(Note how the uncertainty in s makes no appreciable contribution and could have been ignored.) Therefore,

l²/2s = 0.125 cm ± 2%.    (3.35)
To calculate the second factor in (3.33) and its uncertainty, we proceed in steps. Because the fractional uncertainty in t1 is 2%, that in 1/t1² is 4%. Thus, since t1 = 0.054 s,

1/t1² = 343 ± 14 s⁻².

In the same way, the fractional uncertainty in 1/t2² is 6% and

1/t2² = 1041 ± 62 s⁻².

Subtracting these (and combining the errors in quadrature), we find

1/t2² − 1/t1² = 698 ± 64 s⁻²  (or 9%).    (3.36)
Finally, according to (3.33), the required acceleration is the product of (3.35) and (3.36). Multiplying these equations together (and combining the fractional uncertainties in quadrature), we obtain
a = (0.125 cm ± 2%) × (698 s⁻² ± 9%) = 87.3 cm/s² ± 9%

or

a = 87 ± 8 cm/s².    (3.37)
This answer could now be compared with the expected acceleration g sin θ, if the latter had been calculated.
When the calculations leading to (3.37) are studied carefully, several interesting features emerge. First, the 2% uncertainty in the factor l²/2s is completely swamped
by the 9% uncertainty in (1/t2²) − (1/t1²). If further calculations are needed for subsequent trials, the uncertainties in l and s can therefore be ignored (so long as a quick check shows they are still just as unimportant).
Another important feature of our calculation is the way in which the 2% and 3% uncertainties in t1 and t2 grow when we evaluate 1/t1², 1/t2², and the difference (1/t2²) − (1/t1²), so that the final uncertainty is 9%. This growth results partly from taking squares and partly from taking the difference of large numbers. We could imagine extending the experiment to check the constancy of a by giving the cart an initial push, so that the speeds v1 and v2 are both larger. If we did, the times t1 and t2 would get smaller, and the effects just described would get worse (see Problem 3.42).
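The whole chain of steps can be scripted for subsequent trials; the sketch below follows the same stepwise quadrature as the text, so its output differs from 87 ± 8 cm/s² only through the text's intermediate rounding.

```python
from math import sqrt

# a = (l²/2s)(1/t2² - 1/t1²) as in (3.33), propagated step by step.
l,  dl  = 5.00,  0.05      # cm
s,  ds  = 100.0, 0.2       # cm
t1, dt1 = 0.054, 0.001     # s
t2, dt2 = 0.031, 0.001     # s

factor   = l**2 / (2 * s)                                   # 0.125 cm
d_factor = factor * sqrt((2 * dl / l)**2 + (ds / s)**2)     # about 2%

inv1, d_inv1 = 1 / t1**2, (1 / t1**2) * (2 * dt1 / t1)      # about 343 s^-2, 4%
inv2, d_inv2 = 1 / t2**2, (1 / t2**2) * (2 * dt2 / t2)      # about 1041 s^-2, 6%
diff, d_diff = inv2 - inv1, sqrt(d_inv1**2 + d_inv2**2)     # about 698 s^-2, 10%

a  = factor * diff
da = abs(a) * sqrt((d_factor / factor)**2 + (d_diff / diff)**2)
print(f"a = {a:.0f} ± {da:.0f} cm/s²")
```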
3.11 General Formula for Error Propagation7
So far, we have established three main rules for the propagation of errors: that for sums and differences, that for products and quotients, and that for arbitrary functions of one variable. In the past three sections, we have seen how the computation of a complicated function can often be broken into steps and the uncertainty in the function computed stepwise using our three simple rules.
In this final section, I give a single general formula from which all three of these rules can be derived and with which any problem in error propagation can be solved. Although this formula is often rather cumbersome to use, it is useful theoretically. Furthermore, there are some problems in which, instead of calculating the uncertainty in steps as in the past three sections, you will do better to calculate it in one step by means of the general formula.
To illustrate the kind of problem for which the one-step calculation is preferable, suppose that we measure three quantities x, y, and z and have to compute a function such as
q = (x + y)/(x + z)    (3.38)

in which a variable appears more than once (x in this case). If we were to calculate the uncertainty δq in steps, then we would first compute the uncertainties in the two sums x + y and x + z, and then that in their quotient. Proceeding in this way, we
would completely miss the possibility that errors in the numerator due to errors in x may, to some extent, cancel errors in the denominator due to errors in x. To understand how this cancellation can happen, suppose that x, y, and z are all positive numbers, and consider what happens if our measurement of x is subject to error. If
we overestimate x, we overestimate both x + y and x + z, and (to a large extent) these overestimates cancel one another when we calculate (x + y)/(x + z). Similarly, an underestimate of x leads to underestimates of both x + y and x + z, which
again cancel when we form the quotient. In either case, an error in x is substantially
7You can postpone reading this section without a serious loss of continuity. The material covered here is not used again until Section 5.6.
canceled out of the quotient (x + y)/(x + z), and our stepwise calculation com-
pletely misses these cancellations. Whenever a function involves the same quantity more than once, as in (3.38),
some errors may cancel themselves (an effect, sometimes called compensating errors). If this cancellation is possible, then a stepwise calculation of the uncertainty may overestimate the final uncertainty. The only way to avoid this overestimation is to calculate the uncertainty in one step by using the method I will now develop. 8
Let us suppose at first that we measure two quantities x and y and then calculate
some function q = q(x, y). This function could be as simple as q = x + y or some-
thing more complicated such as q = (x³ + y) sin(xy). For a function q(x) of a single
variable, we argued that if the best estimate for x is the number xbest, then the best estimate for q(x) is q(xbest). Next, we argued that the extreme (that is, largest and
smallest) probable values of x are xbest ± δx and that the corresponding extreme values of q are therefore

q(xbest ± δx).    (3.39)

Finally, we used the approximation

q(x + u) ≈ q(x) + (dq/dx) u    (3.40)

(for any small increment u) to rewrite the extreme probable values (3.39) as

q(xbest) ± |dq/dx| δx,    (3.41)

where the absolute value is to allow for the possibility that dq/dx may be negative. The result (3.41) means that δq ≈ |dq/dx| δx.
When q is a function of two variables, q(x, y), the argument is similar. If xbest
and Ybest are the best estimates for x and y, we expect the best estimate for q to be
in the usual way. To estimate the uncertainty in this result, we need to generalize the approximation (3.40) for a function of two variables. The required generalization is
q(x + u, y + v) = q(x, y) + aq u + aq v,
ax ay
(3.42)
where u and v are any small increments in x and y, and aqJax and aqJay are the socalled partial derivatives of q with respect to x and y. That is, aqJax is the result of differentiating q with respect to x while treating y as fixed, and vice versa for aqJay. [For further discussion of partial derivatives and the approximation (3.42), see Problems 3.43 and 3.44.]
The extreme probable values for x and y are xbest ± δx and ybest ± δy. If we insert these values into (3.42) and recall that ∂q/∂x and ∂q/∂y may be positive or
8 Sometimes a function that involves a variable more than once can be rewritten in a different form that does not. For example, q = xy - xz can be rewritten as q = x(y - z). In the second form, the uncertainty δq can be calculated in steps without any danger of overestimation.
negative, we find, for the extreme values of q,

q(xbest, ybest) ± (|∂q/∂x| δx + |∂q/∂y| δy).

This means that the uncertainty in q(x, y) is

δq = |∂q/∂x| δx + |∂q/∂y| δy.    (3.43)
Before I discuss various generalizations of this new rule, let us apply it to rederive some familiar cases. Suppose, for instance, that
q(x, y) = x + y;    (3.44)
that is, q is just the sum of x and y. The partial derivatives are both one,
∂q/∂x = ∂q/∂y = 1,    (3.45)
and so, according to (3.43),

δq = δx + δy.    (3.46)
This is just our original provisional rule that the uncertainty in x + y is the sum of
the uncertainties in x and y.
In much the same way, if q is the product q = xy, you can check that (3.43)
implies the familiar rule that the fractional uncertainty in q is the sum of the fractional uncertainties in x and y (see Problem 3.45).
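In outline, the check goes like this: for q = xy the partial derivatives are ∂q/∂x = y and ∂q/∂y = x, so (3.43) gives δq = |y| δx + |x| δy, and dividing both sides by |q| = |xy| gives δq/|q| = δx/|x| + δy/|y|.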
The rule (3.43) can be generalized in various ways. You will not be surprised to learn that when the uncertainties 8x and 8y are independent and random, the sum (3.43) can be replaced by a sum in quadrature. If the function q depends on more than two variables, then we simply add an extra term for each extra variable. In this way, we arrive at the following general rule (whose full justification will appear in Chapters 5 and 9).
δq = √(((∂q/∂x)δx)² + ··· + ((∂q/∂z)δz)²)    (3.47)

(provided all errors are independent and random)

and

δq ≤ |∂q/∂x| δx + ··· + |∂q/∂z| δz    (3.48)

(always).
Although the formulas (3.47) and (3.48) look fairly complicated, they are easy to understand if you think about them one term at a time. For example, suppose for a moment that among all the measured quantities, x, y, ... , z, only x is subject to any uncertainty. (That is, δy = ··· = δz = 0.) Then (3.47) contains only one term and we would find
δq = |∂q/∂x| δx    (if δy = ··· = δz = 0).    (3.49)
In other words, the term |∂q/∂x| δx by itself is the uncertainty, or partial uncertainty, in q caused by the uncertainty in x alone. In the same way, |∂q/∂y| δy is the partial uncertainty in q due to δy alone, and so on. Referring back to (3.47), we see that the total uncertainty in q is the quadratic sum of the partial uncertainties due to each of the separate uncertainties δx, δy, ... , δz (provided the latter are independent). This is a good way to think about the result (3.47), and it suggests the simplest way to use (3.47) to calculate the total uncertainty in q: First, calculate the partial uncertainties in q due to δx, δy, ... , δz separately, using (3.49) and its analogs for y, ... , z; then simply combine these separate uncertainties in quadrature to give the total uncertainty as in (3.47).
In the same way, whether or not the uncertainties δx, δy, ... , δz are independent, the rule (3.48) says that the total uncertainty in q never exceeds the simple sum of the partial uncertainties due to each of δx, δy, ... , δz separately.
Example: Using the General Formula (3.47) To determine the quantity
q = x²y - xy²,
a scientist measures x and y as follows:
x = 3.0 ± 0.1   and   y = 2.0 ± 0.1.
What is his answer for q and its uncertainty, as given by (3.47)?
His best estimate for q is easily seen to be qbest = 6.0. To find δq, we follow
the steps just outlined. The uncertainty in q due to δx alone, which we denote by δqx, is given by (3.49) as
δqx = (error in q due to δx alone)
    = |∂q/∂x| δx    (3.50)
    = |2xy - y²| δx = |12 - 4| × 0.1 = 0.8.
Similarly, the uncertainty in q due to δy is

δqy = (error in q due to δy alone)
    = |∂q/∂y| δy    (3.51)
    = |x² - 2xy| δy = |9 - 12| × 0.1 = 0.3.
Finally, according to (3.47), the total uncertainty in q is the quadratic sum of these two partial uncertainties:
δq = √((δqx)² + (δqy)²)    (3.52)
   = √((0.8)² + (0.3)²) = 0.9.
Thus, the final answer for q is
q = 6.0 ± 0.9.
The use of (3.47) or (3.48) to calculate uncertainties is reasonably straightforward if you follow the procedure used in this example; that is, first calculate each separate contribution to δq and only then combine them to give the total uncertainty. This procedure breaks the problem into calculations small enough that you have a good chance of getting them right. It has the further advantage that it lets you see which of the measurements x, y, ... , z are the main contributors to the final uncertainty. (For instance, in the example above, the contribution δqy = 0.3 was so small compared with δqx = 0.8 that the former could almost be ignored.)
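If you like to check this kind of arithmetic by machine, the following Python sketch (my own illustration, not part of the example) reproduces the numbers above using exactly the procedure just described:

from math import sqrt

# Measured values from the example
x, dx = 3.0, 0.1
y, dy = 2.0, 0.1

q = x**2 * y - x * y**2             # best estimate: 6.0

# Partial uncertainties, as in (3.50) and (3.51)
dq_x = abs(2*x*y - y**2) * dx       # |dq/dx| dx = |12 - 4| x 0.1 = 0.8
dq_y = abs(x**2 - 2*x*y) * dy       # |dq/dy| dy = |9 - 12| x 0.1 = 0.3

# Combine in quadrature, as in (3.52)
dq = sqrt(dq_x**2 + dq_y**2)        # about 0.85, reported as 0.9

print(f"q = {q:.1f} +/- {dq:.1f}")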
Generally speaking, when the stepwise propagation described in Sections 3.8 to 3.10 is possible, it is usually simpler than the general rules (3.47) or (3.48) discussed here. Nevertheless, you must recognize that if the function q(x, ... , z) involves any variable more than once, there may be compensating errors; if so, a stepwise calculation may overestimate the final uncertainty, and calculating δq in one step using (3.47) or (3.48) is better.
Principal Definitions and Equations of Chapter 3
THE SQUARE-ROOT RULE FOR A COUNTING EXPERIMENT If we observe the occurrences of an event that happens at random but with a definite average rate and we count v occurrences in a time T, our estimate for the true average number is
(average number of events in time T) = v ± √v.    [See (3.2)]
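For example (with invented numbers), if v = 400 events were counted in the time T, the result would be reported as 400 ± √400 = 400 ± 20, a fractional uncertainty of 5%.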
RULES FOR ERROR PROPAGATION The rules of error propagation refer to a situation in which we have found various quantities, x, ... , w with uncertainties δx, ... , δw and then use these values to calculate a quantity q. The uncertainties in x, ... , w "propagate" through the calculation to cause an uncertainty in q as follows:
Sums and Differences: If
q = x + · · · + z - (u + · · · + w),
then
δq = √((δx)² + ··· + (δz)² + (δu)² + ··· + (δw)²)
(provided all errors are independent and random)
and
δq ≤ δx + ··· + δz + δu + ··· + δw    (always).
[See (3.16) & (3.17)]
Products and Quotients: If
q = (x × ··· × z)/(u × ··· × w),
then
δq/|q| = √((δx/x)² + ··· + (δz/z)² + (δu/u)² + ··· + (δw/w)²)
(provided all errors are independent and random)
and

δq/|q| ≤ δx/|x| + ··· + δz/|z| + δu/|u| + ··· + δw/|w|    (always).
[See (3.18) & (3.19)]
Measured Quantity Times Exact Number: If B is known exactly and
q = Bx,
then
δq = |B| δx    or, equivalently,    δq/|q| = δx/|x|.
[See (3.9)]
Uncertainty in a Power: If n is an exact number and

q = xⁿ,

then

δq/|q| = |n| δx/|x|.    [See (3.26)]

Uncertainty in a Function of One Variable: If q = q(x) is any function of x, then

δq = |dq/dx| δx.    [See (3.23)]
Sometimes, if q(x) is complicated and if you have written a program to calculate q(x) then, instead of differentiating q(x), you may find it easier to use the equivalent
formula,

δq = |q(xbest + δx) - q(xbest)|.    [See Problem 3.32]
General Formula for Error Propagation: If q = q(x, ... , z) is any function of
x, ... , z, then
δq = √(((∂q/∂x)δx)² + ··· + ((∂q/∂z)δz)²)
(provided all errors are independent and random)
and
δq ≤ |∂q/∂x| δx + ··· + |∂q/∂z| δz
(always).
[See (3.47) & (3.48)]
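If you have a computer available, the general rule is easy to automate. The Python sketch below is my own illustration (the name propagate and the sample numbers are not from the text): it estimates each partial uncertainty numerically, in the spirit of the difference formula quoted above, and then combines the contributions in quadrature as in (3.47).

from math import sqrt

def propagate(q, best, uncerts):
    """Estimate the uncertainty in q(*best) from the uncertainties of its arguments.

    q        -- function of n variables
    best     -- best estimates [xbest, ..., zbest]
    uncerts  -- uncertainties [dx, ..., dz], assumed independent and random
    """
    q0 = q(*best)
    partials = []
    for i, d in enumerate(uncerts):
        shifted = list(best)
        shifted[i] += d
        # |q(xbest + dx) - q(xbest)| plays the role of |dq/dx| dx
        partials.append(abs(q(*shifted) - q0))
    return sqrt(sum(p**2 for p in partials))

# Example: the function of the worked example in Section 3.11, q = x^2 y - x y^2
def q_example(x, y):
    return x**2 * y - x * y**2

print(propagate(q_example, [3.0, 2.0], [0.1, 0.1]))   # about 0.88, i.e. 0.9 to one figure

As written, the sketch assumes the uncertainties are independent and random; for the worst-case bound (3.48) you would simply add the partial contributions directly instead of in quadrature.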
Problems for Chapter 3
For Section 3.2: The Square-Root Rule for a Counting Experiment
* 3.1. To measure the activity of a radioactive sample, two students count the
alpha particles it emits. Student A watches for 3 minutes and counts 28 particles; Student B watches for 30 minutes and counts 310 particles. (a) What should Student A report for the average number emitted in 3 minutes, with his uncertainty? (b) What should Student B report for the average number emitted in 30 minutes, with her uncertainty? (c) What are the fractional uncertainties in the two measurements? Comment.
* 3.2. A nuclear physicist studies the particles ejected by a beam of radioactive
nuclei. According to a proposed theory, the average rates at which particles are ejected in the forward and backward directions should be equal. To test this theory, he counts the total number ejected forward and backward in a certain 10-hour interval and finds 998 forward and 1,037 backward. (a) What are the uncertainties associated with these numbers? (b) Do these results cast any doubt on the theory that the average rates should be equal?
* 3.3. Most of the ideas of error analysis have important applications in many
different fields. This applicability is especially true for the square-root rule (3.2) for counting experiments, as the following example illustrates. The normal average incidence of a certain kind of cancer has been established as 2 cases per 10,000 people per year. The suspicion has been aired that a certain town (population 20,000) suffers a high incidence of this cancer because of a nearby chemical dump. To test this claim, a reporter investigates the town's records for the past 4 years and finds 20 cases of the cancer. He calculates that the expected number is 16 (check this) and concludes that the observed rate is 25% more than expected. Is he justified in claiming that this result proves that the town has a higher than normal rate for this cancer?
** 3.4.
As a sample of radioactive atoms decays, the number of atoms steadily
diminishes and the sample's radioactivity decreases in proportion. To study this
effect, a nuclear physicist monitors the particles ejected by a radioactive sample for
2 hours. She counts the number of particles emitted in a 1-minute period and repeats
the measurement at half-hour intervals, with the following results:
Time elapsed, t (hours):         0.0   0.5   1.0   1.5   2.0
Number counted, v, in 1 min:     214   134   101    61    54
(a) Plot the number counted against elapsed time, including error bars to show the uncertainty in the numbers. (Neglect any uncertainty in the elapsed time.) (b) Theory predicts that the number of emitted particles should diminish exponen-
tially as v = v0 exp(-rt), where (in this case) v0 = 200 and r = 0.693 h⁻¹. On the
same graph, plot this expected curve and comment on how well the data seem to fit the theoretical prediction.
For Section 3.3: Sums and Differences; Products and Quotients
* 3.5. Using the provisional rules (3.4) and (3.8), compute the following:
(a) (5 ± 1) + (8 ± 2) - (10 ± 4) (b) (5 ± 1) × (8 ± 2) (c) (10 ± 1)/(20 ± 2) (d) (30 ± 1) × (50 ± 1)/(5.0 ± 0.1)
* 3.6. Using the provisional rules (3.4) and (3.8), compute the following:
(a) (3.5 ± 0.1) + (8.0 ± 0.2) - (5.0 ± 0.4) (b) (3.5 ± 0.1) × (8.0 ± 0.2) (c) (8.0 ± 0.2)/(5.0 ± 0.4) (d) (3.5 ± 0.1) × (8.0 ± 0.2)/(5.0 ± 0.4)
* 3.7. A student makes the following measurements:
a = 5 ± 1 cm, b = 18 ± 2 cm, c = 12 ± 1 cm, t = 3.0 ± 0.5 s, m = 18 ± 1 gram
Using the provisional rules (3.4) and (3.8), compute the following quantities with
their uncertainties and percentage uncertainties: (a) a + b + c, (b) a + b - c, (c)
ct, and (d) mb/t.
** 3.8.
The binomial theorem states that for any number n and any x with
|x| < 1,
(1 + x)ⁿ = 1 + nx + [n(n-1)/(1·2)] x² + [n(n-1)(n-2)/(1·2·3)] x³ + ···
(a) Show that if n is a positive integer, this infinite series terminates (that is, has only a finite number of nonzero terms). Write the series down explicitly for the
cases n = 2 and n = 3. (b) Write down the binomial series for the case n = -1. This case gives an infinite series for 1/(1 + x), but when x is small, you get a good
approximation if you keep just the first two terms:
1/(1 + x) ≈ 1 - x,
as quoted in (3.6). Calculate both sides of this approximation for each of the values
x = 0.5, 0.1, and 0.01, and in each case find the percentage by which the approxi-
mation (1 - x) differs from the exact value of 1/(1 + x).
For Section 3.4: Two Important Special Cases
* 3.9. I measure the diameter of a circular disc as d = 6.0 ± 0.1 cm and use this value to calculate the circumference c = πd and radius r = d/2. What are my answers? [The rule (3.9) for "measured quantity × exact number" applies to both of these calculations. In particular, you can write r as d × 1/2, where the number 1/2 is, of course, exact.]
* 3.10. I have a set of callipers that can measure thicknesses of a few inches with
an uncertainty of ±0.005 inches. I measure the thickness of a deck of 52 cards and
get 0.590 in. (a) If I now calculate the thickness of 1 card, what is my answer
(including its uncertainty)? (b) I can improve this result by measuring several decks
together. If I want to know the thickness of 1 card with an uncertainty of only
0.00002 in, how many decks do I need to measure together?
* 3.11. With a good stopwatch and some practice, you can measure times ranging
from approximately 1 second up to many minutes with an uncertainty of 0.1 second
or so. Suppose that we wish to find the period T of a pendulum with T = 0.5 s. If
we time 1 oscillation, we have an uncertainty of approximately 20%; but by timing
several oscillations together, we can do much better, as the following questions
illustrate:
(a) If we measure the total time for 5 oscillations and get 2.4 ± 0.1 s, what is
our final answer for T, with its absolute and percent uncertainties? [Remember the
rule (3.9).]
(b) What if we measure 20 oscillations and get 9.4 ± 0.1 s?
(c) Could the uncertainty in T be improved indefinitely by timing more oscilla-
tions?
* 3.12. If x has been measured as 4.0 ± 0.1 cm, what should I report for x² and x³? Give percent and absolute uncertainties, as determined by the rule (3.10) for a
power.
* 3.13. If I have measured the radius of a sphere as r = 2.0 ± 0.1 m, what should
I report for the sphere's volume?
* 3.14. A visitor to a medieval castle measures the depth of a well by dropping a
stone and timing its fall. She finds the time to fall is t = 3.0 ± 0.5 sec and calcu-
lates the depth as d = ½gt². What is her conclusion, if she takes g = 9.80 m/s² with
negligible uncertainty?
** 3.15.
Two students are asked to measure the rate of emission of alpha particles
from a certain radioactive sample. Student A watches for 2 minutes and counts 32
particles. Student B watches for 1 hour and counts 786 particles. (The sample de-
cays slowly enough that the expected rate of emission can be assumed to be constant
during the measurements.) (a) What is the uncertainty in Student A's result, 32, for
the number of particles emitted in 2 minutes? (b) What is the uncertainty in Student
B's result, 786, for the number of particles emitted in 1 hour? (c) Each student now
divides his count by his number of minutes to find the rate of emission in particles per minute. Assuming the times, 2 min and 60 min, have negligible uncertainty, what are the two students' answers for the rate, with their uncertainties? Comment.
For Section 3.5: Independent Uncertainties in a Sum
* 3.16. A student measures five lengths:
a = 50 ± 5, b = 30 ± 3, c = 60 ± 2, d = 40 ± 1, e = 5.8 ± 0.3
(all in cm) and calculates the four sums a + b, a + c, a + d, a + e. Assuming the
original errors were independent and random, find the uncertainties in her four answers [rule (3.13), "errors add in quadrature"]. If she has reason to think the original errors were not independent, what would she have to give for her final uncertainties [rule (3.14), "errors add directly"]? Assuming the uncertainties are needed with only one significant figure, identify those cases in which the second uncertainty (that in b, c, d, e) can be entirely ignored. If you decide to do the additions in quadrature on a calculator, note that the conversion from rectangular to polar coordinates auto-
matically calculates √(x² + y²) for given x and y.
* 3.17. Evaluate each of the following:
(a) (5.6 ± 0.7) + (3.70 ± 0.03) (b) (5.6 ± 0.7) + (2.3 ± 0.1) (c) (5.6 ± 0.7) + (4.1 ± 0.2) (d) (5.6 ± 0.7) + (1.9 ± 0.3)
For each sum, consider both the case that the original uncertainties are independent and random ("errors add in quadrature") and that they are not ("errors add directly"). Assuming the uncertainties are needed with only one significant figure, identify those cases in which the second of the original uncertainties can be ignored entirely. If you decide to do the additions in quadrature on a calculator, note that the conver-
sion from rectangular to polar coordinates automatically calculates √(x² + y²) for
given x and y.
For Section 3.6: More About Independent Uncertainties
* 3.18. If you have not yet done it, do Problem 3.7 (assuming that the original
uncertainties are not independent), and repeat each calculation assuming that the original uncertainties are independent and random. Arrange your answers in a table so that you can compare the two different methods of propagating errors.
* 3.19. If you have not yet done it, do Problem 3.5 (assuming that the original
uncertainties are not independent) and repeat each calculation assuming that the original uncertainties are independent and random. Arrange your answers in a table so that you can compare the two different methods of propagating errors.
* 3.20. If you have not yet done it, do Problem 3.6 (assuming that the original
uncertainties are not independent) and repeat each calculation assuming that the original uncertainties are independent and random. Arrange your answers in a table so that you can compare the two different methods of propagating errors.
* 3.21. (a) To find the velocity of a cart on a horizontal air track, a student mea-
sures the distance d it travels and the time taken t as
d = 5.10 ± 0.01 m and t = 6.02 ± 0.02 s.
What is his result for v = d/t, with its uncertainty? (b) If he measures the cart's mass as m = 0.711 ± 0.002 kg, what would be his answer for the momentum p = mv = md/t? (Assume all errors are random and independent.)
* 3.22. A student is studying the properties of a resistor. She measures the current
flowing through the resistor and the voltage across it as
I = 2.10 ± 0.02 amps and V = 1.02 ± 0.01 volts.
(a) What should be her calculated value for the power delivered to the resistor,
P = IV, with its uncertainty? (b) What about the resistance R = V/I? (Assume the
original uncertainties are independent. With I in amps and V in volts, the power P
comes out in watts and the resistance R in ohms.)
* 3.23. In an experiment on the conservation of angular momentum, a student
needs to find the angular momentum L of a uniform disc of mass M and radius R
as it rotates with angular velocity ω. She makes the following measurements:

M = 1.10 ± 0.01 kg,   R = 0.250 ± 0.005 m,   ω = 21.5 ± 0.4 rad/s

and then calculates L as L = ½MR²ω. (The factor ½MR² is just the moment of inertia
of the uniform disc.) What is her answer for L with its uncertainty? (Consider the
three original uncertainties independent and remember that the fractional uncertainty
in R² is twice that in R.)
** 3.24.
In his famous experiment with electrons, J.J. Thomson measured the
"charge-to-mass ratio" r = e/m, where e is the electron's charge and m its mass. A
modern classroom version of this experiment finds the ratio r by accelerating elec-
trons through a voltage V and then bending them in a magnetic field. The ratio
r = e/m is given by the formula

r = 125 D²V / (32 µ₀² N² d² I²).    (3.53)
In this equation, µ₀ is the permeability constant of the vacuum (equal to 4π × 10⁻⁷ N/A² exactly) and N is the number of turns in the coil that produces the magnetic field; D is the diameter of the field coils, V is the voltage that accelerates the electrons, d is the diameter of the electrons' curved path, and I is the current in the field coils. A student makes the following measurements:
N = 72 (exactly)
D = 661 ± 2 mm,   V = 45.0 ± 0.2 volts,   d = 91.4 ± 0.5 mm,   I = 2.48 ± 0.04 amps
(a) Find the student's answer for the charge-to-mass ratio of the electron, with its
uncertainty. [Assume all uncertainties are independent and random. Note that the
first factor in (3.53) is known exactly and can thus be treated as a single known constant, K. The second factor is a product and quotient of four numbers, D², V, d², and I², so the fractional uncertainty in the final answer is given by the rule (3.18). Remember that the fractional uncertainty in D² is twice that in D, and so on.] (b) How well does this answer agree with the accepted value r = 1.759 × 10¹¹ C/kg?
(Note that you don't actually need to understand the theory of this experiment to do
the problem. Nor do you need to worry about the units; if you use SI units for all
the input quantities, the answer automatically comes out in the units given.)
** 3.25.
We know from the rule (3.10) for uncertainties in a power that if q = x², the fractional uncertainty in q is twice that in x: δq/|q| = 2 δx/|x|.
Consider the following (fallacious) argument. We can regard x² as x times x; so
q = x × x;
therefore, by the rule (3.18),
δq/|q| = √((δx/x)² + (δx/x)²) = √2 δx/|x|.
This conclusion is wrong. In a few sentences, explain why.
For Section 3.7: Arbitrary Functions of One Variable
* 3.26. In nuclear physics, the energy of a subatomic particle can be measured in various ways. One way is to measure how quickly the particle is stopped by an obstacle such as a piece of lead and then to use published graphs of energy versus stopping rate. Figure 3.7 shows such a graph for photons (the particles of light) in lead. The vertical axis shows the photons' energy E in MeV (millions of electron volts), and the horizontal axis shows the corresponding absorption coefficient µ in cm²/g. (The precise definition of this coefficient need not concern us here; µ is simply a suitable measure of how quickly the photon is stopped in the lead.) From this graph, you can obviously find the energy E of a photon as soon as you know its absorption coefficient µ. (a) A student observes a beam of photons (all with the same energy, E) and finds that their absorption coefficient in lead is µ = 0.10 ± 0.01 cm²/gram. Using the graph, find the energy E and the uncertainty δE. (You may find it helpful to draw on the graph the lines connecting the various points of interest, as done in Figure 3.3.) (b) What answer would the student have found if he had measured µ = 0.22 ± 0.01 cm²/gram?
* 3.27. A student finds the refractive index n of a piece of glass by measuring the
critical angle θ for light passing from the glass into air as θ = 41 ± 1°. The relation
Figure 3.7. Energy E against absorption coefficient µ for photons in lead; for Problem 3.26.
between these is known to be n = 1/sin θ. Find the student's answer for n and use the rule (3.23) to find its uncertainty. (Don't forget to express δθ in radians.)
* 3.28. (a) According to theory, the period T of a simple pendulum is T = 2π√(L/g), where L is the length of the pendulum. If L is measured as L = 1.40 ± 0.01 m, what is the predicted value of T? (b) Would you say that a
measured value of T = 2.39 ± 0.01 s is consistent with the theoretical prediction of
part (a)?
* 3.29. (a) An experiment to measure Planck's constant h gives it in the form
h = Kλ³, where K is a constant known exactly and λ is the measured wavelength
emitted by a hydrogen lamp. If a student has measured λ with a fractional uncer-
tainty she estimates as 0.3%, what will be the fractional uncertainty in her answer
for h? Comment. (b) If the student's best estimate for h is 6.644 × 10⁻³⁴ J·s, is her result in satisfactory agreement with the accepted value of 6.626 × 10⁻³⁴ J·s?
** 3.30.
A spectrometer is a device for separating the different wavelengths in a
beam of light and measuring the wavelengths. It deflects the different wavelengths
through different angles θ, and, if the relation between the angle θ and wavelength λ is known, the experimenter can find λ by measuring θ. Careful measurements with a certain spectrometer have established the calibration curve shown in Figure 3.8; this figure is simply a graph of λ (in nanometers, or nm) against θ, obtained by measuring θ for several accurately known wavelengths λ. A student directs a narrow
beam of light from a hydrogen lamp through this spectrometer and finds that the
Figure 3.8. Calibration curve of wavelength λ against deflection θ for a spectrometer; for Problem 3.30.
light consists of just three well-defined wavelengths; that is, he sees three narrow beams (one red, one turquoise, and one violet) emerging at three different angles. He measures these angles as
θ₁ = 51.0 ± 0.2°,   θ₂ = 52.6 ± 0.2°,   θ₃ = 54.0 ± 0.2°
(a) Use the calibration curve of Figure 3.8 to find the corresponding wavelengths
λ₁, λ₂, and λ₃ with their uncertainties. (b) According to theory, these wavelengths
should be 656, 486, and 434 nm. Are the student's measurements in satisfactory
agreement with these theoretical values? (c) If the spectrometer has a vernier scale
to read the angles, the angles can be measured with an uncertainty of 0.05° or even
less. Let us suppose the three measurements above have uncertainties of ±0.05°.
Given this new, smaller uncertainty in the angles and without drawing any more
lines on the graph, use your answers from part (a) to find the new uncertainties in
the three wavelengths, explaining clearly how you do it. (Hint: the calibration curve
is nearly straight in the vicinity of any one measurement.) (d) To take advantage of
more accurate measurements, an experimenter may need to enlarge the calibration
curve. The inset in Figure 3.8 is an enlargement of the vicinity of the angle θ₂. Use this graph to find the wavelength λ₂ if θ₂ has been measured as 52.72 ± 0.05°; check that your prediction for the uncertainty of λ₂ in part (c) was correct.
** 3.31.
(a) An angle θ is measured as 125 ± 2°, and this value is used to compute sin θ. Using the rule (3.23), calculate sin θ and its uncertainty. (b) If a is measured as abest ± δa, and this value used to compute f(a) = e^a, what are fbest and
δf? If a = 3.0 ± 0.1, what are e^a and its uncertainty? (c) Repeat part (b) for the function f(a) = ln a.
*** 3.32.
The rule (3.23), δq = |dq/dx| δx, usually allows the uncertainty in a
function q(x) to be found quickly and easily. Occasionally, if q(x) is very compli-
cated, evaluating its derivative may be a nuisance, and going back to (3.20), from
which (3.23) was derived, is sometimes easier. Note, however, that (3.20) was de-
rived for a function whose slope was positive; if the slope is negative, the signs
need to be reversed, and the general form of (3.20) is
δq = |q(xbest + δx) - q(xbest)|.    (3.54)
Particularly if you have programmed your calculator or computer to find q(x), then
finding q(xbest + δx) and q(xbest) and their difference will be easy.
(a) If you have a computer or programmable calculator, write a program to calculate the function
q(x) = (1 + x²)³ / (x² + cot x)
Use this program to find q(x) if x = 0.75 ± 0.1, using the new rule (3.54) to find δq. (b) If you have the courage, differentiate q(x) and check your value of δq using
the rule (3.23).
*** 3.33.
Do Problem 3.32 but use the function
2) q(x) = (1 - x2) cos ( X~ +
and the measured value x = 1.70 ± 0.02.
For Section 3.8: Propagation Step by Step
* 3.34. Use step-by-step propagation to find the following quantities (assuming
that all given uncertainties are independent and random):
(a) (20 ± 1) + [(5.0 ± 0.4) × (3.0 ± 0.2)]
(b) (20 ± 1)/[(5.0 ± 0.1) - (3.0 ± 0.1)]
(c) (1.5 ± 0.1) - 2 sin(30 ± 6°)
[In part (c), the number 2 is exact.]
* 3.35. Use step-by-step propagation to find the following quantities (assuming
that all given uncertainties are independent and random):
(a) (20 ± 1) + [(50 ± 1)/(5.0 ± 0.2)]
(b) (20 ± 1) × [(30 ± 1) - (24 ± 1)]
(c) (2.0 ± 0.1) × tan (45 ± 3°)
* 3.36. Calculate the following quantities in steps as described in Section 3.8.
Assume all uncertainties are independent and random.
(a) (12 ± 1) × [(25 ± 3) - (10 ± 1)]   (b) √(16 ± 4) + (3.0 ± 0.1)³ (2.0 ± 0.1)   (c) (20 ± 2) e^-(1.0 ± 0.1)
* 3.37. (a) To find the acceleration of a glider moving down a sloping air track, I
measure its velocities (v₁ and v₂) at two points and the time t it takes between them, as follows:

v₁ = 0.21 ± 0.05,   v₂ = 0.85 ± 0.05
(both in m/s) and
t = 8.0 ± 0.1 s.
Assuming all uncertainties are independent and random, what should I report for
the acceleration, a = (v₂ - v₁)/t, and its uncertainty? (b) I have calculated theoretically that the acceleration should be 0.13 ± 0.01 m/s². Does my measurement agree
with this prediction?
* 3.38. (a) As in Problem 3.37, I measure the velocities, v₁ and v₂, of a glider at
two points on a sloping air track with the results given there. Instead of measuring
the time between the two points, I measure the distance as
d = 3.740 ± 0.002 m.
If I now calculate the acceleration as a = (v₂² - v₁²)/2d, what should be my answer
with its uncertainty? (b) How well does it agree with my theoretical prediction that
a = 0.13 ± 0.01 m/s²?
** 3.39. (a) The glider on a horizontal air track is attached to a spring that causes
it to oscillate back and forth. The total energy of the system is E = ½mv² + ½kx²,
where m is the glider's mass, v is its velocity, k is the spring's force constant, and
x is the extension of the spring from equilibrium. A student makes the following measurements:
m = 0.230 ± 0.001 kg,   k = 1.03 ± 0.01 N/m,
v = 0.89 ± 0.01 m/s,   x = 0.551 ± 0.005 m.
What is her answer for the total energy E? (b) She next measures the position xmax
of the glider at the extreme end of its oscillation, where v = 0, as
xmax = 0.698 ± 0.002 m.
What is her value for the energy at the end point? (c) Are her results consistent with conservation of energy, which requires that these two energies should be the same?
For Section 3.9: Examples
** 3.40.
Review the discussion of the simple pendulum in Section 3.9. In a real
experiment, one should measure the period T for several different lengths l and
hence obtain several different values of g for comparison. With a little thought, you
can organize all data and calculations so that they appear in a single convenient
tabulation, as in Table 3.2. Using Table 3.2 (or some other arrangement that you
prefer), calculate g and its uncertainty δg for the four pairs of data shown. Are your answers consistent with the accepted value, 980 cm/s²? Comment on the variation of δg as l gets smaller. (The answers given for the first pair of data will let you
check your method of calculation.)