Review of Weighing the Odds, by D. Williams

SIGSAM Bull. Pub Date: 2002-03-01 DOI: 10.1145/565145.565151
W. J. Braun, Hao Yu
{"title":"Review of weighing the odds, by D. Williams","authors":"W. J. Braun, Hao Yu","doi":"10.1145/565145.565151","DOIUrl":null,"url":null,"abstract":"Weighing the Odds, by David Williams, is published by Cambridge University Press (publishing date 2001, ISBN 0-521-00618-X). 556 pp. £ 24.95 The author's motivation in writing this book might be inferred from a statement made at the end of the eighth chapter: 'that I have not been more actively involved in Statistics throughout my career and my wish to persuade others not to follow my example in that regard'. Weighing the Odds is a book on Probabili ty and Statistics, written from the perspective of a probabilist. Its intended audience is mathematics students who have not yet been exposed to these subjects. The author's objective is to entice these students by introducing the more mathematical elements of the subjects. The book is highly idiosyncratic, and the writer's personal views are never far from the surface. Thus, the book is perhaps the most lively account of these two subjects that we are aware of. The book contains a relatively small, but interesting treatment of traditional problems and some newer-looking problems. Some excellent hints are provided for some of the more challenging problems. A lot of statistics (all?) is based on conditioning. We cannot write down a model or compute a probability without conditioning on something. At the same time, conditioning is often one of the beginning student's greatest difficulties with the subject of probability and statistics. Therefore, it is refreshing to see a book which opens by addressing conditioning so boldly with an attempt at an intuitive look. Many of the examples are very good: the 'Monty Hall problem' (referred to as the 'Car and Goats Problem') , the 'Two Envelopes' problem, and the \"Birthday problem' are all described clearly, and analyzed carefully either in Chapter 1 or later on in the book. The example referencing system seems confusing, at first. The author warns that the preface should be read first; this warning should be heeded if for no other reason than to discover that problem 19A is really problem A on page 19. Chapter Two contains, among other things, a collection of measure-theoretic results. We're scratching our heads a bit, wondering why the author seems so insistent on avoiding measure theory, when he has gone almost halfway there. Some of the results, such as the monotone convergence theorem, are useful as 'Facts ' , but the description of the n system lemma seems to be deficient. We are not sure anyone without a background in measure theory already would really know what the author is talking about. Similarly, the Banach-Tarski paradox may be bewildering to many readers. On the plus side, we think that the hat-matching problem is a good nontrivial application of the inclusionexclusion principle. Chapter Three gives a concise discussion of random variables, density functions, mass functions, and expectation, and Chapter Four begins with a discussion of conditional probability and independence before moving into the laws of large numbers. The author claims that the proof of the Kolmogorov strong law is much too difficult to give. We are a bit perplexed about this, since all of the ingredients are given, and that the proof really is not that much difficult than, say the proof of Stirling's formula (admittedly as optional, but on page 13!). The inclusion of the Bernstein polynomial proof of the Weierstrass approximation theorem is a nice idea. 
The only real look at Markov chains occurs in the section on random walks. The reflection principle is described briefly. The section the strong Markov property is rightly indicated as optional reading. The treatment of simulation is excellent. It is unusual for pseudorandom number generation to feature in a text on probability, but it fits into the context naturally. The presentation is careful, and it is nice to see the Wichman and Hill (1982) pseudorandom number generator featured.","PeriodicalId":314801,"journal":{"name":"SIGSAM Bull.","volume":"76 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2002-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"SIGSAM Bull.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/565145.565151","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Weighing the Odds, by David Williams, is published by Cambridge University Press (2001, ISBN 0-521-00618-X), 556 pp., £24.95. The author's motivation in writing this book might be inferred from a statement made at the end of the eighth chapter, expressing regret 'that I have not been more actively involved in Statistics throughout my career and my wish to persuade others not to follow my example in that regard'. Weighing the Odds is a book on Probability and Statistics, written from the perspective of a probabilist. Its intended audience is mathematics students who have not yet been exposed to these subjects, and the author's objective is to entice them by introducing the more mathematical elements of the subjects. The book is highly idiosyncratic, and the writer's personal views are never far from the surface; as a result, it is perhaps the liveliest account of these two subjects that we are aware of. The book contains a relatively small but interesting collection of traditional problems and some newer-looking ones, and excellent hints are provided for some of the more challenging problems.

A lot of statistics (all?) is based on conditioning: we cannot write down a model or compute a probability without conditioning on something. At the same time, conditioning is often one of the beginning student's greatest difficulties with probability and statistics. It is therefore refreshing to see a book which opens by addressing conditioning so boldly, with an attempt at an intuitive treatment. Many of the examples are very good: the 'Monty Hall problem' (referred to as the 'Car and Goats' problem), the 'Two Envelopes' problem, and the 'Birthday' problem are all described clearly and analyzed carefully, either in Chapter 1 or later in the book. The example referencing system seems confusing at first. The author warns that the preface should be read first; this warning should be heeded, if for no other reason than to discover that problem 19A is really problem A on page 19.

Chapter Two contains, among other things, a collection of measure-theoretic results. We are scratching our heads a bit, wondering why the author seems so insistent on avoiding measure theory when he has gone almost halfway there. Some of the results, such as the monotone convergence theorem, are useful as 'Facts', but the description of the π-system lemma seems deficient: we are not sure that anyone without a prior background in measure theory would really know what the author is talking about. Similarly, the Banach-Tarski paradox may be bewildering to many readers. On the plus side, we think that the hat-matching problem is a good nontrivial application of the inclusion-exclusion principle.

Chapter Three gives a concise discussion of random variables, density functions, mass functions, and expectation, and Chapter Four begins with a discussion of conditional probability and independence before moving into the laws of large numbers. The author claims that the proof of the Kolmogorov strong law is much too difficult to give. We are a bit perplexed by this, since all of the ingredients are given, and the proof really is not that much more difficult than, say, the proof of Stirling's formula (admittedly optional, but on page 13!). The inclusion of the Bernstein polynomial proof of the Weierstrass approximation theorem is a nice idea.
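For readers who have not met that proof, the probabilistic idea (stated here in our own notation, not necessarily the book's) is that the Bernstein polynomial of a continuous function $f$ on $[0,1]$ is simply a binomial expectation:

\[
B_n f(x) \;=\; \sum_{k=0}^{n} f\!\left(\frac{k}{n}\right)\binom{n}{k} x^{k}(1-x)^{n-k}
\;=\; \mathbb{E}\!\left[f\!\left(\frac{S_n}{n}\right)\right],
\qquad S_n \sim \mathrm{Binomial}(n, x).
\]

Chebyshev's inequality, together with the uniform continuity of $f$ on $[0,1]$, then gives $\sup_{x\in[0,1]} |B_n f(x) - f(x)| \to 0$, which is exactly Weierstrass's theorem.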
The only real look at Markov chains occurs in the section on random walks, where the reflection principle is described briefly. The section on the strong Markov property is rightly indicated as optional reading.

The treatment of simulation is excellent. It is unusual for pseudorandom number generation to feature in a text on probability, but here it fits the context naturally. The presentation is careful, and it is nice to see the Wichmann and Hill (1982) pseudorandom number generator featured.
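For the curious, here is a minimal sketch of that generator as it is usually described in the literature (Algorithm AS 183); the class name and seed values below are our own illustration, not taken from the book. Three small linear congruential generators are advanced independently, and the fractional part of the sum of their scaled states is returned:

    # Minimal sketch of the Wichmann-Hill (1982) generator, Algorithm AS 183.
    class WichmannHill:
        def __init__(self, s1=1, s2=2, s3=3):
            # seeds are positive integers, conventionally in the range 1..30000
            self.s1, self.s2, self.s3 = s1, s2, s3

        def random(self):
            # advance the three component linear congruential generators
            self.s1 = (171 * self.s1) % 30269
            self.s2 = (172 * self.s2) % 30307
            self.s3 = (170 * self.s3) % 30323
            # combine the scaled states; the result is uniform on [0, 1)
            return (self.s1 / 30269 + self.s2 / 30307 + self.s3 / 30323) % 1.0

    gen = WichmannHill(123, 456, 789)
    print([round(gen.random(), 4) for _ in range(5)])

The combined period is usually quoted as being on the order of 10^12, generous for 1982 but modest by today's standards; Python's random module, for instance, used this generator before switching to the Mersenne Twister.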