集智翻译-Cognitive Machine Learning(1):Learning to Explain

来自集智百科


Part1 -【推理】

(Translated by -jeff, 译校 by dan)

The Zaamenkomst panel. [Iziko Museums] / 撒阿门科莫斯特石板 [伊兹科博物馆,南非开普敦]

This is an image of the Zaamenkomst panel: one of the best remaining exemplars of rock art from the San people of Southern Africa. As soon as you see it, you are inevitably herded, like the eland in the scene, through a series of thoughts. Does it have a meaning? Why are the eland running? What do the white lines coming from the mouths of the humans and animals signify? What event is unfolding in this scene? These are questions of interpretation, and of explanation. Explanation is something people actively seek, and can almost effortlessly provide. Having lost their traditional knowledge, the descendants of the San are unable to explain what the scene in the Zaamenkomst panel means. In what ways can machine learning systems be saved from such a fate? For this, we turn to the psychology of explanation, the topic we explore in this post.

Explanation is an omnipresent factor in our reasoning: it guides our actions, influences our interactions with others, and drives our efforts to expand scientific knowledge. Reasoning can be split into its three constituents: deduction, reaching conclusions using consistent and logical premises; induction, the generalisation and prediction of events based on observed evidence and frequency of occurrence; and abduction, providing the simplest explanation of an event. We don't often use these words in machine learning, but these concepts are part of our widespread practice: deduction is implied when we use rule-learning, set construction and logic programming; inductive and transductive testing, i.e. making use of test data sets, is our standard protocol in assessing predictive algorithms; and abduction is implied when we discuss probabilistic inference and inverse problems. It is this third type of reasoning, abductive inference, that in both minds and machines gives us our ability to provide explanations, and to use explanations to learn and act in the future.


这是一幅撒阿门科莫斯特1石板(Zaamenkomst panel)的照片,这块石板是南部非洲撒恩人2岩画艺术留存至今的最佳样本之一。你一看到它,就会像画面中被驱赶的伊兰羚羊3一样,不由自主地被引入一连串的思绪。这幅画有意义吗?为什么羚羊在奔跑?从人和动物嘴部延伸出的白线代表什么?这个场景中正在发生什么事件?这些都是诠释和解读4的问题。解释是人们主动寻求的东西,又是人们几乎毫不费力就能给出的东西。由于传统知识已经失传,撒恩人的后裔也无法解释撒阿门科莫斯特石板上场景的含义。通过哪些途径,可以让机器学习系统免于陷入类似的宿命呢?为此,我们转向解释的心理学,这也是本篇所探讨的主题。

在我们的推理过程中,解释无处不在:它指导我们的行动,影响我们与他人的互动,并且驱动我们拓展科学知识的努力。推理可以分解为三个部分:演绎,即基于一致性5和逻辑的前提得出结论;归纳,即根据观察证据及其发生频率,对事件进行概括和预测;溯因,即对发生的某一事件给出最简单的解释。在机器学习领域,我们虽然不常使用这些词汇,但这些理念早已融入我们的普遍实践:规则学习、集合构建和逻辑编程中隐含了演绎;归纳式与直推式测试(即利用测试数据集)是评估预测算法的标准做法;谈及概率推断和反问题,则暗合了溯因。正是这第三种类型的推理(即溯因推断),使头脑和机器都有能力提供解释,并在后续的学习和行动中加以运用。

补充说明

induction, the generalisation and prediction of events based on observed evidence and frequency of occurrence; and abduction, providing the simplest explanation of an event. (归纳,即根据观察证据及其发生频率,对事件进行概括和预测;溯因,即对发生的某一事件做出最简单的解释。)

abduction is implied when we discuss probabilistic inference and inverse problems. It is this third type of reasoning, abductive inference, that in both minds and machines gives us our ability to provide explanations, and to use explanations to learn and act in the future. (谈及概率推断和反问题,则暗合了溯因推理。正是在机器和头脑中产生的第三种类型的推理(即溯因推断),使我们有能力提供解释,并在后续的学习和行动中加以运用。)

注:abduction是美国逻辑学家Charles Peirce(1839-1914)专门提出的逻辑推理形式,湖南大学曾凡桂教授专门写过一篇文章讨论abduction的翻译,暂时考虑采用溯因

曾凡桂. 皮尔士 "Abduction" 译名探讨[J]. 外语教学与研究: 外国语文双月刊, 2003, 35(6): 469-472.
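上文区分的三种推理,在概率的语言里可以直接写出来:演绎是套用已知规则,归纳是从观测频率中估计规则,溯因则是给定观测后选出后验概率最大的解释。下面用 Python 给出一个极简的示意(其中"下雨/洒水器"的场景和全部数值都是为说明而假设的):

```python
# 三种推理的一个极简示意(数值均为假设,仅作说明)。
# 归纳:从观测频率中估计规则(一个概率)。
# 溯因:给定观测,选出后验概率最大的假设(解释)。

def abduce(priors, likelihoods):
    """返回对观测最可能的解释(最大后验,MAP)。"""
    # 未归一化后验 P(h | obs) ∝ P(obs | h) * P(h)
    scores = {h: likelihoods[h] * priors[h] for h in priors}
    return max(scores, key=scores.get)

# 归纳:从(假设的)共现记录中估计 P(草地湿 | 下雨)。
observations = [("rain", True), ("rain", True), ("rain", False), ("rain", True)]
p_wet_given_rain = sum(w for c, w in observations if c == "rain") / len(observations)

# 溯因:草地湿了,是下雨还是洒水器?
priors = {"rain": 0.3, "sprinkler": 0.1}
likelihoods = {"rain": p_wet_given_rain, "sprinkler": 0.9}  # P(湿 | 原因)
best = abduce(priors, likelihoods)
print(best)  # rain: 0.75*0.3 = 0.225 > sprinkler: 0.9*0.1 = 0.09
```

这里 abduce 实现的正是"最大后验解释":贝叶斯定理把先验和似然组合成对各候选解释的打分,这也是下文把溯因与概率推断联系起来的基础。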

Part2 -【解释的心理学】

(Translated by dan, 译校 by jeff)

The Psychology of Explanation

Abductive inference is the aspect of reasoning concerned with developing and choosing hypotheses that best explain a situation or data: an emphasis on how explanatory hypotheses are generated, evaluated, tested, and extended [1]. Psychologists often refer to the ability for abduction in people—the ability to spontaneously and effortlessly draw conclusions from our knowledge and experiences—as everyday inference. While seemingly effortless, abduction requires us to engage a multitude of complex cognitive functions, including causal reasoning, mental modelling, categorisation, inductive inference, and metacognitive processing. And in turn, a multitude of cognitive functions depend on our ability to offer explanatory judgements, including plan recognition, diagnosis and theory refinement. It is then not surprising that explanation continues to be an active area of research, with many of the most influential ideas in modern psychology being views on explanation and abduction [2].

Let's sample some of this thinking, beginning with the commonsense psychology or attribution theory of Fritz Heider [3]. The Heiderian view is that we explain events in order to relate them to more general processes. This allows us to obtain a more stable environment, and ultimately, the possibility to control our environment. From this stems the psychology of naive, folk, or intuitive theories [4]. In this framework, we form 'theories' of the world that allow us to make useful explanatory judgements: again, better explanations lead to better predictions, which lead to better control. George Kelly recognised explanation as a key part of his personal construct psychology [5], of which an important component is the now famous analogy that we think of 'man-the-scientist' rather than 'man-the-biological-organism', because of the need to hypothesise, test and explain. The mechanisms of abduction, shown in the flow diagram above, are a basis for scientific explanation and explanatory coherence [6], and are triggered by an initial puzzlement or surprise, leading to a search for explanation, the generation of potential hypotheses, the evaluation of these hypotheses, and finally some satisfaction or conclusion. The role of explanation is supported by several sources of evidence that leave us with a broad insight:


解释的心理学

溯因推理是推理中关注如何形成并选择最能解释某一情景或数据的假设的那一部分:其重点在于解释性假设如何生成、评估、检验和拓展[1]。心理学家常把人类的溯因能力,即从知识和经验中自发地、毫不费力地得出结论的能力,称为日常推理。虽然看似轻松,溯因推理却要求我们调用许多复杂的认知功能,包括因果推理、心智建模、分类、归纳推理和元认知处理。反过来,许多认知功能也依赖于我们做出解释性判断的能力,比如规划识别、诊断和理论优化。因此,解释至今仍是一个活跃的研究领域,现代心理学中许多最具影响力的思想正是关于解释和溯因的观点,这并不令人意外[2]。

我们举一些例子,就从 Fritz Heider 的常识心理学或归因理论说起[3]。Heider 的观点是,我们之所以解释事件,是为了把它们与更普遍的过程联系起来。这使我们获得一个更稳定的环境,并最终获得掌控环境的可能性。由此衍生出朴素理论、民间理论或直觉理论的心理学[4]。在这个框架下,我们形成关于世界的各种"理论",它们使我们能够做出有用的解释性判断:同样地,更好的解释带来更好的预测,进而带来更好的掌控。George Kelly 将"解释"视为其个人建构心理学[5]的关键部分,其中一个重要组成是如今著名的类比:我们应把人看作"作为科学家的人",而非"作为生物有机体的人",因为人需要假设、检验和解释。如上方流程图所示,溯因的机制是科学解释与解释连贯性的基础[6]:它由最初的困惑或惊奇触发,继而寻找解释、产生候选假设、评估这些假设,最终达到某种满足或结论。解释的作用得到多方面证据的支持,这些证据带给我们一个总体的认识:

(Translated by 阎赫, 译校 by W先森)

  • Explanation is commonplace. Povinelli and Dunphy-Lelli (2011) assess the capacity for explanation in chimpanzees; Baillargeon (2004) shows the role of explanation in prelinguistic infants; and Keil (in several papers) highlights the role of explanation from children to adults.
  • Explanations are contrastive in that they account for one state of affairs in contrast to another, e.g., van Fraassen (1980).
  • Explanations are needed for many cognitive functions. Carey (1985) shows its importance for concept learning and categorisation; Holyoak and Cheng (2011) in supporting inductive inference and learning; Chi et al. (1994) in metacognition.
  • There are different types of explanations. Keil (1992) explains that children use 'modes of construal' whereby they prefer different types of explanations in different situations. One taxonomy is of three types of explanation (Lombrozo, 2012): functional, based on goals, intentions or design; mechanistic, explanation using the mechanism and process by which an event arises; and formal, which explains by virtue of membership to a category. Mechanistic and formal explanations are also types of causal explanations.
  • Functional explanations are default. Researchers from Piaget (1929) to Kelemen (1999) show that functional explanations are the default, and used more naturally in children.
  • Subjective probability assessments are biased by our explanations. In people, just thinking that an explanation is true is enough to boost its subjective probability of being true, e.g., Koehler (1991).
  • Explanations are limited. We can struggle to detect circular explanations (Rips 2002) and we overestimate the accuracy and depth of our explanations (Rozenblit and Keil, 2002).


  • 解释是普遍存在的。Povinelli 和 Dunphy-Lelli(2011)评估了黑猩猩的解释能力;Baillargeon(2004)显示了解释在尚未学语的婴儿中的作用;Keil(在几篇论文中)强调了从儿童到成人解释所扮演的角色。
  • 解释是对照性的。解释所说明的,是事物为何呈现这一种状态而非另一种状态,例如 van Fraassen(1980)。
  • 许多认知功能需要解释。Carey(1985)表明解释对概念学习和分类的重要性;Holyoak 和 Cheng(2011)表明其对归纳推理和学习的支持作用;Chi 等人(1994)表明解释在元认知方面的重要性。
  • 解释具有多种不同的类型。Keil(1992)指出,儿童使用"解释模式"(modes of construal),即在不同情况下偏好不同类型的解释。一种分类法把解释分为三类(Lombrozo,2012):功能性解释,基于目标、意图或设计;机制性解释,利用事件发生的机制和过程来解释;形式性解释,通过事物所属的类别来解释。机制性解释和形式性解释也都属于因果解释。
  • 功能性解释是默认形式。从 Piaget(1929)到 Kelemen(1999)的研究者表明,功能性解释是默认形式,儿童会更自然地使用这种方式。
  • 主观概率评估会被我们的解释所左右。对人而言,仅仅认为某个解释为真,就足以提高其为真的主观概率,见 Koehler(1991)。
  • 解释是有限的。我们可能难以察觉循环解释(Rips 2002),并且会高估自己解释的准确性和深度(Rozenblit 和 Keil,2002)。

(Translated by W先森, 译校 by 阎赫)

This discussion paints a high level view of explanation, and I suggest these sources for more depth: Explanation and Abductive Inference by Tania Lombrozo is a comprehensive and essential introductory source. The Cambridge Handbook of Computational Psychology is an excellent collection in its entirety, with many relevant chapters. The chapter on Models of Scientific Explanation by Thagard and Litt is a view on the cognitive science of science. We began with the cognitive observation that explanation is a central part of human reasoning, and found much evidence for this in animals, infants, children and adults, along with an understanding of the impacts and limitations of our human explanatory powers. In building machines that think and learn, our cognitive inspiration is clear: systems for abductive inference, hypothesis formation and evaluation are essential. These systems help form the basis of our intuitive theories, models of the world, and of our decision making. Fortunately, abduction is not unfamiliar to machine learning.

本部分从较高的层面描绘了"解释",如想深入了解,我建议参考以下资料:Tania Lombrozo 的 Explanation and Abductive Inference 是一份全面且必读的入门材料;The Cambridge Handbook of Computational Psychology 整体上是一部出色的文集,其中许多章节与本主题相关;其中 Thagard 和 Litt 撰写的 Models of Scientific Explanation 一章,则从认知科学的角度审视科学解释。

我们从"解释是人类推理的核心部分"这一认知观察出发,在动物、婴儿、儿童和成人身上找到了大量证据,同时也了解了人类解释能力的影响和局限。

在构建能够思考和学习的机器时,我们从认知科学得到的启发是清晰的:用于溯因推理、假说形成和评估的系统必不可少。这些系统有助于构成我们的直觉理论、世界模型以及决策的基础。幸运的是,溯因对机器学习来说并不陌生。

[图] 一些机器学习中的溯因方法,从左到右:概率图模型与因果网络,统计关系学习与半参数建模,贝叶斯优化。

Part3 -【学会解释】

(Translated by Henry, 译校 by Jamie)

Learning to Explain

Explanation has always been a core topic in machine learning and artificial intelligence. We constantly seek new tools to derive and develop machine learning systems that offer a spectrum of explanations, knowing that different types of explanations are possible; valid or not, we will tend to hold the explanatory requirements of our machine learning systems to a higher standard than we hold ourselves. Learning to explain then becomes a central research question.

Our basic tool with which to incorporate the ideals of ubiquitous explanation and abduction remains, as always, probability. This forms the basis of all types of explanation—functional, mechanistic and formal—in machine learning. Again, it is useful to align our language. An inductive hypothesis is a general theory that explains observations across a wide range of cases: we use the words generalisation and prediction. An abductive hypothesis is an explanation that is related to a specific observation or case: we simply use the word inference, the estimation of unobserved events or probabilities in our models given data [7]. This alignment tells us how the taxonomy of reasoning is related to probabilistic machine learning, and allows us to combine tools from many subject areas. Along with introductory sources, these areas include:

  • Probabilistic graphical models. We can build models that allow us to capture the ways in which data has been generated, which is a reason to be interested in generative models. Knowing the probability of unobserved variables in such models is the basic unit of explanation: through their inference we can offer explanation, and explanations can be used to improve generalisation.
- Probabilistic Graphical Models: Principles and Techniques, by D. Koller and N. Friedman is comprehensive.
- Building machines that imagine and reason. A tutorial on inference in deep generative models.
  • Causality. A causal explanation is a particularly powerful type of explanation. We can infer the strength of causal effects, and even try to learn the causal structure of events from data.

Causality, R. Silva.
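To make 'inference of unobserved variables as the basic unit of explanation' concrete, here is a minimal sketch in Python. The two-cause noisy-OR model and all of its parameters are invented for illustration: observing the effect raises the posterior probability of each hidden cause above its prior, which is exactly an abductive judgement.

```python
import itertools

# A toy generative model, invented for illustration: two independent binary
# causes c1, c2 and one observed binary effect, with a noisy-OR likelihood.
p_c1, p_c2 = 0.2, 0.3

def p_effect(c1, c2):
    # Noisy-OR: each active cause independently fails to trigger
    # the effect with probability 0.1; with no active cause there is no effect.
    return 1.0 - (0.1 if c1 else 1.0) * (0.1 if c2 else 1.0)

def posterior_c1(effect_observed=True):
    """P(c1 = 1 | effect) by exhaustive enumeration over the hidden causes."""
    joint = {}
    for c1, c2 in itertools.product([0, 1], repeat=2):
        prior = (p_c1 if c1 else 1 - p_c1) * (p_c2 if c2 else 1 - p_c2)
        like = p_effect(c1, c2) if effect_observed else 1 - p_effect(c1, c2)
        joint[(c1, c2)] = prior * like
    z = sum(joint.values())
    return sum(v for (c1, _), v in joint.items() if c1) / z

# Seeing the effect is evidence for the cause: the posterior exceeds the prior.
print(posterior_c1())  # ≈ 0.46, up from the prior of 0.2
```

The enumeration here is the simplest possible inference engine; in real graphical models the same posterior is computed by message passing or approximate inference, but the 'explanation' it delivers is the same quantity.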

学会解释

在机器学习和人工智能中,解释一直是一个核心话题。我们不断寻找新的工具,来构建和发展能够给出各种解释的机器学习系统,因为我们知道解释可以有多种不同的类型;无论合理与否,我们往往会以比要求自己更高的标准,来要求机器学习系统的解释能力。于是,学会解释就成为一个核心的研究问题。

一如既往,我们用来落实"解释无处不在"与溯因这两个理念的基本工具仍然是概率。它构成了机器学习中所有解释类型(功能性、机制性和形式性)的基础。同样,统一一下用语是有益的。归纳假设是能够解释大量不同情形下观测结果的一般性理论:我们用"泛化"和"预测"来称呼它。溯因假设则是与特定观测或特定情形相关的解释:我们就简单地称之为"推断",即在给定数据的条件下,估计模型中未观测到的事件或概率[7]。这种对应关系告诉我们,推理的分类体系与概率机器学习有着怎样的联系,并让我们得以组合来自许多学科领域的工具。连同入门资料在内,这些领域包括:

  • 概率图模型。我们可以建立能够刻画数据生成方式的模型,这也是我们对生成模型感兴趣的一个原因。获知此类模型中未观测变量的概率是解释的基本单元:通过对它们的推断,我们能够提供解释,而解释又可以反过来改善泛化能力。
- Probabilistic Graphical Models: Principles and Techniques,由 D. Koller 和 N. Friedman 编著,叙述较为全面。
- Building machines that imagine and reason,一份关于深度生成模型中推断的教程。
  • 因果关系。因果解释是一类尤为强大的解释。我们可以推断因果效应的强度,甚至可以尝试从数据中学习事件的因果结构。参考 Causality, R. Silva.

(Translated by Jamie, 译校 by Henry)

  • Relational learning and inductive logic. Formal explanations are based on categorisation and involve learning the relationships between objects, entities, agents, and events in our world. This is a problem of relational learning, a highly active topic in machine learning.
- Statistical Relational Artificial Intelligence. From Distributions through Actions to Optimization, K. Kersting and S. Natarajan is a quick introduction to the topic often referred to as StarAI.
  • Active learning and Bayesian optimisation. Active and sequential machine learning embodies the role of explanation. One important area is Bayesian optimisation, which allows us to find the best explanation by forming a series of hypotheses and refinements, until reaching a satisfactory conclusion.
- Taking the Human Out of the Loop: A Review of Bayesian Optimization, by B. Shahriari et al.
  • Semi-parametric modelling. Cognitive systems need not only be able to make inferences of observations, propose hypotheses, refine and evaluate them, but also be able to extend these hypotheses. This is a hard problem, but the tools of non-parametric modelling lies at our disposal and allows us to extend our models and explanations in a natural and consistent way.
- Probabilistic machine learning and artificial intelligence by Z. Ghahramani is a short review that covers all the topics in this list.
- The discovery of structural form, C. Kemp and J. Tenenbaum.
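As a concrete illustration of the hypothesise/evaluate/refine loop that Bayesian optimisation embodies, here is a minimal self-contained sketch. The one-dimensional objective, the RBF length-scale, and the UCB exploration constant are all arbitrary choices for illustration, not a production implementation:

```python
import numpy as np

# A tiny Gaussian-process surrogate with an RBF kernel and an
# upper-confidence-bound (UCB) acquisition rule, run over a fixed grid.

def rbf(a, b, ls=0.3):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

def objective(x):
    return -(x - 0.7) ** 2  # unknown to the optimiser; maximum at x = 0.7

grid = np.linspace(0.0, 1.0, 201)          # candidate hypotheses
xs = [0.0, 1.0]                            # initial evaluations
ys = [objective(x) for x in xs]

for _ in range(15):
    X = np.array(xs)
    K = rbf(X, X) + 1e-8 * np.eye(len(X))  # jitter keeps K invertible
    k_star = rbf(grid, X)
    mean = k_star @ np.linalg.solve(K, np.array(ys))
    var = 1.0 - np.einsum("ij,ji->i", k_star, np.linalg.solve(K, k_star.T))
    ucb = mean + 2.0 * np.sqrt(np.maximum(var, 0.0))  # optimism under uncertainty
    x_next = float(grid[int(np.argmax(ucb))])         # most promising hypothesis
    xs.append(x_next)
    ys.append(objective(x_next))           # evaluate, then refine the surrogate

best_x = xs[int(np.argmax(ys))]
print(round(best_x, 2))
```

Each round, the surrogate's posterior mean and variance score every candidate; the acquisition rule picks the next hypothesis to test, and the new evaluation refines the surrogate until a satisfactory conclusion is reached.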


  • 关系学习与归纳逻辑。形式性解释建立在分类之上,涉及学习世界中对象、实体、智能体和事件之间的关系。这是关系学习问题,是机器学习中非常活跃的课题。
- Statistical Relational Artificial Intelligence: From Distributions through Actions to Optimization,由 K. Kersting 和 S. Natarajan 撰写,是对这个常被称作 StarAI 的主题的快速入门。
  • 主动学习与贝叶斯优化。主动式和序贯式的机器学习体现了解释的作用。其中一个重要领域是贝叶斯优化,它让我们通过形成一系列假设并不断修正,直至得到满意的结论,从而找到最佳解释。
- Taking the Human Out of the Loop: A Review of Bayesian Optimization, by B. Shahriari et al.
  • 半参数建模。认知系统不仅需要能对观测做出推断、提出假设并加以修正和评估,还需要能够拓展这些假设。这是一个困难的问题,但非参数建模的工具就在我们手边,使我们能以自然而一致的方式拓展我们的模型和解释。
- Probabilistic machine learning and artificial intelligence, by Z. Ghahramani,一篇简短的综述,涵盖了本列表的所有主题。
- The discovery of structural form, C. Kemp and J. Tenenbaum.
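上文提到可以从数据中推断因果效应的强度。下面用一段自包含的 Python 做一个极简的示意(数据生成过程与全部概率数值均为说明而假设):在存在混杂因素 Z 时,朴素地比较条件均值会高估 X 对 Y 的效应,而后门调整(backdoor adjustment)能够恢复真实的因果效应强度 0.3。

```python
import random

# 混杂因素 Z 同时影响处理 X 与结果 Y;X 对 Y 的真实因果效应为 0.3。
random.seed(0)

def sample():
    z = random.random() < 0.5                      # 混杂因素 Z
    x = random.random() < (0.8 if z else 0.2)      # Z 影响 X
    y = random.random() < 0.2 + 0.3 * x + 0.4 * z  # X 与 Z 共同影响 Y
    return z, x, y

data = [sample() for _ in range(200_000)]

def mean_y(xv, zv=None):
    """经验条件均值 E[Y | X=xv],可选地再按 Z=zv 分层。"""
    rows = [y for z, x, y in data if x == xv and (zv is None or z == zv)]
    return sum(rows) / len(rows)

p_z = sum(z for z, _, _ in data) / len(data)

# 后门调整:E[Y | do(X=x)] = Σ_z E[Y | x, z] P(z)
ate = sum((mean_y(1, zv) - mean_y(0, zv)) * (p_z if zv else 1 - p_z)
          for zv in (True, False))
naive = mean_y(1) - mean_y(0)  # 未作调整,被 Z 混杂,明显偏大

print(round(ate, 2), round(naive, 2))
```

这里只是对调整公式最直接的经验估计,用来说明"因果解释强于相关描述"这一点;真实应用中应使用专门的因果推断工具和更谨慎的模型假设。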

Part4 -【结语】

(Translated by Jamie, 译校 by Henry)

Final Words

It was recently suggested that forthcoming European regulations create a 'right to explanation' when machine learning systems are used in decision making. There are arguments both for and against this suggestion, but policy considerations aside, this emphasises further the importance of explanation and abductive inference in our current practice. The eland in the rock painting are running towards the elusive spirit world. By taking inspiration from cognitive science, and the many other computational sciences, and combining them into our machine learning efforts, we take the positive steps on our own path to the hopefully less elusive world of machine learning with explanations.


写在最后的话

最近有人提出,即将出台的欧洲法规为用于决策的机器学习系统创设了一项"解释权"。对这一提议既有支持也有反对的声音,但撇开政策考量不谈,这进一步凸显了解释和溯因推理在我们当前实践中的重要性。岩画中的伊兰羚羊正奔向缥缈难寻的灵界。通过从认知科学以及许多其他计算科学中汲取灵感,并将其融入我们的机器学习工作,我们正沿着自己的道路,朝着一个但愿不再那么缥缈的"带解释的机器学习"世界迈出积极的步伐。

Some References -【推荐阅读】

[1] Charles S Peirce, Deduction, induction, and hypothesis, Popular science monthly, 1878

[2] Tania Lombrozo, Explanation and abductive inference, Oxford handbook of thinking and reasoning, 2012

[3] Fritz Heider, The psychology of interpersonal relations, 1958

[4] Tobias Gerstenberg, Joshua B Tenenbaum, Intuitive theories, 2016

[5] George Kelly, Personal construct psychology, 1955

[6] Paul Thagard, Abninder Litt, Models of scientific explanation, The Cambridge handbook of computational psychology, 2008

[7] Raymond J Mooney, Integrating abduction and induction in machine learning, Abduction and Induction, 2000

译注

1. 撒阿门科莫斯特(Zaamenkomst)是南非北部林波波(Limpopo)省的一个农场。

2. 撒恩人(San)又称布须曼人(Bushmen),是生活于南非、博茨瓦纳、纳米比亚和安哥拉的一个以狩猎和采集为生的原住民族。

3. 伊兰羚羊,又名巨羚、大羚羊,是东非及南部非洲大草原及平原的一种羚羊。

4. "interpretation"着重阐明概念,"explanation"倾向于解读事实。

5. 一致性是机器学习的前提,即观测样本与真实世界有某种相似的特性。

