Editorial introduction to the Topical Issue “Computer Modeling in Philosophy”
Open Philosophy (IF 0.3), Pub Date: 2019-12-13, DOI: 10.1515/opphil-2019-0049
Patrick Grim

The role played by logic in 20th century philosophy, it can be argued, will be played by computational modeling in the 21st. This special issue is devoted to discussion, analysis, but primarily to examples of computer-aided or computer-instantiated modeling. Over the past several decades, social epistemology and philosophy of science have been important areas in the development of computational philosophy.1 Here we focus on current work in a wider spread of sub-disciplines: ethics, social philosophy, philosophy of perception, philosophy of mind, metaphysics and philosophy of religion. The first two pieces in the collection concentrate on computational techniques and philosophical methodology quite generally.

Istvan Berkeley's "The Curious Case of Connectionism" opens the collection, with an examination and analysis of three stages in the history of a major theoretical approach that continues in contemporary computational philosophy. He characterizes a first stage of connectionism as ending abruptly with the critique by Minsky and Papert.2 A second stage had an important impact on philosophy, but Berkeley documents its waning influence with the declining appearance of the terms 'connectionism' and 'connectionist' in the Philosopher's Index. He proposes deep learning as a third stage of connectionism, with new computational technologies promising the possibility of important philosophical application.

The search for formal methods of inquiry and discovery, as opposed to mere justification, can be seen historically as a project in Aristotle, Bacon, Leibniz, and Mill. But in the 20th century, at the hands of Popper, Reichenbach, Rawls, and others, that search was largely abandoned. In "The Evaluation of Discovery: Models, Simulation and Search through 'Big Data'," Joseph Ramsey, Kun Zhang, and Clark Glymour argue that the contemporary development of algorithms for search through big data offers a rebirth for formal methods of discovery.

The authors point out that search algorithms also pose a major problem of validation, however. What we want is output with both 'precision,' a high probability that the hypotheses it returns are true, and 'recall,' the probability that the true hypotheses are returned. How are we to assess precision and recall for causal relations if, as in many cases, our data base is huge, the potential correlations are many, but the empirical base available for direct assessment is vanishingly small? Here recourse is often made to modeling or simulation, assessing a search method using not actual data but data simulated from known patterns, substructures, or 'motifs' within a domain. Ramsey, Zhang, and Glymour illustrate the approach with two cases, one from neuroscience and another from astrophysics. They emphasize the inherent risk in the simulated data strategy, particularly in cases in which sample selection is not automated and not guaranteed to be representative. In specific contexts, with appropriate safeguards, careful application of search algorithms …
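As a concrete illustration of the precision-and-recall validation strategy described above, the following Python sketch simulates data from a known causal motif (a chain X -> Y -> Z plus an independent variable W), runs a naive partial-correlation search for edges, and scores the recovered edges against the ground truth. Every detail here, from the motif to the 0.2 threshold, is an illustrative assumption standing in for the far more sophisticated search algorithms Ramsey, Zhang, and Glymour discuss; it is not their method.

import numpy as np

# Minimal sketch of the simulated-data validation strategy (illustrative only):
# generate data from a known causal motif, run a naive partial-correlation
# "search" for edges, and score the output by precision and recall.
rng = np.random.default_rng(0)
n = 2000

# Known motif: causal chain X -> Y -> Z, plus an unrelated variable W.
X = rng.normal(size=n)
Y = 0.8 * X + rng.normal(size=n)
Z = 0.8 * Y + rng.normal(size=n)
W = rng.normal(size=n)
data = {"X": X, "Y": Y, "Z": Z, "W": W}
true_edges = {("X", "Y"), ("Y", "Z")}          # ground truth used for scoring

# Naive search: declare an edge wherever the partial correlation of a pair,
# given all remaining variables, exceeds a fixed threshold.
names = list(data)
M = np.column_stack([data[v] for v in names])
P = np.linalg.inv(np.cov(M, rowvar=False))     # inverse covariance matrix
found_edges = set()
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        pcorr = -P[i, j] / np.sqrt(P[i, i] * P[j, j])
        if abs(pcorr) > 0.2:
            found_edges.add((names[i], names[j]))

# Precision: fraction of returned edges that are true.
# Recall: fraction of true edges that are returned.
tp = len(found_edges & true_edges)
precision = tp / len(found_edges) if found_edges else 0.0
recall = tp / len(true_edges)
print(f"edges found: {sorted(found_edges)}")
print(f"precision = {precision:.2f}, recall = {recall:.2f}")

With a large simulated sample this toy search typically recovers exactly the two true edges, so precision and recall are both 1.0; shrinking the sample or weakening the causal coefficients degrades one or both scores, which is the kind of sensitivity that makes careful validation of search methods necessary.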
