Reasoning under Uncertainty (Ragionamento in condizioni di incertezza)
"Uncertainty" has different meanings, depending on scientific contexts. This essay presents different approaches to reasoning with incomplete, vague, or approximate premises, and to inferences where the link between premises and conclusions is weaker than in classical logic. Most of the methods treated here originate, or reached full recognition, in research on artificial intelligence in the 1980s. From the beginning, the debate on uncertain reasoning included contributions from logic and computer science, but also from philosophy, mathematics and psychology, and it provided a useful toolbox for applications in a variety of fields, from legal argumentation to medical diagnosis. In the last decades, the expression "artificial intelligence" became ambiguous, given that it is used, especially in the media, as synonym of Machine Learning or Big Data. Exploring the consequences of the "new" artificial intelligence on the notion of uncertainty is beyond the scope of this essay. Here we focus on "classical" approaches because: a) we do not think that they are obsolete, as they are currently in use; moreover, they can help us to understand, by contrast, the new AI; (b) they helped to clarify, or to see from new perspectives, concepts that previously were often confused with each other – such as vagueness, uncertainty, probability, approximation. From a historical point of view, the effort to clarify uncertain reasoning offers a very interesting example of interdisciplinary debate.
EUT Edizioni Università di Trieste
Margherita Benzi, "Ragionamento in condizioni di incertezza", in "APhEx 20", 2019, pp. 35