Proceedings of Machine Learning Research
Proceedings of the Twelfth International Symposium on Imprecise Probability: Theories and Applications
Held at the University of Granada, Granada, Spain, on 6-9 July 2021
Published as Volume 147 by the Proceedings of Machine Learning Research on 18 August 2021.
Volume Edited by:
Andrés Cano
Jasper De Bock
Enrique Miranda
Serafín Moral
Series Editors:
Neil D. Lawrence
Mark Reid
https://proceedings.mlr.press/v147/
Cautious Random Forests: a New Decision Strategy and some Experiments
Random forest is an accurate classification strategy, which estimates the posterior probabilities of the classes by averaging frequencies provided by trees. When data are scarce, this estimation becomes difficult. The Imprecise Dirichlet Model can be used to make the estimation robust, providing intervals of probabilities as outputs. Here, we propose a new aggregation strategy based on the theory of belief functions. We also propose to assign weights to the trees according to their amount of uncertainty when classifying a new instance. Our approach is compared experimentally to the baseline approach on several datasets.
https://proceedings.mlr.press/v147/zhang21a.html
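As a minimal aside (not the authors' aggregation strategy), the Imprecise Dirichlet Model the abstract refers to replaces a point estimate of a class probability with an interval: for a class observed n_c times out of N, with hyperparameter s, the interval is [n_c/(N+s), (n_c+s)/(N+s)], which widens when data are scarce.

```python
def idm_interval(n_c, n_total, s=2.0):
    """Lower/upper probability of a class under the Imprecise Dirichlet Model.

    n_c: observed count of the class, n_total: total observations,
    s: the IDM hyperparameter (s = 1 or s = 2 are common choices).
    """
    lower = n_c / (n_total + s)
    upper = (n_c + s) / (n_total + s)
    return lower, upper

# With scarce data the interval is wide; it narrows as data accumulate,
# since its width is s / (n_total + s).
print(idm_interval(3, 10))      # few observations: wide interval
print(idm_interval(300, 1000))  # many observations: narrow interval
```

The interval width s/(N+s) makes the robustness explicit: a tree seeing 10 instances is far less committal than one seeing 1000, even at the same observed frequency.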
The Sure Thing
If we prefer action $a$ to $b$ both under an event and under its complement, then we should simply prefer $a$ to $b$. This is Savage's <em>sure-thing principle</em>. In spite of its intuitive- and simple-looking nature, for which it gains almost immediate acceptance, the sure thing is not a logical principle. So where does it get its support from? In fact, the sure thing may actually fail. This is related to a variety of deep and foundational concepts in causality, decision theory, and probability, as well as to Simpson's paradox and Blyth's game. In this paper we try to systematically clarify this network of relations. Then we propose a general desirability theory for nonlinear utility scales. We use that to show that the sure thing is primitive to many of the previous concepts: in non-causal settings, the sure thing follows from considerations of temporal coherence and coincides with conglomerability; it can be understood as a rationality axiom to enable well-behaved conditioning in logic. In causal settings, it can be derived using only coherence and a causal independence condition.
https://proceedings.mlr.press/v147/zaffalon21a.html
Discounting Desirable Gambles
The desirable gambles framework offers the most comprehensive foundations for the theory of lower previsions, which in turn affords the most general account of imprecise probabilities. Nevertheless, for all its generality, the theory of lower previsions rests on the notion of linear utility. This commitment to linearity is clearest in the coherence axioms for sets of desirable gambles. This paper considers two routes to relaxing this commitment. The first preserves the additive structure of the desirable gambles framework and the machinery for coherent inference but detaches the interpretation of desirability from the multiplicative scale invariance axiom. The second strays from the additive combination axiom to accommodate repeated gambles that return rewards by a non-stationary process that is not necessarily additive. Unlike the first approach, which is a conservative amendment to the desirable gambles framework, the second is a radical departure. Yet, common to both is a method for describing rewards called <em>discounted utility</em>.
https://proceedings.mlr.press/v147/wheeler21a.html
Independent Natural Extension for Choice Functions
We investigate epistemic independence for choice functions in a multivariate setting. This work is a continuation of earlier work of one of the authors [23], and our results build on the characterization of choice functions in terms of sets of binary preferences recently established by De Bock and De Cooman [7]. We obtain the independent natural extension in this framework. Given the generality of choice functions, our expression for the independent natural extension is the most general one we are aware of, and we show how it implies the independent natural extension for sets of desirable gambles, and therefore also for less informative imprecise-probabilistic models. Once this is in place, we compare this concept of epistemic independence to another independence concept for choice functions proposed by Seidenfeld [22], which De Bock and De Cooman [1] have called S-independence. We show that neither is more general than the other.
https://proceedings.mlr.press/v147/van-camp21a.html
Robust Model Checking with Imprecise Markov Reward Models
In recent years, probabilistic model checking has become an important area of research because of the diffusion of computational systems of a stochastic nature. Despite its great success, standard probabilistic model checking suffers from the limitation of requiring a sharp specification of the probabilities governing the model behaviour. The theory of imprecise probabilities offers a natural approach to overcoming this limitation through a sensitivity analysis with respect to the values of these parameters. However, only extensions based on discrete-time imprecise Markov chains have been considered so far for such a robust approach to model checking. We present a further extension based on imprecise Markov reward models. In particular, we derive efficient algorithms to compute lower and upper bounds on the expected cumulative reward and on probabilistic bounded rewards, based on existing results for imprecise Markov chains. These ideas are tested on a real case study involving the spend-down costs of geriatric medicine departments.
https://proceedings.mlr.press/v147/termine21a.html
Global Upper Expectations for Discrete-Time Stochastic Processes: In Practice, They Are All The Same!
We consider three different types of global uncertainty models for discrete-time stochastic processes: measure-theoretic upper expectations, game-theoretic upper expectations and axiomatic upper expectations. The last two are known to be identical. We show that they coincide with measure-theoretic upper expectations on two distinct domains: monotone pointwise limits of finitary gambles, and Borel-measurable variables that are bounded below. We argue that these domains cover most practical inferences and that, therefore, in practice it does not matter which model is used.
https://proceedings.mlr.press/v147/t-joens21a.html
Stochastic Optimization for Numerical Evaluation of Imprecise Probabilities
In applications of imprecise probability, analysts must compute lower (or upper) expectations, defined as the infimum of an expectation over a set of parameter values. Monte Carlo methods consistently approximate expectations at fixed parameter values, but can be costly to implement in grid search to locate minima over large subsets of the parameter space. We investigate the use of stochastic iterative root-finding methods for efficiently computing lower expectations. In two examples we illustrate the use of various stochastic approximation methods, and demonstrate their superior performance in comparison to grid search.
https://proceedings.mlr.press/v147/syring21a.html
Imprecise Hypothesis-Based Bayesian Decision Making with Composite Hypotheses
Statistical analyses with composite hypotheses are omnipresent in empirical sciences, and a decision-theoretic account is required in order to formally consider their practical relevance. A Bayesian hypothesis-based decision-theoretic analysis requires the specification of a prior distribution, the hypotheses, and a loss function, and determines the optimal decision by minimizing the expected posterior loss of each hypothesis. However, specifying such a decision problem unambiguously is rather difficult as, typically, the relevant information is available only partially. In order to include such incomplete information in the analysis and to facilitate the use of decision-theoretic approaches in the applied sciences, this paper extends the framework of hypothesis-based Bayesian decision making with composite hypotheses into the framework of imprecise probabilities, such that imprecise specifications of the prior distribution, the composite hypotheses, and the loss function are allowed. Imprecisely specified composite hypotheses are sets of parameter sets that can incorporate blurred borders between hypotheses into the analysis. The imprecisely specified prior distribution is updated via the generalized Bayes rule, so that imprecise probabilities of the (imprecise) hypotheses can be calculated. These lead, together with the (imprecise) loss function, to a set-valued expected posterior loss for finding the optimal decision. Beneficially, the result will also indicate whether or not the available information is sufficient to guide the decision unambiguously, without pretending a level of precision that is not available.
https://proceedings.mlr.press/v147/schwaferts21a.html
Computing Simple Bounds for Regression Estimates for Linear Regression with Interval-valued Covariates
In this paper, we deal with linear regression where the covariates are interval-valued and the dependent variable is precise. As opposed to the case where the dependent variable is interval-valued and the covariates are precise, it is far more difficult to compute the set of all ordinary least squares (OLS) estimates as the precise values of the covariates vary over all possible values compatible with the given intervals. Though the exact solution is difficult to obtain, there are still some simple possibilities for computing bounds on the regression parameters. In this paper we deal with simple linear regression and present three different approaches. The first uses a simple interval-arithmetic consideration of the equation for the slope parameter. The second uses reverse regression to swap the roles of the dependent and the independent variable, making the computation analytically solvable; the solution obtained for the reverse regression then gives an analytical upper bound for the slope parameter of the original regression. The third approach does not directly give bounds for the OLS estimator. Instead, before the actual interval analysis, we first modify the OLS estimator into another linear estimator, a reasonably weighted convex combination of a number of unbiased estimators that are themselves each based on only two data points of the data set. It turns out that for the degenerate case of a precise independent variable, this estimator coincides with the OLS estimator. Additionally, the third method also works if both the independent and the dependent variable are interval-valued, and the case of more than one covariate is also manageable.
A further advantage is that, because the third estimator is analytically accessible, confidence intervals for the bounds can also be established. To compare all three approaches, we conduct a short simulation study.
https://proceedings.mlr.press/v147/schollmeyer21a.html
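To see why the exact set of OLS slopes is hard to compute, one can brute-force the corners of the covariate box. This is only an illustrative sketch (not the paper's analytical bounds), and it yields merely an inner approximation, since the slope is not monotone in each covariate and its extrema need not lie at vertices; all names below are ours.

```python
import itertools

def slope(xs, ys):
    """OLS slope of simple linear regression of ys on xs."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def slope_range_vertices(x_intervals, ys):
    """Enumerate all 2^n corner configurations of the covariate box.

    This gives an *inner* approximation of the set of OLS slopes:
    every value found is attainable, but the true range may be wider.
    """
    slopes = [slope(corner, ys)
              for corner in itertools.product(*x_intervals)]
    return min(slopes), max(slopes)

x_ivals = [(0.9, 1.1), (1.9, 2.1), (2.9, 3.1)]  # interval-valued covariates
ys = [1.0, 2.0, 3.0]                            # precise responses
print(slope_range_vertices(x_ivals, ys))
```

Even for this tiny example the enumeration has exponential cost in the number of data points, which is exactly what motivates the cheap analytical bounds the abstract describes.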
Decision-making from Partial Instances by Active Feature Querying
We consider a classification problem in which test instances are not available as complete feature vectors, but must rather be uncovered by repeated queries to an oracle. We have a limited budget of queries: the problem is then to find the best features to ask the oracle for. We consider here a strategy where features are uncovered one by one, so as to maximize the separation between the classes. Once an instance has been uncovered, the distribution of the remaining instances is updated according to the observation. Experiments on synthetic and real data show that our strategy remains reasonably accurate when a decision must be made based on a limited number of observed features. We briefly discuss the case of imprecise answers, and list the many problems arising in this case.
https://proceedings.mlr.press/v147/quost21a.html
Betting Schemes for Assessing Coherent Numerical and Comparative Conditional Possibilities
We introduce coherence conditions having a betting-scheme interpretation for both a numerical and a comparative conditional possibility assessment. The conditional bets are considered under partially resolving uncertainty and assuming consonance. This means that we allow situations in which the agent may only acquire the information that a non-impossible event occurs, without knowing which is the true state of the world. Further, the agent can only consider families of nested non-impossible events in computing the gain and behaves in a systematically optimistic way. Both conditions are proved to be equivalent to the existence of a conditional possibility agreeing with an axiomatic definition based on the algebraic product t-norm, one that extends, either numerically or comparatively (through the induced comparative conditional possibility relation), the given assessment.
https://proceedings.mlr.press/v147/petturiti21a.html
A Remarkable Equivalence between Non-Stationary Precise and Stationary Imprecise Uncertainty Models in Computable Randomness
The field of algorithmic randomness studies what it means for infinite binary sequences to be random for some given uncertainty model. Classically, such randomness involves precise uncertainty models, and it is only recently that imprecision has been introduced into this field. As a consequence, the investigation into how imprecision alters our view on random sequences has only just begun. In this contribution, we establish a close and surprising connection between precise and imprecise uncertainty models in this randomness context. In particular, we show that there are stationary imprecise models and non-stationary precise models that have the exact same set of computably random sequences. We also discuss the possible implications of this result for a statistics based on imprecise probabilities.
https://proceedings.mlr.press/v147/persiau21a.html
Improving Algorithms for Decision Making with the Hurwicz Criterion
We propose two improved algorithms for evaluating the Hurwicz criterion in the context of decision making with lower previsions, along with a new benchmarking algorithm for measuring these improvements. The Hurwicz criterion is a well-known criterion for decision making with lower previsions under severe uncertainty, used when decision makers want to balance between pessimistic and optimistic extremes. When the domain of the lower prevision, the set of possible outcomes and the set of possible decisions are all finite, the classic method for applying this criterion proceeds by solving a sequence of linear programs. We show how to improve this classic algorithm, based on similar improvements that we have proposed for other decision criteria. Additionally, to allow benchmarking of these improvements, we provide a new algorithm for randomly generating artificial decision problems with a set number of Hurwicz gambles. In our simulation, our proposed algorithms for Hurwicz outperform the standard algorithm in most scenarios, except when the set of outcomes is small, the domain of the lower prevision is large, and there are many Hurwicz-optimal decisions at once, in which case our proposed algorithms are slightly slower.
https://proceedings.mlr.press/v147/nakharutai21a.html
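The criterion itself is simple to state when the credal set is given by finitely many extreme points: score each decision by α times its upper expectation plus (1 − α) times its lower expectation, and pick a maximiser. The toy sketch below illustrates only the criterion; it does not reproduce the paper's algorithms, which work with lower previsions via linear programming.

```python
def hurwicz_choice(decisions, credal_extremes, alpha):
    """Pick the decision maximising the Hurwicz value
    alpha * upper_expectation + (1 - alpha) * lower_expectation,
    where the expectations range over the extreme points of a credal set.

    decisions: dict mapping decision name -> list of utilities per outcome,
    credal_extremes: list of probability mass functions (lists),
    alpha: optimism index in [0, 1].
    """
    def value(utilities):
        exps = [sum(p * u for p, u in zip(pmf, utilities))
                for pmf in credal_extremes]
        return alpha * max(exps) + (1 - alpha) * min(exps)

    return max(decisions, key=lambda d: value(decisions[d]))

# Two outcomes; the credal set is the probability interval [0.2, 0.6]
# on the first outcome, represented by its two extreme points.
credal = [[0.2, 0.8], [0.6, 0.4]]
acts = {"risky": [10.0, 0.0], "safe": [4.0, 4.0]}
print(hurwicz_choice(acts, credal, alpha=0.9))  # optimistic -> risky
print(hurwicz_choice(acts, credal, alpha=0.1))  # pessimistic -> safe
```

Setting α = 0 recovers Γ-maximin and α = 1 recovers Γ-maximax, which is the pessimism/optimism balance the abstract mentions.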
Basic Probability Assignments Representable via Belief Intervals for Singletons in Dempster-Shafer Theory
Dempster-Shafer Theory (DST), or evidence theory, has been commonly employed in the literature to deal with uncertainty-based information. The basis of this theory is the concept of a basic probability assignment (BPA). The belief intervals for singletons obtained from a BPA have recently received considerable attention for quantifying uncertainty in DST. Indeed, they are easier to manage than the corresponding BPA when representing uncertainty-based information. Nonetheless, the set of probability distributions consistent with a BPA is smaller than the one compatible with the corresponding belief intervals for singletons. In this research, we give a new characterization of BPAs representable by belief intervals for singletons. Such a characterization might be easier to check than the one provided in previous works. In practical applications, this result allows one to determine efficiently when uncertainty can be represented via belief intervals for singletons rather than via the associated BPA, without loss of information.
https://proceedings.mlr.press/v147/moral-garci-a21b.html
Using Credal C4.5 for Calibrated Label Ranking in Multi-Label Classification
The Multi-Label Classification (MLC) task aims to predict the set of labels that correspond to an instance. It differs from traditional classification, which assumes that each instance is associated with a single value of a class variable. Within MLC, the Calibrated Label Ranking algorithm (CLR) considers a binary classification problem for each pair of labels to determine a label ranking for a given instance, exploiting in this way correlations between pairs of labels. Moreover, CLR mitigates the class imbalance problem that frequently appears in MLC, motivated by the fact that, in MLC, there are usually very few instances associated with a given label. For solving the binary classification problems, a traditional classification algorithm is needed. The C4.5 algorithm, based on decision trees, has been widely employed in this domain. In this work, we show that the Credal C4.5 method, a recently proposed version of C4.5 that uses imprecise probabilities, is more suitable than C4.5 for solving the binary classification problems in CLR. An exhaustive experimental analysis carried out in this research shows that Credal C4.5 performs better than C4.5 when both algorithms are employed in CLR, with the improvement being more notable as there is more noise in the labels.
https://proceedings.mlr.press/v147/moral-garci-a21a.html
On the Comonotone Natural Extension of Marginal p-Boxes
The relationship between several random variables is captured by their joint distribution. While this distribution can be easily determined from the marginals when an assumption of independence is satisfied, there are situations where the random variables are connected by some dependence structure. One such structure that arises often in practice is comonotonicity. This type of dependence refers to random variables that increase or decrease simultaneously. This paper studies the property of comonotonicity when the uncertainty about the random variables is modelled using p-boxes and the induced coherent lower probabilities. In particular, we analyse the problem of finding a comonotone lower probability with given marginal p-boxes, focusing on the existence, construction and uniqueness of such a model. We also prove that, under some conditions, there is a most conservative comonotone lower probability with the given marginal p-boxes, which we call the comonotone natural extension.
https://proceedings.mlr.press/v147/montes21a.html
Towards Improving Electoral Forecasting by Including Undecided Voters and Interval-valued Prior Knowledge
Increasing numbers of undecided voters constitute a severe challenge for conventional pre-election polls in multi-party systems. While these polls only give still-pondering individuals the options of either stating a precise party or dropping out, we suggest capturing their valuable information in a set-valued way. The resulting consideration set, listing all the options the individual is still pondering between, can be interpreted under epistemic imprecision. In this paper we extend the already existing approaches that include this valuable information, by making first steps towards utilizing interval-valued prior information. Including background information is common in election forecasting; we focus on realistically obtainable and credible interval-valued prior information about transition probabilities from the undecided to the eventual choice. We introduce two approaches utilizing this interval-valued information, weighing the credibility against the precision of the results. In the first approach, we narrow the most cautious and widest bounds, the so-called Dempster bounds, by deploying the prior information on the transition probabilities as new worst- and best-case scenarios for each party. The second approach applies if these interval-valued results are still too wide for useful application. We then narrow them towards a good guess of the eventual choice, estimated by a further model-based source of information making use of the covariates. These single-valued estimates on the individual level are regarded as realizations of an underlying probability distribution, which we combine with the prior knowledge in a Bayesian way. The approach can thus be seen as an attempt to combine two sources of information, each by itself inadequate for the desired outcome, to obtain more concise results.
We conduct a simulation study showing the applicability and virtues of the new approaches and compare them to conventional ones.
https://proceedings.mlr.press/v147/kreiss21a.html
Information Algebras of Coherent Sets of Gambles in General Possibility Spaces
In this paper, we show that coherent sets of gambles can be embedded into the algebraic structure of an <em>information algebra</em>. This leads, firstly, to a new perspective on the algebraic and logical structure of desirability and, secondly, connects desirability, and hence imprecise probabilities, to other formalisms in computer science sharing the same underlying structure. Both the <em>domain-free</em> and the <em>labeled</em> view of the information algebra of coherent sets of gambles are presented, considering general possibility spaces.
https://proceedings.mlr.press/v147/kohlas21a.html
A Recursive Formulation of Possibilistic Filters
We derive a recursive formulation of possibilistic filters that allows inference on the states of non-linear time-discrete dynamical systems in the presence of both aleatory and epistemic uncertainty with an imprecise probabilistic interpretation, and we present a particle-based implementation thereof.
https://proceedings.mlr.press/v147/hose21a.html
Dependent Possibilistic Arithmetic using Copulas
We describe two functions on possibility distributions which allow one to compute binary operations with dependence either specified by a copula or partially defined by an imprecise copula. We use the fact that possibility distributions are consonant belief functions to aggregate two possibility distributions into a bivariate belief function using a version of Sklar's theorem for minitive belief functions, i.e. necessity measures. The results generalise previously published independent and Fréchet methods, allowing any stochastic dependence to be specified in the form of an (imprecise) copula. This new method produces tighter extensions than previous methods when a precise copula is used. These latest additions to possibilistic arithmetic give it the same capabilities as p-box arithmetic, and provide a basis for a p-box/possibility hybrid arithmetic. This combined arithmetic provides tighter bounds on the exact upper and lower probabilities than either method alone for the propagation of general belief functions.
https://proceedings.mlr.press/v147/gray21a.html
Total Evidence and Learning with Imprecise Probabilities
In dynamic learning, a rational agent must revise their credence about a question of interest in accordance with the total evidence available between the earlier and later times. We discuss situations in which an observable event $F$ that is sufficient for the total evidence can be identified, yet its probabilistic modeling cannot be performed in a precise manner. The agent may employ imprecise probability (IP) models of reasoning to account for the identified sufficient event, and perform changes of credence or sequential decisions accordingly. Our proposal is illustrated with three case studies: the classic Monty Hall problem, statistical inference with non-ignorable missing data, and the use of forward induction in a two-person sequential game.
https://proceedings.mlr.press/v147/gong21a.html
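The Monty Hall case study turns on the host's unmodelled protocol. In the classic imprecise-probability treatment of the puzzle (an illustration of that well-known analysis, not the authors' own case study), let q be the probability that the host opens door 3 when the car is behind the contestant's chosen door 1; Bayes' rule then gives P(switch wins | door 3 opened) = 1/(1+q), and letting q range over [0, 1] yields a posterior interval rather than the single precise answer 2/3.

```python
from fractions import Fraction

def switch_win_posterior(q):
    """Posterior probability that switching wins in Monty Hall,
    given the host's (possibly unknown) bias q = P(host opens door 3
    when the car is behind the contestant's door and both doors
    2 and 3 are available).

    By Bayes' rule: P(switch wins | door 3 opened) = 1 / (1 + q).
    """
    return Fraction(1, 1) / (1 + q)

# Precise modelling assumes q = 1/2 and yields the familiar 2/3;
# treating q as unknown over [0, 1] yields the interval [1/2, 1].
qs = [Fraction(k, 10) for k in range(11)]
lower = min(switch_win_posterior(q) for q in qs)
upper = max(switch_win_posterior(q) for q in qs)
print(lower, upper)  # lower = 1/2, upper = 1
```

Since the whole interval lies at or above 1/2, switching is never worse than staying regardless of the host's protocol, which is the kind of robust conclusion IP models deliver.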
Towards a Theory of Confidence in Market-Based Predictions
Prediction markets are a way to yield probabilistic predictions about future events, theoretically incorporating all available information. In this paper, we focus on the <em>confidence</em> that we should place in the prediction of a market. When should we believe that the market probability meaningfully reflects underlying uncertainty, and when should we not? We discuss two notions of confidence. The first is based on the expected profit that a trader could make from correcting the market if it were wrong, and the second is based on expected market volatility in the future. Our paper is a stepping stone to future work in this area, and we conclude by discussing some key challenges.
https://proceedings.mlr.press/v147/freeman21a.html
An Imprecise Bayesian Approach to Thermal Runaway Probability
In this pioneering work, an assessment of thermal runaway probability based on simplified <em>chemical kinetics</em> has been performed with imprecise Bayesian methods relying on several priors. The physical phenomenon is governed by two chemical kinetic parameters $A$ and $E_a$. We suppose that their values are considerably uncertain but also that we know the experimental profiles of a chemical species corresponding to their true values, thereby allowing us to compute likelihoods and posteriors corresponding to different levels of information. We are interested in the critical delay time $t_c$ beyond which an explosion will certainly occur. The use of several priors allows us to see when the data truly dominate the prior with respect to the probability distribution of $t_c$. It does not appear possible to do so in an orthodox precise Bayesian framework that reduces all forms of uncertainty to a single probability distribution.
https://proceedings.mlr.press/v147/fischer21a.html
Extending the Domain of Imprecise Jump Processes from Simple Variables to Measurable Ones
We extend the domain of imprecise jump processes, also known as imprecise continuous-time Markov chains, from inferences that depend on a finite number of time points to inferences that can depend on the state of the system at <em>all</em> time points. We also investigate the continuity properties of the resulting lower and upper expectations with respect to pointwise convergent sequences that are monotone or dominated. For two particular inferences, integrals over time and the number of jumps to a subset of states, we strengthen these continuity properties and present an iterative scheme to approximate their lower and upper expectations.
https://proceedings.mlr.press/v147/erreygers21a.html
Probabilistic Risk Management in Project Portfolios
We discuss a novel method for business risk handling in project portfolios under strong uncertainty, in which we utilise event trees that include adverse consequences together with mitigation costs and expected effects, and in which consequence and event probabilities and costs are represented by random parameters. The method has been developed to support large-scale real-life applications of portfolio risk management, where the properties of the probabilities and values are entered by domain experts with often very limited knowledge of probability theory, and we demonstrate how this can be accomplished with minor information loss. The method is currently in use at one of the world's largest telecom equipment manufacturers, which has a vast project portfolio of tenders, with each successful tender subsequently becoming an order in the order-book portfolio.
https://proceedings.mlr.press/v147/ekenberg21a.html
Credal Sets of Coherent Conditional Probabilities Defined by Hausdorff Measures
Credal sets containing coherent conditional probabilities, defined by Hausdorff measures on the Borel sigma-field of metric spaces with bi-Lipschitz equivalent metrics, are proven to represent merging opinions with increasing information.
https://proceedings.mlr.press/v147/doria21a.html
Processing Multiple Distortion Models: a Comparative Study
When dealing with uncertain information, distortion or neighbourhood models are convenient practical tools, as they rely on very few parameters. In this paper, we study their behaviour when such models are combined and processed. More specifically, we study their behaviour when merging different distortion models quantifying uncertainty on the same quantity, and when manipulating distortion models defined over multiple variables.
https://proceedings.mlr.press/v147/destercke21a.html
Average Behaviour of Imprecise Markov Chains: A Single Pointwise Ergodic Theorem for Six Different Models
We study the average behaviour of imprecise Markov chains: a generalised type of Markov chain where local probabilities are partially specified, and where structural assumptions such as Markovianity are weakened. In particular, we prove a pointwise ergodic theorem that provides (strictly) almost sure bounds on the long-term average of any real function of the state of such an imprecise Markov chain. Compared to an earlier ergodic theorem by De Cooman et al. (2006), our result requires weaker conditions, provides tighter bounds, and applies to six different types of models.
https://proceedings.mlr.press/v147/de-bock21a.html
Constructing Consonant Predictive Beliefs from Data with Scenario Theory
A method for constructing consonant predictive beliefs for multivariate datasets is presented. We make use of recent results in scenario theory to construct a family of enclosing sets, each associated with a predictive lower probability of new data falling in the given set. We show that the sequence of lower bounds, indexed by the enclosing sets, yields a consonant belief function. The presented method does not rely on the construction of a likelihood function, so possibility distributions can be obtained without the need for normalization. We present a practical example in two dimensions, for the sake of visualization, to demonstrate the practical procedure of obtaining the sequence of nested sets.
https://proceedings.mlr.press/v147/de-angelis21a.html
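To convey the structure only (our own crude sketch, not the paper's method): nested enclosing sets can be built by repeatedly discarding the sample point farthest from a centre, and attaching to each set a lower probability that decreases as the set shrinks. Here the naive empirical fraction stands in for the actual scenario-theory predictive bound, and one dimension stands in for the paper's two-dimensional example.

```python
# Sketch (assumed stand-ins throughout): nested enclosing intervals from a
# 1-D sample, each tagged with a decreasing placeholder lower probability.
# The nestedness of the sets is what makes the resulting belief consonant.
import random
import statistics

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(200)]
center = statistics.median(data)

# Order points by distance from the centre; keeping the k closest points
# gives intervals that are nested by construction as k decreases.
remaining = sorted(data, key=lambda x: abs(x - center))
sets, lower_probs = [], []
n = len(data)
for k in range(n, 0, -20):  # only every 20th set, for brevity
    kept = remaining[:k]
    sets.append((min(kept), max(kept)))
    lower_probs.append(k / n)  # placeholder for the scenario-theory bound

# Consonance ingredients: nested sets, monotone lower probabilities.
for (a1, b1), (a2, b2) in zip(sets, sets[1:]):
    assert a1 <= a2 and b2 <= b1
assert lower_probs == sorted(lower_probs, reverse=True)
```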
When Belief Functions and Lower Probabilities are Indistinguishable
https://proceedings.mlr.press/v147/corsi21a.html
This paper reports on a geometrical investigation of de Finetti’s Dutch Book method as an operational foundation for a wide range of generalisations of probability measures, including lower probabilities, necessity measures and belief functions. Our main result identifies a number of non-limiting circumstances under which de Finetti’s coherence fails to lift from less to more general models. In particular, our result shows that there exist sufficiently rich sets of events on which the coherence criteria for belief functions and lower probabilities collapse.
Randomness and Imprecision: A Discussion of Recent Results
https://proceedings.mlr.press/v147/cooman21a.html
We discuss our recent work on incorporating imprecision in the field of algorithmic randomness, based on the martingale-theoretic approach of game-theoretic probability. We consider several notions of randomness associated with interval, rather than precise, forecasting systems. We study their properties and argue that there are quite a number of reasons for wanting to do so. First, the richer mathematical structure in this generalisation provides a useful backdrop for a better understanding of precise randomness. Second, randomness associated with non-stationary precise forecasting systems can be captured by a constant but less precise interval forecast: greater model simplicity requires more imprecision. Third, imprecise randomness cannot always be explained away as a result of (over)simplification: there are sequences that are random for a constant interval forecast, but never random for any computable (more) precise forecasting system. Incorporating imprecision into randomness therefore allows us to do more than was hitherto possible. Finally, the random sequences for a non-vacuous interval forecast constitute a meagre set, as they do for precise forecasts: imprecise and precise random sequences are equally rare from a topological point of view, and are, in that sense, equally interesting.
Valid Inferential Models for Prediction in Supervised Learning Problems
https://proceedings.mlr.press/v147/cella21a.html
Prediction, where observed data is used to quantify uncertainty about a future observation, is a fundamental problem in statistics. Prediction sets with coverage probability guarantees are a common solution, but these do not provide probabilistic uncertainty quantification in the sense of assigning beliefs to relevant assertions about the future observable. Alternatively, we recommend the use of a <em>probabilistic predictor</em>, a fully specified (imprecise) probability distribution for the to-be-predicted observation given the observed data. It is essential that the probabilistic predictor be reliable or valid in some sense, and here we offer a notion of validity and explore its implications. We also provide a general inferential model construction that yields a provably valid probabilistic predictor, with illustrations in regression and classification.
Nonlinear Desirability as a Linear Classification Problem
https://proceedings.mlr.press/v147/casanova21a.html
The present paper proposes a generalization of the linearity axioms of coherence through a geometrical approach, which leads to an alternative interpretation of desirability as a <em>classification problem</em>. In particular, we analyze different sets of rationality axioms and, for each of them, we show that proving that a subject who provides finitely many accept and reject statements respects these axioms corresponds to solving a binary classification task using, each time, a different (usually nonlinear) family of classifiers. Moreover, by borrowing ideas from machine learning, we show that it is possible to define a <em>feature mapping</em> allowing us to reformulate the above nonlinear classification problems as linear ones in a higher-dimensional space. This allows us to interpret gambles directly as payoff vectors of <em>monetary lotteries</em>, as well as to reduce the task of proving the rationality of a subject to a linear classification task.
Distributionally Robust, Skeptical Binary Inferences in Multi-label Problems
https://proceedings.mlr.press/v147/carranza-alarcon21a.html
In this paper, we consider the problem of making distributionally robust, skeptical inferences for the multi-label problem, or more generally for Boolean vectors. By distributionally robust, we mean that we consider sets of probability distributions, and by skeptical we understand that we consider as valid only those inferences that are true for every distribution within this set. Such inferences will provide partial predictions whenever the considered set is sufficiently large. We study in particular the Hamming loss case, a common loss function in multi-label problems, showing how skeptical inferences can be made in this setting. We also perform some experiments demonstrating the usefulness of our results.
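A minimal sketch of the kind of partial prediction such skeptical inferences produce (an assumed simplified setting, not the paper's full procedure): given per-label lower and upper bounds on the marginal probability of relevance, a label is predicted relevant only when every distribution in the set agrees, irrelevant only when every distribution agrees on that, and left undetermined otherwise.

```python
# Sketch (assumed setting): skeptical label-wise prediction under Hamming
# loss from per-label marginal probability intervals. A label is decided
# only when its whole interval lies on one side of 1/2; otherwise the
# prediction abstains, yielding a partial Boolean vector.

def skeptical_hamming(intervals):
    """intervals: list of (lower, upper) bounds on P(label = 1).
    Returns 1, 0, or None (abstain) per label."""
    preds = []
    for lo, hi in intervals:
        if lo > 0.5:
            preds.append(1)     # relevant under every distribution in the set
        elif hi < 0.5:
            preds.append(0)     # irrelevant under every distribution
        else:
            preds.append(None)  # distributions disagree: partial prediction
    return preds

print(skeptical_hamming([(0.7, 0.9), (0.1, 0.3), (0.4, 0.6)]))
# -> [1, 0, None]
```

The wider the probability intervals (i.e., the larger the distribution set), the more labels remain undetermined, matching the abstract's remark that a sufficiently large set yields partial predictions.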
ISIPTA 2021: Preface
https://proceedings.mlr.press/v147/cano21a.html
The ISIPTA meetings are the primary international forum for presenting and discussing advances in the theory and applications of imprecise probabilities. They are organized once every two years by SIPTA, <em>The Society for Imprecise Probabilities: Theories and Applications</em>. The first edition took place in Ghent in 1999. This time the <em>12th International Symposium on Imprecise Probabilities: Theories and Applications</em> was planned to take place in Granada, Spain, from 6 to 9 July 2021, but will now be an essentially virtual event due to the Covid-19 crisis. Even so, we will uphold the ISIPTA tradition of combining a friendly and cooperative style with a strong emphasis on in-depth discussion and interaction.
Probability Filters as a Model of Belief; Comparisons to the Framework of Desirable Gambles
https://proceedings.mlr.press/v147/campbell-moore21a.html
We propose a model of uncertain belief. It represents coherent beliefs by a filter $F$ on the set of probabilities: that is, a collection of sets of probabilities which is closed under supersets and finite intersections. This can naturally capture your probabilistic judgements. When you think that it is more likely to be sunny than rainy, we have $\{ p \mid p(\textsc{Sunny}) > p(\textsc{Rainy})\} \in F$. When you think that a gamble $g$ is desirable, we have $\{ p \mid \mathrm{Exp}_p[g] > 0 \} \in F$. The model naturally extends that of credal sets, and we will show that it captures all the expressive power of the desirable gambles model. It also captures the expressive power of sets of desirable gamble sets (with a mixing axiom, but no Archimedean axiom).
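The two example judgements from the abstract can be made concrete on a discretised two-outcome space (our own toy sketch; the discretisation and the principal-filter shortcut are illustration devices, not part of the model):

```python
# Sketch (assumed discretisation): a filter of probability sets over
# {Sunny, Rainy}, generated by two judgements. On a finite space the
# generated filter is principal: a set belongs to it iff it contains the
# intersection of the generating judgement sets.
GRID = [i / 1000 for i in range(1001)]  # p = P(Sunny); P(Rainy) = 1 - p

judgement1 = {p for p in GRID if p > 1 - p}            # Sunny more likely
judgement2 = {p for p in GRID if 1 * p - 2 * (1 - p) > 0}  # E_p[g] > 0
# for the gamble g paying +1 on Sunny and -2 on Rainy

core = judgement1 & judgement2  # finite intersections stay in the filter

def in_filter(event):
    return core <= event  # closure under supersets

assert in_filter({p for p in GRID if p > 0.6})    # superset of the core
assert not in_filter({p for p in GRID if p < 0.5})
assert core  # the two judgements are jointly consistent (proper filter)
```

Here the second judgement is strictly stronger than the first (it pins $p > 2/3$ rather than $p > 1/2$), so the core is the smaller set; on infinite spaces, filters need not be principal, which is where the model goes beyond credal sets.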
CREPO: An Open Repository to Benchmark Credal Network Algorithms
https://proceedings.mlr.press/v147/cabanas21a.html
Credal networks are a popular class of imprecise probabilistic graphical models, obtained as a generalization of Bayesian networks based on so-called <em>credal</em> sets of probability mass functions. A Java library called CREMA has recently been released to model, process and query credal networks. Despite the NP-hardness of the (exact) task, a number of algorithms are available to approximate credal network inferences. In this paper we present CREPO, an open repository of synthetic credal networks, provided together with the exact results of inference tasks on these models. A Python tool is also delivered to load these data and interact with CREMA, thus making it extremely easy to evaluate and compare existing and novel inference algorithms. To demonstrate this benchmarking scheme, we propose an approximate heuristic to be used inside variable elimination schemes to keep a bound on the maximum number of vertices generated during the combination step. A CREPO-based validation against approximate procedures based on linearization, and against exact techniques performed in CREMA, is finally discussed.
Generalized Hartley Measures on Credal Sets
https://proceedings.mlr.press/v147/bronevich21a.html
The paper considers various extensions of the Hartley measure to credal sets and investigates them on the basis of a system of axioms.
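For context (standard background, not taken from the paper): the classical Hartley measure quantifies the nonspecificity of a finite set, and its well-known generalization to belief functions, which credal-set extensions build on, averages it over the focal sets:

```latex
% Hartley measure of a nonempty finite set A:
H(A) = \log_2 |A|
% Generalized Hartley measure of a belief function with mass function m
% on a finite space X:
GH(m) = \sum_{\emptyset \neq A \subseteq X} m(A)\, \log_2 |A|
```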
Epistemic Argumentation with Conditional Probabilities and Labeling Constraints
https://proceedings.mlr.press/v147/bona21a.html
We extend epistemic graphs, a powerful representation language employed in argumentation theory, first by allowing conditional probabilities in that language. We also offer a new way of interpreting a graph as a set of restrictions based on a selected semantics for abstract argumentation frameworks. The resulting semantics for epistemic graphs are given by credal sets, which we characterize through inequalities. We illustrate the main issues in our proposals with arguments related to climate change.
Quantum Indistinguishability through Exchangeable Desirable Gambles
https://proceedings.mlr.press/v147/benavoli21a.html
Two particles are identical if all their intrinsic properties, such as spin and charge, are the same, meaning that no quantum experiment can distinguish them. In addition to the well-known principles of quantum mechanics, understanding systems of identical particles requires a new postulate, the so-called <em>symmetrization postulate</em>. In this work, we show that this postulate corresponds to exchangeability assessments for sets of observables (gambles) in a quantum experiment, when quantum mechanics is viewed as a normative and algorithmic theory guiding an agent in assessing her subjective beliefs, represented as (coherent) sets of gambles. Finally, we show how sets of exchangeable observables (gambles) may be updated after a measurement, and discuss the issue of defining entanglement for systems of indistinguishable particles.
Logical Approximations of Qualitative Probability
https://proceedings.mlr.press/v147/baldi21a.html
We provide approximations of qualitative probability, that is, of comparative structures that are representable by probability measures. We introduce sequences of qualitative belief structures, based on the ideas of Depth-Bounded logics (D’Agostino et al., 2013b), and identify the conditions under which: <ul> <li>a qualitative sequence <em>approximates</em> a qualitative probability;</li> <li>a qualitative probability can be approximated.</li> </ul>
https://proceedings.mlr.press/v147/baldi21a.htmlAn Info-gap Framework for Comparing Epistemic Uncertainty Models in Hybrid Structural Reliability AnalysisThe main objective of this work is to study the effect of the choice of the input uncertainty model on robustness evaluations of probabilities of failure. Aleatory and epistemic uncertainty are jointly propagated by considering hybrid models and applying random set theory. The notion of horizon of uncertainty found in the info-gap theory, which is usually used to assess the robustness of a model to uncertainty, allows the bounds on the failure probability obtained from different epistemic uncertainty models to be compared at increasing levels of uncertainty. Info-gap robustness and opportuneness curves are obtained and compared considering the interval model, triangular and trapezoidal possibility distributions, the probabilistic uniform distribution and the paralellepiped convex model on two toy cases. A specific demand value, as introduced in the info-gap theory, is used as a value of information metric to quantify the gain of information on the probability of failure between a less informative uncertainty model and a more informative one.Wed, 18 Aug 2021 00:00:00 +0000
https://proceedings.mlr.press/v147/ajenjo21a.html
https://proceedings.mlr.press/v147/ajenjo21a.html