Ali Fallah Tehrani, W. Cheng, E. Hüllermeier
Preference Learning using the Choquet Integral: The Case of Multipartite Ranking
IEEE Transactions on Fuzzy Systems, vol. 20, no. 6, pp. 1102-1113
We propose a novel method for preference learning or, more specifically, learning to rank, where the task is to learn a ranking model that takes a subset of alternatives as input and produces a ranking of these alternatives as output. Just like in the case of conventional classifier learning, training information is provided in the form of a set of labeled instances, with labels or, say, preference degrees taken from an ordered categorical scale. This setting is known as multipartite ranking in the literature. Our approach is based on the idea of using the (discrete) Choquet integral as an underlying model for representing ranking functions. Being an established aggregation function in fields such as multiple criteria decision making and information fusion, the Choquet integral offers a number of interesting properties that make it attractive from a machine learning perspective, too. The learning problem itself comes down to properly specifying the fuzzy measure on which the Choquet integral is defined. This problem is formalized as a margin maximization problem and solved by means of a cutting plane algorithm. The performance of our method is tested on a number of benchmark datasets.
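The ranking model described above rests on the discrete Choquet integral, which aggregates a feature vector with respect to a fuzzy measure. A minimal sketch follows; the two-feature measure `mu` is a hypothetical example, not taken from the paper:

```python
def choquet_integral(x, mu):
    """Discrete Choquet integral of vector x w.r.t. a fuzzy measure mu.

    mu maps frozensets of feature indices to capacities in [0, 1], with
    mu(empty set) = 0, mu(full set) = 1, and monotonicity in set inclusion.
    """
    pi = sorted(range(len(x)), key=lambda i: x[i])  # ascending feature values
    total, prev = 0.0, 0.0
    for k, i in enumerate(pi):
        A_k = frozenset(pi[k:])  # indices whose value is at least x[i]
        total += (x[i] - prev) * mu[A_k]
        prev = x[i]
    return total

# Hypothetical additive measure on two features (illustration only).
mu = {frozenset(): 0.0, frozenset({0}): 0.3,
      frozenset({1}): 0.7, frozenset({0, 1}): 1.0}
score = choquet_integral([0.5, 0.2], mu)
```

Because this particular measure is additive, the integral coincides with the weighted sum 0.3·0.5 + 0.7·0.2 = 0.29; non-additive measures let the integral model interactions between criteria, which is what the learning procedure in the paper exploits when fitting the measure.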
Ali Fallah Tehrani, W. Cheng, K. Dembczyński, E. Hüllermeier
Learning Monotone Nonlinear Models using the Choquet Integral
Machine Learning, vol. 89, no. 1, pp. 183-211
The learning of predictive models that guarantee monotonicity in the input variables has received increasing attention in machine learning in recent years. By trend, the difficulty of ensuring monotonicity increases with the flexibility or, say, nonlinearity of a model. In this paper, we advocate the so-called Choquet integral as a tool for learning monotone nonlinear models. While being widely used as a flexible aggregation operator in different fields, such as multiple criteria decision making, the Choquet integral is much less known in machine learning so far. Apart from combining monotonicity and flexibility in a mathematically sound and elegant manner, the Choquet integral has additional features making it attractive from a machine learning point of view. Notably, it offers measures for quantifying the importance of individual predictor variables and the interaction between groups of variables. Analyzing the Choquet integral from a classification perspective, we provide upper and lower bounds on its VC-dimension. Moreover, as a methodological contribution, we propose a generalization of logistic regression. The basic idea of our approach, referred to as choquistic regression, is to replace the linear function of predictor variables, which is commonly used in logistic regression to model the log odds of the positive class, by the Choquet integral. First experimental results are quite promising and suggest that the combination of monotonicity and flexibility offered by the Choquet integral facilitates strong performance in practical applications.
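The core idea of choquistic regression, replacing the linear predictor of logistic regression with a Choquet integral, can be sketched as follows. The measure values and the scale/shift parameters `gamma` and `beta` are hypothetical stand-ins; in the paper they are fitted by maximum likelihood:

```python
import math

def choquet_integral(x, mu):
    # Discrete Choquet integral; mu is a dict over frozensets of indices.
    pi = sorted(range(len(x)), key=lambda i: x[i])
    total, prev = 0.0, 0.0
    for k, i in enumerate(pi):
        total += (x[i] - prev) * mu[frozenset(pi[k:])]
        prev = x[i]
    return total

def choquistic_probability(x, mu, gamma, beta):
    # P(y = 1 | x) = sigmoid(gamma * (C_mu(x) - beta)):
    # the Choquet integral takes the place of the linear log-odds model.
    u = gamma * (choquet_integral(x, mu) - beta)
    return 1.0 / (1.0 + math.exp(-u))

# Hypothetical two-feature fuzzy measure (illustration only).
mu = {frozenset(): 0.0, frozenset({0}): 0.3,
      frozenset({1}): 0.7, frozenset({0, 1}): 1.0}
p_low = choquistic_probability([0.2, 0.2], mu, gamma=4.0, beta=0.5)
p_high = choquistic_probability([0.8, 0.8], mu, gamma=4.0, beta=0.5)
```

Monotonicity of the fuzzy measure carries over to the predicted probability: raising any feature value can never lower `P(y = 1 | x)`, which is exactly the combination of flexibility and monotonicity the abstract highlights.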
Contribution (edited volume or conference proceedings)
E. Hüllermeier, Ali Fallah Tehrani
Efficient Learning of Classifiers based on the 2-additive Choquet Integral
Computational Intelligence in Intelligent Data Analysis, Berlin; New York, vol. 445
In a recent work, we proposed a generalization of logistic regression based on the Choquet integral. Our approach, referred to as choquistic regression, makes it possible to capture non-linear dependencies and interactions among predictor variables while preserving two important properties of logistic regression, namely the comprehensibility of the model and the possibility to ensure its monotonicity in individual predictors. Unsurprisingly, these benefits come at the expense of an increased computational complexity of the underlying maximum likelihood estimation. In this paper, we propose two approaches for reducing this complexity in the specific though practically relevant case of the 2-additive Choquet integral. Apart from theoretical results, we also present an experimental study in which we compare the two variants with the original implementation of choquistic regression.
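A sketch of why the 2-additive case reduces complexity: through its Möbius transform, a 2-additive Choquet integral collapses to a weighted sum plus pairwise-minimum terms, so only n + n(n-1)/2 parameters remain instead of the 2^n - 2 capacities of a general fuzzy measure. The Möbius masses below are hypothetical illustration values:

```python
from itertools import combinations

def choquet_2additive(x, m_single, m_pair):
    """Choquet integral of x under a 2-additive fuzzy measure, expressed
    via its Moebius transform m:
        C(x) = sum_i m({i}) x_i + sum_{i<j} m({i,j}) min(x_i, x_j)
    """
    n = len(x)
    val = sum(m_single[i] * x[i] for i in range(n))
    val += sum(m_pair.get((i, j), 0.0) * min(x[i], x[j])
               for i, j in combinations(range(n), 2))
    return val

# Hypothetical Moebius masses (illustration only); they sum to 1,
# so the underlying measure is normalized.
m_single = [0.2, 0.5]
m_pair = {(0, 1): 0.3}
value = choquet_2additive([0.5, 0.2], m_single, m_pair)
```

Setting all pairwise masses to zero recovers a plain weighted sum, so the 2-additive family strictly contains linear models while still capturing pairwise interactions between predictors.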
M. Agarwal, Ali Fallah Tehrani, E. Hüllermeier
Preference-based Learning of Ideal Solutions in TOPSIS-like Decision Models
Journal of Multi-Criteria Decision Analysis, vol. 22, no. 3-4, pp. 175-183
Combining established modelling techniques from multiple-criteria decision aiding with recent algorithmic advances in the emerging field of preference learning, we propose a new method that can be seen as an adaptive version of TOPSIS, the Technique for Order Preference by Similarity to Ideal Solution (or at least of a simplified variant of this model). On the basis of exemplary preference information in the form of pairwise comparisons between alternatives, our method seeks to induce an ‘ideal solution’ that, in conjunction with a weight factor for each criterion, represents the preferences of the decision maker. To this end, we resort to probabilistic models of discrete choice and make use of maximum likelihood inference. First experimental results on suitable preference data suggest that our approach is not only intuitively appealing and interesting from an interpretation point of view but also competitive with state-of-the-art preference learning methods in terms of prediction accuracy.
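A rough sketch of the two ingredients this abstract combines: a simplified TOPSIS-style score (here, negative weighted distance to the ideal solution) and a discrete-choice likelihood over pairwise comparisons. Function names, the distance choice, and all numbers are illustrative assumptions, not the paper's exact model:

```python
import math

def topsis_score(a, ideal, weights):
    # Simplified TOPSIS-style utility: negative weighted Euclidean distance
    # of alternative a to the 'ideal solution' (a sketch of the idea only).
    return -math.sqrt(sum(w * (ai - zi) ** 2
                          for ai, zi, w in zip(a, ideal, weights)))

def pairwise_log_likelihood(pairs, ideal, weights):
    # Bradley-Terry-style log-likelihood of observed preferences a > b,
    # the kind of discrete-choice model used for maximum likelihood inference.
    ll = 0.0
    for a, b in pairs:
        diff = topsis_score(a, ideal, weights) - topsis_score(b, ideal, weights)
        ll += -math.log1p(math.exp(-diff))  # log sigmoid(diff)
    return ll

# Hypothetical ideal solution, weights, and alternatives (illustration only).
ideal, weights = [1.0, 1.0], [0.5, 0.5]
good, bad = [0.9, 0.9], [0.1, 0.1]
```

Learning would then amount to choosing `ideal` (and possibly `weights`) to maximize the log-likelihood of the observed pairwise comparisons, which is the adaptive element the abstract describes.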