Impact Statement:
The frequencies of occurrence of attribute values, called probabilities, and the variations in their values, called possibilities, are necessary to describe a pattern or a class. The occurrences of specific words (probabilities) in a document help determine its type (e.g., science or history) through the representation of their probabilistic uncertainty. The distribution of marks (possibilities) of students in a class helps determine the class performance (e.g., low or high) through the representation of its possibilistic certainty. As classical entropy functions give only the probabilistic uncertainty derived from probabilities, use is made of the information-theoretic Hanman–Anirban entropy function, which provides both the probabilistic uncertainty of probabilities and the possibilistic certainty of possibilities.
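A minimal sketch of how such an entropy function can act on probabilities and on normalized attribute values is given below. The cubic-exponential gain form with free parameters a, b, c, d is an assumption based on the common formulation of the Hanman–Anirban entropy; the exact parameterization used in the article may differ.

import numpy as np

def hanman_anirban_entropy(p, a=0.0, b=0.0, c=1.0, d=0.0):
    # Sum of p_i * exp(-(a*p_i**3 + b*p_i**2 + c*p_i + d)); the exponential
    # term acts as a gain (membership) on each value.
    p = np.asarray(p, dtype=float)
    gain = np.exp(-(a * p**3 + b * p**2 + c * p + d))
    return np.sum(p * gain)

# Probabilistic uncertainty: word-occurrence probabilities of a document.
word_probs = np.array([0.5, 0.3, 0.2])
print(hanman_anirban_entropy(word_probs))

# Possibilistic certainty: normalized marks of students in a class; each
# product of a mark with its gain is an information value, and their sum
# represents the certainty of the class performance.
marks = np.array([0.9, 0.75, 0.6, 0.8])
print(hanman_anirban_entropy(marks))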
Abstract:
This article gives the representation of both probabilistic uncertainty and possibilistic certainty, including Bayesian learning, in the framework of information set theory, which is an offshoot of the Hanman–Anirban entropy function. Being information theoretic and parametric, this function deals with both probability and possibility. If a set of information source (attribute) values is fitted with this entropy function, it gives rise to information values, and the sum of these values is the certainty. An adaptive form of this function yields the Hanman transform (HT), which gives the higher order certainty. An optimal entropy classifier is developed by learning the weight (support) vectors of all classes through minimizing this entropy of the error vectors between the training feature vectors and the weight vector. To this end, we have proposed a prudent learning model that favors competition with both the worst performer and the best performer based on the HT. The conversion of Renyi entropy ...
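As a rough illustration of the classifier described in the abstract, the sketch below learns one weight (support) vector per class by gradient descent on the entropy of the error vectors between the training feature vectors and the weight vector. The specific entropy form e * exp(-c * e), the gradient-descent schedule, and the helper names (error_entropy, learn_class_weights, classify) are illustrative assumptions; the article's prudent learning model with competition between the worst and best performers via the HT is not reproduced here.

import numpy as np

def error_entropy(w, X, c=1.0):
    # Entropy-style score of the error vectors e = |x - w|, summed over
    # all training vectors and components: sum of e * exp(-c * e).
    e = np.abs(X - w)
    return np.sum(e * np.exp(-c * e))

def learn_class_weights(X, c=1.0, lr=0.01, steps=200):
    # Learn one weight (support) vector for a class by gradient descent
    # on the error entropy, starting from the class mean.
    w = X.mean(axis=0).copy()
    for _ in range(steps):
        e = np.abs(X - w)
        # d/de [e * exp(-c*e)] = exp(-c*e) * (1 - c*e)
        grad_e = np.exp(-c * e) * (1.0 - c * e)
        # chain rule: de/dw = sign(w - x) componentwise
        grad_w = np.sum(grad_e * np.sign(w - X), axis=0)
        w -= lr * grad_w
    return w

def classify(x, class_weights, c=1.0):
    # Assign x to the class whose weight vector yields the lowest error entropy.
    scores = {k: error_entropy(w, x[None, :], c) for k, w in class_weights.items()}
    return min(scores, key=scores.get)

# Toy usage with two synthetic classes.
rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 0.1, size=(20, 3))
X1 = rng.normal(1.0, 0.1, size=(20, 3))
weights = {0: learn_class_weights(X0), 1: learn_class_weights(X1)}
print(classify(np.array([0.05, -0.02, 0.10]), weights))  # expected: 0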
Published in: IEEE Transactions on Artificial Intelligence (Volume: 3, Issue: 2, April 2022)