Characterizing the principle of minimum cross-entropy within a conditional-logical framework
The principle of minimum cross-entropy (ME-principle) is often used as an elegant and powerful tool for building complete probability distributions when only partial knowledge is available. Its inputs are a prior distribution P and some new information R, and it yields as its result the one distribution p that satisfies R and is closest to P in an information-theoretic sense. More generally, it provides a "best" solution to the problem "How should P be adjusted to R?" In this paper, we are concerned with the logical aspects of the ME-principle. We show in a direct and constructive manner that adjusting P to R by means of this principle follows a simple and intelligible conditional-logical pattern. The characterization of ME-adjustment within this framework rests on only four fundamental assumptions, the first of which is the principle of conditional preservation. This principle states that conditional probabilities in the prior distribution are preserved "as far as possible", and it provides a straightforward approach to the adaptation problem. As the second assumption, we introduce the idea of a functional concept that underlies the adjustment and is meant to eliminate any arbitrariness in finding solutions. The third and fourth postulates, which we expect the adaptation to comply with, are logical consistency and representation invariance; both decisively influence the functions involved in the functional concept. Finally, the ME-distribution arises as the only solution that satisfies all of these axioms. Thus a characterization of the ME-principle within a conditional-logical framework is achieved, and its implicit logical mechanisms are clearly revealed.
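To make the adjustment problem concrete, the following is a minimal sketch (not from the paper; all names are hypothetical) of ME-adjustment for a discrete prior P and a single linear constraint of the form sum_i a[i] * p[i] = c, a special case of the new information R. The ME solution is known to have the exponential form p_i ∝ P_i * exp(λ * a_i); since the constrained expectation is monotone in λ, the sketch finds λ by bisection:

```python
import math

def me_adjust(prior, a, c, tol=1e-12):
    """Adjust the discrete distribution `prior` to satisfy the linear
    constraint sum_i a[i] * p[i] = c while minimizing the cross-entropy
    (KL divergence) to `prior`.  The minimizer has the exponential form
    p_i proportional to prior[i] * exp(lam * a[i]); lam is found by
    bisection, exploiting that the tilted expectation of a is
    nondecreasing in lam."""
    def tilted(lam):
        # Exponentially tilt the prior and renormalize.
        w = [pi * math.exp(lam * ai) for pi, ai in zip(prior, a)]
        z = sum(w)
        p = [wi / z for wi in w]
        # Return the constrained expectation under p, and p itself.
        return sum(ai * pi for ai, pi in zip(a, p)), p

    lo, hi = -50.0, 50.0  # assumes c is attainable within this range
    while hi - lo > tol:
        mid = (lo + hi) / 2
        val, _ = tilted(mid)
        if val < c:
            lo = mid  # expectation too small: tilt more strongly
        else:
            hi = mid
    return tilted((lo + hi) / 2)[1]

# Example: two binary variables A, B; worlds ordered (AB, ~AB, A~B, ~A~B).
# Adjust a uniform prior to the new information P(B) = 0.8.
p = me_adjust([0.25, 0.25, 0.25, 0.25], [1, 1, 0, 0], 0.8)
```

In this example the prior conditional P(A|B) = 0.5 is untouched by the adjustment (p[0] = p[1]), illustrating the principle of conditional preservation that the abstract names as the first assumption.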
Terms of use and reproduction:
All rights reserved