Upon observing a signal, a Bayesian decision maker updates her probability distribution over the state space, chooses an action, and receives a payoff that depends on the state and the action taken. An information structure determines the set of possible signals and the probability of each signal given a state. For a fixed decision problem (consisting of a state space, an action set, and a utility function) the value of an information structure is the maximal expected utility that the decision maker can obtain when the observed signals are governed by that structure. This note studies the functions defined over information structures that measure their value. It turns out that two conditions play a major role in the characterization of these functions: additive separability and convexity.
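As a numerical illustration of this definition (not part of the note itself), the value of an information structure can be computed directly: for each signal, form the posterior over states, take the best action against it, and average the resulting expected utilities over signals. The sketch below assumes a finite state space, finite signal and action sets, and represents the structure as a matrix of conditional signal probabilities; all names are illustrative.

```python
def value_of_information(prior, signal_probs, utilities):
    """Maximal expected utility from best-responding to each signal.

    prior[w]            -- prior probability of state w
    signal_probs[w][s]  -- probability of signal s given state w
    utilities[a][w]     -- payoff of action a in state w
    """
    n_states = len(prior)
    n_signals = len(signal_probs[0])
    total = 0.0
    for s in range(n_signals):
        # Joint probabilities P(w, s); their sum is the signal's marginal P(s).
        joint = [prior[w] * signal_probs[w][s] for w in range(n_states)]
        # Expected utility of each action against the unnormalized posterior;
        # the normalization by P(s) cancels when averaging over signals.
        total += max(sum(joint[w] * u_a[w] for w in range(n_states))
                     for u_a in utilities)
    return total

# Matching-the-state problem: payoff 1 for guessing the state, 0 otherwise.
u = [[1, 0], [0, 1]]
full = value_of_information([0.5, 0.5], [[1, 0], [0, 1]], u)   # fully revealing
none = value_of_information([0.5, 0.5], [[0.5, 0.5], [0.5, 0.5]], u)  # uninformative
```

In this example the fully revealing structure attains value 1 and the uninformative one attains 0.5, the prior-only payoff, illustrating that more informative structures are (weakly) more valuable.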