The Bayesian program in statistics starts from the assumption that an individual can always ascribe a definite probability to any event. We demonstrate that this assumption is incompatible with the natural requirement that the individual's subjective probability distribution should be computable. We construct a probabilistic algorithm that, with probability extremely close to 1, produces an infinite binary sequence which is not random with respect to any computable probability distribution (we use Dawid's notion of randomness, computable calibration, but the results hold for other widely known notions of randomness as well). Since the Bayesian knows the algorithm, he must believe that this sequence will be non-calibrable; on the other hand, it seems that he must also believe that the sequence is random with respect to his own probability distribution. We hope that the discussion of this apparent paradox will clarify the foundations of Bayesian statistics. We also analyse the computation time and the place where randomness is "lost", and show that polynomial time and space suffice to demonstrate non-calibration effects on finite sequences.
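As an informal illustration of the non-calibration effects on finite sequences mentioned above, the following sketch diagonalizes against a single toy computable forecaster (the paper's construction works against all computable distributions at once; the forecaster `forecast` and the selection rule below are assumptions made for this example, not the paper's algorithm). Each bit of the sequence is chosen to contradict the current forecast, so on the rounds where the forecaster predicts "1 is unlikely" the outcome is always 1, violating Dawid's calibration condition on that selected subsequence.

```python
def forecast(prefix):
    # Toy computable forecaster (an assumption for this sketch):
    # Laplace's rule of succession applied to the bits seen so far.
    return (sum(prefix) + 1) / (len(prefix) + 2)

def adversarial_sequence(n):
    # Diagonalization: output 1 exactly when the forecaster thinks 1 is
    # unlikely (forecast < 1/2), and 0 otherwise.  Each step needs only
    # polynomial time in the prefix length.
    seq = []
    for _ in range(n):
        seq.append(1 if forecast(seq) < 0.5 else 0)
    return seq

def miscalibration(seq):
    # Dawid-style calibration check: select the rounds where the forecast
    # was below 1/2 and compare the observed frequency of 1s there with
    # the average forecast.  A calibrated forecaster would make this
    # difference small; by construction it exceeds 1/2 here.
    prefix, sel_bits, sel_fore = [], [], []
    for bit in seq:
        p = forecast(prefix)
        if p < 0.5:
            sel_bits.append(bit)
            sel_fore.append(p)
        prefix.append(bit)
    return sum(sel_bits) / len(sel_bits) - sum(sel_fore) / len(sel_fore)

seq = adversarial_sequence(1000)
print(miscalibration(seq))  # observed frequency 1.0 vs forecasts below 0.5
```

On the selected rounds every outcome is 1 while every forecast is below 1/2, so the calibration gap stays bounded away from 0 no matter how long the sequence grows.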