
Re: Paper and slides on indefiniteness of CH

Dear all,

The consistency issue has come up quite a bit and to me it is obviously a central one. I feel compelled to respond to Sol’s answer to Harvey’s question on this.

Consider the following problem:

Problem: Is ZFC+PD consistent?

By Sol’s criterion this is certainly a definite logical problem. But is it a mathematical problem? I claim Sol has no choice but to argue that it is a mathematical problem and moreover one of the greatest problems of modern mathematics (even if it is not recognized as such by the mathematical community).

Why? By Sol’s conception of mathematical truth (mathematical universe?), he must take the position that

  1. It is possible that ZFC+PD is inconsistent.
  2. No finite evidence can diminish the likelihood that ZFC+PD is inconsistent.

Therefore the only conjecture that Sol can plausibly make is: ZFC+PD is inconsistent. So Sol’s claim “but Gödel’s theorem is discouraging as to ‘solutions’” is a bit mysterious.

What about impact? I think it is clear that an inconsistency in ZFC+PD would be widely regarded as the greatest theorem in the history of mathematics and would have tremendous intellectual impact. It would certainly generate considerable press.

Perhaps I am too close to PD, but replace ZFC+PD by ZFC in this discussion. It does not really change anything except the impact factor increases.


Re: Paper and slides on indefiniteness of CH

Dear Sy,

Ok one more round. This is a short one since you did not raise many new questions etc. in your last response.

On Aug 7, 2014, at 9:32 AM, Sy David Friedman wrote:

Unless there is something fundamentally different about LC, which there is.

My point here has nothing to do with large cardinals. I am just saying that the tests analogous to those used to argue in favour of PD (success and structure theory) are inadequate in the number theory context. Doesn’t that cast doubt on the use of those tests to justify PD?

Absolutely not, given the special nature of LC.

Aside: Suppose an Oracle informs us that RH is equivalent to Con PD. Then I would say RH is true (and it seems you would agree here). But suppose that the Oracle informs us that RH is equivalent to Con Reinhardt cardinal. Then I actually would conjecture that RH is false. But by your criteria of evidence you would seem to argue RH is true.

I guess you mean Con(Reinhardt without AC).

Of course, I thought that was clear.

Why would you conjecture in this setting that RH is false?

Because I think “Reinhardt without AC” is inconsistent. The Oracle could be malicious after all.

(Aside: I actually think that  “ZF + Reinhardt + extendible” is inconsistent. The situation for “ZF + Reinhardt” is a bit less clear to me at this stage. But this distinction is not really relevant to this discussion, e.g. everything in these exchanges could have been in the context of super-Reinhardt).

I thought that you had evidence of statements of consistency strength below a Reinhardt cardinal but above that of large cardinals with AC?

I am not sure what you are referring to here. The hierarchy of axioms past I0 that I have discussed in the JML papers are all AC based.

With such evidence I would indeed conjecture that RH is true; wouldn’t you?

This seems an odd position. Suppose that the Oracle matched 100 number-theoretic (\Pi^0_1) sentences with the consistency of variations of the notion of Reinhardt cardinals. Does this increase one’s confidence in these statements?

Again, I don’t think we need to justify the consistency of large cardinals, the “empirical proof theory” takes care of that.

Yes, theoretically the whole edifice of large cardinal consistency could collapse, even at a measurable, we simply have to live with that, but I am not really worried. There is just too much evidence for a strict hierarchy of consistency strengths going all the way up to the level of supercompactness, using quasi-lower bounds instead of core model lower bounds. This reminds me of outdated discussions of how to justify the consistency of second-order arithmetic through ordinal analysis. The ordinal analysis is important, but no longer necessary for the justification of consistency.

However one conceives of truth in set theory, one must have answers to:

1) Is PD true?

I don’t know.

2) Is PD consistent?


You have examples of how HP could lead to answering the first question.  But no examples of how HP could ever answer the second question.  Establishing Con LC for levels past PD looks even more problematic.

It is not my intention to try to use the HP to justify the already-justified consistency of large cardinals.

There is strong meta-mathematical evidence that the only way to ultimately answer (2) with “yes” is to answer (1) with “yes”.  This takes us back to my basic confusion about the basis for your conviction in Con PD.

Note that the IMH yields inner models with measurables but does not imply \Pi^1_1 determinacy. This is a “local” counterexample to your suggestion that to get Con(Definable determinacy) we need to get Definable determinacy.

But I have not suggested that to get Con(Definable determinacy) one needs to get Definable determinacy. I have suggested that to get Con PD one needs to get PD. (For me, PD is boldface PD; perhaps you have interpreted PD as lightface PD.)

The local/global issue is not present at the level you indicate. It only occurs past the level of 1 Woodin cardinal; I have said this repeatedly.

Why? If  0^\# exists then it is unique. M_1^\# (the analog of 0^\#) at the next projective level has a far more subtle uniqueness.

(For those unfamiliar with the notation: M_1 is the “minimum” fine-structural inner model with 1 Woodin cardinal and the notion of minimality makes perfect sense for iterable models through elementary embeddings).

The iterable M_1^\# is unique, but the existence of the iterable M_1^\# implies that all sets have sharps. In fact, in the context where all sets have sharps, the existence of M_1^\# is equivalent to the existence of a proper class inner model with a Woodin cardinal.
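(A schematic rendering of the equivalence just stated; the notation and formatting here are mine, following the aside above.)

```latex
% Schematically: assuming every set has a sharp, the existence of the
% iterable M_1^{\#} is equivalent to the existence of a proper class
% inner model with a Woodin cardinal.
\forall x\ \big(x^{\#}\ \text{exists}\big) \;\implies\;
\Big( M_1^{\#}\ \text{exists and is iterable}
  \;\iff\;
  \exists\, N\ \text{a proper class inner model with a Woodin cardinal} \Big)
```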

Without a background of sharps there are examples where there are no definable inner models past the level of 1 Woodin cardinal, no matter what inner models one assumes exist. The example is not contrived: it is L[x] for a Turing cone of x, and this example lies at the core of the consistency proof of the IMH.
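(For those unfamiliar with the IMH: here is a rough statement, paraphrasing Friedman’s formulation; the rendering below is mine.)

```latex
% Inner Model Hypothesis (IMH), roughly: if a first-order sentence holds
% in an inner model of some outer model of V, then it already holds in
% an inner model of V. Schematically, for a first-order sentence \varphi:
\text{IMH:}\quad
\big(\exists\, V^{*} \supseteq V\ \ \exists\, W \subseteq V^{*}\ \
  W \models \varphi\big)
\;\implies\;
\big(\exists\, W' \subseteq V\ \ W' \models \varphi\big)
% where V^* ranges over outer models of V (same ordinals) and
% W, W' range over inner models.
```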


Some final comments.

The inner model program for me has come down to one main conjecture (the Ultimate-L conjecture) and two secondary conjectures, the \Omega Conjecture and the HOD Conjecture. These are not vague conjectures, they are each precisely stated. None of these conjectures involves any concept of fine-structure or related issues.

The stage is also set for the possibility of an anti-inner model theorem.  A refutation of the \Omega Conjecture would in my view be such an anti-inner model theorem and there are other possibilities.

So the entire program as presently conceived is for me falsifiable.

If the Ultimate-L Conjecture is provable then I think this makes a far more compelling case for LC than anything coming out of HP for denying LC. I would (perhaps unwisely) go much further. If the Ultimate-L Conjecture is provable then there is an absolutely compelling case for CH and in fact for V = Ultimate L. (The precise formulation of V = Ultimate L is already specified; it is again not some vague axiom.)

How about this:  We each identify a critical conjecture  whose proof we think absolutely confirms our position and whose refutation we also admit sends us back to “square 1”. For me it is the Ultimate-L Conjecture.

HP is still in its infancy so this may not be a fair request. So maybe we have to wait on this. But you should at least be able to articulate why you think HP even has a chance.

Aside: IMH simply traces back to Turing determinacy, as will IMH^*. For each real x let M_x be the minimum model of ZFC containing x. The theory of M_x is constant on a cone, as is its second-order theory. Obviously this (Turing) stable theory will have a rich structure theory. But this is just one instance of many analogous stable theories (this is the power of PD and beyond) and HP is just borrowing this. It is also a theorem that Turing-PD is equivalent to PD.
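(To unpack the cone-stability being invoked here; the notation is mine, using the standard definitions.)

```latex
% The Turing cone above a real x:
\mathrm{Cone}(x) \;=\; \{\, y : x \leq_T y \,\}.
% The stability claim: there is a base real x_0 such that
\forall x, y \in \mathrm{Cone}(x_0):\quad
\mathrm{Th}(M_x) \;=\; \mathrm{Th}(M_y),
% i.e. the first-order theory of M_x (and likewise its second-order
% theory) is constant on a cone.
```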

But why should this have anything to do with V?

Here is a question: Why is the likely scenario not simply that HP ends up stair-stepping up to PD, and that the ultimate conclusion of the entire enterprise is simply yet another argument for PD?