Re: Paper and slides on indefiniteness of CH

Dear Peter,

On Sun, 26 Oct 2014, Koellner, Peter wrote:

Dear Sy,

I have one more comment on choiceless large cardinal axioms that concerns \textsf{IMH}^\#.

It is worth pointing out that Hugh’s consistency proof of \textsf{IMH}^\# shows a good deal more (as pointed out by Hugh):

Theorem: Assume that every real has a sharp. Then in the hyperuniverse there exists a real x_0 such that every #-generated M to which x_0 belongs satisfies \textsf{IMH}^\#, and in the following very strong sense:

(*) Every sentence \phi which holds in a definable inner model of some #-generated model N, holds in a definable inner model of M.

There is no requirement here that N be an outer model of M. In this sense, \textsf{IMH}^\# is not really about outer models. It is much more general.

It follows that not only is \textsf{IMH}^\# consistent with all (choice) large cardinal axioms (assuming, of course, that they are consistent) but also that \textsf{IMH}^\# is consistent with all choiceless large cardinal axioms (assuming, of course, that they are consistent).

The point is that \textsf{IMH}^\# is powerless to provide us with insight into where inconsistency sets in.

Before you protest let me clarify: I know that you have not claimed otherwise! You take the evidence for consistency of large cardinal axioms to be entirely extrinsic.

I protest for a different reason: The above argument is too special to \textsf{IMH}^\#. For example, consider Hugh’s variant, which he called \textsf{IMH}^\#(\text{card arith}). I don’t see how to argue as above with this stronger principle, which is known to be consistent using my proof with Radek based on #-generated Jensen coding.

My point is simply to observe to everyone that \textsf{IMH}^\# makes no predictions on this matter.

So what? How do you know that \textsf{IMH}^\#(\text{card arith}) makes no predictions on this matter?

And, more generally, I doubt that you think that the hyperuniverse program has the resources to make predictions on this question since you take evidence for consistency of large cardinal axioms to be extrinsic.

In contrast “V = Ultimate L” does make predictions on this question, in the following precise sense:

Theorem (Woodin). Assume the Ultimate L Conjecture. Then if there is an extendible cardinal then there are no Reinhardt cardinals.

Theorem (Woodin). Assume the Ultimate L Conjecture. Then there are no Super Reinhardt cardinals and there are no Berkeley cardinals.

Theorem (Woodin). Assume the Ultimate L Conjecture. Then if there is an extendible cardinal then there are no Reinhardt cardinals (or Super Reinhardt cardinals or Berkeley Cardinals, etc.)
(Here the Ultimate-L Conjecture is a conjectured theorem of ZFC.)
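For orientation (this gloss is mine, not part of the letter): the axioms in these theorems strengthen Reinhardt’s notion, which Kunen showed inconsistent with AC, so they are studied over ZF alone.

```latex
% A cardinal \kappa is Reinhardt if it is the critical point of a
% nontrivial elementary embedding of the universe into itself.
% Kunen's theorem (ZFC): no such j exists -- hence Reinhardt,
% Super Reinhardt, and Berkeley cardinals are "choiceless"
% large cardinal axioms, considered over ZF.
\kappa \text{ is Reinhardt} \iff \exists j \colon V \prec V,\ \ j \neq \mathrm{id},\ \ \mathrm{crit}(j) = \kappa
```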

Interesting. (Did you intend there to be a difference between the first and third theorems above?)

But probably there’s a proof of no Reinhardt cardinals in ZF, even without Ultimate L:

Conjecture: In ZF, the Stable Core is rigid.

Note that V is generic over the Stable Core.
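One possible reading of why the conjecture would yield this (the letter leaves the argument implicit, so the following sketch is my reconstruction): the Stable Core S is definable without parameters, so any elementary embedding of V fixes it and restricts to an embedding of S.

```latex
% Sketch (my reconstruction, not spelled out in the letter):
% if j : V \prec V is elementary and S is the parameter-free
% definable Stable Core, then j(S) = S, so j restricted to S is
% an elementary embedding of S into itself. Rigidity of S would
% force this restriction to be the identity, and since V is
% generic over S one would aim to conclude that j itself is
% trivial -- hence no Reinhardt cardinals in ZF.
j \colon V \prec V \ \Longrightarrow\ j \restriction S \colon S \prec S
\ \overset{\text{rigidity}}{\Longrightarrow}\ j \restriction S = \mathrm{id}
```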


Re: Paper and slides on indefiniteness of CH

Dear Sy,

Ok one more round. This is a short one since you did not raise many new questions etc. in your last response.

On Aug 7, 2014, at 9:32 AM, Sy David Friedman wrote:

Unless there is something fundamentally different about LC, which there is.

My point here has nothing to do with large cardinals. I am just saying that the tests analogous to those used to argue in favour of PD (success and structure theory) are inadequate in the number theory context. Doesn’t that cast doubt on the use of those tests to justify PD?

Absolutely not, given the special nature of LC.

Aside: Suppose an Oracle informs us that RH is equivalent to Con PD. Then I would say RH is true (and it seems you would agree here). But suppose that the Oracle informs us that RH is equivalent to Con Reinhardt cardinal. Then I actually would conjecture that RH is false. But by your criteria of evidence you would seem to argue RH is true.

I guess you mean Con(Reinhardt without AC).

Of course, I thought that was clear.

Why would you conjecture in this setting that RH is false?

Because I think “Reinhardt without AC” is inconsistent. The Oracle could be malicious after all.

(Aside: I actually think that  “ZF + Reinhardt + extendible” is inconsistent. The situation for “ZF + Reinhardt” is a bit less clear to me at this stage. But this distinction is not really relevant to this discussion, e.g. everything in these exchanges could have been in the context of super-Reinhardt).

I thought that you had evidence of statements of consistency strength below a Reinhardt cardinal but above that of large cardinals with AC?

I am not sure what you are referring to here. The hierarchy of axioms past I0 that I have discussed in the JML papers is entirely AC-based.

With such evidence I would indeed conjecture that RH is true; wouldn’t you?

This seems an odd position. Suppose that the Oracle matched 100 number-theoretic (\Pi^0_1) sentences with the consistency of variations of the notion of Reinhardt cardinals. Does this increase one’s confidence in these statements?

Again, I don’t think we need to justify the consistency of large cardinals, the “empirical proof theory” takes care of that.

Yes, theoretically the whole edifice of large cardinal consistency could collapse, even at a measurable, we simply have to live with that, but I am not really worried. There is just too much evidence for a strict hierarchy of consistency strengths going all the way up to the level of supercompactness, using quasi-lower bounds instead of core model lower bounds. This reminds me of outdated discussions of how to justify the consistency of second-order arithmetic through ordinal analysis. The ordinal analysis is important, but no longer necessary for the justification of consistency.

However one conceives of truth in set theory, one must have answers to:

1) Is PD true?

I don’t know.

2) Is PD consistent?


You have examples of how HP could lead to answering the first question.  But no examples of how HP could ever answer the second question.  Establishing Con LC for levels past PD looks even more problematic.

It is not my intention to try to use the HP to justify the already-justified consistency of large cardinals.

There is strong meta-mathematical evidence that the only way to ultimately answer (2) with “yes” is to answer (1) with “yes”.  This takes us back to my basic confusion about the basis for your conviction in Con PD.

Note that the IMH yields inner models with measurables but does not imply \Pi^1_1 determinacy. This is a “local” counterexample to your suggestion that to get Con(Definable determinacy) we need to get Definable determinacy.

But I have not suggested that to get Con(Definable determinacy) one needs to get Definable determinacy.  I have suggested that to get Con PD one needs to get PD.  (For me, PD is boldface PD; perhaps you have interpreted PD as lightface PD.)

The local/global issue is not present at the level you indicate.  It only occurs past the level of 1 Woodin cardinal; I have said this repeatedly.

Why? If  0^\# exists then it is unique. M_1^\# (the analog of 0^\#) at the next projective level has a far more subtle uniqueness.

(For those unfamiliar with the notation: M_1 is the “minimal” fine-structural inner model with 1 Woodin cardinal, and the notion of minimality makes perfect sense for iterable models, which can be compared through elementary embeddings.)

The iterable M_1^\# is unique, but the existence of the iterable M_1^\# implies that all sets have sharps. In fact, in the context where all sets have sharps, the existence of M_1^\# is equivalent to the existence of a proper class inner model with a Woodin cardinal.

Without a background of sharps there are examples where there are no definable inner models past the level of 1 Woodin cardinal, no matter what inner models one assumes exist. The example is not contrived: it is L[x] for a Turing cone of x, and this example lies at the core of the consistency proof of IMH.


Some final comments.

The inner model program for me has come down to one main conjecture (the Ultimate-L conjecture) and two secondary conjectures, the \Omega Conjecture and the HOD Conjecture. These are not vague conjectures, they are each precisely stated. None of these conjectures involves any concept of fine-structure or related issues.

The stage is also set for the possibility of an anti-inner model theorem.  A refutation of the \Omega Conjecture would in my view be such an anti-inner model theorem and there are other possibilities.

So the entire program as presently conceived is for me falsifiable.

If the Ultimate-L Conjecture is provable then I think this makes a far more compelling case for LC than anything coming out of HP for denying LC. I would (perhaps unwisely) go much further. If the Ultimate-L Conjecture is provable then there is an absolutely compelling case for CH and in fact for V = Ultimate L. (The precise formulation of V = Ultimate L is already specified, it is again not some vague axiom).

How about this: We each identify a critical conjecture whose proof we think absolutely confirms our position and whose refutation we also admit sends us back to “square 1”. For me it is the Ultimate-L Conjecture.

HP is still in its infancy so this may not be a fair request. So maybe we have to wait on this. But you should at least be able to articulate why you think HP even has a chance.

Aside: IMH simply traces back to Turing determinacy, as will \textsf{IMH}^*. For each real x let M_x be the minimum model of ZFC containing x. The theory of M_x is constant on a cone, as is its second-order theory. Obviously this (Turing-)stable theory will have a rich structure theory. But this is just one instance of many analogous stable theories (this is the power of PD and beyond), and HP is just borrowing this. It is also a theorem that Turing-PD is equivalent to PD.
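The cone stability invoked here can be stated explicitly (the display is mine, stated for readers outside the exchange): for any sentence \phi, the set of reals x with M_x \models \phi is Turing-degree-invariant, so Turing determinacy decides \phi on a cone.

```latex
% Cone stability of the theory of M_x, where M_x is the minimum
% model of ZFC containing the real x (assuming such models exist):
% for each sentence \phi, since \{ x : M_x \models \phi \} is
% Turing-invariant, Turing determinacy gives a cone on which the
% truth value of \phi in M_x is constant.
\exists z\,\forall x \geq_T z\ \bigl(M_x \models \phi\bigr)
\quad\text{or}\quad
\exists z\,\forall x \geq_T z\ \bigl(M_x \models \lnot\phi\bigr)
```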

But why should this have anything to do with V?

Here is a question: Why isn’t the likely scenario simply that HP ends up stair-stepping up to PD, and that the ultimate conclusion of the entire enterprise is simply yet another argument for PD?