# Re: Paper and slides on indefiniteness of CH

Dear Sy,

I see no virtue in “back to square one” conjectures. In the HP the whole point is to put out maximality criteria and test them; it is foolish to make conjectures without doing the mathematics.

Of course you do not see any virtue in “back to square one” conjectures. Fine, we have different views (yet again).

Why should your programme be required to make “make or break” conjectures, and what is so attractive about that?

I find it quite interesting if philosophical considerations lead to specific “make or break” conjectures, especially if there is no obvious purely mathematical basis on which to make the conjecture. The HOD Conjecture is a good example. There is no purely mathematical reason (that I know of) to make that conjecture (that the HOD Hypothesis is provable from, say, ZFC + extendible). It is a prediction from the Ultimate L scenario (just as is the provability of the $\Omega$ Conjecture).
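(For those unfamiliar with the statements: here is a paraphrase, which should of course be checked against the official formulations in the Ultimate L papers.)

```latex
% Paraphrase only, not the official formulations; "omega-strongly
% measurable in HOD" is a technical notion from the Ultimate L papers.
\textbf{HOD Hypothesis.} There is a proper class of regular cardinals
$\kappa$ which are not $\omega$-strongly measurable in $\mathrm{HOD}$.

\textbf{HOD Conjecture.} The theory
$\mathrm{ZFC} + \text{``there is an extendible cardinal''}$
proves the HOD Hypothesis.
```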

Of course there is another reason for identifying such conjectures. They provide test questions for future progress. If one can refute from large cardinals that the $\Omega$ Conjecture holds then one refutes the Ultimate L Conjecture and moreover shows that there is a failure of inner model theory based on sequences of extenders.

One more question at this point: Suppose that Jack had succeeded in proving in ZFC that $0^\#$ does not exist. Would you infer from this that V = L is true? On what grounds?

Not necessarily. But I certainly would no longer declare it evident that V is not L. The question of V versus L would, for me, reopen.

Your V = Ultimate-L programme (apologies if I misunderstand it) sounds very much like saying that Ultimate L is provably close to V so we might as well just take V = Ultimate-L to be true. If I haven’t misunderstood then I find this very dubious indeed. As Pen would say, axioms which restrict set-existence are never a good idea.

If the Ultimate L Conjecture is true (provable in ZFC + extendible) then V = Ultimate L becomes a serious possibility which (to me anyway) cannot just be dismissed, as the possibility of V = L now is.

For me, the “validation” of V = Ultimate L will have to come from the insights V = Ultimate L gives for the hierarchy of large cardinals beyond supercompact. (These are the “other tests which will have to be passed”).

If that does not happen or if the genuine insights come from outer models of V = Ultimate L, or even from something entirely unrelated to Ultimate L, then for me the case for V = Ultimate L will weaken, possibly significantly.

On the other hand, if in the setting of V = Ultimate L a whole new hierarchy of large cardinals is revealed, otherwise invisible, then things get interesting. Here it might be the Axiom $\textsf{I}0$, but in the context of V = Ultimate L, which could be key.

You will respond that this is sheer speculation without foundation or solid evidence. It is sheer speculation. We shall see about the evidence.

Maybe it is time to try once again to simply agree that we disagree and wait for future mathematical developments before continuing this debate.

Regards,
Hugh

# Re: Paper and slides on indefiniteness of CH

PS:

On Oct 28, 2014, at 4:37 PM, Sy David Friedman wrote:

So, if you fix the proof, you have proved the HOD Conjecture.

I’ll try not to let that scare me.

?? This seems like an odd comment. The HOD Conjecture is a prediction of the Ultimate L Conjecture. But there is no reason it could not have a proof which has nothing to do with the Ultimate L Conjecture.

But I’m also not surprised that there was a bug in my proof!

Why? Do you believe the HOD Conjecture? If so why?

Regards,
Hugh

# Re: Paper and slides on indefiniteness of CH

Dear Sy,

I am a bit bewildered by your criticism. The scenario I described rests largely on one single conjecture, the Ultimate L Conjecture. While the motivations for this conjecture are complicated, the statement uses just basic notions in modern set theory: HOD, some specific large cardinal notions, universally Baire sets, and L relativized to sets $A \subset \mathbb R$.

If this conjecture is refuted then the scenario collapses. If this conjecture is proved then a substantial portion of the scenario will be verified.

As for your “conjecture” (and I assume you make this conjecture only for the purposes of your criticism):

How’s this for a conjecture: It is consistent to have a supercompact but none in an inner model of HOD. Do you have more evidence against that conjecture than for its negation?

This conjecture is not relevant. There is no reason I should even have an opinion on this.

Technical aside for those who are interested:

Sy’s “conjecture” focuses on the “resource problem” within inner model constructions. If there is an inner model construction for a $\Phi$-cardinal, the resource problem is whether that construction can be implemented if one assumes only the existence of a $\Phi$-cardinal. There are examples where the inner model constructions use a bit more, and the reduction in what is required is subtle.

If one has “more” than a $\Phi$-cardinal then there is a trivial solution to the problem of finding an inner model of HOD with a $\Phi$-cardinal. For example, if there is a supercompact cardinal with a measurable cardinal above it, then there is an inner model for a supercompact cardinal within HOD. This is why simply asking whether there is an inner model of a $\Phi$-cardinal contained within HOD is not so interesting: just assume more in V and it has a trivial solution.

For Sy’s “conjecture”, a positive answer, while certainly surprising, is not obviously in any way (to me at least) an anti-inner model theorem (the Ultimate L Conjecture could still be true), and a negative answer does not require solving the inner model problem for one supercompact (since that inner model theory only has to work in the situation where there is a supercompact cardinal and no inner model with a supercompact cardinal and a measurable cardinal above it). This is why it is not relevant to the Ultimate L Conjecture.

Finally, for many axiomatic schemes of large cardinals beyond supercompact (so $\Phi$ is now a scheme), the inner model of HOD problem likely has a solution (without assuming anything more in V) which again does not require the fine-structure models; this is the point of the HOD Conjecture, and the solution is HOD itself. A precursor to this is the theorem (outright, not requiring any conjectures) that if there is a supercompact cardinal in V then there is a measurable cardinal in HOD.

Regards,
Hugh

# Re: Paper and slides on indefiniteness of CH

Dear Sy,

Ok one more round. This is a short one since you did not raise many new questions etc. in your last response.

On Aug 7, 2014, at 9:32 AM, Sy David Friedman wrote:

Unless there is something fundamentally different about LC, which there is.

My point here has nothing to do with large cardinals. I am just saying that the tests analogous to those used to argue in favour of PD (success and structure theory) are inadequate in the number theory context. Doesn’t that cast doubt on the use of those tests to justify PD?

Absolutely not, given the special nature of LC.

Aside: Suppose an Oracle informs us that RH is equivalent to Con PD. Then I would say RH is true (and it seems you would agree here). But suppose that the Oracle informs us that RH is equivalent to Con Reinhardt cardinal. Then I actually would conjecture that RH is false. But by your criteria of evidence you would seem to argue RH is true.

I guess you mean Con(Reinhardt without AC).

Of course, I thought that was clear.

Why would you conjecture in this setting that RH is false?

Because I think “Reinhardt without AC” is inconsistent. The Oracle could be malicious, after all.

(Aside: I actually think that “ZF + Reinhardt + extendible” is inconsistent. The situation for “ZF + Reinhardt” is a bit less clear to me at this stage. But this distinction is not really relevant to this discussion; e.g., everything in these exchanges could have been in the context of super-Reinhardt.)

I thought that you had evidence of statements of consistency strength below a Reinhardt cardinal but above that of large cardinals with AC?

I am not sure what you are referring to here. The hierarchy of axioms past $\textsf{I}0$ that I have discussed in the JML papers is entirely AC-based.

With such evidence I would indeed conjecture that RH is true; wouldn’t you?

This seems an odd position. Suppose that the Oracle matched 100 number-theoretic ($\Pi^0_1$) sentences with the consistency of variations of the notion of Reinhardt cardinals. This increases one’s confidence in these statements?

Again, I don’t think we need to justify the consistency of large cardinals, the “empirical proof theory” takes care of that.

Yes, theoretically the whole edifice of large cardinal consistency could collapse, even at a measurable, we simply have to live with that, but I am not really worried. There is just too much evidence for a strict hierarchy of consistency strengths going all the way up to the level of supercompactness, using quasi-lower bounds instead of core model lower bounds. This reminds me of outdated discussions of how to justify the consistency of second-order arithmetic through ordinal analysis. The ordinal analysis is important, but no longer necessary for the justification of consistency.

However one conceives of truth in set theory, one must have answers to:

1) Is PD true?

I don’t know.

2) Is PD consistent?

Yes.

You have examples of how HP could lead to answering the first question.  But no examples of how HP could ever answer the second question.  Establishing Con LC for levels past PD looks even more problematic.

It is not my intention to try to use the HP to justify the already-justified consistency of large cardinals.

There is strong meta-mathematical evidence that the only way to ultimately answer (2) with “yes” is to answer (1) with “yes”.  This takes us back to my basic confusion about the basis for your conviction in Con PD.

Note that the IMH yields inner models with measurables but does not imply $\Pi^1_1$ determinacy. This is a “local” counterexample to your suggestion that to get Con(Definable determinacy) we need to get Definable determinacy.

But I have not suggested that to get Con(Definable determinacy) one needs to get Definable determinacy. I have suggested that to get Con PD one needs to get PD. (For me, PD is boldface PD; perhaps you have interpreted PD as lightface PD.)

The local/global issue is not present at the level you indicate. It only occurs past the level of 1 Woodin cardinal; I have said this repeatedly.

Why? If $0^\#$ exists then it is unique. $M_1^\#$ (the analog of $0^\#$ at the next projective level) has a far more subtle uniqueness.

(For those unfamiliar with the notation: $M_1$ is the “minimum” fine-structural inner model with 1 Woodin cardinal and the notion of minimality makes perfect sense for iterable models through elementary embeddings).

The iterable $M_1^\#$ is unique, but its existence implies that all sets have sharps. In fact, in the context where all sets have sharps, the existence of $M_1^\#$ is equivalent to the existence of a proper class inner model with a Woodin cardinal.

Without a background of sharps there are examples where there are no definable inner models past the level of 1 Woodin cardinal, no matter what inner models one assumes exist. The example is not contrived: it is $L[x]$ for a Turing cone of $x$, and this example lies at the core of the consistency proof of the IMH.

The inner model program for me has come down to one main conjecture (the Ultimate-L conjecture) and two secondary conjectures, the $\Omega$ Conjecture and the HOD Conjecture. These are not vague conjectures, they are each precisely stated. None of these conjectures involves any concept of fine-structure or related issues.

The stage is also set for the possibility of an anti-inner model theorem.  A refutation of the $\Omega$ Conjecture would in my view be such an anti-inner model theorem and there are other possibilities.

So the entire program as presently conceived is for me falsifiable.

If the Ultimate-L Conjecture is provable then I think this makes a far more compelling case for LC than anything coming out of HP for denying LC. I would (perhaps unwisely) go much further. If the Ultimate-L Conjecture is provable then there is an absolutely compelling case for CH and in fact for V = Ultimate L. (The precise formulation of V = Ultimate L is already specified, it is again not some vague axiom).

How about this: we each identify a critical conjecture whose proof we think absolutely confirms our position and whose refutation we also admit sends us back to “square one”. For me it is the Ultimate-L Conjecture.

HP is still in its infancy so this may not be a fair request. So maybe we have to wait on this. But you should at least be able to articulate why you think HP even has a chance.

Aside: The IMH simply traces back to Turing determinacy, as will $\text{IMH}^*$. For each real $x$ let $M_x$ be the minimum model of ZFC containing $x$. The theory of $M_x$ is constant on a cone, as is its second-order theory. Obviously this (Turing) stable theory will have a rich structure theory. But this is just one instance of many analogous stable theories (this is the power of PD and beyond), and HP is just borrowing this. It is also a theorem that Turing-PD is equivalent to PD.
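(For those unfamiliar with the terminology, here are the standard notions involved, in my paraphrase:)

```latex
% Standard definitions; editorial paraphrase, not part of the letter.
\textbf{Definition.} For reals $x, y$, write $x \leq_T y$ if $x$ is
Turing reducible to $y$. The \emph{Turing cone} above $x$ is
$C_x = \{\, y \in \mathbb{R} : x \leq_T y \,\}$.

\textbf{Turing determinacy.} Every set $A \subseteq \mathbb{R}$ which is
Turing invariant (closed under Turing equivalence $\equiv_T$) either
contains a Turing cone or is disjoint from one.

% "Constant on a cone" above means: there is a real $x_0$ such that
% the theory of $M_x$ is the same for all $x$ with $x_0 \leq_T x$.
```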

But why should this have anything to do with V?

Here is a question: Why is not the likely scenario simply that HP ends up stair-stepping up to PD and that the ultimate conclusion of the entire enterprise is simply yet another argument for PD?

Regards,
Hugh